[jira] [Created] (CASSANDRA-9586) ant eclipse-warnings fails in trunk

2015-06-11 Thread Michael Shuler (JIRA)
Michael Shuler created CASSANDRA-9586:
-

 Summary: ant eclipse-warnings fails in trunk
 Key: CASSANDRA-9586
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9586
 Project: Cassandra
  Issue Type: Bug
Reporter: Michael Shuler
 Fix For: 3.x


{noformat}
eclipse-warnings:
[mkdir] Created dir: /home/mshuler/git/cassandra/build/ecj
 [echo] Running Eclipse Code Analysis.  Output logged to 
/home/mshuler/git/cassandra/build/ecj/eclipse_compiler_checks.txt
 [java] incorrect classpath: 
/home/mshuler/git/cassandra/build/cobertura/classes
 [java] --
 [java] 1. ERROR in 
/home/mshuler/git/cassandra/src/java/org/apache/cassandra/io/util/RandomAccessReader.java
 (at line 81)
 [java] super(new ChannelProxy(file), DEFAULT_BUFFER_SIZE, -1L, 
BufferType.OFF_HEAP);
 [java]   ^^
 [java] Potential resource leak: '' may not be 
closed
 [java] --
 [java] 1 problem (1 error)

BUILD FAILED
{noformat}

(checked 2.2 and did not find this issue)
git blame on line 81 shows commit 17dd4cc for CASSANDRA-8897
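
The ECJ check fires on the general shape below. This is only a hedged illustration of the warning and of one common way to restructure around it (hypothetical Channel/Reader classes, not the actual RandomAccessReader code or the eventual fix):

{code:title=illustration of the "potential resource leak" pattern flagged by eclipse-warnings}
import java.io.Closeable;

class Channel implements Closeable
{
    Channel(String file) { /* open the underlying file */ }

    @Override
    public void close() { /* release the file handle */ }
}

class Reader
{
    private final Channel channel;

    Reader(Channel channel)
    {
        this.channel = channel;
    }

    // ECJ flags this shape: a Closeable is created inline and handed straight to
    // another constructor, so if construction fails there is no way to close it.
    Reader(String file)
    {
        this(new Channel(file));
    }

    // One common restructuring: create the resource first and close it if the
    // rest of construction throws; ownership then transfers to the new object.
    static Reader open(String file)
    {
        Channel channel = new Channel(file);
        try
        {
            return new Reader(channel);
        }
        catch (RuntimeException | Error e)
        {
            channel.close();
            throw e;
        }
    }
}
{code}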



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9586) ant eclipse-warnings fails in trunk

2015-06-11 Thread Michael Shuler (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9586?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Shuler updated CASSANDRA-9586:
--
Assignee: Stefania

> ant eclipse-warnings fails in trunk
> ---
>
> Key: CASSANDRA-9586
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9586
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Michael Shuler
>Assignee: Stefania
> Fix For: 3.x
>
>
> {noformat}
> eclipse-warnings:
> [mkdir] Created dir: /home/mshuler/git/cassandra/build/ecj
>  [echo] Running Eclipse Code Analysis.  Output logged to 
> /home/mshuler/git/cassandra/build/ecj/eclipse_compiler_checks.txt
>  [java] incorrect classpath: 
> /home/mshuler/git/cassandra/build/cobertura/classes
>  [java] --
>  [java] 1. ERROR in 
> /home/mshuler/git/cassandra/src/java/org/apache/cassandra/io/util/RandomAccessReader.java
>  (at line 81)
>  [java] super(new ChannelProxy(file), DEFAULT_BUFFER_SIZE, -1L, 
> BufferType.OFF_HEAP);
>  [java]   ^^
>  [java] Potential resource leak: '' may not 
> be closed
>  [java] --
>  [java] 1 problem (1 error)
> BUILD FAILED
> {noformat}
> (checked 2.2 and did not find this issue)
> git blame on line 81 shows commit 17dd4cc for CASSANDRA-8897



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-6696) Partition sstables by token range

2015-06-11 Thread Yuki Morishita (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14582838#comment-14582838
 ] 

Yuki Morishita commented on CASSANDRA-6696:
---

Besides the code review going on in Marcus' branch on github, I have one question.

For non-vnode clusters, the current implementation splits the local ranges from start to end 
evenly over the disks. It looks like this assumes the local ranges are close to each other.
But isn't there a situation where a node's local ranges are very sparse (maybe NTS 
with multiple DCs/racks)?
In that case the disks can end up unbalanced.
Should we calculate a more precise ownership for each range and assign ranges to 
disks evenly by ownership?
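
As a rough sketch of the weighting suggested above (standalone, hypothetical code; the Range and assign names are not the API in the branch): instead of giving every disk the same number of ranges, give every disk roughly the same total owned token span.

{code:title=sketch: balance disks by owned span rather than range count (hypothetical)}
import java.math.BigInteger;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class RangeToDiskAssignment
{
    /** A local token range (left, right]; width = right - left, wraparound ignored for brevity. */
    static final class Range
    {
        final BigInteger left, right;

        Range(long left, long right)
        {
            this.left = BigInteger.valueOf(left);
            this.right = BigInteger.valueOf(right);
        }

        BigInteger width() { return right.subtract(left); }
    }

    /** Greedy balancing: hand the next-largest range to the currently least-loaded disk. */
    static List<List<Range>> assign(List<Range> localRanges, int disks)
    {
        List<List<Range>> perDisk = new ArrayList<>();
        BigInteger[] load = new BigInteger[disks];
        for (int i = 0; i < disks; i++)
        {
            perDisk.add(new ArrayList<>());
            load[i] = BigInteger.ZERO;
        }

        List<Range> sorted = new ArrayList<>(localRanges);
        sorted.sort((a, b) -> b.width().compareTo(a.width())); // largest first
        for (Range r : sorted)
        {
            int target = 0;
            for (int i = 1; i < disks; i++)
                if (load[i].compareTo(load[target]) < 0)
                    target = i;
            perDisk.get(target).add(r);
            load[target] = load[target].add(r.width());
        }
        return perDisk;
    }

    public static void main(String[] args)
    {
        // Sparse local ranges, e.g. what NTS with several DCs/racks could produce.
        List<Range> ranges = Arrays.asList(new Range(0, 10), new Range(5000, 5005), new Range(9000, 9100));
        List<List<Range>> assignment = assign(ranges, 2);
        for (int d = 0; d < assignment.size(); d++)
        {
            BigInteger owned = BigInteger.ZERO;
            for (Range r : assignment.get(d))
                owned = owned.add(r.width());
            System.out.println("disk " + d + ": " + assignment.get(d).size() + " ranges, owned span " + owned);
        }
    }
}
{code}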


> Partition sstables by token range
> -
>
> Key: CASSANDRA-6696
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6696
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: sankalp kohli
>Assignee: Marcus Eriksson
>  Labels: compaction, correctness, dense-storage, performance
> Fix For: 3.x
>
>
> In JBOD, when someone gets a bad drive, the bad drive is replaced with a new 
> empty one and repair is run. 
> This can cause deleted data to come back in some cases. Also this is true for 
> corrupt stables in which we delete the corrupt stable and run repair. 
> Here is an example:
> Say we have 3 nodes A,B and C and RF=3 and GC grace=10days. 
> row=sankalp col=sankalp is written 20 days back and successfully went to all 
> three nodes. 
> Then a delete/tombstone was written successfully for the same row column 15 
> days back. 
> Since this tombstone is more than gc grace, it got compacted in Nodes A and B 
> since it got compacted with the actual data. So there is no trace of this row 
> column in node A and B.
> Now in node C, say the original data is in drive1 and tombstone is in drive2. 
> Compaction has not yet reclaimed the data and tombstone.  
> Drive2 becomes corrupt and was replaced with new empty drive. 
> Due to the replacement, the tombstone in now gone and row=sankalp col=sankalp 
> has come back to life. 
> Now after replacing the drive we run repair. This data will be propagated to 
> all nodes. 
> Note: This is still a problem even if we run repair every gc grace. 
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9424) 3.X Schema Improvements

2015-06-11 Thread Yuki Morishita (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14582828#comment-14582828
 ] 

Yuki Morishita commented on CASSANDRA-9424:
---

Just throwing in my thought (hope).
Is it possible to make an official way (API) to load the schema offline? That is, the 
ability to read the schema from stored SSTables without waking up unnecessary 
server components.

Right now {{Schema#loadFromDisk(false)}} is used across the offline tools, but because 
of the things it touches it creates Memtables, the CommitLog, some Executors, etc.
Most of the tools just need to get {{CFMetaData}} to open SSTables.
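
To make the ask concrete, something along these lines would cover most tools. This is purely a hypothetical shape (the OfflineSchema and TableMetadataHandle names do not exist in the codebase); the point is a read-only entry point with no server-side side effects:

{code:title=hypothetical offline schema loader (not an existing API)}
import java.nio.file.Path;
import java.util.Map;

/**
 * Hypothetical read-only entry point for offline tools: parse the schema
 * tables already on disk and return table metadata without starting
 * Memtables, the CommitLog, executors or any other server machinery.
 */
public interface OfflineSchema
{
    /** "keyspace.table" -> the metadata a tool needs to open that table's SSTables. */
    Map<String, TableMetadataHandle> loadTableMetadata(Path dataDirectory);

    /** Opaque stand-in for whatever CFMetaData exposes to SSTable readers. */
    interface TableMetadataHandle
    {
        String keyspace();
        String table();
    }
}
{code}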

> 3.X Schema Improvements
> ---
>
> Key: CASSANDRA-9424
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9424
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Aleksey Yeschenko
> Fix For: 3.x
>
>
> C* schema code is both more brittle and less efficient than I'd like it to 
> be. This ticket will aggregate the improvement tickets to go into 3.X and 4.X 
> to improve the situation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9563) Rename class for DATE type in Java driver

2015-06-11 Thread Alex P (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14582688#comment-14582688
 ] 

Alex P commented on CASSANDRA-9563:
---

[~snazy] I've added it: https://datastax-oss.atlassian.net/browse/JAVA-810. How 
soon do you need this?

> Rename class for DATE type in Java driver
> -
>
> Key: CASSANDRA-9563
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9563
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Olivier Michallat
>Priority: Minor
> Fix For: 2.2.x
>
>
> An early preview of the Java driver 2.2 was provided for inclusion in 
> Cassandra 2.2.0-rc1. It uses a custom Java type to represent CQL type 
> {{DATE}}. Currently that Java type is called {{DateWithoutTime}}.
> We'd like to rename it to {{LocalDate}}. This would be a breaking change for 
> Cassandra, because that type is visible from UDF implementations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9581) pig-tests spend time waiting on /dev/random for SecureRandom

2015-06-11 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14582683#comment-14582683
 ] 

Ariel Weisberg commented on CASSANDRA-9581:
---

It's transparently handled on Windows. See the linked code.
{noformat}
/*
 * Try the URL specifying the source (e.g. file:/dev/random)
 *
 * The URLs "file:/dev/random" or "file:/dev/urandom" are used to
 * indicate the SeedGenerator should use OS support, if available.
 *
 * On Windows, this causes the MS CryptoAPI seeder to be used.
 *
 * On Solaris/Linux/MacOS, this is identical to using
 * URLSeedGenerator to read from /dev/[u]random
 */
{noformat}
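
For what it's worth, the mechanism is easy to poke at from plain Java. The snippet below is an illustration only (it assumes the property is set before the provider first reads it); it points the seed source at /dev/urandom the same way the -Djava.security.egd command-line flag does:

{code:title=illustration: redirecting the SecureRandom seed source}
import java.security.SecureRandom;
import java.security.Security;

public class SeedSourceDemo
{
    public static void main(String[] args)
    {
        // Same effect as -Djava.security.egd=file:/dev/./urandom, provided it runs
        // before the provider reads the property. On Windows the value is ignored
        // and the MS CryptoAPI seeder is used, as the JDK comment above says.
        Security.setProperty("securerandom.source", "file:/dev/./urandom");

        long start = System.nanoTime();
        // With the default file:/dev/random source this call can block waiting for entropy.
        byte[] seed = new SecureRandom().generateSeed(20);
        System.out.printf("generated %d seed bytes in %.1f ms%n", seed.length, (System.nanoTime() - start) / 1e6);
    }
}
{code}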

> pig-tests spend time waiting on /dev/random for SecureRandom
> 
>
> Key: CASSANDRA-9581
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9581
> Project: Cassandra
>  Issue Type: Test
>Reporter: Ariel Weisberg
>Assignee: Ariel Weisberg
>
> We don't need secure random numbers (for unit tests) so waiting for entropy 
> doesn't make much sense. Luckily Java makes it easy to point to /dev/urandom 
> for entropy. It also transparently handles it correctly on Windows.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9293) Unit tests should fail if any LEAK DETECTED errors are printed

2015-06-11 Thread Sylvestor George (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14582671#comment-14582671
 ] 

Sylvestor George commented on CASSANDRA-9293:
-

This will be done after CASSANDRA-9528, which will improve log output from unit tests. The 
output recorded from each unit test can then be used to check whether that unit test 
had any leaks.
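
A minimal sketch of the static-flag idea from the ticket description, assuming a hypothetical LeakFlag holder (the real hook would live wherever Ref reports "LEAK DETECTED"):

{code:title=sketch: fail the current test if a leak was reported (hypothetical LeakFlag)}
import static org.junit.Assert.assertFalse;

import java.util.concurrent.atomic.AtomicBoolean;

import org.junit.After;

public abstract class LeakCheckedTest
{
    /** Hypothetical flag that the leak detector would set when it logs "LEAK DETECTED". */
    public static final class LeakFlag
    {
        private static final AtomicBoolean LEAKED = new AtomicBoolean(false);

        public static void markLeak() { LEAKED.set(true); }

        public static boolean getAndClear() { return LEAKED.getAndSet(false); }
    }

    @After
    public void assertNoLeaks()
    {
        // Clearing the flag keeps one leaking test from failing every test that runs after it.
        assertFalse("LEAK DETECTED during this test", LeakFlag.getAndClear());
    }
}
{code}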

> Unit tests should fail if any LEAK DETECTED errors are printed
> --
>
> Key: CASSANDRA-9293
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9293
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Benedict
>Assignee: Sylvestor George
>  Labels: test
> Attachments: 9293.txt
>
>
> We shouldn't depend on dtests to inform us of these problems (which have 
> error log monitoring) - they should be caught by unit tests, which may also 
> cover different failure conditions (besides being faster).
> There are a couple of ways we could do this, but probably the easiest is to 
> add a static flag that is set to true if we ever see a leak (in Ref), and to 
> just assert that this is false at the end of every test.
> [~enigmacurry] is this something TE can help with?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9581) pig-tests spend time waiting on /dev/random for SecureRandom

2015-06-11 Thread Joshua McKenzie (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14582604#comment-14582604
 ] 

Joshua McKenzie commented on CASSANDRA-9581:


What are the implications for Windows?
{code:title=proposed build.xml changes}


{code}

No /dev/urandom on the platform, so I'm assuming this won't end well.

> pig-tests spend time waiting on /dev/random for SecureRandom
> 
>
> Key: CASSANDRA-9581
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9581
> Project: Cassandra
>  Issue Type: Test
>Reporter: Ariel Weisberg
>Assignee: Ariel Weisberg
>
> We don't need secure random numbers (for unit tests) so waiting for entropy 
> doesn't make much sense. Luckily Java makes it easy to point to /dev/urandom 
> for entropy. It also transparently handles it correctly on Windows.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-9045) Deleted columns are resurrected after repair in wide rows

2015-06-11 Thread Roman Tkachenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14582508#comment-14582508
 ] 

Roman Tkachenko edited comment on CASSANDRA-9045 at 6/11/15 9:02 PM:
-

Hey guys.

So I implemented writes at EACH_QUORUM several weeks ago and have been 
monitoring but it does not look like it fixed the issue.

Check this out. I pulled the logs from both datacenters for one of reappeared 
entries and correlated them with our repairs schedule. Reads (GETs) are done at 
LOCAL_QUORUM.

{code}
Time                DC  RESULT
==============================
2015-06-04T11:31:38 DC2 GET 200  --> record is present in both DCs
2015-06-04T15:25:01 DC1 GET 200
2015-06-04T19:24:06 DC1 DELETE 200  --> deleted in DC1
2015-06-04T19:45:16 DC2 GET 404  --> record disappeared from both DCs...
2015-06-05T07:10:32 DC1 GET 404
2015-06-05T10:16:28 DC2 GET 200  --> ... but somehow appeared back in 
DC2 (no POST requests happened for this record)
2015-06-07T18:59:57 DC1 GET 404
2AM NODE IN DC2 REPAIR
4AM NODE IN DC1 REPAIR
2015-06-08T08:27:36 DC1 GET 200  --> record is present in both DCs 
again, looks like DC2 "repaired" DC1
2015-06-09T15:29:50 DC2 GET 200
2015-06-09T16:05:30 DC1 DELETE 200
2015-06-09T16:05:30 DC1 GET 404
2015-06-09T21:08:24 DC2 GET 404
{code}

So the question is how the record managed to appear back in DC2... Do you have 
any suggestions on how we can investigate this?

Thanks,
Roman


was (Author: r0mant):
Hey guys.

So I implemented writes at EACH_QUORUM several weeks ago and has been 
monitoring but it does not looks like it fixed the issue.

Check this out. I pulled the logs from both datacenters for one of reappeared 
entries and correlated them with our repairs schedule. Reads (GETs) are done at 
LOCAL_QUORUM.

{code}
Time                DC  RESULT
==============================
2015-06-04T11:31:38 DC2 GET 200  --> record is present in both DCs
2015-06-04T15:25:01 DC1 GET 200
2015-06-04T19:24:06 DC1 DELETE 200  --> deleted in DC1
2015-06-04T19:45:16 DC2 GET 404  --> record disappeared from both DCs...
2015-06-05T07:10:32 DC1 GET 404
2015-06-05T10:16:28 DC2 GET 200  --> ... but somehow appeared back in 
DC2 (no POST requests happened for this record)
2015-06-07T18:59:57 DC1 GET 404
2AM NODE IN DC2 REPAIR
4AM NODE IN DC1 REPAIR
2015-06-08T08:27:36 DC1 GET 200  --> record is present in both DCs 
again, looks like DC2 "repaired" DC1
2015-06-09T15:29:50 DC2 GET 200
2015-06-09T16:05:30 DC1 DELETE 200
2015-06-09T16:05:30 DC1 GET 404
2015-06-09T21:08:24 DC2 GET 404
{code}

So the question is how the record managed to appear back in DC2... Do you have 
any suggestions on how we can investigate this?

Thanks,
Roman

> Deleted columns are resurrected after repair in wide rows
> -
>
> Key: CASSANDRA-9045
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9045
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Roman Tkachenko
>Assignee: Marcus Eriksson
>Priority: Critical
> Fix For: 2.0.x
>
> Attachments: 9045-debug-tracing.txt, another.txt, 
> apache-cassandra-2.0.13-SNAPSHOT.jar, cqlsh.txt, debug.txt, inconsistency.txt
>
>
> Hey guys,
> After almost a week of researching the issue and trying out multiple things 
> with (almost) no luck I was suggested (on the user@cass list) to file a 
> report here.
> h5. Setup
> Cassandra 2.0.13 (we had the issue with 2.0.10 as well and upgraded to see if 
> it goes away)
> Multi datacenter 12+6 nodes cluster.
> h5. Schema
> {code}
> cqlsh> describe keyspace blackbook;
> CREATE KEYSPACE blackbook WITH replication = {
>   'class': 'NetworkTopologyStrategy',
>   'IAD': '3',
>   'ORD': '3'
> };
> USE blackbook;
> CREATE TABLE bounces (
>   domainid text,
>   address text,
>   message text,
>   "timestamp" bigint,
>   PRIMARY KEY (domainid, address)
> ) WITH
>   bloom_filter_fp_chance=0.10 AND
>   caching='KEYS_ONLY' AND
>   comment='' AND
>   dclocal_read_repair_chance=0.10 AND
>   gc_grace_seconds=864000 AND
>   index_interval=128 AND
>   read_repair_chance=0.00 AND
>   populate_io_cache_on_flush='false' AND
>   default_time_to_live=0 AND
>   speculative_retry='99.0PERCENTILE' AND
>   memtable_flush_period_in_ms=0 AND
>   compaction={'class': 'LeveledCompactionStrategy'} AND
>   compression={'sstable_compression': 'LZ4Compressor'};
> {code}
> h5. Use case
> Each row (defined by a domainid) can have many many columns (bounce entries) 
> so rows can get pretty wide. In practice, most of the rows are not that bi

[jira] [Commented] (CASSANDRA-9045) Deleted columns are resurrected after repair in wide rows

2015-06-11 Thread Roman Tkachenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14582508#comment-14582508
 ] 

Roman Tkachenko commented on CASSANDRA-9045:


Hey guys.

So I implemented writes at EACH_QUORUM several weeks ago and have been 
monitoring, but it does not look like it fixed the issue.

Check this out. I pulled the logs from both datacenters for one of reappeared 
entries and correlated them with our repairs schedule. Reads (GETs) are done at 
LOCAL_QUORUM.

{code}
Time                DC  RESULT
==============================
2015-06-04T11:31:38 DC2 GET 200  --> record is present in both DCs
2015-06-04T15:25:01 DC1 GET 200
2015-06-04T19:24:06 DC1 DELETE 200  --> deleted in DC1
2015-06-04T19:45:16 DC2 GET 404  --> record disappeared from both DCs...
2015-06-05T07:10:32 DC1 GET 404
2015-06-05T10:16:28 DC2 GET 200  --> ... but somehow appeared back in 
DC2 (no POST requests happened for this record)
2015-06-07T18:59:57 DC1 GET 404
2AM NODE IN DC2 REPAIR
4AM NODE IN DC1 REPAIR
2015-06-08T08:27:36 DC1 GET 200  --> record is present in both DCs 
again, looks like DC2 "repaired" DC1
2015-06-09T15:29:50 DC2 GET 200
2015-06-09T16:05:30 DC1 DELETE 200
2015-06-09T16:05:30 DC1 GET 404
2015-06-09T21:08:24 DC2 GET 404
{code}

So the question is how the record managed to appear back in DC2... Do you have 
any suggestions on how we can investigate this?

Thanks,
Roman

> Deleted columns are resurrected after repair in wide rows
> -
>
> Key: CASSANDRA-9045
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9045
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Roman Tkachenko
>Assignee: Marcus Eriksson
>Priority: Critical
> Fix For: 2.0.x
>
> Attachments: 9045-debug-tracing.txt, another.txt, 
> apache-cassandra-2.0.13-SNAPSHOT.jar, cqlsh.txt, debug.txt, inconsistency.txt
>
>
> Hey guys,
> After almost a week of researching the issue and trying out multiple things 
> with (almost) no luck I was suggested (on the user@cass list) to file a 
> report here.
> h5. Setup
> Cassandra 2.0.13 (we had the issue with 2.0.10 as well and upgraded to see if 
> it goes away)
> Multi datacenter 12+6 nodes cluster.
> h5. Schema
> {code}
> cqlsh> describe keyspace blackbook;
> CREATE KEYSPACE blackbook WITH replication = {
>   'class': 'NetworkTopologyStrategy',
>   'IAD': '3',
>   'ORD': '3'
> };
> USE blackbook;
> CREATE TABLE bounces (
>   domainid text,
>   address text,
>   message text,
>   "timestamp" bigint,
>   PRIMARY KEY (domainid, address)
> ) WITH
>   bloom_filter_fp_chance=0.10 AND
>   caching='KEYS_ONLY' AND
>   comment='' AND
>   dclocal_read_repair_chance=0.10 AND
>   gc_grace_seconds=864000 AND
>   index_interval=128 AND
>   read_repair_chance=0.00 AND
>   populate_io_cache_on_flush='false' AND
>   default_time_to_live=0 AND
>   speculative_retry='99.0PERCENTILE' AND
>   memtable_flush_period_in_ms=0 AND
>   compaction={'class': 'LeveledCompactionStrategy'} AND
>   compression={'sstable_compression': 'LZ4Compressor'};
> {code}
> h5. Use case
> Each row (defined by a domainid) can have many many columns (bounce entries) 
> so rows can get pretty wide. In practice, most of the rows are not that big 
> but some of them contain hundreds of thousands and even millions of columns.
> Columns are not TTL'ed but can be deleted using the following CQL3 statement:
> {code}
> delete from bounces where domainid = 'domain.com' and address = 
> 'al...@example.com';
> {code}
> All queries are performed using LOCAL_QUORUM CL.
> h5. Problem
> We weren't very diligent about running repairs on the cluster initially, but 
> shorty after we started doing it we noticed that some of previously deleted 
> columns (bounce entries) are there again, as if tombstones have disappeared.
> I have run this test multiple times via cqlsh, on the row of the customer who 
> originally reported the issue:
> * delete an entry
> * verify it's not returned even with CL=ALL
> * run repair on nodes that own this row's key
> * the columns reappear and are returned even with CL=ALL
> I tried the same test on another row with much less data and everything was 
> correctly deleted and didn't reappear after repair.
> h5. Other steps I've taken so far
> Made sure NTP is running on all servers and clocks are synchronized.
> Increased gc_grace_seconds to 100 days, ran full repair (on the affected 
> keyspace) on all nodes, then changed it back to the default 10 days again. 
> Didn't help.
> Performed one more test. Updated one of the resurrected columns, then deleted 
> it and ran repair again. This time the updated version of the col

[jira] [Assigned] (CASSANDRA-9585) Make "truncate table X" an alias for "truncate X"

2015-06-11 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams reassigned CASSANDRA-9585:
---

Assignee: Tyler Hobbs

> Make "truncate table X" an alias for "truncate X"
> -
>
> Key: CASSANDRA-9585
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9585
> Project: Cassandra
>  Issue Type: Bug
>Reporter: J.B. Langston
>Assignee: Tyler Hobbs
>Priority: Trivial
> Fix For: 2.1.x
>
>
> CQL syntax is inconsistent: it's "drop table X" but "truncate X". It used to 
> trip me up all the time until I wrapped my brain around this inconsistency 
> and it still triggers a tiny bout of OCD every time I type it.  I realize 
> it's too late to change it,  but why not have both? "truncate table X" is 
> also consistent with the syntax in SQL.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9585) Make "truncate table X" an alias for "truncate X"

2015-06-11 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams updated CASSANDRA-9585:

Fix Version/s: 2.1.x

> Make "truncate table X" an alias for "truncate X"
> -
>
> Key: CASSANDRA-9585
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9585
> Project: Cassandra
>  Issue Type: Bug
>Reporter: J.B. Langston
>Priority: Trivial
> Fix For: 2.1.x
>
>
> CQL syntax is inconsistent: it's "drop table X" but "truncate X". It used to 
> trip me up all the time until I wrapped my brain around this inconsistency 
> and it still triggers a tiny bout of OCD every time I type it.  I realize 
> it's too late to change it,  but why not have both? "truncate table X" is 
> also consistent with the syntax in SQL.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9585) Make "truncate table X" an alias for "truncate X"

2015-06-11 Thread J.B. Langston (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

J.B. Langston updated CASSANDRA-9585:
-
Priority: Trivial  (was: Major)

> Make "truncate table X" an alias for "truncate X"
> -
>
> Key: CASSANDRA-9585
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9585
> Project: Cassandra
>  Issue Type: Bug
>Reporter: J.B. Langston
>Priority: Trivial
>
> CQL syntax is inconsistent: it's "drop table X" but "truncate X". It used to 
> trip me up all the time until I wrapped my brain around this inconsistency 
> and it still triggers a tiny bout of OCD every time I type it.  I realize 
> it's too late to change it,  but why not have both? "truncate table X" is 
> also consistent with the syntax in SQL.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-9585) Make "truncate table X" an alias for "truncate X"

2015-06-11 Thread J.B. Langston (JIRA)
J.B. Langston created CASSANDRA-9585:


 Summary: Make "truncate table X" an alias for "truncate X"
 Key: CASSANDRA-9585
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9585
 Project: Cassandra
  Issue Type: Bug
Reporter: J.B. Langston


CQL syntax is inconsistent: it's "drop table X" but "truncate X". It used to 
trip me up all the time until I wrapped my brain around this inconsistency and 
it still triggers a tiny bout of OCD every time I type it.  I realize it's too 
late to change it,  but why not have both? "truncate table X" is also 
consistent with the syntax in SQL.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7556) Update cqlsh for UDFs

2015-06-11 Thread Robert Stupp (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14582439#comment-14582439
 ] 

Robert Stupp commented on CASSANDRA-7556:
-

A lot of [dtests fail on 
cassci|http://cassci.datastax.com/view/Dev/view/snazy/job/snazy-7556v3-udf-cqlsh-dtest/2/testReport/]

With python-driver 2.6.0c1, dtests are fine - except one:
{code}
==
FAIL: test_client_warnings (cqlsh_tests.TestCqlsh)
--
Traceback (most recent call last):
  File "/Users/snazy/devel/cassandra/dtest/tools.py", line 192, in wrapped
f(obj)
  File "/Users/snazy/devel/cassandra/dtest/cqlsh_tests/cqlsh_tests.py", line 
870, in test_client_warnings
Unlogged batch covering 2 partitions detected against table 
[client_warnings.test]. You should use a logged batch for atomicity, or 
asynchronous writes for performance.""")
  File "/Users/snazy/devel/cassandra/dtest/cqlsh_tests/cqlsh_tests.py", line 
446, in verify_output
self.assertTrue(expected in output, "Output \n {%s} \n doesn't contain 
expected\n {%s}" % (output, expected))
AssertionError: Output 
 {} 
 doesn't contain expected
 {
Warnings :
Unlogged batch covering 2 partitions detected against table 
[client_warnings.test]. You should use a logged batch for atomicity, or 
asynchronous writes for performance.}
 >> begin captured stdout << -
[node1 ERROR] objc[90155]: Class JavaLaunchHelper is implemented in both 
/Library/Java/JavaVirtualMachines/jdk1.8.0_40.jdk/Contents/Home/bin/java and 
/Library/Java/JavaVirtualMachines/jdk1.8.0_40.jdk/Contents/Home/jre/lib/libinstrument.dylib.
 One of the two will be used. Which one is undefined.

- >> end captured stdout << --
{code}


> Update cqlsh for UDFs
> -
>
> Key: CASSANDRA-7556
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7556
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Tyler Hobbs
>Assignee: Robert Stupp
>  Labels: udf
> Fix For: 2.2.x
>
> Attachments: 7556-2.txt, 7556.txt
>
>
> Once CASSANDRA-7395 and CASSANDRA-7526 are complete, we'll want to add cqlsh 
> support for user defined functions.
> This will include:
> * Completion for {{CREATE FUNCTION}} and {{DROP FUNCTION}}
> * Tolerating (almost) arbitrary text inside function bodies
> * {{DESCRIBE TYPE}} support
> * Including types in {{DESCRIBE KEYSPACE}} output
> * Possibly {{GRANT}} completion for any new privileges



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[3/3] cassandra git commit: Merge branch 'cassandra-2.2' into trunk

2015-06-11 Thread samt
Merge branch 'cassandra-2.2' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/887bbc14
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/887bbc14
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/887bbc14

Branch: refs/heads/trunk
Commit: 887bbc141e6c6b26fafb857ca21c00c79ba1e4cf
Parents: 2c360e6 b61da9b
Author: Sam Tunnicliffe 
Authored: Thu Jun 11 20:16:32 2015 +0100
Committer: Sam Tunnicliffe 
Committed: Thu Jun 11 20:16:32 2015 +0100

--
 CHANGES.txt| 1 +
 src/java/org/apache/cassandra/service/CassandraDaemon.java | 4 ++--
 src/java/org/apache/cassandra/service/StartupChecks.java   | 9 +
 3 files changed, 4 insertions(+), 10 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/887bbc14/CHANGES.txt
--
diff --cc CHANGES.txt
index 27cc70c,020cb46..b80f272
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,13 -1,5 +1,14 @@@
 +3.0:
 + * Make file buffer cache independent of paths being read (CASSANDRA-8897)
 + * Remove deprecated legacy Hadoop code (CASSANDRA-9353)
 + * Decommissioned nodes will not rejoin the cluster (CASSANDRA-8801)
 + * Change gossip stabilization to use endpoit size (CASSANDRA-9401)
 + * Change default garbage collector to G1 (CASSANDRA-7486)
 + * Populate TokenMetadata early during startup (CASSANDRA-9317)
 +
 +
  2.2
+  * Mlockall before opening system sstables & remove boot_without_jna option (CASSANDRA-9573)
   * Add functions to convert timeuuid to date or time, deprecate dateOf and unixTimestampOf (CASSANDRA-9229)
   * Make sure we cancel non-compacting sstables from LifecycleTransaction (CASSANDRA-9566)
   * Fix deprecated repair JMX API (CASSANDRA-9570)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/887bbc14/src/java/org/apache/cassandra/service/CassandraDaemon.java
--



[2/3] cassandra git commit: Mlock before opening system keyspace

2015-06-11 Thread samt
Mlock before opening system keyspace


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b61da9b5
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b61da9b5
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b61da9b5

Branch: refs/heads/trunk
Commit: b61da9b56956929d9627e035b0d232b6b38bba91
Parents: cab33a6
Author: Sam Tunnicliffe 
Authored: Thu Jun 11 16:51:25 2015 +0100
Committer: Sam Tunnicliffe 
Committed: Thu Jun 11 20:12:59 2015 +0100

--
 CHANGES.txt| 1 +
 src/java/org/apache/cassandra/service/CassandraDaemon.java | 4 ++--
 src/java/org/apache/cassandra/service/StartupChecks.java   | 9 +
 3 files changed, 4 insertions(+), 10 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/b61da9b5/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 72da59f..020cb46 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.2
+ * Mlockall before opening system sstables & remove boot_without_jna option (CASSANDRA-9573)
 * Add functions to convert timeuuid to date or time, deprecate dateOf and unixTimestampOf (CASSANDRA-9229)
 * Make sure we cancel non-compacting sstables from LifecycleTransaction (CASSANDRA-9566)
  * Fix deprecated repair JMX API (CASSANDRA-9570)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/b61da9b5/src/java/org/apache/cassandra/service/CassandraDaemon.java
--
diff --git a/src/java/org/apache/cassandra/service/CassandraDaemon.java 
b/src/java/org/apache/cassandra/service/CassandraDaemon.java
index c1b4ad6..b8beafd 100644
--- a/src/java/org/apache/cassandra/service/CassandraDaemon.java
+++ b/src/java/org/apache/cassandra/service/CassandraDaemon.java
@@ -123,6 +123,8 @@ public class CassandraDaemon
 {
 logSystemInfo();
 
+CLibrary.tryMlockall();
+
 try
 {
 startupChecks.verify();
@@ -132,8 +134,6 @@ public class CassandraDaemon
 exitOrFail(e.returnCode, e.getMessage(), e.getCause());
 }
 
-CLibrary.tryMlockall();
-
 try
 {
 SystemKeyspace.snapshotOnVersionChange();

http://git-wip-us.apache.org/repos/asf/cassandra/blob/b61da9b5/src/java/org/apache/cassandra/service/StartupChecks.java
--
diff --git a/src/java/org/apache/cassandra/service/StartupChecks.java 
b/src/java/org/apache/cassandra/service/StartupChecks.java
index b6f233f..2d4686b 100644
--- a/src/java/org/apache/cassandra/service/StartupChecks.java
+++ b/src/java/org/apache/cassandra/service/StartupChecks.java
@@ -166,15 +166,8 @@ public class StartupChecks
 public void execute() throws StartupException
 {
     // Fail-fast if JNA is not available or failing to initialize properly
-    // except with -Dcassandra.boot_without_jna=true. See CASSANDRA-6575.
     if (!CLibrary.jnaAvailable())
-    {
-        boolean jnaRequired = !Boolean.getBoolean("cassandra.boot_without_jna");
-
-        if (jnaRequired)
-            throw new StartupException(3, "JNA failing to initialize properly. " +
-                                          "Use -Dcassandra.boot_without_jna=true to bootstrap even so.");
-    }
+        throw new StartupException(3, "JNA failing to initialize properly. ");
 }
 };
 



[1/3] cassandra git commit: Mlock before opening system keyspace

2015-06-11 Thread samt
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.2 cab33a609 -> b61da9b56
  refs/heads/trunk 2c360e60c -> 887bbc141


Mlock before opening system keyspace


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b61da9b5
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b61da9b5
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b61da9b5

Branch: refs/heads/cassandra-2.2
Commit: b61da9b56956929d9627e035b0d232b6b38bba91
Parents: cab33a6
Author: Sam Tunnicliffe 
Authored: Thu Jun 11 16:51:25 2015 +0100
Committer: Sam Tunnicliffe 
Committed: Thu Jun 11 20:12:59 2015 +0100

--
 CHANGES.txt| 1 +
 src/java/org/apache/cassandra/service/CassandraDaemon.java | 4 ++--
 src/java/org/apache/cassandra/service/StartupChecks.java   | 9 +
 3 files changed, 4 insertions(+), 10 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/b61da9b5/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 72da59f..020cb46 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.2
+ * Mlockall before opening system sstables & remove boot_without_jna option (CASSANDRA-9573)
 * Add functions to convert timeuuid to date or time, deprecate dateOf and unixTimestampOf (CASSANDRA-9229)
 * Make sure we cancel non-compacting sstables from LifecycleTransaction (CASSANDRA-9566)
  * Fix deprecated repair JMX API (CASSANDRA-9570)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/b61da9b5/src/java/org/apache/cassandra/service/CassandraDaemon.java
--
diff --git a/src/java/org/apache/cassandra/service/CassandraDaemon.java 
b/src/java/org/apache/cassandra/service/CassandraDaemon.java
index c1b4ad6..b8beafd 100644
--- a/src/java/org/apache/cassandra/service/CassandraDaemon.java
+++ b/src/java/org/apache/cassandra/service/CassandraDaemon.java
@@ -123,6 +123,8 @@ public class CassandraDaemon
 {
 logSystemInfo();
 
+CLibrary.tryMlockall();
+
 try
 {
 startupChecks.verify();
@@ -132,8 +134,6 @@ public class CassandraDaemon
 exitOrFail(e.returnCode, e.getMessage(), e.getCause());
 }
 
-CLibrary.tryMlockall();
-
 try
 {
 SystemKeyspace.snapshotOnVersionChange();

http://git-wip-us.apache.org/repos/asf/cassandra/blob/b61da9b5/src/java/org/apache/cassandra/service/StartupChecks.java
--
diff --git a/src/java/org/apache/cassandra/service/StartupChecks.java 
b/src/java/org/apache/cassandra/service/StartupChecks.java
index b6f233f..2d4686b 100644
--- a/src/java/org/apache/cassandra/service/StartupChecks.java
+++ b/src/java/org/apache/cassandra/service/StartupChecks.java
@@ -166,15 +166,8 @@ public class StartupChecks
 public void execute() throws StartupException
 {
     // Fail-fast if JNA is not available or failing to initialize properly
-    // except with -Dcassandra.boot_without_jna=true. See CASSANDRA-6575.
     if (!CLibrary.jnaAvailable())
-    {
-        boolean jnaRequired = !Boolean.getBoolean("cassandra.boot_without_jna");
-
-        if (jnaRequired)
-            throw new StartupException(3, "JNA failing to initialize properly. " +
-                                          "Use -Dcassandra.boot_without_jna=true to bootstrap even so.");
-    }
+        throw new StartupException(3, "JNA failing to initialize properly. ");
 }
 };
 



[jira] [Commented] (CASSANDRA-9573) OOM when loading sstables (system.hints)

2015-06-11 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14582394#comment-14582394
 ] 

Aleksey Yeschenko commented on CASSANDRA-9573:
--

+1

> OOM when loading sstables (system.hints)
> 
>
> Key: CASSANDRA-9573
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9573
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Alan Boudreault
>Assignee: Sam Tunnicliffe
>Priority: Critical
> Fix For: 2.2.0 rc2
>
> Attachments: 9573.txt, hs_err_pid11243.log, 
> java-hints-issue-2015-06-09.snapshot, system.log, yourkit.ss.tar.gz
>
>
> [~andrew.tolbert] discovered an issue while running endurance tests on 2.2. A 
> Node was not able to start and was killed by the OOM Killer.
> Briefly, Cassandra use an excessive amount of memory when loading compressed 
> sstables (off-heap?). We have initially seen the issue with system.hints 
> before knowing it was related to compression. system.hints use lz4 
> compression by default. If we have a sstable of, say 8-10G, Cassandra will be 
> killed by the OOM killer after 1-2 minutes. I can reproduce that bug 
> everytime locally. 
> * the issue also happens if we have 10G of data splitted in 13MB sstables.
> * I can reproduce the issue if I put a lot of data in the system.hints table.
> * I cannot reproduce the issue with a standard table using the same 
> compression (LZ4). Something seems to be different when it's hints?
> You wont see anything in the node system.log but you'll see this in 
> /var/log/syslog.log:
> {code}
> Out of memory: Kill process 30777 (java) score 600 or sacrifice child
> {code}
> The issue has been introduced in this commit but is not related to the 
> performance issue in CASSANDRA-9240: 
> https://github.com/apache/cassandra/commit/aedce5fc6ba46ca734e91190cfaaeb23ba47a846
> Here is the core dump and some yourkit snapshots in attachments. I am not 
> sure you will be able to get useful information from them.
> core dump: http://dl.alanb.ca/core.tar.gz
> Not sure if this is related, but all dumps and snapshot points to 
> EstimatedHistogramReservoir ... and we can see many 
> javax.management.InstanceAlreadyExistsException: 
> org.apache.cassandra.metrics:... exceptions in system.log before it hangs 
> then crash.
> To reproduce the issue: 
> 1. created a cluster of 3 nodes
> 2. start the whole cluster
> 3. shutdown node2 and node3
> 4. writes 10-15G of data on node1 with replication factor 3. You should see a 
> lot of hints.
> 5. stop node1
> 6. start node2 and node3
> 7. start node1, you should OOM.
> //cc [~tjake] [~benedict] [~andrew.tolbert]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9565) 'WITH WITH' in alter keyspace statements causes NPE

2015-06-11 Thread Jim Witschey (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Witschey updated CASSANDRA-9565:

Assignee: Benjamin Lerer

> 'WITH WITH' in alter keyspace statements causes NPE
> ---
>
> Key: CASSANDRA-9565
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9565
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Jim Witschey
>Assignee: Benjamin Lerer
>
> Running any of these statements:
> {code}
> ALTER KEYSPACE WITH WITH DURABLE_WRITES = true;
> ALTER KEYSPACE ks WITH WITH DURABLE_WRITES = true;
> CREATE KEYSPACE WITH WITH DURABLE_WRITES = true;
> CREATE KEYSPACE ks WITH WITH DURABLE_WRITES = true;
> {code}
> Fails, raising a {{SyntaxException}} and giving a {{NullPointerException}} as 
> the reason for failure. This happens in all versions I tried, including 
> 2.0.15, 2.1.5, and HEAD on cassandra-2.0, cassandra-2.1, cassandra-2.2, and 
> trunk.
> EDIT: A dtest is 
> [here|https://github.com/mambocab/cassandra-dtest/commit/da3785e25cce505183e0ebc8dd21340f3a3ea3a4#diff-dcb0fc3aff201fd7eeea6cbf1181f921R5300],
>  but it would probably be more appropriate to test with unit tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9573) OOM when loading sstables (system.hints)

2015-06-11 Thread Sam Tunnicliffe (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14582393#comment-14582393
 ] 

Sam Tunnicliffe commented on CASSANDRA-9573:


[unit|http://cassci.datastax.com/view/Dev/view/beobal/job/beobal-9573-testall/1/testReport/] 
and 
[dtests|http://cassci.datastax.com/view/Dev/view/beobal/job/beobal-9573-dtest/1/testReport/] 
look reasonable; the only unexpected failure is in the bootstrap dtest, and it 
looks unrelated to me.

> OOM when loading sstables (system.hints)
> 
>
> Key: CASSANDRA-9573
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9573
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Alan Boudreault
>Assignee: Sam Tunnicliffe
>Priority: Critical
> Fix For: 2.2.0 rc2
>
> Attachments: 9573.txt, hs_err_pid11243.log, 
> java-hints-issue-2015-06-09.snapshot, system.log, yourkit.ss.tar.gz
>
>
> [~andrew.tolbert] discovered an issue while running endurance tests on 2.2. A 
> Node was not able to start and was killed by the OOM Killer.
> Briefly, Cassandra use an excessive amount of memory when loading compressed 
> sstables (off-heap?). We have initially seen the issue with system.hints 
> before knowing it was related to compression. system.hints use lz4 
> compression by default. If we have a sstable of, say 8-10G, Cassandra will be 
> killed by the OOM killer after 1-2 minutes. I can reproduce that bug 
> everytime locally. 
> * the issue also happens if we have 10G of data splitted in 13MB sstables.
> * I can reproduce the issue if I put a lot of data in the system.hints table.
> * I cannot reproduce the issue with a standard table using the same 
> compression (LZ4). Something seems to be different when it's hints?
> You wont see anything in the node system.log but you'll see this in 
> /var/log/syslog.log:
> {code}
> Out of memory: Kill process 30777 (java) score 600 or sacrifice child
> {code}
> The issue has been introduced in this commit but is not related to the 
> performance issue in CASSANDRA-9240: 
> https://github.com/apache/cassandra/commit/aedce5fc6ba46ca734e91190cfaaeb23ba47a846
> Here is the core dump and some yourkit snapshots in attachments. I am not 
> sure you will be able to get useful information from them.
> core dump: http://dl.alanb.ca/core.tar.gz
> Not sure if this is related, but all dumps and snapshot points to 
> EstimatedHistogramReservoir ... and we can see many 
> javax.management.InstanceAlreadyExistsException: 
> org.apache.cassandra.metrics:... exceptions in system.log before it hangs 
> then crash.
> To reproduce the issue: 
> 1. created a cluster of 3 nodes
> 2. start the whole cluster
> 3. shutdown node2 and node3
> 4. writes 10-15G of data on node1 with replication factor 3. You should see a 
> lot of hints.
> 5. stop node1
> 6. start node2 and node3
> 7. start node1, you should OOM.
> //cc [~tjake] [~benedict] [~andrew.tolbert]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-9581) pig-tests spend time waiting on /dev/random for SecureRandom

2015-06-11 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14582379#comment-14582379
 ] 

Ariel Weisberg edited comment on CASSANDRA-9581 at 6/11/15 7:05 PM:


This seems to shave anywhere between 1 and 4 minutes off of pig-test. At its 
fastest, pig-test is 4 minutes 30 seconds, so it's worth the one line IMO. We also 
benefit down the road if any other unit tests end up needing seed data.

You can see the JDK plumbing behind this here
http://grepcode.com/file/repository.grepcode.com/java/root/jdk/openjdk/8u40-b25/sun/security/provider/SeedGenerator.java?av=f#91
http://grepcode.com/file/repository.grepcode.com/java/root/jdk/openjdk/8u40-b25/sun/security/provider/SunEntries.java?av=f#309

Proposed change 
https://github.com/apache/cassandra/compare/trunk...aweisberg:C-9581


was (Author: aweisberg):
This seems to shave anywhere between 1 and 4 minutes off of pig-test. At it's 
fastest pig-test is 4 minutes 30 seconds so worth the one line IMO. We also 
benefit down the road if any other unit tests end up needing to get seed data.

> pig-tests spend time waiting on /dev/random for SecureRandom
> 
>
> Key: CASSANDRA-9581
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9581
> Project: Cassandra
>  Issue Type: Test
>Reporter: Ariel Weisberg
>Assignee: Ariel Weisberg
>
> We don't need secure random numbers (for unit tests) so waiting for entropy 
> doesn't make much sense. Luckily Java makes it easy to point to /dev/urandom 
> for entropy. It also transparently handles it correctly on Windows.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-7556) Update cqlsh for UDFs

2015-06-11 Thread Robert Stupp (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14582358#comment-14582358
 ] 

Robert Stupp edited comment on CASSANDRA-7556 at 6/11/15 6:44 PM:
--

Reverted my change and fixed the NPE (one-line-fix in {{Cql.g}}).

EDIT: force-pushed my updated branches for review. (cassci running)


was (Author: snazy):
Reverted my change and fixed the NPE (one-line-fix in {{Cql.g}}).

> Update cqlsh for UDFs
> -
>
> Key: CASSANDRA-7556
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7556
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Tyler Hobbs
>Assignee: Robert Stupp
>  Labels: udf
> Fix For: 2.2.x
>
> Attachments: 7556-2.txt, 7556.txt
>
>
> Once CASSANDRA-7395 and CASSANDRA-7526 are complete, we'll want to add cqlsh 
> support for user defined functions.
> This will include:
> * Completion for {{CREATE FUNCTION}} and {{DROP FUNCTION}}
> * Tolerating (almost) arbitrary text inside function bodies
> * {{DESCRIBE TYPE}} support
> * Including types in {{DESCRIBE KEYSPACE}} output
> * Possibly {{GRANT}} completion for any new privileges



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7556) Update cqlsh for UDFs

2015-06-11 Thread Robert Stupp (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14582358#comment-14582358
 ] 

Robert Stupp commented on CASSANDRA-7556:
-

Reverted my change and fixed the NPE (one-line-fix in {{Cql.g}}).

> Update cqlsh for UDFs
> -
>
> Key: CASSANDRA-7556
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7556
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Tyler Hobbs
>Assignee: Robert Stupp
>  Labels: udf
> Fix For: 2.2.x
>
> Attachments: 7556-2.txt, 7556.txt
>
>
> Once CASSANDRA-7395 and CASSANDRA-7526 are complete, we'll want to add cqlsh 
> support for user defined functions.
> This will include:
> * Completion for {{CREATE FUNCTION}} and {{DROP FUNCTION}}
> * Tolerating (almost) arbitrary text inside function bodies
> * {{DESCRIBE TYPE}} support
> * Including types in {{DESCRIBE KEYSPACE}} output
> * Possibly {{GRANT}} completion for any new privileges



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-9584) Decommissioning a node on Windows sends the wrong schema change event

2015-06-11 Thread Kishan Karunaratne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14582318#comment-14582318
 ] 

Kishan Karunaratne edited comment on CASSANDRA-9584 at 6/11/15 6:25 PM:


When I stopped and started the cluster via CCM, the decommissioned node 
returned to the ring. Is this expected behavior?

EDIT: Nevermind, it's expected behavior, fixed in 
https://issues.apache.org/jira/browse/CASSANDRA-8801


was (Author: kishkaru):
When I stopped and started the cluster via CCM, the decommissioned node 
returned to the ring. Is this expected behavior?

> Decommissioning a node on Windows sends the wrong schema change event
> -
>
> Key: CASSANDRA-9584
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9584
> Project: Cassandra
>  Issue Type: Bug
> Environment: C* 2.2.0-rc1 | python-driver 2.6.0-rc1 | Windows Server 
> 2012 R2 64-bit
>Reporter: Kishan Karunaratne
>Assignee: Joshua McKenzie
> Fix For: 2.2.x
>
>
> Decommissioning a node on Windows sends the wrong schema change event:
> {noformat}
> cassandra.connection: DEBUG: Message pushed from server: 
>  _args={'change_type': u'DOWN', 'address': ('127.0.0.2', 9042)}, stream_id=-1)>
> {noformat}
> On Linux I get the correct event:
> {noformat}
> cassandra.connection: DEBUG: Message pushed from server: 
>  event_args={'change_type': u'REMOVED_NODE', 'address': ('127.0.0.2', 9042)}, 
> stream_id=-1)>
> {noformat}
> We are using ccmlib node.py.decommission() which calls nodetool decommission:
> {noformat}
> def decommission(self):
> self.nodetool("decommission")
> self.status = Status.DECOMMISIONNED
> self._update_config()
> {noformat}
> Interestingly, it does seem to work (correctly?) on CCM CLI:
> {noformat}
> PS C:\Users\Administrator> ccm status
> Cluster: '2.2'
> --
> node1: UP
> node3: UP
> node2: UP
> PS C:\Users\Administrator> ccm node1 ring
> Starting NodeTool
> Datacenter: datacenter1
> ==
> AddressRackStatus State   LoadOwns
> Token
>   
>   3074457345618258602
> 127.0.0.1  rack1   Up Normal  62.43 KB?   
> -9223372036854775808
> 127.0.0.2  rack1   Up Normal  104.87 KB   ?   
> -3074457345618258603
> 127.0.0.3  rack1   Up Normal  83.67 KB?   
> 3074457345618258602
>   Note: Non-system keyspaces don't have the same replication settings, 
> effective ownership information is meaningless
> PS C:\Users\Administrator> ccm node2 decommission
> PS C:\Users\Administrator> ccm status
> Cluster: '2.2'
> --
> node1: UP
> node3: UP
> node2: DECOMMISIONNED
> PS C:\Users\Administrator> ccm node1 ring
> Starting NodeTool
> Datacenter: datacenter1
> ==
> AddressRackStatus State   LoadOwns
> Token
>   
>   3074457345618258602
> 127.0.0.1  rack1   Up Normal  67.11 KB?   
> -9223372036854775808
> 127.0.0.3  rack1   Up Normal  88.35 KB?   
> 3074457345618258602
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9584) Decommissioning a node on Windows sends the wrong schema change event

2015-06-11 Thread Kishan Karunaratne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14582318#comment-14582318
 ] 

Kishan Karunaratne commented on CASSANDRA-9584:
---

When I stopped and started the cluster via CCM, the decommissioned node 
returned to the ring. Is this expected behavior?

> Decommissioning a node on Windows sends the wrong schema change event
> -
>
> Key: CASSANDRA-9584
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9584
> Project: Cassandra
>  Issue Type: Bug
> Environment: C* 2.2.0-rc1 | python-driver 2.6.0-rc1 | Windows Server 
> 2012 R2 64-bit
>Reporter: Kishan Karunaratne
>Assignee: Joshua McKenzie
> Fix For: 2.2.x
>
>
> Decommissioning a node on Windows sends the wrong schema change event:
> {noformat}
> cassandra.connection: DEBUG: Message pushed from server: 
>  _args={'change_type': u'DOWN', 'address': ('127.0.0.2', 9042)}, stream_id=-1)>
> {noformat}
> On Linux I get the correct event:
> {noformat}
> cassandra.connection: DEBUG: Message pushed from server: 
>  event_args={'change_type': u'REMOVED_NODE', 'address': ('127.0.0.2', 9042)}, 
> stream_id=-1)>
> {noformat}
> We are using ccmlib node.py.decommission() which calls nodetool decommission:
> {noformat}
> def decommission(self):
> self.nodetool("decommission")
> self.status = Status.DECOMMISIONNED
> self._update_config()
> {noformat}
> Interestingly, it does seem to work (correctly?) on CCM CLI:
> {noformat}
> PS C:\Users\Administrator> ccm status
> Cluster: '2.2'
> --
> node1: UP
> node3: UP
> node2: UP
> PS C:\Users\Administrator> ccm node1 ring
> Starting NodeTool
> Datacenter: datacenter1
> ==
> AddressRackStatus State   LoadOwns
> Token
>   
>   3074457345618258602
> 127.0.0.1  rack1   Up Normal  62.43 KB?   
> -9223372036854775808
> 127.0.0.2  rack1   Up Normal  104.87 KB   ?   
> -3074457345618258603
> 127.0.0.3  rack1   Up Normal  83.67 KB?   
> 3074457345618258602
>   Note: Non-system keyspaces don't have the same replication settings, 
> effective ownership information is meaningless
> PS C:\Users\Administrator> ccm node2 decommission
> PS C:\Users\Administrator> ccm status
> Cluster: '2.2'
> --
> node1: UP
> node3: UP
> node2: DECOMMISIONNED
> PS C:\Users\Administrator> ccm node1 ring
> Starting NodeTool
> Datacenter: datacenter1
> ==
> AddressRackStatus State   LoadOwns
> Token
>   
>   3074457345618258602
> 127.0.0.1  rack1   Up Normal  67.11 KB?   
> -9223372036854775808
> 127.0.0.3  rack1   Up Normal  88.35 KB?   
> 3074457345618258602
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9582) MarshalException after upgrading to 2.1.6

2015-06-11 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9582?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-9582:
---
Description: 
I've upgraded a node from 2.0.10 to 2.1.6. Before taking down the node, I've 
run nodetool upgradesstables and nodetool scrub.

When starting up the node with 2.1.6, I'm getting a MarshalException 
(stacktrace included below). For some reason, it seems that C* is trying to 
convert a text value from the column 'currencyCode' to a UUID, which it isn't.
I've had similar errors for two other columns as well, which I could work 
around by dropping the table, since it wasn't used anymore.

The only thing I could do was restoring a snapshot and starting up the old 
2.0.10 again.

The schema of the table (I've got only one table containing a column named 
'currencyCode') is:
{code}
CREATE TABLE "InvoiceItem" (
  key blob,
  column1 uuid,
  "currencyCode" text,
  description text,
  "priceGross" bigint,
  "priceNett" bigint,
  quantity varint,
  sku text,
  "unitPriceGross" bigint,
  "unitPriceNett" bigint,
  vat bigint,
  "vatRateBasisPoints" varint,
  PRIMARY KEY ((key), column1)
) WITH COMPACT STORAGE AND
  bloom_filter_fp_chance=0.01 AND
  caching='KEYS_ONLY' AND
  comment='' AND
  dclocal_read_repair_chance=0.00 AND
  gc_grace_seconds=864000 AND
  index_interval=128 AND
  read_repair_chance=1.00 AND
  replicate_on_write='true' AND
  populate_io_cache_on_flush='false' AND
  default_time_to_live=0 AND
  speculative_retry='99.0PERCENTILE' AND
  memtable_flush_period_in_ms=0 AND
  compaction={'class': 'SizeTieredCompactionStrategy'} AND
  compression={};{code}


The stack trace when starting up:
{code}
ERROR 13:51:57 Exception encountered during startup
org.apache.cassandra.serializers.MarshalException: unable to make version 1 
UUID from 'currencyCode'
at 
org.apache.cassandra.db.marshal.UUIDType.fromString(UUIDType.java:188) 
~[apache-cassandra-2.1.6.jar:2.1.6]
at 
org.apache.cassandra.db.marshal.AbstractCompositeType.fromString(AbstractCompositeType.java:242)
 ~[apache-cassandra-2.1.6.jar:2.1.6]
at 
org.apache.cassandra.config.ColumnDefinition.fromSchema(ColumnDefinition.java:397)
 ~[apache-cassandra-2.1.6.jar:2.1.6]
at 
org.apache.cassandra.config.CFMetaData.fromSchemaNoTriggers(CFMetaData.java:1750)
 ~[apache-cassandra-2.1.6.jar:2.1.6]
at 
org.apache.cassandra.config.CFMetaData.fromSchema(CFMetaData.java:1860) 
~[apache-cassandra-2.1.6.jar:2.1.6]
at 
org.apache.cassandra.config.KSMetaData.deserializeColumnFamilies(KSMetaData.java:321)
 ~[apache-cassandra-2.1.6.jar:2.1.6]
at 
org.apache.cassandra.config.KSMetaData.fromSchema(KSMetaData.java:302) 
~[apache-cassandra-2.1.6.jar:2.1.6]
at 
org.apache.cassandra.db.DefsTables.loadFromKeyspace(DefsTables.java:133) 
~[apache-cassandra-2.1.6.jar:2.1.6]
at 
org.apache.cassandra.config.DatabaseDescriptor.loadSchemas(DatabaseDescriptor.java:696)
 ~[apache-cassandra-2.1.6.jar:2.1.6]
at 
org.apache.cassandra.config.DatabaseDescriptor.loadSchemas(DatabaseDescriptor.java:672)
 ~[apache-cassandra-2.1.6.jar:2.1.6]
at 
org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:293) 
[apache-cassandra-2.1.6.jar:2.1.6]
at 
org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:536) 
[apache-cassandra-2.1.6.jar:2.1.6]
at 
org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:625) 
[apache-cassandra-2.1.6.jar:2.1.6]
Caused by: org.apache.cassandra.serializers.MarshalException: unable to coerce 
'currencyCode' to a  formatted date (long)
at 
org.apache.cassandra.serializers.TimestampSerializer.dateStringToTimestamp(TimestampSerializer.java:111)
 ~[apache-cassandra-2.1.6.jar:2.1.6]
at 
org.apache.cassandra.db.marshal.UUIDType.fromString(UUIDType.java:184) 
~[apache-cassandra-2.1.6.jar:2.1.6]
... 12 common frames omitted
Caused by: java.text.ParseException: Unable to parse the date: currencyCode
at 
org.apache.commons.lang3.time.DateUtils.parseDateWithLeniency(DateUtils.java:336)
 ~[commons-lang3-3.1.jar:3.1]
at 
org.apache.commons.lang3.time.DateUtils.parseDateStrictly(DateUtils.java:286) 
~[commons-lang3-3.1.jar:3.1]
at 
org.apache.cassandra.serializers.TimestampSerializer.dateStringToTimestamp(TimestampSerializer.java:107)
 ~[apache-cassandra-2.1.6.jar:2.1.6]
... 13 common frames omitted
org.apache.cassandra.serializers.MarshalException: unable to make version 1 
UUID from 'currencyCode'
at 
org.apache.cassandra.db.marshal.UUIDType.fromString(UUIDType.java:188)
at 
org.apache.cassandra.db.marshal.AbstractCompositeType.fromString(AbstractCompositeType.java:242)
at 
org.apache.cassandra.config.ColumnDefinition.fromSchema(ColumnDefinition.java:397)
at 
org.apache.cassandra.config.CFMetaData.fromSchemaNoTrigger

[jira] [Updated] (CASSANDRA-9577) Cassandra not performing GC on stale SStables after compaction

2015-06-11 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-9577:
---
Assignee: Marcus Eriksson

> Cassandra not performing GC on stale SStables after compaction
> --
>
> Key: CASSANDRA-9577
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9577
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: 2.0.12.200 / DSE 4.6.1.
>Reporter: Jeff Ferland
>Assignee: Marcus Eriksson
>
>   Space used (live), bytes:   878681716067
>   Space used (total), bytes: 2227857083852
> jbf@ip-10-0-2-98:/ebs/cassandra/data/trends/trends$ sudo lsof *-Data.db 
> COMMAND  PID  USER   FD   TYPE DEVICE SIZE/OFF  NODE NAME
> java    4473 cassandra  446r   REG   0,26  17582559172 39241 
> trends-trends-jb-144864-Data.db
> java    4473 cassandra  448r   REG   0,26     62040962 37431 
> trends-trends-jb-144731-Data.db
> java    4473 cassandra  449r   REG   0,26 829935047545 21150 
> trends-trends-jb-143581-Data.db
> java    4473 cassandra  452r   REG   0,26      8980406 39503 
> trends-trends-jb-144882-Data.db
> java    4473 cassandra  454r   REG   0,26      8980406 39503 
> trends-trends-jb-144882-Data.db
> java    4473 cassandra  462r   REG   0,26      9487703 39542 
> trends-trends-jb-144883-Data.db
> java    4473 cassandra  463r   REG   0,26     36158226 39629 
> trends-trends-jb-144889-Data.db
> java    4473 cassandra  468r   REG   0,26    105693505 39447 
> trends-trends-jb-144881-Data.db
> java    4473 cassandra  530r   REG   0,26  17582559172 39241 
> trends-trends-jb-144864-Data.db
> java    4473 cassandra  535r   REG   0,26    105693505 39447 
> trends-trends-jb-144881-Data.db
> java    4473 cassandra  542r   REG   0,26      9487703 39542 
> trends-trends-jb-144883-Data.db
> java    4473 cassandra  553u   REG   0,26   6431729821 39556 
> trends-trends-tmp-jb-144884-Data.db
> jbf@ip-10-0-2-98:/ebs/cassandra/data/trends/trends$ ls *-Data.db
> trends-trends-jb-142631-Data.db  trends-trends-jb-143562-Data.db  
> trends-trends-jb-143581-Data.db  trends-trends-jb-144731-Data.db  
> trends-trends-jb-144883-Data.db
> trends-trends-jb-142633-Data.db  trends-trends-jb-143563-Data.db  
> trends-trends-jb-144530-Data.db  trends-trends-jb-144864-Data.db  
> trends-trends-jb-144889-Data.db
> trends-trends-jb-143026-Data.db  trends-trends-jb-143564-Data.db  
> trends-trends-jb-144551-Data.db  trends-trends-jb-144881-Data.db  
> trends-trends-tmp-jb-144884-Data.db
> trends-trends-jb-143533-Data.db  trends-trends-jb-143578-Data.db  
> trends-trends-jb-144552-Data.db  trends-trends-jb-144882-Data.db
> jbf@ip-10-0-2-98:/ebs/cassandra/data/trends/trends$ cd -
> /mnt/cassandra/data/trends/trends
> jbf@ip-10-0-2-98:/mnt/cassandra/data/trends/trends$ sudo lsof * 
> jbf@ip-10-0-2-98:/mnt/cassandra/data/trends/trends$ ls *-Data.db
> trends-trends-jb-124502-Data.db  trends-trends-jb-141113-Data.db  
> trends-trends-jb-141377-Data.db  trends-trends-jb-141846-Data.db  
> trends-trends-jb-144890-Data.db
> trends-trends-jb-125457-Data.db  trends-trends-jb-141123-Data.db  
> trends-trends-jb-141391-Data.db  trends-trends-jb-141871-Data.db  
> trends-trends-jb-41121-Data.db
> trends-trends-jb-130016-Data.db  trends-trends-jb-141137-Data.db  
> trends-trends-jb-141538-Data.db  trends-trends-jb-141883-Data.db  
> trends-trends.trends_date_idx-jb-2100-Data.db
> trends-trends-jb-139563-Data.db  trends-trends-jb-141358-Data.db  
> trends-trends-jb-141806-Data.db  trends-trends-jb-142033-Data.db
> trends-trends-jb-141102-Data.db  trends-trends-jb-141363-Data.db  
> trends-trends-jb-141829-Data.db  trends-trends-jb-144553-Data.db
> Compaction started  INFO [CompactionExecutor:6661] 2015-06-05 14:02:36,515 
> CompactionTask.java (line 120) Compacting 
> [SSTableReader(path='/mnt/cassandra/data/trends/trends/trends-trends-jb-124502-Data.db'),
>  
> SSTableReader(path='/mnt/cassandra/data/trends/trends/trends-trends-jb-141358-Data.db'),
>  
> SSTableReader(path='/mnt/cassandra/data/trends/trends/trends-trends-jb-141883-Data.db'),
>  
> SSTableReader(path='/mnt/cassandra/data/trends/trends/trends-trends-jb-141846-Data.db'),
>  
> SSTableReader(path='/mnt/cassandra/data/trends/trends/trends-trends-jb-141871-Data.db'),
>  
> SSTableReader(path='/mnt/cassandra/data/trends/trends/trends-trends-jb-141391-Data.db'),
>  
> SSTableReader(path='/mnt/cassandra/data/trends/trends/trends-trends-jb-139563-Data.db'),
>  
> SSTableReader(path='/mnt/cassandra/data/trends/trends/trends-trends-jb-125457-Data.db'),
>  
> SSTableReader(path='/mnt/cassandra/data/trends/trends/trends-trends-jb-141806-Data.db'),
>  
> SSTableReader(path='/mnt/cassandra/data/trends/trends/trends-trends-jb-141102-Data.db'),
>  
> SSTableReader(path='/mnt/cassand

[jira] [Updated] (CASSANDRA-9584) Decommissioning a node on Windows sends the wrong schema change event

2015-06-11 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-9584:
---
Fix Version/s: 2.2.x
 Assignee: Joshua McKenzie

> Decommissioning a node on Windows sends the wrong schema change event
> -
>
> Key: CASSANDRA-9584
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9584
> Project: Cassandra
>  Issue Type: Bug
> Environment: C* 2.2.0-rc1 | python-driver 2.6.0-rc1 | Windows Server 
> 2012 R2 64-bit
>Reporter: Kishan Karunaratne
>Assignee: Joshua McKenzie
> Fix For: 2.2.x
>
>
> Decommissioning a node on Windows sends the wrong schema change event:
> {noformat}
> cassandra.connection: DEBUG: Message pushed from server: 
> <EventMessage(... event_args={'change_type': u'DOWN', 'address': ('127.0.0.2', 9042)}, stream_id=-1)>
> {noformat}
> On Linux I get the correct event:
> {noformat}
> cassandra.connection: DEBUG: Message pushed from server: 
> <EventMessage(... event_args={'change_type': u'REMOVED_NODE', 'address': ('127.0.0.2', 9042)}, 
> stream_id=-1)>
> {noformat}
> We are using ccmlib node.py.decommission() which calls nodetool decommission:
> {noformat}
> def decommission(self):
> self.nodetool("decommission")
> self.status = Status.DECOMMISIONNED
> self._update_config()
> {noformat}
> Interestingly, it does seem to work (correctly?) on CCM CLI:
> {noformat}
> PS C:\Users\Administrator> ccm status
> Cluster: '2.2'
> --
> node1: UP
> node3: UP
> node2: UP
> PS C:\Users\Administrator> ccm node1 ring
> Starting NodeTool
> Datacenter: datacenter1
> ==
> AddressRackStatus State   LoadOwns
> Token
>   
>   3074457345618258602
> 127.0.0.1  rack1   Up Normal  62.43 KB?   
> -9223372036854775808
> 127.0.0.2  rack1   Up Normal  104.87 KB   ?   
> -3074457345618258603
> 127.0.0.3  rack1   Up Normal  83.67 KB?   
> 3074457345618258602
>   Note: Non-system keyspaces don't have the same replication settings, 
> effective ownership information is meaningless
> PS C:\Users\Administrator> ccm node2 decommission
> PS C:\Users\Administrator> ccm status
> Cluster: '2.2'
> --
> node1: UP
> node3: UP
> node2: DECOMMISIONNED
> PS C:\Users\Administrator> ccm node1 ring
> Starting NodeTool
> Datacenter: datacenter1
> ==
> AddressRackStatus State   LoadOwns
> Token
>   
>   3074457345618258602
> 127.0.0.1  rack1   Up Normal  67.11 KB?   
> -9223372036854775808
> 127.0.0.3  rack1   Up Normal  88.35 KB?   
> 3074457345618258602
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-9584) Decommissioning a node on Windows sends the wrong schema change event

2015-06-11 Thread Kishan Karunaratne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14582302#comment-14582302
 ] 

Kishan Karunaratne edited comment on CASSANDRA-9584 at 6/11/15 6:07 PM:


After changing the default connect port (7199) to the Windows listen port 
(7100), I was able to use nodetool from the CLI directly to decommission the 
first node, 127.0.0.1. While CCM's node status was not updated, I was able to 
verify via nodetool status that the node no longer exists in the ring. However, 
the Java process still exists for the decommissioned node.  

Furthermore, I'm still able to query the decommissioned node through both CCM:
{noformat}
PS C:\Users\Administrator> ccm node1 nodetool status

Starting NodeTool
Datacenter: datacenter1

Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  AddressLoad   Tokens   OwnsHost ID  
 Rack
UN  127.0.0.2  62.48 KB   1?   
4cb1b80e-a83e-4754-9d1c-80afcfe1cc4a  rack1
UN  127.0.0.3  62.48 KB   1?   
d8dd050d-cf88-4c45-97c4-f785db3a1c56  rack1

Note: Non-system keyspaces don't have the same replication settings, effective 
ownership information is meaningless

PS C:\Users\Administrator> ccm node1 ring

Starting NodeTool

Datacenter: datacenter1
==
AddressRackStatus State   LoadOwnsToken
  
3074457345618258602
127.0.0.2  rack1   Up Normal  62.48 KB?   
-3074457345618258603
127.0.0.3  rack1   Up Normal  62.48 KB?   
3074457345618258602


  Note: Non-system keyspaces don't have the same replication settings, 
effective ownership information is meaningless
{noformat}

and through nodetool directly:
{noformat}
PS C:\Users\jenkins\git\cassandra\bin> .\nodetool -p 7100 -h 127.0.0.1 status
Starting NodeTool
Datacenter: datacenter1

Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  AddressLoad   Tokens   OwnsHost ID  
 Rack
UN  127.0.0.2  62.48 KB   1?   
4cb1b80e-a83e-4754-9d1c-80afcfe1cc4a  rack1
UN  127.0.0.3  62.48 KB   1?   
d8dd050d-cf88-4c45-97c4-f785db3a1c56  rack1

Note: Non-system keyspaces don't have the same replication settings, effective 
ownership information is meaningless

PS C:\Users\jenkins\git\cassandra\bin> .\nodetool -p 7100 -h 127.0.0.1 ring
Starting NodeTool

Datacenter: datacenter1
==
AddressRackStatus State   LoadOwnsToken
  
3074457345618258602
127.0.0.2  rack1   Up Normal  62.48 KB?   
-3074457345618258603
127.0.0.3  rack1   Up Normal  62.48 KB?   
3074457345618258602


  Note: Non-system keyspaces don't have the same replication settings, 
effective ownership information is meaningless
{noformat}

I wasn't able to decommission other nodes, as I get the following error:
{noformat}
PS C:\Users\jenkins\git\cassandra\bin> .\nodetool -p 7100 -h 127.0.0.2 
decommission
nodetool: Failed to connect to '127.0.0.2:7100' - ConnectException: 'Connection 
refused: connect'.
{noformat}


was (Author: kishkaru):
After changing the default connect port (7199) to the Windows listen port 
(7100), I was able to use nodetool from the CLI directly to decommission the 
first node, 127.0.0.1. While CCM's node status was not updated, I was able to 
verify via nodetool status that the node no longer exists in the ring. However, 
the Java process still exists for the decommissioned node.  

Furthermore, I'm still able to query the decommissioned node through both CCM:
{noformat}
PS C:\Users\Administrator> ccm node1 nodetool status

Starting NodeTool
Datacenter: datacenter1

Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  AddressLoad   Tokens   OwnsHost ID  
 Rack
UN  127.0.0.2  62.48 KB   1?   
4cb1b80e-a83e-4754-9d1c-80afcfe1cc4a  rack1
UN  127.0.0.3  62.48 KB   1?   
d8dd050d-cf88-4c45-97c4-f785db3a1c56  rack1

Note: Non-system keyspaces don't have the same replication settings, effective 
ownership information is meaningless

PS C:\Users\Administrator> ccm node1 ring

Starting NodeTool

Datacenter: datacenter1
==
AddressRackStatus State   LoadOwnsToken
  
3074457345618258602
127.0.0.2  rack1   Up Normal  62.48 KB?   
-3074457345618258603
127.0.0.3  rack1   Up Normal  62.48 KB?   
30744573456182586

[jira] [Comment Edited] (CASSANDRA-9584) Decommissioning a node on Windows sends the wrong schema change event

2015-06-11 Thread Kishan Karunaratne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14582302#comment-14582302
 ] 

Kishan Karunaratne edited comment on CASSANDRA-9584 at 6/11/15 6:06 PM:


After changing the default connect port (7199) to the Windows listen port 
(7100), I was able to use nodetool from the CLI directly to decommission the 
first node, 127.0.0.1. While CCM's node status was not updated, I was able to 
verify via nodetool status that the node no longer exists in the ring. However, 
the Java process still exists for the decommissioned node.  

Furthermore, I'm still able to query the decommissioned node through both CCM:
{noformat}
PS C:\Users\Administrator> ccm node1 nodetool status

Starting NodeTool
Datacenter: datacenter1

Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  AddressLoad   Tokens   OwnsHost ID  
 Rack
UN  127.0.0.2  62.48 KB   1?   
4cb1b80e-a83e-4754-9d1c-80afcfe1cc4a  rack1
UN  127.0.0.3  62.48 KB   1?   
d8dd050d-cf88-4c45-97c4-f785db3a1c56  rack1

Note: Non-system keyspaces don't have the same replication settings, effective 
ownership information is meaningless

PS C:\Users\Administrator> ccm node1 ring

Starting NodeTool

Datacenter: datacenter1
==
AddressRackStatus State   LoadOwnsToken
  
3074457345618258602
127.0.0.2  rack1   Up Normal  62.48 KB?   
-3074457345618258603
127.0.0.3  rack1   Up Normal  62.48 KB?   
3074457345618258602


  Note: Non-system keyspaces don't have the same replication settings, 
effective ownership information is meaningless
{noformat}

and through nodetool directly:
{noformat}
PS C:\Users\jenkins\git\cassandra\bin> .\nodetool -p 7100 -h 127.0.0.1 status
Starting NodeTool
Datacenter: datacenter1

Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  AddressLoad   Tokens   OwnsHost ID  
 Rack
UN  127.0.0.2  62.48 KB   1?   
4cb1b80e-a83e-4754-9d1c-80afcfe1cc4a  rack1
UN  127.0.0.3  62.48 KB   1?   
d8dd050d-cf88-4c45-97c4-f785db3a1c56  rack1

Note: Non-system keyspaces don't have the same replication settings, effective 
ownership information is meaningless

PS C:\Users\jenkins\git\cassandra\bin> .\nodetool -p 7100 -h 127.0.0.1 ring
Starting NodeTool

Datacenter: datacenter1
==
AddressRackStatus State   LoadOwnsToken
  
3074457345618258602
127.0.0.2  rack1   Up Normal  62.48 KB?   
-3074457345618258603
127.0.0.3  rack1   Up Normal  62.48 KB?   
3074457345618258602


  Note: Non-system keyspaces don't have the same replication settings, 
effective ownership information is meaningless
{noformat}

I wasn't able to decommission other nodes, as I get the following error:
{noformat}
nodetool: Failed to connect to '127.0.0.2:7100' - ConnectException: 'Connection 
refused: connect'.
{noformat}


was (Author: kishkaru):
After changing the default connect port (7199) to the Windows listen port 
(7100), I was able to use nodetool from the CLI directly to decommission the 
first node, 127.0.0.1. While CCM's node status was not updated, I was able to 
verify via nodetool status that the node no longer exists in the ring. However, 
the Java process still exists for the decommissioned node.  

Furthermore, I'm still able to query the decommissioned node through both CCM:
{noformat}
PS C:\Users\Administrator> ccm node1 nodetool status

Starting NodeTool
Datacenter: datacenter1

Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  AddressLoad   Tokens   OwnsHost ID  
 Rack
UN  127.0.0.2  62.48 KB   1?   
4cb1b80e-a83e-4754-9d1c-80afcfe1cc4a  rack1
UN  127.0.0.3  62.48 KB   1?   
d8dd050d-cf88-4c45-97c4-f785db3a1c56  rack1

Note: Non-system keyspaces don't have the same replication settings, effective 
ownership information is meaningless

PS C:\Users\Administrator> ccm node1 ring

Starting NodeTool

Datacenter: datacenter1
==
AddressRackStatus State   LoadOwnsToken
  
3074457345618258602
127.0.0.2  rack1   Up Normal  62.48 KB?   
-3074457345618258603
127.0.0.3  rack1   Up Normal  62.48 KB?   
3074457345618258602


  Note: Non-system keyspaces don't have the same replication settings, 
effectiv

[jira] [Commented] (CASSANDRA-9584) Decommissioning a node on Windows sends the wrong schema change event

2015-06-11 Thread Kishan Karunaratne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14582302#comment-14582302
 ] 

Kishan Karunaratne commented on CASSANDRA-9584:
---

After changing the default connect port (7199) to the Windows listen port 
(7100), I was able to use nodetool from the CLI directly to decommission the 
first node, 127.0.0.1. While CCM's node status was not updated, I was able to 
verify via nodetool status that the node no longer exists in the ring. However, 
the Java process still exists for the decommissioned node.  

Furthermore, I'm still able to query the decommissioned node through both CCM:
{noformat}
PS C:\Users\Administrator> ccm node1 nodetool status

Starting NodeTool
Datacenter: datacenter1

Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  AddressLoad   Tokens   OwnsHost ID  
 Rack
UN  127.0.0.2  62.48 KB   1?   
4cb1b80e-a83e-4754-9d1c-80afcfe1cc4a  rack1
UN  127.0.0.3  62.48 KB   1?   
d8dd050d-cf88-4c45-97c4-f785db3a1c56  rack1

Note: Non-system keyspaces don't have the same replication settings, effective 
ownership information is meaningless

PS C:\Users\Administrator> ccm node1 ring

Starting NodeTool

Datacenter: datacenter1
==
AddressRackStatus State   LoadOwnsToken
  
3074457345618258602
127.0.0.2  rack1   Up Normal  62.48 KB?   
-3074457345618258603
127.0.0.3  rack1   Up Normal  62.48 KB?   
3074457345618258602


  Note: Non-system keyspaces don't have the same replication settings, 
effective ownership information is meaningless
{noformat}

and through nodetool directly:
{noformat}
PS C:\Users\jenkins\git\cassandra\bin> .\nodetool -p 7100 -h 127.0.0.1 status
Starting NodeTool
Datacenter: datacenter1

Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  AddressLoad   Tokens   OwnsHost ID  
 Rack
UN  127.0.0.2  62.48 KB   1?   
4cb1b80e-a83e-4754-9d1c-80afcfe1cc4a  rack1
UN  127.0.0.3  62.48 KB   1?   
d8dd050d-cf88-4c45-97c4-f785db3a1c56  rack1

Note: Non-system keyspaces don't have the same replication settings, effective 
ownership information is meaningless
PS C:\Users\jenkins\git\cassandra\bin> .\nodetool -p 7100 -h 127.0.0.1 ring
Starting NodeTool

Datacenter: datacenter1
==
AddressRackStatus State   LoadOwnsToken
  
3074457345618258602
127.0.0.2  rack1   Up Normal  62.48 KB?   
-3074457345618258603
127.0.0.3  rack1   Up Normal  62.48 KB?   
3074457345618258602


  Note: Non-system keyspaces don't have the same replication settings, 
effective ownership information is meaningless
{noformat}

I wasn't able to decommission other nodes, as I get the following error:
{noformat}
nodetool: Failed to connect to '127.0.0.2:7100' - ConnectException: 'Connection 
refused: connect'.
{noformat}
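
For context, nodetool is just a JMX client, so the port it is given has to match the JMX port the target node actually listens on (7100 for node1 in this CCM cluster, rather than the default 7199); the "Connection refused" from 127.0.0.2:7100 above suggests node2 exposes JMX on a different port. A small sketch of the equivalent connection from Java, using only the standard JMX/RMI service URL (nothing here is CCM-specific):
{code}
import javax.management.MBeanServerConnection;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class JmxPortCheck
{
    public static void main(String[] args) throws Exception
    {
        // node1 in this CCM cluster listens for JMX on 7100, not the 7199 default
        JMXServiceURL url = new JMXServiceURL(
            "service:jmx:rmi:///jndi/rmi://127.0.0.1:7100/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url))
        {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();
            // If this succeeds, nodetool pointed at the same host and port works too.
            System.out.println("Connected; " + mbs.getMBeanCount() + " MBeans visible");
        }
    }
}
{code}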

> Decommissioning a node on Windows sends the wrong schema change event
> -
>
> Key: CASSANDRA-9584
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9584
> Project: Cassandra
>  Issue Type: Bug
> Environment: C* 2.2.0-rc1 | python-driver 2.6.0-rc1 | Windows Server 
> 2012 R2 64-bit
>Reporter: Kishan Karunaratne
>
> Decommissioning a node on Windows sends the wrong schema change event:
> {noformat}
> cassandra.connection: DEBUG: Message pushed from server: 
> <EventMessage(... event_args={'change_type': u'DOWN', 'address': ('127.0.0.2', 9042)}, stream_id=-1)>
> {noformat}
> On Linux I get the correct event:
> {noformat}
> cassandra.connection: DEBUG: Message pushed from server: 
> <EventMessage(... event_args={'change_type': u'REMOVED_NODE', 'address': ('127.0.0.2', 9042)}, 
> stream_id=-1)>
> {noformat}
> We are using ccmlib node.py.decommission() which calls nodetool decommission:
> {noformat}
> def decommission(self):
> self.nodetool("decommission")
> self.status = Status.DECOMMISIONNED
> self._update_config()
> {noformat}
> Interestingly, it does seem to work (correctly?) on CCM CLI:
> {noformat}
> PS C:\Users\Administrator> ccm status
> Cluster: '2.2'
> --
> node1: UP
> node3: UP
> node2: UP
> PS C:\Users\Administrator> ccm node1 ring
> Starting NodeTool
> Datacenter: datacenter1
> ==
> AddressRackStatus State   LoadOwns
> Token
>   
>   307

[jira] [Commented] (CASSANDRA-9580) Cardinality check broken during incremental compaction re-opening

2015-06-11 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14582265#comment-14582265
 ] 

T Jake Luciani commented on CASSANDRA-9580:
---

bq. Happy to do that myself, in another ticket, though.

go for it

> Cardinality check broken during incremental compaction re-opening
> -
>
> Key: CASSANDRA-9580
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9580
> Project: Cassandra
>  Issue Type: Bug
>Reporter: T Jake Luciani
>Assignee: T Jake Luciani
>Priority: Minor
> Fix For: 2.1.x
>
> Attachments: lcstest.yaml
>
>
> While testing LCS I found cfstats sometimes crashes during compaction 
> It looks to be related to the incremental re-opening not having metadata.
> {code}
> 
> Keyspace: stresscql
>   Read Count: 0
>   Read Latency: NaN ms.
>   Write Count: 6590571
>   Write Latency: 0.026910956273743198 ms.
>   Pending Flushes: 0
>   Table: ycsb
>   SSTable count: 69
>   SSTables in each level: [67/4, 1, 0, 0, 0, 0, 0, 0, 0]
>   Space used (live): 3454857914
>   Space used (total): 3454857914
>   Space used by snapshots (total): 0
>   Off heap memory used (total): 287361
>   SSTable Compression Ratio: 0.0
> error: 
> /home/jake/workspace/cassandra/./bin/../data/data/stresscql/ycsb-ff399910104911e5a797a18c989fb6f2/stresscql-ycsb-tmplink-ka-125-Data.db
> -- StackTrace --
> java.lang.AssertionError: 
> /home/jake/workspace/cassandra/./bin/../data/data/stresscql/ycsb-ff399910104911e5a797a18c989fb6f2/stresscql-ycsb-tmplink-ka-125-Data.db
>   at 
> org.apache.cassandra.io.sstable.SSTableReader.getApproximateKeyCount(SSTableReader.java:270)
>   at 
> org.apache.cassandra.metrics.ColumnFamilyMetrics$9.value(ColumnFamilyMetrics.java:296)
>   at 
> org.apache.cassandra.metrics.ColumnFamilyMetrics$9.value(ColumnFamilyMetrics.java:290)
>   at 
> com.yammer.metrics.reporting.JmxReporter$Gauge.getValue(JmxReporter.java:63)
>   at sun.reflect.GeneratedMethodAccessor10.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:483)
>   at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:71)
>   at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:483)
>   at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:275)
>   at 
> com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:112)
>   at 
> com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:46)
>   at 
> com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237)
>   at 
> com.sun.jmx.mbeanserver.PerInterface.getAttribute(PerInterface.java:83)
>   at 
> com.sun.jmx.mbeanserver.MBeanSupport.getAttribute(MBeanSupport.java:206)
>   at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:647)
>   at 
> com.sun.jmx.mbeanserver.JmxMBeanServer.getAttribute(JmxMBeanServer.java:678)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1443)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:76)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1307)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1399)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl.getAttribute(RMIConnectionImpl.java:637)
>   at sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:483)
>   at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:323)
>   at sun.rmi.transport.Transport$1.run(Transport.java:178)
>   at sun.rmi.transport.Transport$1.run(Transport.java:175)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at sun.rmi.transport.Transport.serviceCall(Transport.java:174)
>   at 
> sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:557)
>   at 
> sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:812)
>   at 
> sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:671)
>   at 
> java.util.concurrent.Thr

[jira] [Commented] (CASSANDRA-9584) Decommissioning a node on Windows sends the wrong schema change event

2015-06-11 Thread Joshua McKenzie (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14582259#comment-14582259
 ] 

Joshua McKenzie commented on CASSANDRA-9584:


So to clarify, it fails through ccmlib, works through ccm on command-line, and 
??? with regular nodetool decommission from the command-line?

> Decommissioning a node on Windows sends the wrong schema change event
> -
>
> Key: CASSANDRA-9584
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9584
> Project: Cassandra
>  Issue Type: Bug
> Environment: C* 2.2.0-rc1 | python-driver 2.6.0-rc1 | Windows Server 
> 2012 R2 64-bit
>Reporter: Kishan Karunaratne
>
> Decommissioning a node on Windows sends the wrong schema change event:
> {noformat}
> cassandra.connection: DEBUG: Message pushed from server: 
> <EventMessage(... event_args={'change_type': u'DOWN', 'address': ('127.0.0.2', 9042)}, stream_id=-1)>
> {noformat}
> On Linux I get the correct event:
> {noformat}
> cassandra.connection: DEBUG: Message pushed from server: 
> <EventMessage(... event_args={'change_type': u'REMOVED_NODE', 'address': ('127.0.0.2', 9042)}, 
> stream_id=-1)>
> {noformat}
> We are using ccmlib node.py.decommission() which calls nodetool decommission:
> {noformat}
> def decommission(self):
> self.nodetool("decommission")
> self.status = Status.DECOMMISIONNED
> self._update_config()
> {noformat}
> Interestingly, it does seem to work (correctly?) on CCM CLI:
> {noformat}
> PS C:\Users\Administrator> ccm status
> Cluster: '2.2'
> --
> node1: UP
> node3: UP
> node2: UP
> PS C:\Users\Administrator> ccm node1 ring
> Starting NodeTool
> Datacenter: datacenter1
> ==
> AddressRackStatus State   LoadOwns
> Token
>   
>   3074457345618258602
> 127.0.0.1  rack1   Up Normal  62.43 KB?   
> -9223372036854775808
> 127.0.0.2  rack1   Up Normal  104.87 KB   ?   
> -3074457345618258603
> 127.0.0.3  rack1   Up Normal  83.67 KB?   
> 3074457345618258602
>   Note: Non-system keyspaces don't have the same replication settings, 
> effective ownership information is meaningless
> PS C:\Users\Administrator> ccm node2 decommission
> PS C:\Users\Administrator> ccm status
> Cluster: '2.2'
> --
> node1: UP
> node3: UP
> node2: DECOMMISIONNED
> PS C:\Users\Administrator> ccm node1 ring
> Starting NodeTool
> Datacenter: datacenter1
> ==
> AddressRackStatus State   LoadOwns
> Token
>   
>   3074457345618258602
> 127.0.0.1  rack1   Up Normal  67.11 KB?   
> -9223372036854775808
> 127.0.0.3  rack1   Up Normal  88.35 KB?   
> 3074457345618258602
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9584) Decommissioning a node on Windows sends the wrong schema change event

2015-06-11 Thread Kishan Karunaratne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kishan Karunaratne updated CASSANDRA-9584:
--
Reproduced In: 2.2.0 rc1

> Decommissioning a node on Windows sends the wrong schema change event
> -
>
> Key: CASSANDRA-9584
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9584
> Project: Cassandra
>  Issue Type: Bug
> Environment: C* 2.2.0-rc1 | python-driver 2.6.0-rc1 | Windows Server 
> 2012 R2 64-bit
>Reporter: Kishan Karunaratne
>
> Decommissioning a node on Windows sends the wrong schema change event:
> {noformat}
> cassandra.connection: DEBUG: Message pushed from server: 
> <EventMessage(... event_args={'change_type': u'DOWN', 'address': ('127.0.0.2', 9042)}, stream_id=-1)>
> {noformat}
> On Linux I get the correct event:
> {noformat}
> cassandra.connection: DEBUG: Message pushed from server: 
> <EventMessage(... event_args={'change_type': u'REMOVED_NODE', 'address': ('127.0.0.2', 9042)}, 
> stream_id=-1)>
> {noformat}
> We are using ccmlib node.py.decommission() which calls nodetool decommission:
> {noformat}
> def decommission(self):
> self.nodetool("decommission")
> self.status = Status.DECOMMISIONNED
> self._update_config()
> {noformat}
> Interestingly, it does seem to work (correctly?) on CCM CLI:
> {noformat}
> PS C:\Users\Administrator> ccm status
> Cluster: '2.2'
> --
> node1: UP
> node3: UP
> node2: UP
> PS C:\Users\Administrator> ccm node1 ring
> Starting NodeTool
> Datacenter: datacenter1
> ==
> AddressRackStatus State   LoadOwns
> Token
>   
>   3074457345618258602
> 127.0.0.1  rack1   Up Normal  62.43 KB?   
> -9223372036854775808
> 127.0.0.2  rack1   Up Normal  104.87 KB   ?   
> -3074457345618258603
> 127.0.0.3  rack1   Up Normal  83.67 KB?   
> 3074457345618258602
>   Note: Non-system keyspaces don't have the same replication settings, 
> effective ownership information is meaningless
> PS C:\Users\Administrator> ccm node2 decommission
> PS C:\Users\Administrator> ccm status
> Cluster: '2.2'
> --
> node1: UP
> node3: UP
> node2: DECOMMISIONNED
> PS C:\Users\Administrator> ccm node1 ring
> Starting NodeTool
> Datacenter: datacenter1
> ==
> AddressRackStatus State   LoadOwns
> Token
>   
>   3074457345618258602
> 127.0.0.1  rack1   Up Normal  67.11 KB?   
> -9223372036854775808
> 127.0.0.3  rack1   Up Normal  88.35 KB?   
> 3074457345618258602
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-9584) Decommissioning a node on Windows sends the wrong schema change event

2015-06-11 Thread Kishan Karunaratne (JIRA)
Kishan Karunaratne created CASSANDRA-9584:
-

 Summary: Decommissioning a node on Windows sends the wrong schema 
change event
 Key: CASSANDRA-9584
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9584
 Project: Cassandra
  Issue Type: Bug
 Environment: C* 2.2.0-rc1 | python-driver 2.6.0-rc1 | Windows Server 
2012 R2 64-bit
Reporter: Kishan Karunaratne


Decommissioning a node on Windows sends the wrong schema change event:
{noformat}
cassandra.connection: DEBUG: Message pushed from server: 
<EventMessage(... event_args={'change_type': u'DOWN', 'address': ('127.0.0.2', 9042)}, stream_id=-1)>
{noformat}

On Linux I get the correct event:
{noformat}
cassandra.connection: DEBUG: Message pushed from server: 
<EventMessage(... event_args={'change_type': u'REMOVED_NODE', 'address': ('127.0.0.2', 9042)}, stream_id=-1)>
{noformat}

We are using ccmlib node.py.decommission() which calls nodetool decommission:
{noformat}
def decommission(self):
self.nodetool("decommission")
self.status = Status.DECOMMISIONNED
self._update_config()
{noformat}

Interestingly, it does seem to work (correctly?) on CCM CLI:
{noformat}
PS C:\Users\Administrator> ccm status
Cluster: '2.2'
--
node1: UP
node3: UP
node2: UP

PS C:\Users\Administrator> ccm node1 ring

Starting NodeTool

Datacenter: datacenter1
==
AddressRackStatus State   LoadOwnsToken

3074457345618258602
127.0.0.1  rack1   Up Normal  62.43 KB?   
-9223372036854775808
127.0.0.2  rack1   Up Normal  104.87 KB   ?   
-3074457345618258603
127.0.0.3  rack1   Up Normal  83.67 KB?   
3074457345618258602


  Note: Non-system keyspaces don't have the same replication settings, 
effective ownership information is meaningless

PS C:\Users\Administrator> ccm node2 decommission

PS C:\Users\Administrator> ccm status
Cluster: '2.2'
--
node1: UP
node3: UP
node2: DECOMMISIONNED

PS C:\Users\Administrator> ccm node1 ring

Starting NodeTool

Datacenter: datacenter1
==
AddressRackStatus State   LoadOwnsToken

3074457345618258602
127.0.0.1  rack1   Up Normal  67.11 KB?   
-9223372036854775808
127.0.0.3  rack1   Up Normal  88.35 KB?   
3074457345618258602
{noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9580) Cardinality check broken during incremental compaction re-opening

2015-06-11 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14582174#comment-14582174
 ] 

Benedict commented on CASSANDRA-9580:
-

bq. This seems hacky. cc/ Benedict

It is, and at the time I was torn between this and modifying the Descriptor 
equality and doing my best to ensure that didn't break stuff. Fortunately, it 
will be going away with CASSANDRA-7066.

bq. So here's a patch for ignoring sstables opened for non-normal reasons

There's already a facility for this: calling 
ColumnFamilyStore.CANONICAL_SSTABLES on a DataTracker.View. But since I thought 
we'd already caught all of the problematic codepaths that could use an 
early-opened file and switched them to this, perhaps we should make the choice 
absolutely explicit, i.e. hide the sstables set in View and require that 
accessors stipulate, via a function call, which set they want. Happy to do that 
myself, in another ticket, though.
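
For illustration only, here is a toy sketch of the accessor shape described above; the names ({{ViewSketch}}, {{Select}}) are hypothetical and not the actual DataTracker/View API:
{code}
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

// Keep the sstable sets private and force every caller to say which set it
// wants, so early-opened (tmplink) readers cannot leak into codepaths like
// the cfstats cardinality estimate by accident.
final class ViewSketch<T>
{
    enum Select { CANONICAL, LIVE }   // illustrative selector, not real API

    private final Set<T> canonical;   // finished, fully written readers only
    private final Set<T> live;        // includes early-opened readers

    ViewSketch(Set<T> canonical, Set<T> live)
    {
        this.canonical = new HashSet<>(canonical);
        this.live = new HashSet<>(live);
    }

    // The only accessor: callers must stipulate the set they want.
    Set<T> sstables(Select select)
    {
        return Collections.unmodifiableSet(select == Select.CANONICAL ? canonical : live);
    }
}
{code}
The real change would of course live in DataTracker/View and deal in SSTableReader sets; the sketch only shows the "no default accessor" idea.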

> Cardinality check broken during incremental compaction re-opening
> -
>
> Key: CASSANDRA-9580
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9580
> Project: Cassandra
>  Issue Type: Bug
>Reporter: T Jake Luciani
>Assignee: T Jake Luciani
>Priority: Minor
> Fix For: 2.1.x
>
> Attachments: lcstest.yaml
>
>
> While testing LCS I found cfstats sometimes crashes during compaction 
> It looks to be related to the incremental re-opening not having metadata.
> {code}
> 
> Keyspace: stresscql
>   Read Count: 0
>   Read Latency: NaN ms.
>   Write Count: 6590571
>   Write Latency: 0.026910956273743198 ms.
>   Pending Flushes: 0
>   Table: ycsb
>   SSTable count: 69
>   SSTables in each level: [67/4, 1, 0, 0, 0, 0, 0, 0, 0]
>   Space used (live): 3454857914
>   Space used (total): 3454857914
>   Space used by snapshots (total): 0
>   Off heap memory used (total): 287361
>   SSTable Compression Ratio: 0.0
> error: 
> /home/jake/workspace/cassandra/./bin/../data/data/stresscql/ycsb-ff399910104911e5a797a18c989fb6f2/stresscql-ycsb-tmplink-ka-125-Data.db
> -- StackTrace --
> java.lang.AssertionError: 
> /home/jake/workspace/cassandra/./bin/../data/data/stresscql/ycsb-ff399910104911e5a797a18c989fb6f2/stresscql-ycsb-tmplink-ka-125-Data.db
>   at 
> org.apache.cassandra.io.sstable.SSTableReader.getApproximateKeyCount(SSTableReader.java:270)
>   at 
> org.apache.cassandra.metrics.ColumnFamilyMetrics$9.value(ColumnFamilyMetrics.java:296)
>   at 
> org.apache.cassandra.metrics.ColumnFamilyMetrics$9.value(ColumnFamilyMetrics.java:290)
>   at 
> com.yammer.metrics.reporting.JmxReporter$Gauge.getValue(JmxReporter.java:63)
>   at sun.reflect.GeneratedMethodAccessor10.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:483)
>   at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:71)
>   at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:483)
>   at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:275)
>   at 
> com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:112)
>   at 
> com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:46)
>   at 
> com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237)
>   at 
> com.sun.jmx.mbeanserver.PerInterface.getAttribute(PerInterface.java:83)
>   at 
> com.sun.jmx.mbeanserver.MBeanSupport.getAttribute(MBeanSupport.java:206)
>   at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:647)
>   at 
> com.sun.jmx.mbeanserver.JmxMBeanServer.getAttribute(JmxMBeanServer.java:678)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1443)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:76)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1307)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1399)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl.getAttribute(RMIConnectionImpl.java:637)
>   at sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.refl

[jira] [Created] (CASSANDRA-9583) test-compression could run multiple unit tests in parallel like test

2015-06-11 Thread Ariel Weisberg (JIRA)
Ariel Weisberg created CASSANDRA-9583:
-

 Summary: test-compression could run multiple unit tests in 
parallel like test
 Key: CASSANDRA-9583
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9583
 Project: Cassandra
  Issue Type: Test
Reporter: Ariel Weisberg






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (CASSANDRA-9583) test-compression could run multiple unit tests in parallel like test

2015-06-11 Thread Ariel Weisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ariel Weisberg reassigned CASSANDRA-9583:
-

Assignee: Ariel Weisberg

> test-compression could run multiple unit tests in parallel like test
> 
>
> Key: CASSANDRA-9583
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9583
> Project: Cassandra
>  Issue Type: Test
>Reporter: Ariel Weisberg
>Assignee: Ariel Weisberg
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9573) OOM when loading sstables (system.hints)

2015-06-11 Thread Sam Tunnicliffe (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9573?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam Tunnicliffe updated CASSANDRA-9573:
---
Reviewer: Aleksey Yeschenko

> OOM when loading sstables (system.hints)
> 
>
> Key: CASSANDRA-9573
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9573
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Alan Boudreault
>Assignee: Sam Tunnicliffe
>Priority: Critical
> Fix For: 2.2.0 rc2
>
> Attachments: 9573.txt, hs_err_pid11243.log, 
> java-hints-issue-2015-06-09.snapshot, system.log, yourkit.ss.tar.gz
>
>
> [~andrew.tolbert] discovered an issue while running endurance tests on 2.2. A 
> node was not able to start and was killed by the OOM killer.
> Briefly, Cassandra uses an excessive amount of memory when loading compressed 
> sstables (off-heap?). We initially saw the issue with system.hints before we 
> knew it was related to compression. system.hints uses LZ4 compression by 
> default. If we have an sstable of, say, 8-10G, Cassandra will be killed by the 
> OOM killer after 1-2 minutes. I can reproduce that bug every time locally. 
> * the issue also happens if we have 10G of data split into 13MB sstables.
> * I can reproduce the issue if I put a lot of data in the system.hints table.
> * I cannot reproduce the issue with a standard table using the same 
> compression (LZ4). Something seems to be different when it's hints?
> You won't see anything in the node's system.log, but you'll see this in 
> /var/log/syslog.log:
> {code}
> Out of memory: Kill process 30777 (java) score 600 or sacrifice child
> {code}
> The issue was introduced in this commit but is not related to the 
> performance issue in CASSANDRA-9240: 
> https://github.com/apache/cassandra/commit/aedce5fc6ba46ca734e91190cfaaeb23ba47a846
> Here are the core dump and some YourKit snapshots in attachments. I am not 
> sure you will be able to get useful information from them.
> core dump: http://dl.alanb.ca/core.tar.gz
> Not sure if this is related, but all dumps and snapshots point to 
> EstimatedHistogramReservoir ... and we can see many 
> javax.management.InstanceAlreadyExistsException: 
> org.apache.cassandra.metrics:... exceptions in system.log before it hangs and 
> then crashes.
> To reproduce the issue: 
> 1. create a cluster of 3 nodes
> 2. start the whole cluster
> 3. shut down node2 and node3
> 4. write 10-15G of data on node1 with replication factor 3. You should see a 
> lot of hints.
> 5. stop node1
> 6. start node2 and node3
> 7. start node1; you should OOM.
> //cc [~tjake] [~benedict] [~andrew.tolbert]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9573) OOM when loading sstables (system.hints)

2015-06-11 Thread Sam Tunnicliffe (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9573?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam Tunnicliffe updated CASSANDRA-9573:
---
Attachment: 9573.txt

Attached a patch with Benedict's fix, moving the {{tryMlockall}} call to before 
the startup checks run.
Also, seeing as the change in 
[0d2ec11|https://github.com/apache/cassandra/commit/0d2ec11c7e0abfb84d872289af6d3ac386cf381f]
 means we really cannot start up without access to a suitable JNA jar (which we 
are bundling anyway), I've removed the {{cassandra.boot_without_jna}} option. 

> OOM when loading sstables (system.hints)
> 
>
> Key: CASSANDRA-9573
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9573
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Alan Boudreault
>Assignee: Sam Tunnicliffe
>Priority: Critical
> Fix For: 2.2.0 rc2
>
> Attachments: 9573.txt, hs_err_pid11243.log, 
> java-hints-issue-2015-06-09.snapshot, system.log, yourkit.ss.tar.gz
>
>
> [~andrew.tolbert] discovered an issue while running endurance tests on 2.2. A 
> node was not able to start and was killed by the OOM killer.
> Briefly, Cassandra uses an excessive amount of memory when loading compressed 
> sstables (off-heap?). We initially saw the issue with system.hints before we 
> knew it was related to compression. system.hints uses LZ4 compression by 
> default. If we have an sstable of, say, 8-10G, Cassandra will be killed by the 
> OOM killer after 1-2 minutes. I can reproduce that bug every time locally. 
> * the issue also happens if we have 10G of data split into 13MB sstables.
> * I can reproduce the issue if I put a lot of data in the system.hints table.
> * I cannot reproduce the issue with a standard table using the same 
> compression (LZ4). Something seems to be different when it's hints?
> You won't see anything in the node's system.log, but you'll see this in 
> /var/log/syslog.log:
> {code}
> Out of memory: Kill process 30777 (java) score 600 or sacrifice child
> {code}
> The issue was introduced in this commit but is not related to the 
> performance issue in CASSANDRA-9240: 
> https://github.com/apache/cassandra/commit/aedce5fc6ba46ca734e91190cfaaeb23ba47a846
> Here are the core dump and some YourKit snapshots in attachments. I am not 
> sure you will be able to get useful information from them.
> core dump: http://dl.alanb.ca/core.tar.gz
> Not sure if this is related, but all dumps and snapshots point to 
> EstimatedHistogramReservoir ... and we can see many 
> javax.management.InstanceAlreadyExistsException: 
> org.apache.cassandra.metrics:... exceptions in system.log before it hangs and 
> then crashes.
> To reproduce the issue: 
> 1. create a cluster of 3 nodes
> 2. start the whole cluster
> 3. shut down node2 and node3
> 4. write 10-15G of data on node1 with replication factor 3. You should see a 
> lot of hints.
> 5. stop node1
> 6. start node2 and node3
> 7. start node1; you should OOM.
> //cc [~tjake] [~benedict] [~andrew.tolbert]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9582) MarshalException after upgrading to 2.1.6

2015-06-11 Thread Robert Stupp (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14582145#comment-14582145
 ] 

Robert Stupp commented on CASSANDRA-9582:
-

[~tomvandenberge], can you post the rows from {{system.schema_columns}} and 
{{system.schema_columnfamilies}} for that table and the script to create the 
table?

> MarshalException after upgrading to 2.1.6
> -
>
> Key: CASSANDRA-9582
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9582
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Tom van den Berge
>
> I've upgraded a node from 2.0.10 to 2.1.6. Before taking down the node, I've 
> run nodetool upgradesstables and nodetool scrub.
> When starting up the node with 2.1.6, I'm getting a MarshalException 
> (stacktrace included below). For some reason, it seems that C* is trying to 
> convert a text value from the column 'currencyCode' to a UUID, which it isn't.
> I've had similar errors for two other columns as well, which I could work 
> around by dropping the table, since it wasn't used anymore.
> The only thing I could do was restoring a snapshot and starting up the old 
> 2.0.10 again.
> The schema of the table (I've got only one table containing a column named 
> 'currencyCode') is:
> CREATE TABLE "InvoiceItem" (
>   key blob,
>   column1 uuid,
>   "currencyCode" text,
>   description text,
>   "priceGross" bigint,
>   "priceNett" bigint,
>   quantity varint,
>   sku text,
>   "unitPriceGross" bigint,
>   "unitPriceNett" bigint,
>   vat bigint,
>   "vatRateBasisPoints" varint,
>   PRIMARY KEY ((key), column1)
> ) WITH COMPACT STORAGE AND
>   bloom_filter_fp_chance=0.01 AND
>   caching='KEYS_ONLY' AND
>   comment='' AND
>   dclocal_read_repair_chance=0.00 AND
>   gc_grace_seconds=864000 AND
>   index_interval=128 AND
>   read_repair_chance=1.00 AND
>   replicate_on_write='true' AND
>   populate_io_cache_on_flush='false' AND
>   default_time_to_live=0 AND
>   speculative_retry='99.0PERCENTILE' AND
>   memtable_flush_period_in_ms=0 AND
>   compaction={'class': 'SizeTieredCompactionStrategy'} AND
>   compression={};
> The stack trace when starting up:
> ERROR 13:51:57 Exception encountered during startup
> org.apache.cassandra.serializers.MarshalException: unable to make version 1 
> UUID from 'currencyCode'
>   at 
> org.apache.cassandra.db.marshal.UUIDType.fromString(UUIDType.java:188) 
> ~[apache-cassandra-2.1.6.jar:2.1.6]
>   at 
> org.apache.cassandra.db.marshal.AbstractCompositeType.fromString(AbstractCompositeType.java:242)
>  ~[apache-cassandra-2.1.6.jar:2.1.6]
>   at 
> org.apache.cassandra.config.ColumnDefinition.fromSchema(ColumnDefinition.java:397)
>  ~[apache-cassandra-2.1.6.jar:2.1.6]
>   at 
> org.apache.cassandra.config.CFMetaData.fromSchemaNoTriggers(CFMetaData.java:1750)
>  ~[apache-cassandra-2.1.6.jar:2.1.6]
>   at 
> org.apache.cassandra.config.CFMetaData.fromSchema(CFMetaData.java:1860) 
> ~[apache-cassandra-2.1.6.jar:2.1.6]
>   at 
> org.apache.cassandra.config.KSMetaData.deserializeColumnFamilies(KSMetaData.java:321)
>  ~[apache-cassandra-2.1.6.jar:2.1.6]
>   at 
> org.apache.cassandra.config.KSMetaData.fromSchema(KSMetaData.java:302) 
> ~[apache-cassandra-2.1.6.jar:2.1.6]
>   at 
> org.apache.cassandra.db.DefsTables.loadFromKeyspace(DefsTables.java:133) 
> ~[apache-cassandra-2.1.6.jar:2.1.6]
>   at 
> org.apache.cassandra.config.DatabaseDescriptor.loadSchemas(DatabaseDescriptor.java:696)
>  ~[apache-cassandra-2.1.6.jar:2.1.6]
>   at 
> org.apache.cassandra.config.DatabaseDescriptor.loadSchemas(DatabaseDescriptor.java:672)
>  ~[apache-cassandra-2.1.6.jar:2.1.6]
>   at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:293) 
> [apache-cassandra-2.1.6.jar:2.1.6]
>   at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:536)
>  [apache-cassandra-2.1.6.jar:2.1.6]
>   at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:625) 
> [apache-cassandra-2.1.6.jar:2.1.6]
> Caused by: org.apache.cassandra.serializers.MarshalException: unable to 
> coerce 'currencyCode' to a  formatted date (long)
>   at 
> org.apache.cassandra.serializers.TimestampSerializer.dateStringToTimestamp(TimestampSerializer.java:111)
>  ~[apache-cassandra-2.1.6.jar:2.1.6]
>   at 
> org.apache.cassandra.db.marshal.UUIDType.fromString(UUIDType.java:184) 
> ~[apache-cassandra-2.1.6.jar:2.1.6]
>   ... 12 common frames omitted
> Caused by: java.text.ParseException: Unable to parse the date: currencyCode
>   at 
> org.apache.commons.lang3.time.DateUtils.parseDateWithLeniency(DateUtils.java:336)
>  ~[commons-lang3-3.1.jar:3.1]
>   at 
> org.apache.commons.lang3.time.DateUtils.parseDateStrictly(DateUtils.java:286) 
> ~[commons-lang3-3.1.jar:3.1]
>   a

[jira] [Updated] (CASSANDRA-9580) Cardinality check broken during incremental compaction re-opening

2015-06-11 Thread T Jake Luciani (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani updated CASSANDRA-9580:
--
Reviewer: Benedict

> Cardinality check broken during incremental compaction re-opening
> -
>
> Key: CASSANDRA-9580
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9580
> Project: Cassandra
>  Issue Type: Bug
>Reporter: T Jake Luciani
>Assignee: T Jake Luciani
>Priority: Minor
> Fix For: 2.1.x
>
> Attachments: lcstest.yaml
>
>
> While testing LCS I found cfstats sometimes crashes during compaction 
> It looks to be related to the incremental re-opening not having metadata.
> {code}
> 
> Keyspace: stresscql
>   Read Count: 0
>   Read Latency: NaN ms.
>   Write Count: 6590571
>   Write Latency: 0.026910956273743198 ms.
>   Pending Flushes: 0
>   Table: ycsb
>   SSTable count: 69
>   SSTables in each level: [67/4, 1, 0, 0, 0, 0, 0, 0, 0]
>   Space used (live): 3454857914
>   Space used (total): 3454857914
>   Space used by snapshots (total): 0
>   Off heap memory used (total): 287361
>   SSTable Compression Ratio: 0.0
> error: 
> /home/jake/workspace/cassandra/./bin/../data/data/stresscql/ycsb-ff399910104911e5a797a18c989fb6f2/stresscql-ycsb-tmplink-ka-125-Data.db
> -- StackTrace --
> java.lang.AssertionError: 
> /home/jake/workspace/cassandra/./bin/../data/data/stresscql/ycsb-ff399910104911e5a797a18c989fb6f2/stresscql-ycsb-tmplink-ka-125-Data.db
>   at 
> org.apache.cassandra.io.sstable.SSTableReader.getApproximateKeyCount(SSTableReader.java:270)
>   at 
> org.apache.cassandra.metrics.ColumnFamilyMetrics$9.value(ColumnFamilyMetrics.java:296)
>   at 
> org.apache.cassandra.metrics.ColumnFamilyMetrics$9.value(ColumnFamilyMetrics.java:290)
>   at 
> com.yammer.metrics.reporting.JmxReporter$Gauge.getValue(JmxReporter.java:63)
>   at sun.reflect.GeneratedMethodAccessor10.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:483)
>   at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:71)
>   at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:483)
>   at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:275)
>   at 
> com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:112)
>   at 
> com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:46)
>   at 
> com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237)
>   at 
> com.sun.jmx.mbeanserver.PerInterface.getAttribute(PerInterface.java:83)
>   at 
> com.sun.jmx.mbeanserver.MBeanSupport.getAttribute(MBeanSupport.java:206)
>   at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:647)
>   at 
> com.sun.jmx.mbeanserver.JmxMBeanServer.getAttribute(JmxMBeanServer.java:678)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1443)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:76)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1307)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1399)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl.getAttribute(RMIConnectionImpl.java:637)
>   at sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:483)
>   at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:323)
>   at sun.rmi.transport.Transport$1.run(Transport.java:178)
>   at sun.rmi.transport.Transport$1.run(Transport.java:175)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at sun.rmi.transport.Transport.serviceCall(Transport.java:174)
>   at 
> sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:557)
>   at 
> sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:812)
>   at 
> sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:671)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolE

[jira] [Updated] (CASSANDRA-9580) Cardinality check broken during incremental compaction re-opening

2015-06-11 Thread T Jake Luciani (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani updated CASSANDRA-9580:
--
Fix Version/s: (was: 2.1.7)
   2.1.x

> Cardinality check broken during incremental compaction re-opening
> -
>
> Key: CASSANDRA-9580
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9580
> Project: Cassandra
>  Issue Type: Bug
>Reporter: T Jake Luciani
>Assignee: T Jake Luciani
>Priority: Minor
> Fix For: 2.1.x
>
> Attachments: lcstest.yaml
>
>
> While testing LCS I found cfstats sometimes crashes during compaction 
> It looks to be related to the incremental re-opening not having metadata.
> {code}
> 
> Keyspace: stresscql
>   Read Count: 0
>   Read Latency: NaN ms.
>   Write Count: 6590571
>   Write Latency: 0.026910956273743198 ms.
>   Pending Flushes: 0
>   Table: ycsb
>   SSTable count: 69
>   SSTables in each level: [67/4, 1, 0, 0, 0, 0, 0, 0, 0]
>   Space used (live): 3454857914
>   Space used (total): 3454857914
>   Space used by snapshots (total): 0
>   Off heap memory used (total): 287361
>   SSTable Compression Ratio: 0.0
> error: 
> /home/jake/workspace/cassandra/./bin/../data/data/stresscql/ycsb-ff399910104911e5a797a18c989fb6f2/stresscql-ycsb-tmplink-ka-125-Data.db
> -- StackTrace --
> java.lang.AssertionError: 
> /home/jake/workspace/cassandra/./bin/../data/data/stresscql/ycsb-ff399910104911e5a797a18c989fb6f2/stresscql-ycsb-tmplink-ka-125-Data.db
>   at 
> org.apache.cassandra.io.sstable.SSTableReader.getApproximateKeyCount(SSTableReader.java:270)
>   at 
> org.apache.cassandra.metrics.ColumnFamilyMetrics$9.value(ColumnFamilyMetrics.java:296)
>   at 
> org.apache.cassandra.metrics.ColumnFamilyMetrics$9.value(ColumnFamilyMetrics.java:290)
>   at 
> com.yammer.metrics.reporting.JmxReporter$Gauge.getValue(JmxReporter.java:63)
>   at sun.reflect.GeneratedMethodAccessor10.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:483)
>   at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:71)
>   at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:483)
>   at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:275)
>   at 
> com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:112)
>   at 
> com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:46)
>   at 
> com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237)
>   at 
> com.sun.jmx.mbeanserver.PerInterface.getAttribute(PerInterface.java:83)
>   at 
> com.sun.jmx.mbeanserver.MBeanSupport.getAttribute(MBeanSupport.java:206)
>   at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:647)
>   at 
> com.sun.jmx.mbeanserver.JmxMBeanServer.getAttribute(JmxMBeanServer.java:678)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1443)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:76)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1307)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1399)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl.getAttribute(RMIConnectionImpl.java:637)
>   at sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:483)
>   at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:323)
>   at sun.rmi.transport.Transport$1.run(Transport.java:178)
>   at sun.rmi.transport.Transport$1.run(Transport.java:175)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at sun.rmi.transport.Transport.serviceCall(Transport.java:174)
>   at 
> sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:557)
>   at 
> sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:812)
>   at 
> sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:671)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   

[jira] [Commented] (CASSANDRA-9582) MarshalException after upgrading to 2.1.6

2015-06-11 Thread Tom van den Berge (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14582125#comment-14582125
 ] 

Tom van den Berge commented on CASSANDRA-9582:
--

This table is created as a super column family using cassandra-cli.

> MarshalException after upgrading to 2.1.6
> -
>
> Key: CASSANDRA-9582
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9582
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Tom van den Berge
>
> I've upgraded a node from 2.0.10 to 2.1.6. Before taking down the node, I've 
> run nodetool upgradesstables and nodetool scrub.
> When starting up the node with 2.1.6, I'm getting a MarshalException 
> (stacktrace included below). For some reason, it seems that C* is trying to 
> convert a text value from the column 'currencyCode' to a UUID, which it isn't.
> I've had similar errors for two other columns as well, which I could work 
> around by dropping the table, since it wasn't used anymore.
> The only thing I could do was restoring a snapshot and starting up the old 
> 2.0.10 again.
> The schema of the table (I've got only one table containing a column named 
> 'currencyCode') is:
> CREATE TABLE "InvoiceItem" (
>   key blob,
>   column1 uuid,
>   "currencyCode" text,
>   description text,
>   "priceGross" bigint,
>   "priceNett" bigint,
>   quantity varint,
>   sku text,
>   "unitPriceGross" bigint,
>   "unitPriceNett" bigint,
>   vat bigint,
>   "vatRateBasisPoints" varint,
>   PRIMARY KEY ((key), column1)
> ) WITH COMPACT STORAGE AND
>   bloom_filter_fp_chance=0.01 AND
>   caching='KEYS_ONLY' AND
>   comment='' AND
>   dclocal_read_repair_chance=0.00 AND
>   gc_grace_seconds=864000 AND
>   index_interval=128 AND
>   read_repair_chance=1.00 AND
>   replicate_on_write='true' AND
>   populate_io_cache_on_flush='false' AND
>   default_time_to_live=0 AND
>   speculative_retry='99.0PERCENTILE' AND
>   memtable_flush_period_in_ms=0 AND
>   compaction={'class': 'SizeTieredCompactionStrategy'} AND
>   compression={};
> The stack trace when starting up:
> ERROR 13:51:57 Exception encountered during startup
> org.apache.cassandra.serializers.MarshalException: unable to make version 1 
> UUID from 'currencyCode'
>   at 
> org.apache.cassandra.db.marshal.UUIDType.fromString(UUIDType.java:188) 
> ~[apache-cassandra-2.1.6.jar:2.1.6]
>   at 
> org.apache.cassandra.db.marshal.AbstractCompositeType.fromString(AbstractCompositeType.java:242)
>  ~[apache-cassandra-2.1.6.jar:2.1.6]
>   at 
> org.apache.cassandra.config.ColumnDefinition.fromSchema(ColumnDefinition.java:397)
>  ~[apache-cassandra-2.1.6.jar:2.1.6]
>   at 
> org.apache.cassandra.config.CFMetaData.fromSchemaNoTriggers(CFMetaData.java:1750)
>  ~[apache-cassandra-2.1.6.jar:2.1.6]
>   at 
> org.apache.cassandra.config.CFMetaData.fromSchema(CFMetaData.java:1860) 
> ~[apache-cassandra-2.1.6.jar:2.1.6]
>   at 
> org.apache.cassandra.config.KSMetaData.deserializeColumnFamilies(KSMetaData.java:321)
>  ~[apache-cassandra-2.1.6.jar:2.1.6]
>   at 
> org.apache.cassandra.config.KSMetaData.fromSchema(KSMetaData.java:302) 
> ~[apache-cassandra-2.1.6.jar:2.1.6]
>   at 
> org.apache.cassandra.db.DefsTables.loadFromKeyspace(DefsTables.java:133) 
> ~[apache-cassandra-2.1.6.jar:2.1.6]
>   at 
> org.apache.cassandra.config.DatabaseDescriptor.loadSchemas(DatabaseDescriptor.java:696)
>  ~[apache-cassandra-2.1.6.jar:2.1.6]
>   at 
> org.apache.cassandra.config.DatabaseDescriptor.loadSchemas(DatabaseDescriptor.java:672)
>  ~[apache-cassandra-2.1.6.jar:2.1.6]
>   at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:293) 
> [apache-cassandra-2.1.6.jar:2.1.6]
>   at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:536)
>  [apache-cassandra-2.1.6.jar:2.1.6]
>   at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:625) 
> [apache-cassandra-2.1.6.jar:2.1.6]
> Caused by: org.apache.cassandra.serializers.MarshalException: unable to 
> coerce 'currencyCode' to a  formatted date (long)
>   at 
> org.apache.cassandra.serializers.TimestampSerializer.dateStringToTimestamp(TimestampSerializer.java:111)
>  ~[apache-cassandra-2.1.6.jar:2.1.6]
>   at 
> org.apache.cassandra.db.marshal.UUIDType.fromString(UUIDType.java:184) 
> ~[apache-cassandra-2.1.6.jar:2.1.6]
>   ... 12 common frames omitted
> Caused by: java.text.ParseException: Unable to parse the date: currencyCode
>   at 
> org.apache.commons.lang3.time.DateUtils.parseDateWithLeniency(DateUtils.java:336)
>  ~[commons-lang3-3.1.jar:3.1]
>   at 
> org.apache.commons.lang3.time.DateUtils.parseDateStrictly(DateUtils.java:286) 
> ~[commons-lang3-3.1.jar:3.1]
>   at 
> org.apache.cassandra.serializers.TimestampSerializer.dateStringToTimestamp(Time

[jira] [Assigned] (CASSANDRA-9580) Cardinality check broken during incremental compaction re-opening

2015-06-11 Thread T Jake Luciani (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani reassigned CASSANDRA-9580:
-

Assignee: T Jake Luciani

> Cardinality check broken during incremental compaction re-opening
> -
>
> Key: CASSANDRA-9580
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9580
> Project: Cassandra
>  Issue Type: Bug
>Reporter: T Jake Luciani
>Assignee: T Jake Luciani
>Priority: Minor
> Fix For: 2.1.7
>
> Attachments: lcstest.yaml
>
>
> While testing LCS I found cfstats sometimes crashes during compaction 
> It looks to be related to the incremental re-opening not having metadata.
> {code}
> 
> Keyspace: stresscql
>   Read Count: 0
>   Read Latency: NaN ms.
>   Write Count: 6590571
>   Write Latency: 0.026910956273743198 ms.
>   Pending Flushes: 0
>   Table: ycsb
>   SSTable count: 69
>   SSTables in each level: [67/4, 1, 0, 0, 0, 0, 0, 0, 0]
>   Space used (live): 3454857914
>   Space used (total): 3454857914
>   Space used by snapshots (total): 0
>   Off heap memory used (total): 287361
>   SSTable Compression Ratio: 0.0
> error: 
> /home/jake/workspace/cassandra/./bin/../data/data/stresscql/ycsb-ff399910104911e5a797a18c989fb6f2/stresscql-ycsb-tmplink-ka-125-Data.db
> -- StackTrace --
> java.lang.AssertionError: 
> /home/jake/workspace/cassandra/./bin/../data/data/stresscql/ycsb-ff399910104911e5a797a18c989fb6f2/stresscql-ycsb-tmplink-ka-125-Data.db
>   at 
> org.apache.cassandra.io.sstable.SSTableReader.getApproximateKeyCount(SSTableReader.java:270)
>   at 
> org.apache.cassandra.metrics.ColumnFamilyMetrics$9.value(ColumnFamilyMetrics.java:296)
>   at 
> org.apache.cassandra.metrics.ColumnFamilyMetrics$9.value(ColumnFamilyMetrics.java:290)
>   at 
> com.yammer.metrics.reporting.JmxReporter$Gauge.getValue(JmxReporter.java:63)
>   at sun.reflect.GeneratedMethodAccessor10.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:483)
>   at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:71)
>   at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:483)
>   at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:275)
>   at 
> com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:112)
>   at 
> com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:46)
>   at 
> com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237)
>   at 
> com.sun.jmx.mbeanserver.PerInterface.getAttribute(PerInterface.java:83)
>   at 
> com.sun.jmx.mbeanserver.MBeanSupport.getAttribute(MBeanSupport.java:206)
>   at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:647)
>   at 
> com.sun.jmx.mbeanserver.JmxMBeanServer.getAttribute(JmxMBeanServer.java:678)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1443)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:76)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1307)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1399)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl.getAttribute(RMIConnectionImpl.java:637)
>   at sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:483)
>   at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:323)
>   at sun.rmi.transport.Transport$1.run(Transport.java:178)
>   at sun.rmi.transport.Transport$1.run(Transport.java:175)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at sun.rmi.transport.Transport.serviceCall(Transport.java:174)
>   at 
> sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:557)
>   at 
> sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:812)
>   at 
> sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:671)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurren

[jira] [Commented] (CASSANDRA-9580) Cardinality check broken during incremental compaction re-opening

2015-06-11 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14582120#comment-14582120
 ] 

T Jake Luciani commented on CASSANDRA-9580:
---

I tried fixing this by checking the type of the descriptor and ignoring the 
temporary ones, but the type reported there seems to be different from the one 
in the actual descriptor:

java.lang.AssertionError: 
/home/jake/workspace/cassandra/./bin/../data/data/stresscql/ycsb-6cd7db00104d11e59c13a18c989fb6f2/stresscql-ycsb-tmplink-ka-290-Data.db,
 type=FINAL, openreason=EARLY

This seems hacky.  cc/ [~benedict]

So here's a patch for ignoring sstables opened for non-normal reasons. 
https://github.com/tjake/cassandra/tree/9580
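
For reference, the rough shape of that approach as a sketch only: the open-reason values mirror the {{openreason=EARLY}} visible in the assertion above, but the classes below are minimal stand-ins, not the linked patch or Cassandra's real SSTableReader API.

{code}
// Sketch: skip sstables whose open reason is not NORMAL (e.g. early-opened
// readers produced by incremental re-opening) before summing key-count
// estimates, since those readers carry no usable metadata yet.
import java.util.Arrays;
import java.util.List;

public final class SkipNonNormalSketch
{
    enum OpenReason { NORMAL, EARLY, METADATA_CHANGE }

    static final class Reader
    {
        final OpenReason openReason;
        final long estimatedKeys;

        Reader(OpenReason openReason, long estimatedKeys)
        {
            this.openReason = openReason;
            this.estimatedKeys = estimatedKeys;
        }
    }

    static long approximateKeyCount(List<Reader> readers)
    {
        long count = 0;
        for (Reader r : readers)
        {
            if (r.openReason != OpenReason.NORMAL)
                continue; // ignore tmplink/early readers with no stats
            count += r.estimatedKeys;
        }
        return count;
    }

    public static void main(String[] args)
    {
        List<Reader> readers = Arrays.asList(new Reader(OpenReason.NORMAL, 1000),
                                             new Reader(OpenReason.EARLY, 500));
        System.out.println(approximateKeyCount(readers)); // prints 1000
    }
}
{code}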

> Cardinality check broken during incremental compaction re-opening
> -
>
> Key: CASSANDRA-9580
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9580
> Project: Cassandra
>  Issue Type: Bug
>Reporter: T Jake Luciani
>Priority: Minor
> Fix For: 2.1.7
>
> Attachments: lcstest.yaml
>
>
> While testing LCS I found cfstats sometimes crashes during compaction 
> It looks to be related to the incremental re-opening not having metadata.
> {code}
> 
> Keyspace: stresscql
>   Read Count: 0
>   Read Latency: NaN ms.
>   Write Count: 6590571
>   Write Latency: 0.026910956273743198 ms.
>   Pending Flushes: 0
>   Table: ycsb
>   SSTable count: 69
>   SSTables in each level: [67/4, 1, 0, 0, 0, 0, 0, 0, 0]
>   Space used (live): 3454857914
>   Space used (total): 3454857914
>   Space used by snapshots (total): 0
>   Off heap memory used (total): 287361
>   SSTable Compression Ratio: 0.0
> error: 
> /home/jake/workspace/cassandra/./bin/../data/data/stresscql/ycsb-ff399910104911e5a797a18c989fb6f2/stresscql-ycsb-tmplink-ka-125-Data.db
> -- StackTrace --
> java.lang.AssertionError: 
> /home/jake/workspace/cassandra/./bin/../data/data/stresscql/ycsb-ff399910104911e5a797a18c989fb6f2/stresscql-ycsb-tmplink-ka-125-Data.db
>   at 
> org.apache.cassandra.io.sstable.SSTableReader.getApproximateKeyCount(SSTableReader.java:270)
>   at 
> org.apache.cassandra.metrics.ColumnFamilyMetrics$9.value(ColumnFamilyMetrics.java:296)
>   at 
> org.apache.cassandra.metrics.ColumnFamilyMetrics$9.value(ColumnFamilyMetrics.java:290)
>   at 
> com.yammer.metrics.reporting.JmxReporter$Gauge.getValue(JmxReporter.java:63)
>   at sun.reflect.GeneratedMethodAccessor10.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:483)
>   at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:71)
>   at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:483)
>   at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:275)
>   at 
> com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:112)
>   at 
> com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:46)
>   at 
> com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237)
>   at 
> com.sun.jmx.mbeanserver.PerInterface.getAttribute(PerInterface.java:83)
>   at 
> com.sun.jmx.mbeanserver.MBeanSupport.getAttribute(MBeanSupport.java:206)
>   at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:647)
>   at 
> com.sun.jmx.mbeanserver.JmxMBeanServer.getAttribute(JmxMBeanServer.java:678)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1443)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:76)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1307)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1399)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl.getAttribute(RMIConnectionImpl.java:637)
>   at sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:483)
>   at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:323)
>   at sun.rmi.transport.Transport$1.run(Transport.java:178)
>   at sun.rmi.transport.Transport$1.run(Transport.java:175)
>   at java.security.AccessController.doPrivile

[jira] [Updated] (CASSANDRA-9581) pig-tests spend time waiting on /dev/random for SecureRandom

2015-06-11 Thread Ariel Weisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ariel Weisberg updated CASSANDRA-9581:
--
Description: We don't need secure random numbers (for unit tests) so 
waiting for entropy doesn't make much sense. Luckily Java makes it easy to 
point to /dev/urandom for entropy. It also transparently handles it correctly 
on Windows.  (was: We don't need secure random numbers so waiting for entropy 
doesn't make much sense. Luckily Java makes it easy to point to /dev/urandom 
for entropy. It also transparently handles it correctly on Windows.)

> pig-tests spend time waiting on /dev/random for SecureRandom
> 
>
> Key: CASSANDRA-9581
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9581
> Project: Cassandra
>  Issue Type: Test
>Reporter: Ariel Weisberg
>
> We don't need secure random numbers (for unit tests) so waiting for entropy 
> doesn't make much sense. Luckily Java makes it easy to point to /dev/urandom 
> for entropy. It also transparently handles it correctly on Windows.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (CASSANDRA-9581) pig-tests spend time waiting on /dev/random for SecureRandom

2015-06-11 Thread Ariel Weisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ariel Weisberg reassigned CASSANDRA-9581:
-

Assignee: Ariel Weisberg

> pig-tests spend time waiting on /dev/random for SecureRandom
> 
>
> Key: CASSANDRA-9581
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9581
> Project: Cassandra
>  Issue Type: Test
>Reporter: Ariel Weisberg
>Assignee: Ariel Weisberg
>
> We don't need secure random numbers (for unit tests) so waiting for entropy 
> doesn't make much sense. Luckily Java makes it easy to point to /dev/urandom 
> for entropy. It also transparently handles it correctly on Windows.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9558) Cassandra-stress regression in 2.2

2015-06-11 Thread Alan Boudreault (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14582096#comment-14582096
 ] 

Alan Boudreault commented on CASSANDRA-9558:


On GCE, I'm seeing 80k op/s (cassandra-stress 2.1) versus 55k op/s 
(cassandra-stress 2.2).

Locally I'm only seeing a difference of ~6k op/s (48k op/s for 2.1 versus 42k 
op/s for 2.2), but I am mostly CPU-limited on my laptop and cannot fully 
benefit from the 300 threads.

> Cassandra-stress regression in 2.2
> --
>
> Key: CASSANDRA-9558
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9558
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Alan Boudreault
>Priority: Blocker
> Fix For: 2.2.0 rc2
>
> Attachments: 2.1.log, 2.2.log, CASSANDRA-9558-2.patch, 
> CASSANDRA-9558-ProtocolV2.patch, atolber-CASSANDRA-9558-stress.tgz, 
> atolber-trunk-driver-coalescing-disabled.txt, 
> stress-2.1-java-driver-2.0.9.2.log, stress-2.1-java-driver-2.2+PATCH.log, 
> stress-2.1-java-driver-2.2.log, stress-2.2-java-driver-2.2+PATCH.log, 
> stress-2.2-java-driver-2.2.log
>
>
> We are seeing some regression in performance when using cassandra-stress 2.2. 
> You can see the difference at this url:
> http://riptano.github.io/cassandra_performance/graph_v5/graph.html?stats=stress_regression.json&metric=op_rate&operation=1_write&smoothing=1&show_aggregates=true&xmin=0&xmax=108.57&ymin=0&ymax=168147.1
> The cassandra version of the cluster doesn't seem to have any impact. 
> //cc [~tjake] [~benedict]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-9582) MarshalException after upgrading to 2.1.6

2015-06-11 Thread Tom van den Berge (JIRA)
Tom van den Berge created CASSANDRA-9582:


 Summary: MarshalException after upgrading to 2.1.6
 Key: CASSANDRA-9582
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9582
 Project: Cassandra
  Issue Type: Bug
Reporter: Tom van den Berge


I've upgraded a node from 2.0.10 to 2.1.6. Before taking down the node, I've 
run nodetool upgradesstables and nodetool scrub.

When starting up the node with 2.1.6, I'm getting a MarshalException 
(stacktrace included below). For some reason, it seems that C* is trying to 
convert a text value from the column 'currencyCode' to a UUID, which it isn't.
I've had similar errors for two other columns as well, which I could work 
around by dropping the table, since it wasn't used anymore.

The only thing I could do was restoring a snapshot and starting up the old 
2.0.10 again.

The schema of the table (I've got only one table containing a column named 
'currencyCode') is:
CREATE TABLE "InvoiceItem" (
  key blob,
  column1 uuid,
  "currencyCode" text,
  description text,
  "priceGross" bigint,
  "priceNett" bigint,
  quantity varint,
  sku text,
  "unitPriceGross" bigint,
  "unitPriceNett" bigint,
  vat bigint,
  "vatRateBasisPoints" varint,
  PRIMARY KEY ((key), column1)
) WITH COMPACT STORAGE AND
  bloom_filter_fp_chance=0.01 AND
  caching='KEYS_ONLY' AND
  comment='' AND
  dclocal_read_repair_chance=0.00 AND
  gc_grace_seconds=864000 AND
  index_interval=128 AND
  read_repair_chance=1.00 AND
  replicate_on_write='true' AND
  populate_io_cache_on_flush='false' AND
  default_time_to_live=0 AND
  speculative_retry='99.0PERCENTILE' AND
  memtable_flush_period_in_ms=0 AND
  compaction={'class': 'SizeTieredCompactionStrategy'} AND
  compression={};


The stack trace when starting up:

ERROR 13:51:57 Exception encountered during startup
org.apache.cassandra.serializers.MarshalException: unable to make version 1 
UUID from 'currencyCode'
at 
org.apache.cassandra.db.marshal.UUIDType.fromString(UUIDType.java:188) 
~[apache-cassandra-2.1.6.jar:2.1.6]
at 
org.apache.cassandra.db.marshal.AbstractCompositeType.fromString(AbstractCompositeType.java:242)
 ~[apache-cassandra-2.1.6.jar:2.1.6]
at 
org.apache.cassandra.config.ColumnDefinition.fromSchema(ColumnDefinition.java:397)
 ~[apache-cassandra-2.1.6.jar:2.1.6]
at 
org.apache.cassandra.config.CFMetaData.fromSchemaNoTriggers(CFMetaData.java:1750)
 ~[apache-cassandra-2.1.6.jar:2.1.6]
at 
org.apache.cassandra.config.CFMetaData.fromSchema(CFMetaData.java:1860) 
~[apache-cassandra-2.1.6.jar:2.1.6]
at 
org.apache.cassandra.config.KSMetaData.deserializeColumnFamilies(KSMetaData.java:321)
 ~[apache-cassandra-2.1.6.jar:2.1.6]
at 
org.apache.cassandra.config.KSMetaData.fromSchema(KSMetaData.java:302) 
~[apache-cassandra-2.1.6.jar:2.1.6]
at 
org.apache.cassandra.db.DefsTables.loadFromKeyspace(DefsTables.java:133) 
~[apache-cassandra-2.1.6.jar:2.1.6]
at 
org.apache.cassandra.config.DatabaseDescriptor.loadSchemas(DatabaseDescriptor.java:696)
 ~[apache-cassandra-2.1.6.jar:2.1.6]
at 
org.apache.cassandra.config.DatabaseDescriptor.loadSchemas(DatabaseDescriptor.java:672)
 ~[apache-cassandra-2.1.6.jar:2.1.6]
at 
org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:293) 
[apache-cassandra-2.1.6.jar:2.1.6]
at 
org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:536) 
[apache-cassandra-2.1.6.jar:2.1.6]
at 
org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:625) 
[apache-cassandra-2.1.6.jar:2.1.6]
Caused by: org.apache.cassandra.serializers.MarshalException: unable to coerce 
'currencyCode' to a  formatted date (long)
at 
org.apache.cassandra.serializers.TimestampSerializer.dateStringToTimestamp(TimestampSerializer.java:111)
 ~[apache-cassandra-2.1.6.jar:2.1.6]
at 
org.apache.cassandra.db.marshal.UUIDType.fromString(UUIDType.java:184) 
~[apache-cassandra-2.1.6.jar:2.1.6]
... 12 common frames omitted
Caused by: java.text.ParseException: Unable to parse the date: currencyCode
at 
org.apache.commons.lang3.time.DateUtils.parseDateWithLeniency(DateUtils.java:336)
 ~[commons-lang3-3.1.jar:3.1]
at 
org.apache.commons.lang3.time.DateUtils.parseDateStrictly(DateUtils.java:286) 
~[commons-lang3-3.1.jar:3.1]
at 
org.apache.cassandra.serializers.TimestampSerializer.dateStringToTimestamp(TimestampSerializer.java:107)
 ~[apache-cassandra-2.1.6.jar:2.1.6]
... 13 common frames omitted
org.apache.cassandra.serializers.MarshalException: unable to make version 1 
UUID from 'currencyCode'
at 
org.apache.cassandra.db.marshal.UUIDType.fromString(UUIDType.java:188)
at 
org.apache.cassandra.db.marshal.AbstractCompositeType.fromString(AbstractCompositeType.java:242)
at 
org.apache.cassandra.config.ColumnDefi

[jira] [Updated] (CASSANDRA-9213) Compaction errors observed during heavy write load: BAD RELEASE

2015-06-11 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-9213:
-
Fix Version/s: (was: 2.1.x)

> Compaction errors observed during heavy write load: BAD RELEASE
> ---
>
> Key: CASSANDRA-9213
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9213
> Project: Cassandra
>  Issue Type: Bug
> Environment: Cassandra 2.1.4.374
> Ubuntu 14.04.2
> java version "1.7.0_45"
> 10-node cluster, RF = 3
>Reporter: Rocco Varela
>Assignee: Marcus Eriksson
> Attachments: COMPACTION-ERR.log
>
>
> During heavy write load testing we're seeing occasional compaction errors 
> with  the following error message:
> {code}
> ERROR [CompactionExecutor:40] 2015-04-16 17:01:16,936  Ref.java:170 - BAD 
> RELEASE: attempted to release a reference 
> (org.apache.cassandra.utils.concurrent.Ref$State@31d969bd) that has 
> already been released
> ...
> ERROR [CompactionExecutor:40] 2015-04-16 17:01:22,190  
> CassandraDaemon.java:223 - Exception in thread 
> Thread[CompactionExecutor:40,1,main]
> java.lang.AssertionError: null
>  at 
> org.apache.cassandra.io.sstable.SSTableReader.markObsolete(SSTableReader.java:1699)
>  ~[cassandra-all-2.1.4.374.jar:2.1.4.374]
>  at 
> org.apache.cassandra.db.DataTracker.unmarkCompacting(DataTracker.java:240) 
> ~[cassandra-all-2.1.4.374.jar:2.1.4.374]
>  at 
> org.apache.cassandra.io.sstable.SSTableRewriter.replaceWithFinishedReaders(SSTableRewriter.java:495)
>  ~[cassandra-all-2.1.4.374.jar:2.1.4.374]
>  at
> ...
> {code}
> I have turned on debugrefcount in bin/cassandra:launch_service() and I will 
> repost another stack trace when it happens again.
> {code}
> cassandra_parms="$cassandra_parms -Dcassandra.debugrefcount=true"
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-9581) pig-tests spend time waiting on /dev/random for SecureRandom

2015-06-11 Thread Ariel Weisberg (JIRA)
Ariel Weisberg created CASSANDRA-9581:
-

 Summary: pig-tests spend time waiting on /dev/random for 
SecureRandom
 Key: CASSANDRA-9581
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9581
 Project: Cassandra
  Issue Type: Test
Reporter: Ariel Weisberg


We don't need secure random numbers so waiting for entropy doesn't make much 
sense. Luckily Java makes it easy to point to /dev/urandom for entropy. It also 
transparently handles it correctly on Windows.
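
As an illustration only (not part of this ticket's patch), the workaround described above boils down to pointing the JVM's seed source at /dev/urandom. The sketch below sets the well-known {{java.security.egd}} property programmatically, assuming nothing has touched SecureRandom yet; the test runner would normally pass the equivalent {{-Djava.security.egd=file:/dev/./urandom}} flag to the forked JVM instead.

{code}
// Minimal sketch: seed SecureRandom from the non-blocking /dev/urandom so it
// never waits on /dev/random entropy. Only effective if set before the first
// SecureRandom use; the usual approach is the equivalent JVM flag instead.
import java.security.SecureRandom;

public final class NonBlockingEntropySketch
{
    public static void main(String[] args)
    {
        System.setProperty("java.security.egd", "file:/dev/./urandom");
        SecureRandom random = new SecureRandom();
        System.out.println(random.nextLong()); // returns promptly even on an entropy-starved box
    }
}
{code}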



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9580) Cardinality check broken during incremental compaction re-opening

2015-06-11 Thread T Jake Luciani (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani updated CASSANDRA-9580:
--
Summary: Cardinality check broken during incremental compaction re-opening  
(was: Carnality check broken during incremental compaction re-opening)

> Cardinality check broken during incremental compaction re-opening
> -
>
> Key: CASSANDRA-9580
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9580
> Project: Cassandra
>  Issue Type: Bug
>Reporter: T Jake Luciani
>Priority: Minor
> Fix For: 2.1.7
>
> Attachments: lcstest.yaml
>
>
> While testing LCS I found cfstats sometimes crashes during compaction 
> It looks to be related to the incremental re-opening not having metadata.
> {code}
> 
> Keyspace: stresscql
>   Read Count: 0
>   Read Latency: NaN ms.
>   Write Count: 6590571
>   Write Latency: 0.026910956273743198 ms.
>   Pending Flushes: 0
>   Table: ycsb
>   SSTable count: 69
>   SSTables in each level: [67/4, 1, 0, 0, 0, 0, 0, 0, 0]
>   Space used (live): 3454857914
>   Space used (total): 3454857914
>   Space used by snapshots (total): 0
>   Off heap memory used (total): 287361
>   SSTable Compression Ratio: 0.0
> error: 
> /home/jake/workspace/cassandra/./bin/../data/data/stresscql/ycsb-ff399910104911e5a797a18c989fb6f2/stresscql-ycsb-tmplink-ka-125-Data.db
> -- StackTrace --
> java.lang.AssertionError: 
> /home/jake/workspace/cassandra/./bin/../data/data/stresscql/ycsb-ff399910104911e5a797a18c989fb6f2/stresscql-ycsb-tmplink-ka-125-Data.db
>   at 
> org.apache.cassandra.io.sstable.SSTableReader.getApproximateKeyCount(SSTableReader.java:270)
>   at 
> org.apache.cassandra.metrics.ColumnFamilyMetrics$9.value(ColumnFamilyMetrics.java:296)
>   at 
> org.apache.cassandra.metrics.ColumnFamilyMetrics$9.value(ColumnFamilyMetrics.java:290)
>   at 
> com.yammer.metrics.reporting.JmxReporter$Gauge.getValue(JmxReporter.java:63)
>   at sun.reflect.GeneratedMethodAccessor10.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:483)
>   at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:71)
>   at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:483)
>   at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:275)
>   at 
> com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:112)
>   at 
> com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:46)
>   at 
> com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237)
>   at 
> com.sun.jmx.mbeanserver.PerInterface.getAttribute(PerInterface.java:83)
>   at 
> com.sun.jmx.mbeanserver.MBeanSupport.getAttribute(MBeanSupport.java:206)
>   at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:647)
>   at 
> com.sun.jmx.mbeanserver.JmxMBeanServer.getAttribute(JmxMBeanServer.java:678)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1443)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:76)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1307)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1399)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl.getAttribute(RMIConnectionImpl.java:637)
>   at sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:483)
>   at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:323)
>   at sun.rmi.transport.Transport$1.run(Transport.java:178)
>   at sun.rmi.transport.Transport$1.run(Transport.java:175)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at sun.rmi.transport.Transport.serviceCall(Transport.java:174)
>   at 
> sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:557)
>   at 
> sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:812)
>   at 
> sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:671)
>   at 
> java.util.concurrent.ThreadPoolExec

[jira] [Commented] (CASSANDRA-7032) Improve vnode allocation

2015-06-11 Thread Branimir Lambov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14582033#comment-14582033
 ] 

Branimir Lambov commented on CASSANDRA-7032:


[Uploaded|https://github.com/apache/cassandra/compare/trunk...blambov:7032-vnode-assignment]
 a new version that interprets the token ranges correctly. Also incorporates 
your changes and fixes a conversion error in the token size calculations that 
was reducing the precision to float and rarely resulting in slightly wrong 
results.

The results achieved are pretty close to what I was getting with the older 
interpretation.
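
As an aside, the kind of precision loss referred to here is easy to demonstrate. The numbers below are made up and this is not the calculation in the branch, just an illustration of how a stray float cast in otherwise all-double arithmetic truncates the result to about 7 significant digits.

{code}
// Illustration only: an accidental intermediate float silently reduces
// precision, which can occasionally flip close comparisons.
public final class FloatPrecisionSketch
{
    public static void main(String[] args)
    {
        long rangeSize = 123_456_789_123L;  // size of one token range (made up)
        double ringSize = Math.pow(2, 63);  // illustrative total ring size

        double exact = rangeSize / ringSize;            // full double precision
        double viaFloat = (float) rangeSize / ringSize; // float cast drops low-order bits

        System.out.println(exact);    // the two values differ in the low-order digits
        System.out.println(viaFloat);
    }
}
{code}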

> Improve vnode allocation
> 
>
> Key: CASSANDRA-7032
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7032
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Benedict
>Assignee: Branimir Lambov
>  Labels: performance, vnodes
> Fix For: 3.x
>
> Attachments: TestVNodeAllocation.java, TestVNodeAllocation.java, 
> TestVNodeAllocation.java, TestVNodeAllocation.java, TestVNodeAllocation.java, 
> TestVNodeAllocation.java
>
>
> It's been known for a little while that random vnode allocation causes 
> hotspots of ownership. It should be possible to improve dramatically on this 
> with deterministic allocation. I have quickly thrown together a simple greedy 
> algorithm that allocates vnodes efficiently, and will repair hotspots in a 
> randomly allocated cluster gradually as more nodes are added, and also 
> ensures that token ranges are fairly evenly spread between nodes (somewhat 
> tunably so). The allocation still permits slight discrepancies in ownership, 
> but it is bound by the inverse of the size of the cluster (as opposed to 
> random allocation, which strangely gets worse as the cluster size increases). 
> I'm sure there is a decent dynamic programming solution to this that would be 
> even better.
> If on joining the ring a new node were to CAS a shared table where a 
> canonical allocation of token ranges lives after running this (or a similar) 
> algorithm, we could then get guaranteed bounds on the ownership distribution 
> in a cluster. This will also help for CASSANDRA-6696.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-9580) Carnality check broken during incremental compaction re-opening

2015-06-11 Thread T Jake Luciani (JIRA)
T Jake Luciani created CASSANDRA-9580:
-

 Summary: Carnality check broken during incremental compaction 
re-opening
 Key: CASSANDRA-9580
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9580
 Project: Cassandra
  Issue Type: Bug
Reporter: T Jake Luciani
Priority: Minor
 Fix For: 2.1.7
 Attachments: lcstest.yaml

While testing LCS I found cfstats sometimes crashes during compaction 

It looks to be related to the incremental re-opening not having metadata.

{code}

Keyspace: stresscql
Read Count: 0
Read Latency: NaN ms.
Write Count: 6590571
Write Latency: 0.026910956273743198 ms.
Pending Flushes: 0
Table: ycsb
SSTable count: 69
SSTables in each level: [67/4, 1, 0, 0, 0, 0, 0, 0, 0]
Space used (live): 3454857914
Space used (total): 3454857914
Space used by snapshots (total): 0
Off heap memory used (total): 287361
SSTable Compression Ratio: 0.0
error: 
/home/jake/workspace/cassandra/./bin/../data/data/stresscql/ycsb-ff399910104911e5a797a18c989fb6f2/stresscql-ycsb-tmplink-ka-125-Data.db
-- StackTrace --
java.lang.AssertionError: 
/home/jake/workspace/cassandra/./bin/../data/data/stresscql/ycsb-ff399910104911e5a797a18c989fb6f2/stresscql-ycsb-tmplink-ka-125-Data.db
at 
org.apache.cassandra.io.sstable.SSTableReader.getApproximateKeyCount(SSTableReader.java:270)
at 
org.apache.cassandra.metrics.ColumnFamilyMetrics$9.value(ColumnFamilyMetrics.java:296)
at 
org.apache.cassandra.metrics.ColumnFamilyMetrics$9.value(ColumnFamilyMetrics.java:290)
at 
com.yammer.metrics.reporting.JmxReporter$Gauge.getValue(JmxReporter.java:63)
at sun.reflect.GeneratedMethodAccessor10.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:71)
at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:275)
at 
com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:112)
at 
com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:46)
at 
com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237)
at 
com.sun.jmx.mbeanserver.PerInterface.getAttribute(PerInterface.java:83)
at 
com.sun.jmx.mbeanserver.MBeanSupport.getAttribute(MBeanSupport.java:206)
at 
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:647)
at 
com.sun.jmx.mbeanserver.JmxMBeanServer.getAttribute(JmxMBeanServer.java:678)
at 
javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1443)
at 
javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:76)
at 
javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1307)
at 
javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1399)
at 
javax.management.remote.rmi.RMIConnectionImpl.getAttribute(RMIConnectionImpl.java:637)
at sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:323)
at sun.rmi.transport.Transport$1.run(Transport.java:178)
at sun.rmi.transport.Transport$1.run(Transport.java:175)
at java.security.AccessController.doPrivileged(Native Method)
at sun.rmi.transport.Transport.serviceCall(Transport.java:174)
at 
sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:557)
at 
sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:812)
at 
sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:671)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
{code}





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[3/3] cassandra git commit: Merge branch 'cassandra-2.2' into trunk

2015-06-11 Thread yukim
Merge branch 'cassandra-2.2' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/2c360e60
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/2c360e60
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/2c360e60

Branch: refs/heads/trunk
Commit: 2c360e60c44866299d4a89ac0f0b4c164b0c4eac
Parents: 1ae5e01 cab33a6
Author: Yuki Morishita 
Authored: Thu Jun 11 09:54:48 2015 -0500
Committer: Yuki Morishita 
Committed: Thu Jun 11 09:54:48 2015 -0500

--
 CHANGES.txt  |  1 +
 .../org/apache/cassandra/service/StorageService.java | 15 +++
 2 files changed, 16 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/2c360e60/CHANGES.txt
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/2c360e60/src/java/org/apache/cassandra/service/StorageService.java
--



[2/3] cassandra git commit: Fix deprecated repair JMX API

2015-06-11 Thread yukim
Fix deprecated repair JMX API

patch by yukim; reviewed by marcuse for CASSANDRA-9570


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/cab33a60
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/cab33a60
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/cab33a60

Branch: refs/heads/trunk
Commit: cab33a60957525854238c61f9360ba58d04d318b
Parents: 3ee27fb
Author: Yuki Morishita 
Authored: Tue Jun 9 16:20:54 2015 -0500
Committer: Yuki Morishita 
Committed: Thu Jun 11 09:52:58 2015 -0500

--
 CHANGES.txt  |  1 +
 .../org/apache/cassandra/service/StorageService.java | 15 +++
 2 files changed, 16 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/cab33a60/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 3163351..72da59f 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,6 +1,7 @@
 2.2
  * Add functions to convert timeuuid to date or time, deprecate dateOf and unixTimestampOf (CASSANDRA-9229)
  * Make sure we cancel non-compacting sstables from LifecycleTransaction (CASSANDRA-9566)
+ * Fix deprecated repair JMX API (CASSANDRA-9570)
 
 
 2.2.0-rc1

http://git-wip-us.apache.org/repos/asf/cassandra/blob/cab33a60/src/java/org/apache/cassandra/service/StorageService.java
--
diff --git a/src/java/org/apache/cassandra/service/StorageService.java 
b/src/java/org/apache/cassandra/service/StorageService.java
index e059348..2dd56b5 100644
--- a/src/java/org/apache/cassandra/service/StorageService.java
+++ b/src/java/org/apache/cassandra/service/StorageService.java
@@ -2807,6 +2807,21 @@ public class StorageService extends NotificationBroadcasterSupport implements IE
         {
             options.getHosts().addAll(hosts);
         }
+        if (primaryRange)
+        {
+            // when repairing only primary range, neither dataCenters nor hosts can be set
+            if (options.getDataCenters().isEmpty() && options.getHosts().isEmpty())
+                options.getRanges().addAll(getPrimaryRanges(keyspace));
+            // except dataCenters only contain local DC (i.e. -local)
+            else if (options.getDataCenters().size() == 1 && options.getDataCenters().contains(DatabaseDescriptor.getLocalDataCenter()))
+                options.getRanges().addAll(getPrimaryRangesWithinDC(keyspace));
+            else
+                throw new IllegalArgumentException("You need to run primary range repair on all nodes in the cluster.");
+        }
+        else
+        {
+            options.getRanges().addAll(getLocalRanges(keyspace));
+        }
         if (columnFamilies != null)
         {
             for (String columnFamily : columnFamilies)



[1/3] cassandra git commit: Fix deprecated repair JMX API

2015-06-11 Thread yukim
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.2 3ee27fb1a -> cab33a609
  refs/heads/trunk 1ae5e01c0 -> 2c360e60c


Fix deprecated repair JMX API

patch by yukim; reviewed by marcuse for CASSANDRA-9570


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/cab33a60
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/cab33a60
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/cab33a60

Branch: refs/heads/cassandra-2.2
Commit: cab33a60957525854238c61f9360ba58d04d318b
Parents: 3ee27fb
Author: Yuki Morishita 
Authored: Tue Jun 9 16:20:54 2015 -0500
Committer: Yuki Morishita 
Committed: Thu Jun 11 09:52:58 2015 -0500

--
 CHANGES.txt  |  1 +
 .../org/apache/cassandra/service/StorageService.java | 15 +++
 2 files changed, 16 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/cab33a60/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 3163351..72da59f 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,6 +1,7 @@
 2.2
  * Add functions to convert timeuuid to date or time, deprecate dateOf and unixTimestampOf (CASSANDRA-9229)
  * Make sure we cancel non-compacting sstables from LifecycleTransaction (CASSANDRA-9566)
+ * Fix deprecated repair JMX API (CASSANDRA-9570)
 
 
 2.2.0-rc1

http://git-wip-us.apache.org/repos/asf/cassandra/blob/cab33a60/src/java/org/apache/cassandra/service/StorageService.java
--
diff --git a/src/java/org/apache/cassandra/service/StorageService.java 
b/src/java/org/apache/cassandra/service/StorageService.java
index e059348..2dd56b5 100644
--- a/src/java/org/apache/cassandra/service/StorageService.java
+++ b/src/java/org/apache/cassandra/service/StorageService.java
@@ -2807,6 +2807,21 @@ public class StorageService extends NotificationBroadcasterSupport implements IE
         {
             options.getHosts().addAll(hosts);
         }
+        if (primaryRange)
+        {
+            // when repairing only primary range, neither dataCenters nor hosts can be set
+            if (options.getDataCenters().isEmpty() && options.getHosts().isEmpty())
+                options.getRanges().addAll(getPrimaryRanges(keyspace));
+            // except dataCenters only contain local DC (i.e. -local)
+            else if (options.getDataCenters().size() == 1 && options.getDataCenters().contains(DatabaseDescriptor.getLocalDataCenter()))
+                options.getRanges().addAll(getPrimaryRangesWithinDC(keyspace));
+            else
+                throw new IllegalArgumentException("You need to run primary range repair on all nodes in the cluster.");
+        }
+        else
+        {
+            options.getRanges().addAll(getLocalRanges(keyspace));
+        }
         if (columnFamilies != null)
         {
             for (String columnFamily : columnFamilies)



[jira] [Assigned] (CASSANDRA-9532) Provide access to select statement's real column definitions

2015-06-11 Thread Sam Tunnicliffe (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9532?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam Tunnicliffe reassigned CASSANDRA-9532:
--

Assignee: Sam Tunnicliffe  (was: mck)

> Provide access to select statement's real column definitions
> 
>
> Key: CASSANDRA-9532
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9532
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: mck
>Assignee: Sam Tunnicliffe
> Fix For: 3.x, 2.1.x, 2.0.x, 2.2.x
>
> Attachments: 9532-2.0-v2.txt, 9532-2.1-v2.txt, 9532-2.2-v2.txt, 
> 9532-trunk-v2.txt, cassandra-2.0-9532.txt, cassandra-2.1-9532.txt, 
> cassandra-2.2-9532.txt, trunk-9532.txt
>
>
> Currently there is no way to get access to the real ColumnDefinitions being 
> used in a SelectStatement.
> This information is there in
> {{selectStatement.selection.columns}} but is private.
> Giving public access would make it possible for third-party implementations 
> of a {{QueryHandler}} to work accurately with the real columns being queried 
> and not have to work around column aliases (or when the rawSelectors don't 
> map directly to ColumnDefinitions, eg in Selection.fromSelectors(..), like 
> functions), which is what one has to do today with going through 
> ResultSet.metadata.names.
> This issue provides a very minimal patch to provide access to the already 
> final and immutable fields.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9573) OOM when loading sstables (system.hints)

2015-06-11 Thread Sam Tunnicliffe (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14581988#comment-14581988
 ] 

Sam Tunnicliffe commented on CASSANDRA-9573:


I don't see a problem with moving the {{tryMlockall}} to before the startup 
checks run, just as you did [~benedict]. If JNA is not available for whatever 
reason, we'll log a message about it and move on to the checks. At which point, 
if the {{-Dcassandra.boot_without_jna}} flag is not set, we'll halt. 

There is a problem if that flag is set, though: we'll proceed and then hit the 
errors [~aboudreault] noted above, since it seems that {{Memory}} & 
{{MemoryUtil}} have no option but to require JNA for {{Native.malloc}}, which 
appears to have been introduced by CASSANDRA-8714.
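
To make that ordering concrete, here is a rough sketch. The method bodies are stand-ins, not Cassandra's implementation; only the tryMlockall-before-checks ordering and the {{-Dcassandra.boot_without_jna}} flag come from the discussion above.

{code}
// Rough sketch of the proposed startup ordering. Bodies are stand-ins.
public final class StartupOrderSketch
{
    public static void main(String[] args)
    {
        // 1. attempt mlockall first; just log and continue if JNA is unavailable
        tryMlockall();
        // 2. then run the startup checks, which halt when JNA is missing
        //    unless -Dcassandra.boot_without_jna=true is set
        runStartupChecks();
    }

    static void tryMlockall()
    {
        // Stand-in: the real call goes through JNA and merely logs on failure.
        System.out.println("mlockall attempted (no-op in this sketch)");
    }

    static void runStartupChecks()
    {
        boolean jnaAvailable = false; // stand-in; the real check probes the JNA classes
        boolean bootWithoutJna = Boolean.getBoolean("cassandra.boot_without_jna");
        if (!jnaAvailable && !bootWithoutJna)
            throw new IllegalStateException("JNA is required to start; aborting");
        // With the flag set, startup proceeds, but anything that later needs
        // Native.malloc (Memory/MemoryUtil) will still fail, which is the
        // remaining problem noted above.
    }
}
{code}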

> OOM when loading sstables (system.hints)
> 
>
> Key: CASSANDRA-9573
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9573
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Alan Boudreault
>Assignee: Sam Tunnicliffe
>Priority: Critical
> Fix For: 2.2.0 rc2
>
> Attachments: hs_err_pid11243.log, 
> java-hints-issue-2015-06-09.snapshot, system.log, yourkit.ss.tar.gz
>
>
> [~andrew.tolbert] discovered an issue while running endurance tests on 2.2. A 
> node was not able to start and was killed by the OOM killer.
> Briefly, Cassandra uses an excessive amount of memory when loading compressed 
> sstables (off-heap?). We initially saw the issue with system.hints 
> before knowing it was related to compression. system.hints uses lz4 
> compression by default. If we have an sstable of, say, 8-10G, Cassandra will be 
> killed by the OOM killer after 1-2 minutes. I can reproduce that bug 
> every time locally. 
> * the issue also happens if we have 10G of data split into 13MB sstables.
> * I can reproduce the issue if I put a lot of data in the system.hints table.
> * I cannot reproduce the issue with a standard table using the same 
> compression (LZ4). Something seems to be different when it's hints?
> You won't see anything in the node system.log but you'll see this in 
> /var/log/syslog.log:
> {code}
> Out of memory: Kill process 30777 (java) score 600 or sacrifice child
> {code}
> The issue has been introduced in this commit but is not related to the 
> performance issue in CASSANDRA-9240: 
> https://github.com/apache/cassandra/commit/aedce5fc6ba46ca734e91190cfaaeb23ba47a846
> Here is the core dump and some yourkit snapshots in attachments. I am not 
> sure you will be able to get useful information from them.
> core dump: http://dl.alanb.ca/core.tar.gz
> Not sure if this is related, but all dumps and snapshots point to 
> EstimatedHistogramReservoir ... and we can see many 
> javax.management.InstanceAlreadyExistsException: 
> org.apache.cassandra.metrics:... exceptions in system.log before it hangs 
> and then crashes.
> To reproduce the issue: 
> 1. create a cluster of 3 nodes
> 2. start the whole cluster
> 3. shut down node2 and node3
> 4. write 10-15G of data on node1 with replication factor 3. You should see a 
> lot of hints.
> 5. stop node1
> 6. start node2 and node3
> 7. start node1, you should OOM.
> //cc [~tjake] [~benedict] [~andrew.tolbert]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9573) OOM when loading sstables (system.hints)

2015-06-11 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9573?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-9573:
-
Assignee: Sam Tunnicliffe  (was: Benedict)

> OOM when loading sstables (system.hints)
> 
>
> Key: CASSANDRA-9573
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9573
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Alan Boudreault
>Assignee: Sam Tunnicliffe
>Priority: Critical
> Fix For: 2.2.0 rc2
>
> Attachments: hs_err_pid11243.log, 
> java-hints-issue-2015-06-09.snapshot, system.log, yourkit.ss.tar.gz
>
>
> [~andrew.tolbert] discovered an issue while running endurance tests on 2.2. A 
> node was not able to start and was killed by the OOM killer.
> Briefly, Cassandra uses an excessive amount of memory when loading compressed 
> sstables (off-heap?). We initially saw the issue with system.hints 
> before knowing it was related to compression. system.hints uses lz4 
> compression by default. If we have an sstable of, say, 8-10G, Cassandra will be 
> killed by the OOM killer after 1-2 minutes. I can reproduce that bug 
> every time locally. 
> * the issue also happens if we have 10G of data split into 13MB sstables.
> * I can reproduce the issue if I put a lot of data in the system.hints table.
> * I cannot reproduce the issue with a standard table using the same 
> compression (LZ4). Something seems to be different when it's hints?
> You won't see anything in the node system.log but you'll see this in 
> /var/log/syslog.log:
> {code}
> Out of memory: Kill process 30777 (java) score 600 or sacrifice child
> {code}
> The issue has been introduced in this commit but is not related to the 
> performance issue in CASSANDRA-9240: 
> https://github.com/apache/cassandra/commit/aedce5fc6ba46ca734e91190cfaaeb23ba47a846
> Here is the core dump and some yourkit snapshots in attachments. I am not 
> sure you will be able to get useful information from them.
> core dump: http://dl.alanb.ca/core.tar.gz
> Not sure if this is related, but all dumps and snapshots point to 
> EstimatedHistogramReservoir ... and we can see many 
> javax.management.InstanceAlreadyExistsException: 
> org.apache.cassandra.metrics:... exceptions in system.log before it hangs 
> and then crashes.
> To reproduce the issue: 
> 1. create a cluster of 3 nodes
> 2. start the whole cluster
> 3. shut down node2 and node3
> 4. write 10-15G of data on node1 with replication factor 3. You should see a 
> lot of hints.
> 5. stop node1
> 6. start node2 and node3
> 7. start node1, you should OOM.
> //cc [~tjake] [~benedict] [~andrew.tolbert]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9573) OOM when loading sstables (system.hints)

2015-06-11 Thread Alan Boudreault (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9573?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Boudreault updated CASSANDRA-9573:
---
Summary: OOM when loading sstables (system.hints)  (was: OOM when loading 
compressed sstables (system.hints))

> OOM when loading sstables (system.hints)
> 
>
> Key: CASSANDRA-9573
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9573
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Alan Boudreault
>Assignee: Benedict
>Priority: Critical
> Fix For: 2.2.0 rc2
>
> Attachments: hs_err_pid11243.log, 
> java-hints-issue-2015-06-09.snapshot, system.log, yourkit.ss.tar.gz
>
>
> [~andrew.tolbert] discovered an issue while running endurance tests on 2.2. A 
> node was not able to start and was killed by the OOM killer.
> Briefly, Cassandra uses an excessive amount of memory when loading compressed 
> sstables (off-heap?). We initially saw the issue with system.hints 
> before knowing it was related to compression. system.hints uses lz4 
> compression by default. If we have an sstable of, say, 8-10G, Cassandra will be 
> killed by the OOM killer after 1-2 minutes. I can reproduce that bug 
> every time locally. 
> * the issue also happens if we have 10G of data split into 13MB sstables.
> * I can reproduce the issue if I put a lot of data in the system.hints table.
> * I cannot reproduce the issue with a standard table using the same 
> compression (LZ4). Something seems to be different when it's hints?
> You won't see anything in the node system.log but you'll see this in 
> /var/log/syslog.log:
> {code}
> Out of memory: Kill process 30777 (java) score 600 or sacrifice child
> {code}
> The issue has been introduced in this commit but is not related to the 
> performance issue in CASSANDRA-9240: 
> https://github.com/apache/cassandra/commit/aedce5fc6ba46ca734e91190cfaaeb23ba47a846
> Here is the core dump and some yourkit snapshots in attachments. I am not 
> sure you will be able to get useful information from them.
> core dump: http://dl.alanb.ca/core.tar.gz
> Not sure if this is related, but all dumps and snapshots point to 
> EstimatedHistogramReservoir ... and we can see many 
> javax.management.InstanceAlreadyExistsException: 
> org.apache.cassandra.metrics:... exceptions in system.log before it hangs 
> and then crashes.
> To reproduce the issue: 
> 1. create a cluster of 3 nodes
> 2. start the whole cluster
> 3. shut down node2 and node3
> 4. write 10-15G of data on node1 with replication factor 3. You should see a 
> lot of hints.
> 5. stop node1
> 6. start node2 and node3
> 7. start node1, you should OOM.
> //cc [~tjake] [~benedict] [~andrew.tolbert]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9573) OOM when loading compressed sstables (system.hints)

2015-06-11 Thread Alan Boudreault (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14581929#comment-14581929
 ] 

Alan Boudreault commented on CASSANDRA-9573:


[~benedict] Well done! That was it. I confirm my cassandra node starts in a few 
seconds as normal with your patch. 

> OOM when loading compressed sstables (system.hints)
> ---
>
> Key: CASSANDRA-9573
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9573
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Alan Boudreault
>Assignee: Benedict
>Priority: Critical
> Fix For: 2.2.0 rc2
>
> Attachments: hs_err_pid11243.log, 
> java-hints-issue-2015-06-09.snapshot, system.log, yourkit.ss.tar.gz
>
>
> [~andrew.tolbert] discovered an issue while running endurance tests on 2.2. A 
> node was not able to start and was killed by the OOM killer.
> Briefly, Cassandra uses an excessive amount of memory when loading compressed 
> sstables (off-heap?). We initially saw the issue with system.hints 
> before knowing it was related to compression. system.hints uses lz4 
> compression by default. If we have an sstable of, say, 8-10G, Cassandra will be 
> killed by the OOM killer after 1-2 minutes. I can reproduce that bug 
> every time locally. 
> * the issue also happens if we have 10G of data split into 13MB sstables.
> * I can reproduce the issue if I put a lot of data in the system.hints table.
> * I cannot reproduce the issue with a standard table using the same 
> compression (LZ4). Something seems to be different when it's hints?
> You won't see anything in the node system.log but you'll see this in 
> /var/log/syslog.log:
> {code}
> Out of memory: Kill process 30777 (java) score 600 or sacrifice child
> {code}
> The issue has been introduced in this commit but is not related to the 
> performance issue in CASSANDRA-9240: 
> https://github.com/apache/cassandra/commit/aedce5fc6ba46ca734e91190cfaaeb23ba47a846
> Here are the core dump and some YourKit snapshots in the attachments. I am not 
> sure you will be able to get useful information from them.
> core dump: http://dl.alanb.ca/core.tar.gz
> Not sure if this is related, but all dumps and snapshots point to 
> EstimatedHistogramReservoir ... and we can see many 
> javax.management.InstanceAlreadyExistsException: 
> org.apache.cassandra.metrics:... exceptions in system.log before it hangs 
> and then crashes.
> To reproduce the issue: 
> 1. create a cluster of 3 nodes
> 2. start the whole cluster
> 3. shut down node2 and node3
> 4. write 10-15G of data on node1 with replication factor 3. You should see a 
> lot of hints.
> 5. stop node1
> 6. start node2 and node3
> 7. start node1; it should OOM.
> //cc [~tjake] [~benedict] [~andrew.tolbert]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7556) Update cqlsh for UDFs

2015-06-11 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14581891#comment-14581891
 ] 

Aleksey Yeschenko commented on CASSANDRA-7556:
--

Should fix the NPE though, and throw a better exception instead.
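
For illustration only, a minimal self-contained sketch of the kind of guard that comment suggests (the class, method, and message below are assumptions, not the committed change):

{code}
// Hedged sketch: fail fast with a descriptive error when a function is
// referenced without its argument types, instead of surfacing as an NPE later.
import java.util.List;

final class FunctionNameCheck
{
    static void validateArgTypes(List<String> argTypes)
    {
        if (argTypes == null)
            throw new IllegalArgumentException(
                "Functions must be referenced with their argument types, e.g. ks.fn(int, text)");
    }

    public static void main(String[] args)
    {
        validateArgTypes(null);   // throws IllegalArgumentException with a clear message
    }
}
{code}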

> Update cqlsh for UDFs
> -
>
> Key: CASSANDRA-7556
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7556
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Tyler Hobbs
>Assignee: Robert Stupp
>  Labels: udf
> Fix For: 2.2.x
>
> Attachments: 7556-2.txt, 7556.txt
>
>
> Once CASSANDRA-7395 and CASSANDRA-7526 are complete, we'll want to add cqlsh 
> support for user defined functions.
> This will include:
> * Completion for {{CREATE FUNCTION}} and {{DROP FUNCTION}}
> * Tolerating (almost) arbitrary text inside function bodies
> * {{DESCRIBE TYPE}} support
> * Including types in {{DESCRIBE KEYSPACE}} output
> * Possibly {{GRANT}} completion for any new privileges



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7556) Update cqlsh for UDFs

2015-06-11 Thread Robert Stupp (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14581854#comment-14581854
 ] 

Robert Stupp commented on CASSANDRA-7556:
-

Thanks, [~beobal] - it wasn't completely obvious (from the code in Cql.g + 
cqlsh) whether no-argtypes are allowed or not. Will change that to enforce 
argtypes. (Without argtypes C* just produced an NPE - so I wasn't sure whether 
that's a bug or just unsupported.)

> Update cqlsh for UDFs
> -
>
> Key: CASSANDRA-7556
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7556
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Tyler Hobbs
>Assignee: Robert Stupp
>  Labels: udf
> Fix For: 2.2.x
>
> Attachments: 7556-2.txt, 7556.txt
>
>
> Once CASSANDRA-7395 and CASSANDRA-7526 are complete, we'll want to add cqlsh 
> support for user defined functions.
> This will include:
> * Completion for {{CREATE FUNCTION}} and {{DROP FUNCTION}}
> * Tolerating (almost) arbitrary text inside function bodies
> * {{DESCRIBE TYPE}} support
> * Including types in {{DESCRIBE KEYSPACE}} output
> * Possibly {{GRANT}} completion for any new privileges



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9570) Deprecated forceRepairAsync methods in StorageService do not work

2015-06-11 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14581839#comment-14581839
 ] 

Marcus Eriksson commented on CASSANDRA-9570:


+1

> Deprecated forceRepairAsync methods in StorageService do not work
> -
>
> Key: CASSANDRA-9570
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9570
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Mike Adamson
>Assignee: Yuki Morishita
> Fix For: 2.2.0 rc2
>
>
> The deprecated forceRepairAsync methods in StorageService don't work because 
> they are creating RepairOption as follows:
> {noformat}
> RepairOption options = new RepairOption(parallelism, primaryRange, 
> !fullRepair, false, 1, Collections.<Range<Token>>emptyList());
> {noformat}
> This creates a RepairOption with an empty token range. The methods call down 
> to:
> {noformat}
> public int forceRepairAsync(String keyspace, RepairOption options)
> {
> if (options.getRanges().isEmpty() || 
> Keyspace.open(keyspace).getReplicationStrategy().getReplicationFactor() < 2)
> return 0;
> int cmd = nextRepairCommand.incrementAndGet();
> new Thread(createRepairTask(cmd, keyspace, options)).start();
> return cmd;
> }
> {noformat}
> to run the repair and this returns 0 because option ranges are empty.
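
As a purely illustrative, self-contained sketch of the failure mode and the shape of a possible fix (all types and helpers below are stand-ins, not Cassandra's API or the committed patch):

{code}
// Hedged sketch with stand-in types: an empty range collection makes
// forceRepairAsync return 0 immediately; deriving the node's ranges first is
// one possible shape of a fix.
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

final class DeprecatedRepairSketch
{
    static int forceRepairAsync(List<String> ranges)
    {
        if (ranges.isEmpty())
            return 0;          // the branch the deprecated overloads always hit
        return 1;              // stand-in for a real repair command id
    }

    // Stand-in for deriving the node's own (primary or local) token ranges.
    static List<String> localRanges()
    {
        return Arrays.asList("(0,100]", "(100,200]");
    }

    public static void main(String[] args)
    {
        System.out.println(forceRepairAsync(Collections.<String>emptyList())); // 0: repair silently skipped
        System.out.println(forceRepairAsync(localRanges()));                   // 1: repair would actually run
    }
}
{code}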



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9291) Too many tombstones in schema_columns from creating too many CFs

2015-06-11 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14581829#comment-14581829
 ] 

Aleksey Yeschenko commented on CASSANDRA-9291:
--

It's most definitely not a minor issue, no. It's just very non-trivial to fix 
in the 2.x line (I don't know a way).

> Too many tombstones in schema_columns from creating too many CFs
> 
>
> Key: CASSANDRA-9291
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9291
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: Production Cluster with 2 DCs of 3 nodes each and 1 DC 
> of 7 nodes, running on dedicated Xeon hexacore, 96GB ram, RAID for Data and 
> SSF for commitlog, running Debian 7 (with Java 1.7.0_76-b13 64-Bit, 8GB and 
> 16GB of heap tested).
> Dev Cluster with 1 DC with 3 nodes and 1 DC with 1 node, running on 
> virtualized env., Ubuntu 12.04.5 (with Java 1.7.0_72-b14 64-Bit 1GB, 4GB 
> heap) 
>Reporter: Luis Correia
>Priority: Blocker
> Attachments: after_schema.txt, before_schema.txt, schemas500.cql
>
>
> When creating lots of columnfamilies (about 200), the system.schema_columns 
> table gets filled with tombstones, which prevents clients using the binary 
> protocol from connecting.
> Clients already connected continue normal operation (reading and inserting).
> Log messages are:
> For the first tries (sorry for the lack of precision):
> bq. ERROR [main] 2015-04-22 00:01:38,527 SliceQueryFilter.java (line 200) 
> Scanned over 10 tombstones in system.schema_columns; query aborted (see 
> tombstone_failure_threshold)
> For each client that tries to connect but fails with timeout:
>  bq. WARN [ReadStage:35] 2015-04-27 15:40:10,600 SliceQueryFilter.java (line 
> 231) Read 395 live and 1217 tombstoned cells in system.schema_columns (see 
> tombstone_warn_threshold). 2147283441 columns was requested, slices=[-]
> bq. WARN [ReadStage:40] 2015-04-27 15:40:10,609 SliceQueryFilter.java (line 
> 231) Read 395 live and 1217 tombstoned cells in system.schema_columns (see 
> tombstone_warn_threshold). 2147283441 columns was requested, slices=[-]
> bq. WARN [ReadStage:61] 2015-04-27 15:40:10,670 SliceQueryFilter.java (line 
> 231) Read 395 live and 1217 tombstoned cells in system.schema_columns (see 
> tombstone_warn_threshold). 2147283441 columns was requested, slices=[-]
> bq. WARN [ReadStage:51] 2015-04-27 15:40:10,670 SliceQueryFilter.java (line 
> 231) Read 395 live and 1217 tombstoned cells in system.schema_columns (see 
> tombstone_warn_threshold). 2147283441 columns was requested, slices=[-]
> bq. WARN [ReadStage:55] 2015-04-27 15:40:10,675 SliceQueryFilter.java (line 
> 231) Read 395 live and 1217 tombstoned cells in system.schema_columns (see 
> tombstone_warn_threshold). 2147283441 columns was requested, slices=[-]
> bq. WARN [ReadStage:35] 2015-04-27 15:40:10,707 SliceQueryFilter.java (line 
> 231) Read 1146 live and 3534 tombstoned cells in system.schema_columns (see 
> tombstone_warn_threshold). 2147282894 columns was requested, slices=[-]
> bq. WARN [ReadStage:40] 2015-04-27 15:40:10,708 SliceQueryFilter.java (line 
> 231) Read 1146 live and 3534 tombstoned cells in system.schema_columns (see 
> tombstone_warn_threshold). 2147282894 columns was requested, slices=[-]
> bq. WARN [ReadStage:43] 2015-04-27 15:40:10,715 SliceQueryFilter.java (line 
> 231) Read 395 live and 1217 tombstoned cells in system.schema_columns (see 
> tombstone_warn_threshold). 2147283441 columns was requested, slices=[-]
> bq. WARN [ReadStage:51] 2015-04-27 15:40:10,736 SliceQueryFilter.java (line 
> 231) Read 1146 live and 3534 tombstoned cells in system.schema_columns (see 
> tombstone_warn_threshold). 2147282894 columns was requested, slices=[-]
> bq. WARN [ReadStage:61] 2015-04-27 15:40:10,736 SliceQueryFilter.java (line 
> 231) Read 1146 live and 3534 tombstoned cells in system.schema_columns (see 
> tombstone_warn_threshold). 2147282894 columns was requested, slices=[-]
> bq. WARN [ReadStage:35] 2015-04-27 15:40:10,750 SliceQueryFilter.java (line 
> 231) Read 864 live and 2664 tombstoned cells in system.schema_columns (see 
> tombstone_warn_threshold). 2147281748 columns was requested, slices=[-]
> bq. WARN [ReadStage:40] 2015-04-27 15:40:10,751 SliceQueryFilter.java (line 
> 231) Read 864 live and 2664 tombstoned cells in system.schema_columns (see 
> tombstone_warn_threshold). 2147281748 columns was requested, slices=[-]
> bq. WARN [ReadStage:55] 2015-04-27 15:40:10,759 SliceQueryFilter.java (line 
> 231) Read 1146 live and 3534 tombstoned cells in system.schema_columns (see 
> tombstone_warn_threshold). 2147282894 columns was requested, slices=[-]
> bq. WARN [ReadStage:51] 2015-04-27 15:40:10,821 SliceQueryFilter.java (line 
> 231) Read 864 live and 266

[jira] [Commented] (CASSANDRA-9572) DateTieredCompactionStrategy fails to combine SSTables correctly when TTL is used.

2015-06-11 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9572?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14581760#comment-14581760
 ] 

Björn Hegerfors commented on CASSANDRA-9572:


Looks like the right solution to this (except refactoring to avoid calling 
getFullyExpiredSSTables twice). But why is the sort still there (line 115/120)? 
It's redundant since CASSANDRA-8243, and was removed in 2.1+.

> DateTieredCompactionStrategy fails to combine SSTables correctly when TTL is 
> used.
> --
>
> Key: CASSANDRA-9572
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9572
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Antti Nissinen
>Assignee: Marcus Eriksson
>  Labels: dtcs
> Fix For: 3.x, 2.1.x, 2.0.x, 2.2.x
>
> Attachments: cassandra_sstable_metadata_reader.py, 
> cassandra_sstable_timespan_graph.py, compaction_stage_test01_jira.log, 
> compaction_stage_test02_jira.log, datagen.py, explanation_jira.txt, 
> first_results_after_patch.txt, motivation_jira.txt, src_2.1.5_with_debug.zip
>
>
> DateTieredCompaction works correctly when data is dumped for a certain time 
> period into short, time-ordered SSTables and then compacted together. However, 
> if TTL is applied to the data columns, DTCS fails to compact the files 
> correctly in a timely manner. In our opinion the problem is caused by two 
> issues:
> A) During the DateTieredCompaction process, getFullyExpiredSSTables is 
> called twice: first from the DateTieredCompactionStrategy class and a second 
> time from the CompactionTask class. The first call is meant to find the fully 
> expired SSTables that do not overlap with any non-fully expired SSTables, 
> and that works correctly. When getFullyExpiredSSTables is called a second 
> time from the CompactionTask class, the selection of fully expired SSTables 
> differs from the first selection.
> B) The minimum timestamp of the new SSTables created by combining 
> fully expired SSTables with files from the most interesting bucket is not 
> correct.
> These two issues together cause problems for the DTCS process when it 
> combines SSTables that overlap in time and have a TTL on the columns. 
> This is demonstrated by generating test data first without compactions and 
> showing how the files are distributed over time. When compaction is enabled, 
> DTCS combines the files, but the end result is not what would be 
> expected. This is demonstrated in the file motivation_jira.txt.
> Attachments contain the following material:
> - Motivation_jira.txt: Practical examples of how DTCS behaves with TTL
> - Explanation_jira.txt: gives more details, explains the test cases and 
> demonstrates the problems in the compaction process
> - Log file for the compactions in the first test case 
> (compaction_stage_test01_jira.log)
> - Log file for the compactions in the second test case 
> (compaction_stage_test02_jira.log)
> - source code zip file for version 2.1.5 with additional comment statements 
> (src_2.1.5_with_debug.zip)
> - Python script to generate test data (datagen.py)
> - Python script to read metadata from SStables 
> (cassandra_sstable_metadata_reader.py)
> - Python script to generate timeline representation of SSTables 
> (cassandra_sstable_timespan_graph.py)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-9572) DateTieredCompactionStrategy fails to combine SSTables correctly when TTL is used.

2015-06-11 Thread Antti Nissinen (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9572?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14581706#comment-14581706
 ] 

Antti Nissinen edited comment on CASSANDRA-9572 at 6/11/15 9:22 AM:


Thank you for the patch [~krummas]! First results look promising!
I added a new file to attachments showing the time lines with and without TTL 
(first_results_after_patch.txt)



was (Author: anissinen):
Thank you for the patch [~krummas]! First results look promising!
I added a new file to attachment showing the time lines with and without TTL 
(first_results_after_patch.txt)


> DateTieredCompactionStrategy fails to combine SSTables correctly when TTL is 
> used.
> --
>
> Key: CASSANDRA-9572
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9572
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Antti Nissinen
>Assignee: Marcus Eriksson
>  Labels: dtcs
> Fix For: 3.x, 2.1.x, 2.0.x, 2.2.x
>
> Attachments: cassandra_sstable_metadata_reader.py, 
> cassandra_sstable_timespan_graph.py, compaction_stage_test01_jira.log, 
> compaction_stage_test02_jira.log, datagen.py, explanation_jira.txt, 
> first_results_after_patch.txt, motivation_jira.txt, src_2.1.5_with_debug.zip
>
>
> DateTieredCompaction works correctly when data is dumped for a certain time 
> period into short, time-ordered SSTables and then compacted together. However, 
> if TTL is applied to the data columns, DTCS fails to compact the files 
> correctly in a timely manner. In our opinion the problem is caused by two 
> issues:
> A) During the DateTieredCompaction process, getFullyExpiredSSTables is 
> called twice: first from the DateTieredCompactionStrategy class and a second 
> time from the CompactionTask class. The first call is meant to find the fully 
> expired SSTables that do not overlap with any non-fully expired SSTables, 
> and that works correctly. When getFullyExpiredSSTables is called a second 
> time from the CompactionTask class, the selection of fully expired SSTables 
> differs from the first selection.
> B) The minimum timestamp of the new SSTables created by combining 
> fully expired SSTables with files from the most interesting bucket is not 
> correct.
> These two issues together cause problems for the DTCS process when it 
> combines SSTables that overlap in time and have a TTL on the columns. 
> This is demonstrated by generating test data first without compactions and 
> showing how the files are distributed over time. When compaction is enabled, 
> DTCS combines the files, but the end result is not what would be 
> expected. This is demonstrated in the file motivation_jira.txt.
> Attachments contain the following material:
> - Motivation_jira.txt: Practical examples of how DTCS behaves with TTL
> - Explanation_jira.txt: gives more details, explains the test cases and 
> demonstrates the problems in the compaction process
> - Log file for the compactions in the first test case 
> (compaction_stage_test01_jira.log)
> - Log file for the compactions in the second test case 
> (compaction_stage_test02_jira.log)
> - source code zip file for version 2.1.5 with additional comment statements 
> (src_2.1.5_with_debug.zip)
> - Python script to generate test data (datagen.py)
> - Python script to read metadata from SStables 
> (cassandra_sstable_metadata_reader.py)
> - Python script to generate timeline representation of SSTables 
> (cassandra_sstable_timespan_graph.py)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[2/3] cassandra git commit: Fix CHANGES.txt

2015-06-11 Thread snazy
Fix CHANGES.txt


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3ee27fb1
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3ee27fb1
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3ee27fb1

Branch: refs/heads/trunk
Commit: 3ee27fb1a5a5d94f33533f316c5b65f6cb5aad27
Parents: 3842187
Author: Robert Stupp 
Authored: Thu Jun 11 11:19:54 2015 +0200
Committer: Robert Stupp 
Committed: Thu Jun 11 11:19:54 2015 +0200

--
 CHANGES.txt | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/3ee27fb1/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index e0447e9..3163351 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.2
+ * Add functions to convert timeuuid to date or time, deprecate dateOf and 
unixTimestampOf (CASSANDRA-9229)
  * Make sure we cancel non-compacting sstables from LifecycleTransaction 
(CASSANDRA-9566)
 
 
@@ -24,7 +25,6 @@
  * Revert CASSANDRA-7807 (tracing completion client notifications) 
(CASSANDRA-9429)
  * Add ability to stop compaction by ID (CASSANDRA-7207)
  * Let CassandraVersion handle SNAPSHOT version (CASSANDRA-9438)
- * Add functions to convert timeuuid to date or time, deprecate dateOf and 
unixTimestampOf (CASSANDRA-9229)
 Merged from 2.1:
  * (cqlsh) Fix using COPY through SOURCE or -f (CASSANDRA-9083)
  * Fix occasional lack of `system` keyspace in schema tables (CASSANDRA-8487)



[jira] [Comment Edited] (CASSANDRA-9572) DateTieredCompactionStrategy fails to combine SSTables correctly when TTL is used.

2015-06-11 Thread Antti Nissinen (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9572?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14581706#comment-14581706
 ] 

Antti Nissinen edited comment on CASSANDRA-9572 at 6/11/15 9:22 AM:


Thank you for the patch [~krummas]! First results look promising!
I added a new file to attachment showing the time lines with and without TTL 
(first_results_after_patch.txt)



was (Author: anissinen):
Thank you for the patch [~krummas]! First results look promising!
I added a new file to attachment showing the time lines with and without TTL


> DateTieredCompactionStrategy fails to combine SSTables correctly when TTL is 
> used.
> --
>
> Key: CASSANDRA-9572
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9572
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Antti Nissinen
>Assignee: Marcus Eriksson
>  Labels: dtcs
> Fix For: 3.x, 2.1.x, 2.0.x, 2.2.x
>
> Attachments: cassandra_sstable_metadata_reader.py, 
> cassandra_sstable_timespan_graph.py, compaction_stage_test01_jira.log, 
> compaction_stage_test02_jira.log, datagen.py, explanation_jira.txt, 
> first_results_after_patch.txt, motivation_jira.txt, src_2.1.5_with_debug.zip
>
>
> DateTieredCompaction works correctly when data is dumped for a certain time 
> period into short, time-ordered SSTables and then compacted together. However, 
> if TTL is applied to the data columns, DTCS fails to compact the files 
> correctly in a timely manner. In our opinion the problem is caused by two 
> issues:
> A) During the DateTieredCompaction process, getFullyExpiredSSTables is 
> called twice: first from the DateTieredCompactionStrategy class and a second 
> time from the CompactionTask class. The first call is meant to find the fully 
> expired SSTables that do not overlap with any non-fully expired SSTables, 
> and that works correctly. When getFullyExpiredSSTables is called a second 
> time from the CompactionTask class, the selection of fully expired SSTables 
> differs from the first selection.
> B) The minimum timestamp of the new SSTables created by combining 
> fully expired SSTables with files from the most interesting bucket is not 
> correct.
> These two issues together cause problems for the DTCS process when it 
> combines SSTables that overlap in time and have a TTL on the columns. 
> This is demonstrated by generating test data first without compactions and 
> showing how the files are distributed over time. When compaction is enabled, 
> DTCS combines the files, but the end result is not what would be 
> expected. This is demonstrated in the file motivation_jira.txt.
> Attachments contain the following material:
> - Motivation_jira.txt: Practical examples of how DTCS behaves with TTL
> - Explanation_jira.txt: gives more details, explains the test cases and 
> demonstrates the problems in the compaction process
> - Log file for the compactions in the first test case 
> (compaction_stage_test01_jira.log)
> - Log file for the compactions in the second test case 
> (compaction_stage_test02_jira.log)
> - source code zip file for version 2.1.5 with additional comment statements 
> (src_2.1.5_with_debug.zip)
> - Python script to generate test data (datagen.py)
> - Python script to read metadata from SStables 
> (cassandra_sstable_metadata_reader.py)
> - Python script to generate timeline representation of SSTables 
> (cassandra_sstable_timespan_graph.py)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[1/3] cassandra git commit: Fix CHANGES.txt

2015-06-11 Thread snazy
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.2 38421872c -> 3ee27fb1a
  refs/heads/trunk 4d3562f61 -> 1ae5e01c0


Fix CHANGES.txt


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3ee27fb1
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3ee27fb1
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3ee27fb1

Branch: refs/heads/cassandra-2.2
Commit: 3ee27fb1a5a5d94f33533f316c5b65f6cb5aad27
Parents: 3842187
Author: Robert Stupp 
Authored: Thu Jun 11 11:19:54 2015 +0200
Committer: Robert Stupp 
Committed: Thu Jun 11 11:19:54 2015 +0200

--
 CHANGES.txt | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/3ee27fb1/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index e0447e9..3163351 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.2
+ * Add functions to convert timeuuid to date or time, deprecate dateOf and 
unixTimestampOf (CASSANDRA-9229)
  * Make sure we cancel non-compacting sstables from LifecycleTransaction 
(CASSANDRA-9566)
 
 
@@ -24,7 +25,6 @@
  * Revert CASSANDRA-7807 (tracing completion client notifications) 
(CASSANDRA-9429)
  * Add ability to stop compaction by ID (CASSANDRA-7207)
  * Let CassandraVersion handle SNAPSHOT version (CASSANDRA-9438)
- * Add functions to convert timeuuid to date or time, deprecate dateOf and 
unixTimestampOf (CASSANDRA-9229)
 Merged from 2.1:
  * (cqlsh) Fix using COPY through SOURCE or -f (CASSANDRA-9083)
  * Fix occasional lack of `system` keyspace in schema tables (CASSANDRA-8487)



[3/3] cassandra git commit: Merge branch 'cassandra-2.2' into trunk

2015-06-11 Thread snazy
Merge branch 'cassandra-2.2' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/1ae5e01c
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/1ae5e01c
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/1ae5e01c

Branch: refs/heads/trunk
Commit: 1ae5e01c0daba013cceae1682c9ac19e4f9198f1
Parents: 4d3562f 3ee27fb
Author: Robert Stupp 
Authored: Thu Jun 11 11:21:00 2015 +0200
Committer: Robert Stupp 
Committed: Thu Jun 11 11:21:00 2015 +0200

--

--




[jira] [Commented] (CASSANDRA-9572) DateTieredCompactionStrategy fails to combine SSTables correctly when TTL is used.

2015-06-11 Thread Antti Nissinen (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9572?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14581706#comment-14581706
 ] 

Antti Nissinen commented on CASSANDRA-9572:
---

Thank you for the patch [~krummas]! First results look promising!
I added a new file to attachment showing the time lines with and without TTL


> DateTieredCompactionStrategy fails to combine SSTables correctly when TTL is 
> used.
> --
>
> Key: CASSANDRA-9572
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9572
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Antti Nissinen
>Assignee: Marcus Eriksson
>  Labels: dtcs
> Fix For: 3.x, 2.1.x, 2.0.x, 2.2.x
>
> Attachments: cassandra_sstable_metadata_reader.py, 
> cassandra_sstable_timespan_graph.py, compaction_stage_test01_jira.log, 
> compaction_stage_test02_jira.log, datagen.py, explanation_jira.txt, 
> first_results_after_patch.txt, motivation_jira.txt, src_2.1.5_with_debug.zip
>
>
> DateTieredCompaction works correctly when data is dumped for a certain time 
> period into short, time-ordered SSTables and then compacted together. However, 
> if TTL is applied to the data columns, DTCS fails to compact the files 
> correctly in a timely manner. In our opinion the problem is caused by two 
> issues:
> A) During the DateTieredCompaction process, getFullyExpiredSSTables is 
> called twice: first from the DateTieredCompactionStrategy class and a second 
> time from the CompactionTask class. The first call is meant to find the fully 
> expired SSTables that do not overlap with any non-fully expired SSTables, 
> and that works correctly. When getFullyExpiredSSTables is called a second 
> time from the CompactionTask class, the selection of fully expired SSTables 
> differs from the first selection.
> B) The minimum timestamp of the new SSTables created by combining 
> fully expired SSTables with files from the most interesting bucket is not 
> correct.
> These two issues together cause problems for the DTCS process when it 
> combines SSTables that overlap in time and have a TTL on the columns. 
> This is demonstrated by generating test data first without compactions and 
> showing how the files are distributed over time. When compaction is enabled, 
> DTCS combines the files, but the end result is not what would be 
> expected. This is demonstrated in the file motivation_jira.txt.
> Attachments contain the following material:
> - Motivation_jira.txt: Practical examples of how DTCS behaves with TTL
> - Explanation_jira.txt: gives more details, explains the test cases and 
> demonstrates the problems in the compaction process
> - Log file for the compactions in the first test case 
> (compaction_stage_test01_jira.log)
> - Log file for the compactions in the second test case 
> (compaction_stage_test02_jira.log)
> - source code zip file for version 2.1.5 with additional comment statements 
> (src_2.1.5_with_debug.zip)
> - Python script to generate test data (datagen.py)
> - Python script to read metadata from SStables 
> (cassandra_sstable_metadata_reader.py)
> - Python script to generate timeline representation of SSTables 
> (cassandra_sstable_timespan_graph.py)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9572) DateTieredCompactionStrategy fails to combine SSTables correctly when TTL is used.

2015-06-11 Thread Antti Nissinen (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9572?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Antti Nissinen updated CASSANDRA-9572:
--
Attachment: first_results_after_patch.txt

> DateTieredCompactionStrategy fails to combine SSTables correctly when TTL is 
> used.
> --
>
> Key: CASSANDRA-9572
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9572
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Antti Nissinen
>Assignee: Marcus Eriksson
>  Labels: dtcs
> Fix For: 3.x, 2.1.x, 2.0.x, 2.2.x
>
> Attachments: cassandra_sstable_metadata_reader.py, 
> cassandra_sstable_timespan_graph.py, compaction_stage_test01_jira.log, 
> compaction_stage_test02_jira.log, datagen.py, explanation_jira.txt, 
> first_results_after_patch.txt, motivation_jira.txt, src_2.1.5_with_debug.zip
>
>
> DateTieredCompaction works correctly when data is dumped for a certain time 
> period into short, time-ordered SSTables and then compacted together. However, 
> if TTL is applied to the data columns, DTCS fails to compact the files 
> correctly in a timely manner. In our opinion the problem is caused by two 
> issues:
> A) During the DateTieredCompaction process, getFullyExpiredSSTables is 
> called twice: first from the DateTieredCompactionStrategy class and a second 
> time from the CompactionTask class. The first call is meant to find the fully 
> expired SSTables that do not overlap with any non-fully expired SSTables, 
> and that works correctly. When getFullyExpiredSSTables is called a second 
> time from the CompactionTask class, the selection of fully expired SSTables 
> differs from the first selection.
> B) The minimum timestamp of the new SSTables created by combining 
> fully expired SSTables with files from the most interesting bucket is not 
> correct.
> These two issues together cause problems for the DTCS process when it 
> combines SSTables that overlap in time and have a TTL on the columns. 
> This is demonstrated by generating test data first without compactions and 
> showing how the files are distributed over time. When compaction is enabled, 
> DTCS combines the files, but the end result is not what would be 
> expected. This is demonstrated in the file motivation_jira.txt.
> Attachments contain the following material:
> - Motivation_jira.txt: Practical examples of how DTCS behaves with TTL
> - Explanation_jira.txt: gives more details, explains the test cases and 
> demonstrates the problems in the compaction process
> - Log file for the compactions in the first test case 
> (compaction_stage_test01_jira.log)
> - Log file for the compactions in the second test case 
> (compaction_stage_test02_jira.log)
> - source code zip file for version 2.1.5 with additional comment statements 
> (src_2.1.5_with_debug.zip)
> - Python script to generate test data (datagen.py)
> - Python script to read metadata from SStables 
> (cassandra_sstable_metadata_reader.py)
> - Python script to generate timeline representation of SSTables 
> (cassandra_sstable_timespan_graph.py)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[2/2] cassandra git commit: Merge branch 'cassandra-2.2' into trunk

2015-06-11 Thread snazy
Merge branch 'cassandra-2.2' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/4d3562f6
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/4d3562f6
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/4d3562f6

Branch: refs/heads/trunk
Commit: 4d3562f6103b83bd9e8b8f7a324a78aeb39be754
Parents: a91d1c9 ef8a9f8
Author: Robert Stupp 
Authored: Thu Jun 11 11:05:43 2015 +0200
Committer: Robert Stupp 
Committed: Thu Jun 11 11:05:43 2015 +0200

--
 CHANGES.txt | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/4d3562f6/CHANGES.txt
--
diff --cc CHANGES.txt
index ca5ae48,4d4293c..377527e
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,13 -1,5 +1,14 @@@
 +3.0:
 + * Make file buffer cache independent of paths being read (CASSANDRA-8897)
 + * Remove deprecated legacy Hadoop code (CASSANDRA-9353)
 + * Decommissioned nodes will not rejoin the cluster (CASSANDRA-8801)
 + * Change gossip stabilization to use endpoit size (CASSANDRA-9401)
 + * Change default garbage collector to G1 (CASSANDRA-7486)
 + * Populate TokenMetadata early during startup (CASSANDRA-9317)
 +
 +
  2.2
+  * Add functions to convert timeuuid to date or time, deprecate dateOf and 
unixTimestampOf (CASSANDRA-9229)
   * Make sure we cancel non-compacting sstables from LifecycleTransaction 
(CASSANDRA-9566)
  
  
@@@ -33,8 -25,8 +34,7 @@@
   * Revert CASSANDRA-7807 (tracing completion client notifications) 
(CASSANDRA-9429)
   * Add ability to stop compaction by ID (CASSANDRA-7207)
   * Let CassandraVersion handle SNAPSHOT version (CASSANDRA-9438)
-  * Add functions to convert timeuuid to date or time, deprecate dateOf and 
unixTimestampOf (CASSANDRA-9229)
  Merged from 2.1:
 - * Make nodetool exit with non-0 status on failure (CASSANDRA-9569)
   * (cqlsh) Fix using COPY through SOURCE or -f (CASSANDRA-9083)
   * Fix occasional lack of `system` keyspace in schema tables (CASSANDRA-8487)
   * Use ProtocolError code instead of ServerError code for native protocol



[1/2] cassandra git commit: Fix CHANGES.txt

2015-06-11 Thread snazy
Repository: cassandra
Updated Branches:
  refs/heads/trunk a91d1c965 -> 4d3562f61


Fix CHANGES.txt


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ef8a9f88
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ef8a9f88
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ef8a9f88

Branch: refs/heads/trunk
Commit: ef8a9f88c5ac28dfc14942c51f18a41338f8530f
Parents: c08aaab
Author: Robert Stupp 
Authored: Thu Jun 11 11:04:48 2015 +0200
Committer: Robert Stupp 
Committed: Thu Jun 11 11:04:48 2015 +0200

--
 CHANGES.txt | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ef8a9f88/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 0a03e60..4d4293c 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.2
+ * Add functions to convert timeuuid to date or time, deprecate dateOf and 
unixTimestampOf (CASSANDRA-9229)
  * Make sure we cancel non-compacting sstables from LifecycleTransaction 
(CASSANDRA-9566)
 
 
@@ -24,7 +25,6 @@
  * Revert CASSANDRA-7807 (tracing completion client notifications) 
(CASSANDRA-9429)
  * Add ability to stop compaction by ID (CASSANDRA-7207)
  * Let CassandraVersion handle SNAPSHOT version (CASSANDRA-9438)
- * Add functions to convert timeuuid to date or time, deprecate dateOf and 
unixTimestampOf (CASSANDRA-9229)
 Merged from 2.1:
  * Make nodetool exit with non-0 status on failure (CASSANDRA-9569)
  * (cqlsh) Fix using COPY through SOURCE or -f (CASSANDRA-9083)



[3/3] cassandra git commit: Merge branch 'cassandra-2.2' into trunk

2015-06-11 Thread marcuse
Merge branch 'cassandra-2.2' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a91d1c96
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a91d1c96
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a91d1c96

Branch: refs/heads/trunk
Commit: a91d1c9652e9ad97c173b12b51143cc4bd98b7f7
Parents: 713b7db 3842187
Author: Marcus Eriksson 
Authored: Thu Jun 11 10:57:46 2015 +0200
Committer: Marcus Eriksson 
Committed: Thu Jun 11 10:57:46 2015 +0200

--
 CHANGES.txt | 1 -
 1 file changed, 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/a91d1c96/CHANGES.txt
--



[1/2] cassandra git commit: fix CHANGES.txt

2015-06-11 Thread marcuse
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.2 c08aaabd9 -> 38421872c


fix CHANGES.txt


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/16665ee1
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/16665ee1
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/16665ee1

Branch: refs/heads/cassandra-2.2
Commit: 16665ee1936ed19a054393a03df1154afa8f671e
Parents: e7d02e3
Author: Marcus Eriksson 
Authored: Thu Jun 11 10:56:01 2015 +0200
Committer: Marcus Eriksson 
Committed: Thu Jun 11 10:56:01 2015 +0200

--
 CHANGES.txt | 1 -
 1 file changed, 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/16665ee1/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 5c31509..928eb55 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,5 +1,4 @@
 2.1.6
- * Make nodetool exit with non-0 status on failure (CASSANDRA-9569)
  * (cqlsh) Fix using COPY through SOURCE or -f (CASSANDRA-9083)
  * Fix occasional lack of `system` keyspace in schema tables (CASSANDRA-8487)
  * Use ProtocolError code instead of ServerError code for native protocol



[1/3] cassandra git commit: fix CHANGES.txt

2015-06-11 Thread marcuse
Repository: cassandra
Updated Branches:
  refs/heads/trunk 713b7db22 -> a91d1c965


fix CHANGES.txt


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/16665ee1
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/16665ee1
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/16665ee1

Branch: refs/heads/trunk
Commit: 16665ee1936ed19a054393a03df1154afa8f671e
Parents: e7d02e3
Author: Marcus Eriksson 
Authored: Thu Jun 11 10:56:01 2015 +0200
Committer: Marcus Eriksson 
Committed: Thu Jun 11 10:56:01 2015 +0200

--
 CHANGES.txt | 1 -
 1 file changed, 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/16665ee1/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 5c31509..928eb55 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,5 +1,4 @@
 2.1.6
- * Make nodetool exit with non-0 status on failure (CASSANDRA-9569)
  * (cqlsh) Fix using COPY through SOURCE or -f (CASSANDRA-9083)
  * Fix occasional lack of `system` keyspace in schema tables (CASSANDRA-8487)
  * Use ProtocolError code instead of ServerError code for native protocol



[jira] [Commented] (CASSANDRA-9573) OOM when loading compressed sstables (system.hints)

2015-06-11 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14581696#comment-14581696
 ] 

Benedict commented on CASSANDRA-9573:
-

Ah, I forgot we used the jna Native class now.

I've pushed a branch with a teensy change 
[here|https://github.com/belliottsmith/cassandra/tree/9573], that just runs the 
startup checks before mlockall. If that passes we've found our culprit. 

It isn't a final patch; I'll leave that to Sam, if it is indeed the problem, 
since CASSANDRA-8049 would be the real culprit (without CASSANDRA-9240 it would 
just happen a little less - a large enough bloom filter would cause the same 
issue)
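
For what it's worth, a self-contained sketch of the ordering idea only (method names are stand-ins, not the branch linked above): run the startup checks before the address space is locked, so an oversized off-heap allocation fails a check rather than tripping the kernel OOM killer after mlockall.

{code}
// Hedged sketch: the only point illustrated is the ordering of the two steps.
final class StartupOrderSketch
{
    static void runStartupChecks()
    {
        // stand-in for the existing pre-flight checks (sstable sanity, memory expectations, ...)
        System.out.println("running startup checks");
    }

    static void lockMemory()
    {
        // stand-in for the JNA mlockall call that pins the process address space
        System.out.println("mlockall");
    }

    public static void main(String[] args)
    {
        runStartupChecks();   // moved ahead of the lock in the proposed ordering
        lockMemory();
    }
}
{code}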

> OOM when loading compressed sstables (system.hints)
> ---
>
> Key: CASSANDRA-9573
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9573
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Alan Boudreault
>Assignee: Benedict
>Priority: Critical
> Fix For: 2.2.0 rc2
>
> Attachments: hs_err_pid11243.log, 
> java-hints-issue-2015-06-09.snapshot, system.log, yourkit.ss.tar.gz
>
>
> [~andrew.tolbert] discovered an issue while running endurance tests on 2.2. A 
> node was not able to start and was killed by the OOM killer.
> Briefly, Cassandra uses an excessive amount of memory when loading compressed 
> sstables (off-heap?). We initially saw the issue with system.hints, 
> before knowing it was related to compression. system.hints uses LZ4 
> compression by default. If we have an sstable of, say, 8-10G, Cassandra will be 
> killed by the OOM killer after 1-2 minutes. I can reproduce that bug 
> every time locally. 
> * the issue also happens if we have 10G of data split into 13MB sstables.
> * I can reproduce the issue if I put a lot of data in the system.hints table.
> * I cannot reproduce the issue with a standard table using the same 
> compression (LZ4). Something seems to be different when it's hints?
> You won't see anything in the node's system.log, but you'll see this in 
> /var/log/syslog.log:
> {code}
> Out of memory: Kill process 30777 (java) score 600 or sacrifice child
> {code}
> The issue has been introduced in this commit but is not related to the 
> performance issue in CASSANDRA-9240: 
> https://github.com/apache/cassandra/commit/aedce5fc6ba46ca734e91190cfaaeb23ba47a846
> Here are the core dump and some YourKit snapshots in the attachments. I am not 
> sure you will be able to get useful information from them.
> core dump: http://dl.alanb.ca/core.tar.gz
> Not sure if this is related, but all dumps and snapshots point to 
> EstimatedHistogramReservoir ... and we can see many 
> javax.management.InstanceAlreadyExistsException: 
> org.apache.cassandra.metrics:... exceptions in system.log before it hangs 
> and then crashes.
> To reproduce the issue: 
> 1. create a cluster of 3 nodes
> 2. start the whole cluster
> 3. shut down node2 and node3
> 4. write 10-15G of data on node1 with replication factor 3. You should see a 
> lot of hints.
> 5. stop node1
> 6. start node2 and node3
> 7. start node1; it should OOM.
> //cc [~tjake] [~benedict] [~andrew.tolbert]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[2/2] cassandra git commit: Merge branch 'cassandra-2.1' into cassandra-2.2

2015-06-11 Thread marcuse
Merge branch 'cassandra-2.1' into cassandra-2.2

Conflicts:
CHANGES.txt


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/38421872
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/38421872
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/38421872

Branch: refs/heads/cassandra-2.2
Commit: 38421872c428a3d650840dd70e3e1d0602c2f4f7
Parents: c08aaab 16665ee
Author: Marcus Eriksson 
Authored: Thu Jun 11 10:57:33 2015 +0200
Committer: Marcus Eriksson 
Committed: Thu Jun 11 10:57:33 2015 +0200

--
 CHANGES.txt | 1 -
 1 file changed, 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/38421872/CHANGES.txt
--
diff --cc CHANGES.txt
index 0a03e60,928eb55..e0447e9
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,32 -1,4 +1,31 @@@
 -2.1.6
 +2.2
 + * Make sure we cancel non-compacting sstables from LifecycleTransaction 
(CASSANDRA-9566)
 +
 +
 +2.2.0-rc1
 + * Compressed commit log should measure compressed space used (CASSANDRA-9095)
 + * Fix comparison bug in CassandraRoleManager#collectRoles (CASSANDRA-9551)
 + * Add tinyint,smallint,time,date support for UDFs (CASSANDRA-9400)
 + * Deprecates SSTableSimpleWriter and SSTableSimpleUnsortedWriter 
(CASSANDRA-9546)
 + * Empty INITCOND treated as null in aggregate (CASSANDRA-9457)
 + * Remove use of Cell in Thrift MapReduce classes (CASSANDRA-8609)
 + * Integrate pre-release Java Driver 2.2-rc1, custom build (CASSANDRA-9493)
 + * Clean up gossiper logic for old versions (CASSANDRA-9370)
 + * Fix custom payload coding/decoding to match the spec (CASSANDRA-9515)
 + * ant test-all results incomplete when parsed (CASSANDRA-9463)
 + * Disallow frozen<> types in function arguments and return types for
 +   clarity (CASSANDRA-9411)
 + * Static Analysis to warn on unsafe use of Autocloseable instances 
(CASSANDRA-9431)
 + * Update commitlog archiving examples now that commitlog segments are
 +   not recycled (CASSANDRA-9350)
 + * Extend Transactional API to sstable lifecycle management (CASSANDRA-8568)
 + * (cqlsh) Add support for native protocol 4 (CASSANDRA-9399)
 + * Ensure that UDF and UDAs are keyspace-isolated (CASSANDRA-9409)
 + * Revert CASSANDRA-7807 (tracing completion client notifications) 
(CASSANDRA-9429)
 + * Add ability to stop compaction by ID (CASSANDRA-7207)
 + * Let CassandraVersion handle SNAPSHOT version (CASSANDRA-9438)
 + * Add functions to convert timeuuid to date or time, deprecate dateOf and 
unixTimestampOf (CASSANDRA-9229)
 +Merged from 2.1:
-  * Make nodetool exit with non-0 status on failure (CASSANDRA-9569)
   * (cqlsh) Fix using COPY through SOURCE or -f (CASSANDRA-9083)
   * Fix occasional lack of `system` keyspace in schema tables (CASSANDRA-8487)
   * Use ProtocolError code instead of ServerError code for native protocol



[2/3] cassandra git commit: Merge branch 'cassandra-2.1' into cassandra-2.2

2015-06-11 Thread marcuse
Merge branch 'cassandra-2.1' into cassandra-2.2

Conflicts:
CHANGES.txt


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/38421872
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/38421872
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/38421872

Branch: refs/heads/trunk
Commit: 38421872c428a3d650840dd70e3e1d0602c2f4f7
Parents: c08aaab 16665ee
Author: Marcus Eriksson 
Authored: Thu Jun 11 10:57:33 2015 +0200
Committer: Marcus Eriksson 
Committed: Thu Jun 11 10:57:33 2015 +0200

--
 CHANGES.txt | 1 -
 1 file changed, 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/38421872/CHANGES.txt
--
diff --cc CHANGES.txt
index 0a03e60,928eb55..e0447e9
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,32 -1,4 +1,31 @@@
 -2.1.6
 +2.2
 + * Make sure we cancel non-compacting sstables from LifecycleTransaction 
(CASSANDRA-9566)
 +
 +
 +2.2.0-rc1
 + * Compressed commit log should measure compressed space used (CASSANDRA-9095)
 + * Fix comparison bug in CassandraRoleManager#collectRoles (CASSANDRA-9551)
 + * Add tinyint,smallint,time,date support for UDFs (CASSANDRA-9400)
 + * Deprecates SSTableSimpleWriter and SSTableSimpleUnsortedWriter 
(CASSANDRA-9546)
 + * Empty INITCOND treated as null in aggregate (CASSANDRA-9457)
 + * Remove use of Cell in Thrift MapReduce classes (CASSANDRA-8609)
 + * Integrate pre-release Java Driver 2.2-rc1, custom build (CASSANDRA-9493)
 + * Clean up gossiper logic for old versions (CASSANDRA-9370)
 + * Fix custom payload coding/decoding to match the spec (CASSANDRA-9515)
 + * ant test-all results incomplete when parsed (CASSANDRA-9463)
 + * Disallow frozen<> types in function arguments and return types for
 +   clarity (CASSANDRA-9411)
 + * Static Analysis to warn on unsafe use of Autocloseable instances 
(CASSANDRA-9431)
 + * Update commitlog archiving examples now that commitlog segments are
 +   not recycled (CASSANDRA-9350)
 + * Extend Transactional API to sstable lifecycle management (CASSANDRA-8568)
 + * (cqlsh) Add support for native protocol 4 (CASSANDRA-9399)
 + * Ensure that UDF and UDAs are keyspace-isolated (CASSANDRA-9409)
 + * Revert CASSANDRA-7807 (tracing completion client notifications) 
(CASSANDRA-9429)
 + * Add ability to stop compaction by ID (CASSANDRA-7207)
 + * Let CassandraVersion handle SNAPSHOT version (CASSANDRA-9438)
 + * Add functions to convert timeuuid to date or time, deprecate dateOf and 
unixTimestampOf (CASSANDRA-9229)
 +Merged from 2.1:
-  * Make nodetool exit with non-0 status on failure (CASSANDRA-9569)
   * (cqlsh) Fix using COPY through SOURCE or -f (CASSANDRA-9083)
   * Fix occasional lack of `system` keyspace in schema tables (CASSANDRA-8487)
   * Use ProtocolError code instead of ServerError code for native protocol



cassandra git commit: fix CHANGES.txt

2015-06-11 Thread marcuse
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 e7d02e39c -> 16665ee19


fix CHANGES.txt


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/16665ee1
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/16665ee1
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/16665ee1

Branch: refs/heads/cassandra-2.1
Commit: 16665ee1936ed19a054393a03df1154afa8f671e
Parents: e7d02e3
Author: Marcus Eriksson 
Authored: Thu Jun 11 10:56:01 2015 +0200
Committer: Marcus Eriksson 
Committed: Thu Jun 11 10:56:01 2015 +0200

--
 CHANGES.txt | 1 -
 1 file changed, 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/16665ee1/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 5c31509..928eb55 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,5 +1,4 @@
 2.1.6
- * Make nodetool exit with non-0 status on failure (CASSANDRA-9569)
  * (cqlsh) Fix using COPY through SOURCE or -f (CASSANDRA-9083)
  * Fix occasional lack of `system` keyspace in schema tables (CASSANDRA-8487)
  * Use ProtocolError code instead of ServerError code for native protocol



[jira] [Resolved] (CASSANDRA-9213) Compaction errors observed during heavy write load: BAD RELEASE

2015-06-11 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson resolved CASSANDRA-9213.

Resolution: Cannot Reproduce

I'll close this as cannot reproduce, and if anyone hits this again, please 
reopen

> Compaction errors observed during heavy write load: BAD RELEASE
> ---
>
> Key: CASSANDRA-9213
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9213
> Project: Cassandra
>  Issue Type: Bug
> Environment: Cassandra 2.1.4.374
> Ubuntu 14.04.2
> java version "1.7.0_45"
> 10-node cluster, RF = 3
>Reporter: Rocco Varela
>Assignee: Marcus Eriksson
> Fix For: 2.1.x
>
> Attachments: COMPACTION-ERR.log
>
>
> During heavy write load testing we're seeing occasional compaction errors 
> with  the following error message:
> {code}
> ERROR [CompactionExecutor:40] 2015-04-16 17:01:16,936  Ref.java:170 - BAD 
> RELEASE: attempted to release a reference 
> (org.apache.cassandra.utils.concurrent.Ref$State@31d969bd) that has 
> already been released
> ...
> ERROR [CompactionExecutor:40] 2015-04-16 17:01:22,190  
> CassandraDaemon.java:223 - Exception in thread 
> Thread[CompactionExecutor:40,1,main]
> java.lang.AssertionError: null
>  at 
> org.apache.cassandra.io.sstable.SSTableReader.markObsolete(SSTableReader.java:1699)
>  ~[cassandra-all-2.1.4.374.jar:2.1.4.374]
>  at 
> org.apache.cassandra.db.DataTracker.unmarkCompacting(DataTracker.java:240) 
> ~[cassandra-all-2.1.4.374.jar:2.1.4.374]
>  at 
> org.apache.cassandra.io.sstable.SSTableRewriter.replaceWithFinishedReaders(SSTableRewriter.java:495)
>  ~[cassandra-all-2.1.4.374.jar:2.1.4.374]
>  at
> ...
> {code}
> I have turned on debugrefcount in bin/cassandra:launch_service() and I will 
> repost another stack trace when it happens again.
> {code}
> cassandra_parms="$cassandra_parms -Dcassandra.debugrefcount=true"
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7556) Update cqlsh for UDFs

2015-06-11 Thread Sam Tunnicliffe (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14581686#comment-14581686
 ] 

Sam Tunnicliffe commented on CASSANDRA-7556:


[~snazy] It was intentional to not support GRANT/REVOKE on functions without 
arguments. When the permissions are checked in {{SelectStatement#checkAccess}}, 
the FunctionResource is derived from the actual Function object, so it always 
has argtypes. If you grant permissions on the function without arguments, then 
the resource in the permissions table won't match and the request will be 
rejected. 

Also, what does it mean to apply permissions without argtypes where the 
function is overloaded? Should such a GRANT mean the role has permissions 
on *all* overloaded versions? What if they then REVOKE permissions on a specific 
overload? IMO keeping things explicit is simpler, and simpler is better here.
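
To make the matching point concrete, a self-contained illustration (a simplified stand-in for the real resource class, not Cassandra's auth code): a resource recorded without argument types can never equal the resource derived from a concrete overload, so the permission check fails.

{code}
// Hedged sketch: equality-based matching between a granted resource and the
// resource derived from the actual function being executed.
import java.util.Arrays;
import java.util.List;
import java.util.Objects;

final class FunctionResourceSketch
{
    final String keyspace;
    final String name;
    final List<String> argTypes;   // null stands for "granted without argtypes"

    FunctionResourceSketch(String keyspace, String name, List<String> argTypes)
    {
        this.keyspace = keyspace;
        this.name = name;
        this.argTypes = argTypes;
    }

    @Override
    public boolean equals(Object o)
    {
        if (!(o instanceof FunctionResourceSketch))
            return false;
        FunctionResourceSketch that = (FunctionResourceSketch) o;
        return keyspace.equals(that.keyspace)
            && name.equals(that.name)
            && Objects.equals(argTypes, that.argTypes);
    }

    @Override
    public int hashCode()
    {
        return Objects.hash(keyspace, name, argTypes);
    }

    public static void main(String[] args)
    {
        FunctionResourceSketch granted = new FunctionResourceSketch("ks", "fn", null);
        FunctionResourceSketch derived = new FunctionResourceSketch("ks", "fn", Arrays.asList("int", "text"));
        System.out.println(granted.equals(derived));   // false -> the request is rejected
    }
}
{code}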

> Update cqlsh for UDFs
> -
>
> Key: CASSANDRA-7556
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7556
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Tyler Hobbs
>Assignee: Robert Stupp
>  Labels: udf
> Fix For: 2.2.x
>
> Attachments: 7556-2.txt, 7556.txt
>
>
> Once CASSANDRA-7395 and CASSANDRA-7526 are complete, we'll want to add cqlsh 
> support for user defined functions.
> This will include:
> * Completion for {{CREATE FUNCTION}} and {{DROP FUNCTION}}
> * Tolerating (almost) arbitrary text inside function bodies
> * {{DESCRIBE TYPE}} support
> * Including types in {{DESCRIBE KEYSPACE}} output
> * Possibly {{GRANT}} completion for any new privileges



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[2/3] cassandra git commit: Add functions to convert timeuuid to date or time, deprecate dateOf and unixTimestampOf

2015-06-11 Thread snazy
Add functions to convert timeuuid to date or time, deprecate dateOf and 
unixTimestampOf

patch by Benjamin Lerer; reviewed by Robert Stupp for CASSANDRA-9229


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c08aaabd
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c08aaabd
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c08aaabd

Branch: refs/heads/trunk
Commit: c08aaabd95d4872593c29807de6ec1485cefa7fa
Parents: 6dfde0e
Author: Benjamin Lerer 
Authored: Thu Jun 11 10:18:05 2015 +0200
Committer: Robert Stupp 
Committed: Thu Jun 11 10:18:05 2015 +0200

--
 CHANGES.txt |   1 +
 NEWS.txt|   7 +
 doc/cql3/CQL.textile|  17 +-
 .../cassandra/cql3/functions/Functions.java |  17 +-
 .../cassandra/cql3/functions/TimeFcts.java  | 229 +++
 .../cassandra/cql3/functions/TimeuuidFcts.java  |  88 ---
 .../cassandra/db/marshal/SimpleDateType.java|  10 +
 .../cassandra/db/marshal/TimestampType.java |   5 +
 .../repair/SystemDistributedKeyspace.java   |  12 +-
 .../serializers/SimpleDateSerializer.java   |  18 +-
 .../apache/cassandra/cql3/AggregationTest.java  |  12 +-
 .../org/apache/cassandra/cql3/TypeTest.java |   9 +-
 test/unit/org/apache/cassandra/cql3/UFTest.java |  10 +-
 .../cassandra/cql3/functions/TimeFctsTest.java  | 206 +
 14 files changed, 523 insertions(+), 118 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/c08aaabd/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 355eefb..0a03e60 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -24,6 +24,7 @@
  * Revert CASSANDRA-7807 (tracing completion client notifications) 
(CASSANDRA-9429)
  * Add ability to stop compaction by ID (CASSANDRA-7207)
  * Let CassandraVersion handle SNAPSHOT version (CASSANDRA-9438)
+ * Add functions to convert timeuuid to date or time, deprecate dateOf and 
unixTimestampOf (CASSANDRA-9229)
 Merged from 2.1:
  * Make nodetool exit with non-0 status on failure (CASSANDRA-9569)
  * (cqlsh) Fix using COPY through SOURCE or -f (CASSANDRA-9083)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c08aaabd/NEWS.txt
--
diff --git a/NEWS.txt b/NEWS.txt
index 9beb911..3c71310 100644
--- a/NEWS.txt
+++ b/NEWS.txt
@@ -88,6 +88,13 @@ New features
- New `ShortType` (cql smallint). 2-byte signed integer
- New `SimpleDateType` (cql date). 4-byte unsigned integer
- New `TimeType` (cql time). 8-byte long
+   - The toDate(timeuuid), toTimestamp(timeuuid) and toUnixTimestamp(timeuuid) 
functions have been added to allow
+ to convert from timeuuid into date type, timestamp type and bigint raw 
value.
+ The functions unixTimestampOf(timeuuid) and dateOf(timeuuid) have been 
deprecated.
+   - The toDate(timestamp) and toUnixTimestamp(timestamp) functions have been 
added to allow
+ to convert from timestamp into date type and bigint raw value.
+   - The toTimestamp(date) and toUnixTimestamp(date) functions have been added 
to allow
+ to convert from date into timestamp type and bigint raw value.
 
 
 Upgrading

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c08aaabd/doc/cql3/CQL.textile
--
diff --git a/doc/cql3/CQL.textile b/doc/cql3/CQL.textile
index 9cf7b23..3755a2d 100644
--- a/doc/cql3/CQL.textile
+++ b/doc/cql3/CQL.textile
@@ -1819,9 +1819,20 @@ will select all rows where the @timeuuid@ column @t@ is 
strictly older than '201
 
 _Warning_: We called the values generated by @minTimeuuid@ and @maxTimeuuid@ 
_fake_ UUID because they do no respect the Time-Based UUID generation process 
specified by the "RFC 4122":http://www.ietf.org/rfc/rfc4122.txt. In particular, 
the value returned by these 2 methods will not be unique. This means you should 
only use those methods for querying (as in the example above). Inserting the 
result of those methods is almost certainly _a bad idea_.
 
-h4. @dateOf@ and @unixTimestampOf@
-
-The @dateOf@ and @unixTimestampOf@ functions take a @timeuuid@ argument and 
extract the embedded timestamp. However, while the @dateof@ function return it 
with the @timestamp@ type (that most client, including cqlsh, interpret as a 
date), the @unixTimestampOf@ function returns it as a @bigint@ raw value.
+h3(#timeFun). Time conversion functions
+
+A number of functions are provided to "convert" a @timeuuid@, a @timestamp@ or 
a @date@ into another @native@ type.
+
+|_. function name|_. input type   |_. description|
+|@toDate@|@timeuuid@  |Converts the @timeu
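(The conversion table above is truncated in the archive.) To make the new 
conversions concrete, here is a minimal, self-contained Java sketch of what 
toUnixTimestamp(timeuuid) and toTimestamp(timeuuid) conceptually compute: a 
version-1 UUID stores its creation time as 100-nanosecond intervals since the 
RFC 4122 epoch (1582-10-15), so converting to a Unix timestamp is one 
subtraction and one division. The class and method names below are illustrative 
only; this is not the TimeFcts implementation from the patch.

{noformat}
import java.util.Date;
import java.util.UUID;

public class TimeuuidConversionSketch
{
    // 100-ns intervals between the UUID epoch (1582-10-15) and the Unix epoch (1970-01-01)
    private static final long UUID_EPOCH_OFFSET_100NS = 0x01b21dd213814000L;

    // Roughly what toUnixTimestamp(timeuuid) yields: the embedded time as Unix milliseconds
    public static long toUnixTimestamp(UUID timeuuid)
    {
        if (timeuuid.version() != 1)
            throw new IllegalArgumentException("not a timeuuid (version-1 UUID)");
        return (timeuuid.timestamp() - UUID_EPOCH_OFFSET_100NS) / 10_000;
    }

    // Roughly what toTimestamp(timeuuid) yields: the same instant as a timestamp
    public static Date toTimestamp(UUID timeuuid)
    {
        return new Date(toUnixTimestamp(timeuuid));
    }

    public static void main(String[] args)
    {
        UUID id = UUID.fromString("50554d6e-29bb-11e5-b345-feff819cdc9f"); // example timeuuid
        System.out.println(toUnixTimestamp(id) + " -> " + toTimestamp(id));
    }
}
{noformat}

In CQL terms, a query that previously used unixTimestampOf(id) or dateOf(id) on 
a timeuuid column would now use toUnixTimestamp(id) or toTimestamp(id); the old 
functions keep working but are deprecated by this commit.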

[3/3] cassandra git commit: Merge branch 'cassandra-2.2' into trunk

2015-06-11 Thread snazy
Merge branch 'cassandra-2.2' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/713b7db2
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/713b7db2
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/713b7db2

Branch: refs/heads/trunk
Commit: 713b7db2240863c4e6277d98356f827b1d1f668e
Parents: 9307546 c08aaab
Author: Robert Stupp 
Authored: Thu Jun 11 10:18:39 2015 +0200
Committer: Robert Stupp 
Committed: Thu Jun 11 10:18:39 2015 +0200

--
 CHANGES.txt |   1 +
 NEWS.txt|   7 +
 doc/cql3/CQL.textile|  17 +-
 .../cassandra/cql3/functions/Functions.java |  17 +-
 .../cassandra/cql3/functions/TimeFcts.java  | 229 +++
 .../cassandra/cql3/functions/TimeuuidFcts.java  |  88 ---
 .../cassandra/db/marshal/SimpleDateType.java|  10 +
 .../cassandra/db/marshal/TimestampType.java |   5 +
 .../repair/SystemDistributedKeyspace.java   |  12 +-
 .../serializers/SimpleDateSerializer.java   |  18 +-
 .../apache/cassandra/cql3/AggregationTest.java  |  12 +-
 .../org/apache/cassandra/cql3/TypeTest.java |   9 +-
 test/unit/org/apache/cassandra/cql3/UFTest.java |  10 +-
 .../cassandra/cql3/functions/TimeFctsTest.java  | 206 +
 14 files changed, 523 insertions(+), 118 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/713b7db2/CHANGES.txt
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/713b7db2/NEWS.txt
--



[1/3] cassandra git commit: Add functions to convert timeuuid to date or time, deprecate dateOf and unixTimestampOf

2015-06-11 Thread snazy
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.2 6dfde0e32 -> c08aaabd9
  refs/heads/trunk 9307546b6 -> 713b7db22


Add functions to convert timeuuid to date or time, deprecate dateOf and 
unixTimestampOf

patch by Benjamin Lerer; reviewed by Robert Stupp for CASSANDRA-9229


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c08aaabd
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c08aaabd
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c08aaabd

Branch: refs/heads/cassandra-2.2
Commit: c08aaabd95d4872593c29807de6ec1485cefa7fa
Parents: 6dfde0e
Author: Benjamin Lerer 
Authored: Thu Jun 11 10:18:05 2015 +0200
Committer: Robert Stupp 
Committed: Thu Jun 11 10:18:05 2015 +0200

--
 CHANGES.txt |   1 +
 NEWS.txt|   7 +
 doc/cql3/CQL.textile|  17 +-
 .../cassandra/cql3/functions/Functions.java |  17 +-
 .../cassandra/cql3/functions/TimeFcts.java  | 229 +++
 .../cassandra/cql3/functions/TimeuuidFcts.java  |  88 ---
 .../cassandra/db/marshal/SimpleDateType.java|  10 +
 .../cassandra/db/marshal/TimestampType.java |   5 +
 .../repair/SystemDistributedKeyspace.java   |  12 +-
 .../serializers/SimpleDateSerializer.java   |  18 +-
 .../apache/cassandra/cql3/AggregationTest.java  |  12 +-
 .../org/apache/cassandra/cql3/TypeTest.java |   9 +-
 test/unit/org/apache/cassandra/cql3/UFTest.java |  10 +-
 .../cassandra/cql3/functions/TimeFctsTest.java  | 206 +
 14 files changed, 523 insertions(+), 118 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/c08aaabd/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 355eefb..0a03e60 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -24,6 +24,7 @@
  * Revert CASSANDRA-7807 (tracing completion client notifications) 
(CASSANDRA-9429)
  * Add ability to stop compaction by ID (CASSANDRA-7207)
  * Let CassandraVersion handle SNAPSHOT version (CASSANDRA-9438)
+ * Add functions to convert timeuuid to date or time, deprecate dateOf and 
unixTimestampOf (CASSANDRA-9229)
 Merged from 2.1:
  * Make nodetool exit with non-0 status on failure (CASSANDRA-9569)
  * (cqlsh) Fix using COPY through SOURCE or -f (CASSANDRA-9083)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c08aaabd/NEWS.txt
--
diff --git a/NEWS.txt b/NEWS.txt
index 9beb911..3c71310 100644
--- a/NEWS.txt
+++ b/NEWS.txt
@@ -88,6 +88,13 @@ New features
- New `ShortType` (cql smallint). 2-byte signed integer
- New `SimpleDateType` (cql date). 4-byte unsigned integer
- New `TimeType` (cql time). 8-byte long
+   - The toDate(timeuuid), toTimestamp(timeuuid) and toUnixTimestamp(timeuuid) 
functions have been added to allow
+ to convert from timeuuid into date type, timestamp type and bigint raw 
value.
+ The functions unixTimestampOf(timeuuid) and dateOf(timeuuid) have been 
deprecated.
+   - The toDate(timestamp) and toUnixTimestamp(timestamp) functions have been 
added to allow
+ to convert from timestamp into date type and bigint raw value.
+   - The toTimestamp(date) and toUnixTimestamp(date) functions have been added 
to allow
+ to convert from date into timestamp type and bigint raw value.
 
 
 Upgrading

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c08aaabd/doc/cql3/CQL.textile
--
diff --git a/doc/cql3/CQL.textile b/doc/cql3/CQL.textile
index 9cf7b23..3755a2d 100644
--- a/doc/cql3/CQL.textile
+++ b/doc/cql3/CQL.textile
@@ -1819,9 +1819,20 @@ will select all rows where the @timeuuid@ column @t@ is 
strictly older than '201
 
 _Warning_: We called the values generated by @minTimeuuid@ and @maxTimeuuid@ 
_fake_ UUID because they do no respect the Time-Based UUID generation process 
specified by the "RFC 4122":http://www.ietf.org/rfc/rfc4122.txt. In particular, 
the value returned by these 2 methods will not be unique. This means you should 
only use those methods for querying (as in the example above). Inserting the 
result of those methods is almost certainly _a bad idea_.
 
-h4. @dateOf@ and @unixTimestampOf@
-
-The @dateOf@ and @unixTimestampOf@ functions take a @timeuuid@ argument and 
extract the embedded timestamp. However, while the @dateof@ function return it 
with the @timestamp@ type (that most client, including cqlsh, interpret as a 
date), the @unixTimestampOf@ function returns it as a @bigint@ raw value.
+h3(#timeFun). Time conversion functions
+
+A number of functions are provided to "convert" a @timeuuid@, a @timestamp@ or 
a @date@ int

[jira] [Commented] (CASSANDRA-9142) DC Local repair or -hosts should only be allowed with -full repair

2015-06-11 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14581598#comment-14581598
 ] 

Marcus Eriksson commented on CASSANDRA-9142:


bq. mark the sstables streamed as non repaired?
yes, nice catch

Force-pushed a new version to 
https://github.com/krummas/cassandra/tree/marcuse/9142-2.2 - this means the 
PrepareMessage now has to state whether the repair is global or not, and we 
skip anticompaction for non-global repairs.

> DC Local repair or -hosts should only be allowed with -full repair
> --
>
> Key: CASSANDRA-9142
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9142
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: sankalp kohli
>Assignee: Marcus Eriksson
>Priority: Minor
> Fix For: 2.2.x
>
> Attachments: trunk_9142.txt
>
>
> We should not let users mix incremental repair with dc-local repair, -hosts, 
> or any other repair which does not include all replicas. 
> This currently causes sstables on some replicas to be marked as repaired, so 
> the next incremental repair will not work on the same set of data.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
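A hypothetical, heavily simplified sketch of the rule discussed in the comment 
above: the prepare step carries a flag saying whether the repair covers all 
replicas, and anticompaction (which marks sstables repaired) only runs when it 
does. All names here are invented for illustration; the real PrepareMessage and 
repair code on the linked branch differ.

{noformat}
// Illustration only: the flag and the decision, not Cassandra's actual repair classes.
public class RepairPrepareSketch
{
    static final class PrepareRequest
    {
        final boolean isGlobal; // true only when every replica of the range participates

        PrepareRequest(boolean isGlobal)
        {
            this.isGlobal = isGlobal;
        }
    }

    // Anticompaction marks sstables as repaired, so it must be skipped for
    // -local / -hosts style repairs that leave some replicas out.
    static boolean shouldAnticompact(PrepareRequest prepare)
    {
        return prepare.isGlobal;
    }

    public static void main(String[] args)
    {
        System.out.println(shouldAnticompact(new PrepareRequest(true)));  // true: global repair
        System.out.println(shouldAnticompact(new PrepareRequest(false))); // false: dc-local / -hosts
    }
}
{noformat}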


[1/2] cassandra git commit: Make nodetool exit with non-0 status if there is a failure

2015-06-11 Thread marcuse
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.2 081a37224 -> 6dfde0e32


Make nodetool exit with non-0 status if there is a failure

Patch by marcuse; reviewed by Aleksey Yeschenko for CASSANDRA-9569


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e7d02e39
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e7d02e39
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e7d02e39

Branch: refs/heads/cassandra-2.2
Commit: e7d02e39cb13f272ddc3d09b9a570c4d6948c37e
Parents: 212a2c1
Author: Marcus Eriksson 
Authored: Tue Jun 9 10:18:25 2015 +0200
Committer: Marcus Eriksson 
Committed: Thu Jun 11 09:24:08 2015 +0200

--
 CHANGES.txt   | 1 +
 src/java/org/apache/cassandra/tools/NodeTool.java | 4 +++-
 2 files changed, 4 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/e7d02e39/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 928eb55..5c31509 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.6
+ * Make nodetool exit with non-0 status on failure (CASSANDRA-9569)
  * (cqlsh) Fix using COPY through SOURCE or -f (CASSANDRA-9083)
  * Fix occasional lack of `system` keyspace in schema tables (CASSANDRA-8487)
  * Use ProtocolError code instead of ServerError code for native protocol

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e7d02e39/src/java/org/apache/cassandra/tools/NodeTool.java
--
diff --git a/src/java/org/apache/cassandra/tools/NodeTool.java 
b/src/java/org/apache/cassandra/tools/NodeTool.java
index 86b5f52..a2d4ead 100644
--- a/src/java/org/apache/cassandra/tools/NodeTool.java
+++ b/src/java/org/apache/cassandra/tools/NodeTool.java
@@ -286,7 +286,9 @@ public class NodeTool
 try (NodeProbe probe = connect())
 {
 execute(probe);
-} 
+if (probe.isFailed())
+throw new RuntimeException("nodetool failed, check server 
logs");
+}
 catch (IOException e)
 {
 throw new RuntimeException("Error while closing JMX 
connection", e);
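For readers following the patch: the hunk above makes the command path throw 
when NodeProbe reports a failure, and the non-zero exit status then presumably 
comes from the usual CLI pattern of catching at the entry point and calling 
System.exit. The sketch below shows only that generic pattern, under invented 
names; it is not NodeTool's actual main().

{noformat}
// Generic sketch of "throw on failure, map to exit status at the entry point".
public class CliExitSketch
{
    static void runCommand(boolean fail)
    {
        if (fail)
            throw new RuntimeException("command failed, check server logs");
    }

    public static void main(String[] args)
    {
        try
        {
            runCommand(args.length > 0 && args[0].equals("--fail"));
            System.exit(0);
        }
        catch (RuntimeException e)
        {
            System.err.println(e.getMessage());
            System.exit(1); // non-zero status on failure, which is what CASSANDRA-9569 is after
        }
    }
}
{noformat}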



[3/3] cassandra git commit: Merge branch 'cassandra-2.2' into trunk

2015-06-11 Thread marcuse
Merge branch 'cassandra-2.2' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/9307546b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/9307546b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/9307546b

Branch: refs/heads/trunk
Commit: 9307546b6278e50f262257e3513fb26c124dec4c
Parents: 622e001 6dfde0e
Author: Marcus Eriksson 
Authored: Thu Jun 11 09:25:42 2015 +0200
Committer: Marcus Eriksson 
Committed: Thu Jun 11 09:25:42 2015 +0200

--
 CHANGES.txt   | 1 +
 src/java/org/apache/cassandra/tools/NodeTool.java | 2 ++
 2 files changed, 3 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/9307546b/CHANGES.txt
--



[1/3] cassandra git commit: Make nodetool exit with non-0 status if there is a failure

2015-06-11 Thread marcuse
Repository: cassandra
Updated Branches:
  refs/heads/trunk 622e001c9 -> 9307546b6


Make nodetool exit with non-0 status if there is a failure

Patch by marcuse; reviewed by Aleksey Yeschenko for CASSANDRA-9569


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e7d02e39
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e7d02e39
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e7d02e39

Branch: refs/heads/trunk
Commit: e7d02e39cb13f272ddc3d09b9a570c4d6948c37e
Parents: 212a2c1
Author: Marcus Eriksson 
Authored: Tue Jun 9 10:18:25 2015 +0200
Committer: Marcus Eriksson 
Committed: Thu Jun 11 09:24:08 2015 +0200

--
 CHANGES.txt   | 1 +
 src/java/org/apache/cassandra/tools/NodeTool.java | 4 +++-
 2 files changed, 4 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/e7d02e39/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 928eb55..5c31509 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.6
+ * Make nodetool exit with non-0 status on failure (CASSANDRA-9569)
  * (cqlsh) Fix using COPY through SOURCE or -f (CASSANDRA-9083)
  * Fix occasional lack of `system` keyspace in schema tables (CASSANDRA-8487)
  * Use ProtocolError code instead of ServerError code for native protocol

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e7d02e39/src/java/org/apache/cassandra/tools/NodeTool.java
--
diff --git a/src/java/org/apache/cassandra/tools/NodeTool.java 
b/src/java/org/apache/cassandra/tools/NodeTool.java
index 86b5f52..a2d4ead 100644
--- a/src/java/org/apache/cassandra/tools/NodeTool.java
+++ b/src/java/org/apache/cassandra/tools/NodeTool.java
@@ -286,7 +286,9 @@ public class NodeTool
 try (NodeProbe probe = connect())
 {
 execute(probe);
-} 
+if (probe.isFailed())
+throw new RuntimeException("nodetool failed, check server 
logs");
+}
 catch (IOException e)
 {
 throw new RuntimeException("Error while closing JMX 
connection", e);



[2/3] cassandra git commit: Merge branch 'cassandra-2.1' into cassandra-2.2

2015-06-11 Thread marcuse
Merge branch 'cassandra-2.1' into cassandra-2.2

Conflicts:
CHANGES.txt
src/java/org/apache/cassandra/tools/NodeTool.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/6dfde0e3
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/6dfde0e3
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/6dfde0e3

Branch: refs/heads/trunk
Commit: 6dfde0e32e9b2c7a7b36e26997001a28316664a2
Parents: 081a372 e7d02e3
Author: Marcus Eriksson 
Authored: Thu Jun 11 09:25:32 2015 +0200
Committer: Marcus Eriksson 
Committed: Thu Jun 11 09:25:32 2015 +0200

--
 CHANGES.txt   | 1 +
 src/java/org/apache/cassandra/tools/NodeTool.java | 2 ++
 2 files changed, 3 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/6dfde0e3/CHANGES.txt
--
diff --cc CHANGES.txt
index 1b75756,5c31509..355eefb
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,30 -1,5 +1,31 @@@
 -2.1.6
 +2.2
 + * Make sure we cancel non-compacting sstables from LifecycleTransaction 
(CASSANDRA-9566)
 +
 +
 +2.2.0-rc1
 + * Compressed commit log should measure compressed space used (CASSANDRA-9095)
 + * Fix comparison bug in CassandraRoleManager#collectRoles (CASSANDRA-9551)
 + * Add tinyint,smallint,time,date support for UDFs (CASSANDRA-9400)
 + * Deprecates SSTableSimpleWriter and SSTableSimpleUnsortedWriter 
(CASSANDRA-9546)
 + * Empty INITCOND treated as null in aggregate (CASSANDRA-9457)
 + * Remove use of Cell in Thrift MapReduce classes (CASSANDRA-8609)
 + * Integrate pre-release Java Driver 2.2-rc1, custom build (CASSANDRA-9493)
 + * Clean up gossiper logic for old versions (CASSANDRA-9370)
 + * Fix custom payload coding/decoding to match the spec (CASSANDRA-9515)
 + * ant test-all results incomplete when parsed (CASSANDRA-9463)
 + * Disallow frozen<> types in function arguments and return types for
 +   clarity (CASSANDRA-9411)
 + * Static Analysis to warn on unsafe use of Autocloseable instances 
(CASSANDRA-9431)
 + * Update commitlog archiving examples now that commitlog segments are
 +   not recycled (CASSANDRA-9350)
 + * Extend Transactional API to sstable lifecycle management (CASSANDRA-8568)
 + * (cqlsh) Add support for native protocol 4 (CASSANDRA-9399)
 + * Ensure that UDF and UDAs are keyspace-isolated (CASSANDRA-9409)
 + * Revert CASSANDRA-7807 (tracing completion client notifications) 
(CASSANDRA-9429)
 + * Add ability to stop compaction by ID (CASSANDRA-7207)
 + * Let CassandraVersion handle SNAPSHOT version (CASSANDRA-9438)
 +Merged from 2.1:
+  * Make nodetool exit with non-0 status on failure (CASSANDRA-9569)
   * (cqlsh) Fix using COPY through SOURCE or -f (CASSANDRA-9083)
   * Fix occasional lack of `system` keyspace in schema tables (CASSANDRA-8487)
   * Use ProtocolError code instead of ServerError code for native protocol

http://git-wip-us.apache.org/repos/asf/cassandra/blob/6dfde0e3/src/java/org/apache/cassandra/tools/NodeTool.java
--



[2/2] cassandra git commit: Merge branch 'cassandra-2.1' into cassandra-2.2

2015-06-11 Thread marcuse
Merge branch 'cassandra-2.1' into cassandra-2.2

Conflicts:
CHANGES.txt
src/java/org/apache/cassandra/tools/NodeTool.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/6dfde0e3
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/6dfde0e3
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/6dfde0e3

Branch: refs/heads/cassandra-2.2
Commit: 6dfde0e32e9b2c7a7b36e26997001a28316664a2
Parents: 081a372 e7d02e3
Author: Marcus Eriksson 
Authored: Thu Jun 11 09:25:32 2015 +0200
Committer: Marcus Eriksson 
Committed: Thu Jun 11 09:25:32 2015 +0200

--
 CHANGES.txt   | 1 +
 src/java/org/apache/cassandra/tools/NodeTool.java | 2 ++
 2 files changed, 3 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/6dfde0e3/CHANGES.txt
--
diff --cc CHANGES.txt
index 1b75756,5c31509..355eefb
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,30 -1,5 +1,31 @@@
 -2.1.6
 +2.2
 + * Make sure we cancel non-compacting sstables from LifecycleTransaction 
(CASSANDRA-9566)
 +
 +
 +2.2.0-rc1
 + * Compressed commit log should measure compressed space used (CASSANDRA-9095)
 + * Fix comparison bug in CassandraRoleManager#collectRoles (CASSANDRA-9551)
 + * Add tinyint,smallint,time,date support for UDFs (CASSANDRA-9400)
 + * Deprecates SSTableSimpleWriter and SSTableSimpleUnsortedWriter 
(CASSANDRA-9546)
 + * Empty INITCOND treated as null in aggregate (CASSANDRA-9457)
 + * Remove use of Cell in Thrift MapReduce classes (CASSANDRA-8609)
 + * Integrate pre-release Java Driver 2.2-rc1, custom build (CASSANDRA-9493)
 + * Clean up gossiper logic for old versions (CASSANDRA-9370)
 + * Fix custom payload coding/decoding to match the spec (CASSANDRA-9515)
 + * ant test-all results incomplete when parsed (CASSANDRA-9463)
 + * Disallow frozen<> types in function arguments and return types for
 +   clarity (CASSANDRA-9411)
 + * Static Analysis to warn on unsafe use of Autocloseable instances 
(CASSANDRA-9431)
 + * Update commitlog archiving examples now that commitlog segments are
 +   not recycled (CASSANDRA-9350)
 + * Extend Transactional API to sstable lifecycle management (CASSANDRA-8568)
 + * (cqlsh) Add support for native protocol 4 (CASSANDRA-9399)
 + * Ensure that UDF and UDAs are keyspace-isolated (CASSANDRA-9409)
 + * Revert CASSANDRA-7807 (tracing completion client notifications) 
(CASSANDRA-9429)
 + * Add ability to stop compaction by ID (CASSANDRA-7207)
 + * Let CassandraVersion handle SNAPSHOT version (CASSANDRA-9438)
 +Merged from 2.1:
+  * Make nodetool exit with non-0 status on failure (CASSANDRA-9569)
   * (cqlsh) Fix using COPY through SOURCE or -f (CASSANDRA-9083)
   * Fix occasional lack of `system` keyspace in schema tables (CASSANDRA-8487)
   * Use ProtocolError code instead of ServerError code for native protocol

http://git-wip-us.apache.org/repos/asf/cassandra/blob/6dfde0e3/src/java/org/apache/cassandra/tools/NodeTool.java
--



cassandra git commit: Make nodetool exit with non-0 status if there is a failure

2015-06-11 Thread marcuse
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 212a2c194 -> e7d02e39c


Make nodetool exit with non-0 status if there is a failure

Patch by marcuse; reviewed by Aleksey Yeschenko for CASSANDRA-9569


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e7d02e39
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e7d02e39
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e7d02e39

Branch: refs/heads/cassandra-2.1
Commit: e7d02e39cb13f272ddc3d09b9a570c4d6948c37e
Parents: 212a2c1
Author: Marcus Eriksson 
Authored: Tue Jun 9 10:18:25 2015 +0200
Committer: Marcus Eriksson 
Committed: Thu Jun 11 09:24:08 2015 +0200

--
 CHANGES.txt   | 1 +
 src/java/org/apache/cassandra/tools/NodeTool.java | 4 +++-
 2 files changed, 4 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/e7d02e39/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 928eb55..5c31509 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.6
+ * Make nodetool exit with non-0 status on failure (CASSANDRA-9569)
  * (cqlsh) Fix using COPY through SOURCE or -f (CASSANDRA-9083)
  * Fix occasional lack of `system` keyspace in schema tables (CASSANDRA-8487)
  * Use ProtocolError code instead of ServerError code for native protocol

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e7d02e39/src/java/org/apache/cassandra/tools/NodeTool.java
--
diff --git a/src/java/org/apache/cassandra/tools/NodeTool.java 
b/src/java/org/apache/cassandra/tools/NodeTool.java
index 86b5f52..a2d4ead 100644
--- a/src/java/org/apache/cassandra/tools/NodeTool.java
+++ b/src/java/org/apache/cassandra/tools/NodeTool.java
@@ -286,7 +286,9 @@ public class NodeTool
 try (NodeProbe probe = connect())
 {
 execute(probe);
-} 
+if (probe.isFailed())
+throw new RuntimeException("nodetool failed, check server 
logs");
+}
 catch (IOException e)
 {
 throw new RuntimeException("Error while closing JMX 
connection", e);



[jira] [Created] (CASSANDRA-9579) Add JMX / nodetool command to refresh system.size_estimates

2015-06-11 Thread JIRA
Piotr Kołaczkowski created CASSANDRA-9579:
-

 Summary: Add JMX / nodetool command to refresh 
system.size_estimates
 Key: CASSANDRA-9579
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9579
 Project: Cassandra
  Issue Type: Improvement
  Components: API
Reporter: Piotr Kołaczkowski
Priority: Minor


CASSANDRA-7688 added dumping size estimates at a fixed interval. However, in 
some cases, e.g. after inserting huge amounts of data or truncating a table, the 
size estimates may stay severely incorrect until the next scheduled refresh. In 
such cases, being able to manually trigger recalculation of the estimates would 
be very useful. It would also be useful for any automated testing that requires 
fresh size estimates.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
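As a rough illustration of what this ticket asks for, the sketch below exposes 
a "refresh now" operation as a standard JMX MBean, which is the mechanism 
nodetool commands are driven by. The MBean name, interface and method are 
invented for the example and are not an existing Cassandra API; the two 
top-level types live in separate files, shown together here. A real 
implementation would invoke the same code path the periodic task from 
CASSANDRA-7688 uses to repopulate system.size_estimates.

{noformat}
// SizeEstimatesRefreshMBean.java (hypothetical)
public interface SizeEstimatesRefreshMBean
{
    void refreshSizeEstimates();
}

// SizeEstimatesRefresh.java (hypothetical)
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class SizeEstimatesRefresh implements SizeEstimatesRefreshMBean
{
    public void refreshSizeEstimates()
    {
        // Stand-in for triggering recalculation of system.size_estimates on demand.
        System.out.println("recomputing size estimates...");
    }

    public static void main(String[] args) throws Exception
    {
        // Register the operation so any JMX client (e.g. a nodetool subcommand) can call it.
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        server.registerMBean(new SizeEstimatesRefresh(),
                             new ObjectName("org.example:type=SizeEstimatesRefresh"));
        Thread.sleep(Long.MAX_VALUE); // keep the JVM alive so a client can connect
    }
}
{noformat}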