[jira] [Commented] (CASSANDRA-6102) CassandraStorage broken for bigints and ints

2013-10-11 Thread Mikhail Stepura (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13793265#comment-13793265
 ] 

Mikhail Stepura commented on CASSANDRA-6102:


It looks like https://github.com/apache/cassandra/commit/bc8e2475fa71f4bbbf95d4294d78b96a1aa1211c broke the trunk:
{noformat}
[javac] C:\Users\mishail\workspace\cassandra\src\java\org\apache\cassandra\hadoop\pig\AbstractCassandraStorage.java:135: error: variable validators is already defined in method columnToTuple(IColumn,AbstractCassandraStorage.CfInfo,AbstractType)
[javac] Map<ByteBuffer, AbstractType> validators = getValidatorMap(cfDef);
[javac]  ^
[javac] C:\Users\mishail\workspace\cassandra\src\java\org\apache\cassandra\hadoop\pig\AbstractCassandraStorage.java:157: error: cannot find symbol
[javac] for (IColumn subcol : col.getSubColumns())
[javac]  ^
[javac]   symbol:   class IColumn
[javac]   location: class AbstractCassandraStorage
{noformat}
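
For reference, a minimal self-contained reduction of the first error (hypothetical names, and intentionally non-compiling): the merge left two locals named {{validators}} in the same scope of {{columnToTuple}}, one from the existing 2.0 code and one re-introduced by the merged 1.2 patch, as visible in the a3ad2e82 merge diff further down this thread.
{code}
import java.util.HashMap;
import java.util.Map;

class DuplicateLocalDemo
{
    static void columnToTuple()
    {
        // first declaration (from the pre-merge 2.0 code)
        Map<String, String> validators = new HashMap<String, String>();
        if (validators.get("name") == null)
        {
            // second declaration in the same scope (from the merged 1.2 patch);
            // javac: variable validators is already defined
            Map<String, String> validators = new HashMap<String, String>();
        }
    }
}
{code}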

> CassandraStorage broken for bigints and ints
> 
>
> Key: CASSANDRA-6102
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6102
> Project: Cassandra
>  Issue Type: Bug
>  Components: Hadoop
> Environment: Cassandra 1.2.9 & 1.2.10, Pig 0.11.1, OSX 10.8.x
>Reporter: Janne Jalkanen
>Assignee: Alex Liu
> Fix For: 1.2.11
>
> Attachments: 6102-1.2-branch.txt, 6102-2.0-branch.txt, 6102-v2.txt, 
> 6102-v3.txt, 6102-v4.txt, 6102-v5.txt
>
>
> I am seeing something rather strange in the way Cass 1.2 + Pig seem to handle 
> integer values.
> Setup: Cassandra 1.2.10, OSX 10.8, JDK 1.7u40, Pig 0.11.1.  Single node for 
> testing this. 
> First a table:
> {noformat}
> > CREATE TABLE testc (
>  key text PRIMARY KEY,
>  ivalue int,
>  svalue text,
>  value bigint
> ) WITH COMPACT STORAGE;
> > insert into testc (key,ivalue,svalue,value) values ('foo',10,'bar',65);
> > select * from testc;
>  key | ivalue | svalue | value
> -----+--------+--------+-------
>  foo |     10 |    bar |    65
> {noformat}
> For my Pig setup, I then use libraries from different C* versions to actually 
> talk to my database (which stays on 1.2.10 all the time).
> Cassandra 1.0.12 (using cassandra_storage.jar):
> {noformat}
> testc = LOAD 'cassandra://keyspace/testc' USING CassandraStorage();
> dump testc
> (foo,(svalue,bar),(ivalue,10),(value,65),{})
> {noformat}
> Cassandra 1.1.10:
> {noformat}
> testc = LOAD 'cassandra://keyspace/testc' USING CassandraStorage();
> dump testc
> (foo,(svalue,bar),(ivalue,10),(value,65),{})
> {noformat}
> Cassandra 1.2.10:
> {noformat}
> testc = LOAD 'cassandra://keyspace/testc' USING CassandraStorage();
> dump testc
> foo,{(ivalue,
> ),(svalue,bar),(value,A)})
> {noformat}
> To me it appears that ints and bigints are interpreted as ascii values in 
> cass 1.2.10.  Did something change for CassandraStorage, is there a 
> regression, or am I doing something wrong?  Quick perusal of the JIRA didn't 
> reveal anything that I could directly pin on this.
> Note that using compact storage does not seem to affect the issue, though it 
> obviously changes the resulting pig format.
> In addition, trying to use Pygmalion:
> {noformat}
> tf = foreach testc generate key, 
> flatten(FromCassandraBag('ivalue,svalue,value',columns)) as 
> (ivalue:int,svalue:chararray,lvalue:long);
> dump tf
> (foo,
> ,bar,A)
> {noformat}
> So no help there. Explicitly casting the values to (long) or (int) just 
> results in a ClassCastException.
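
For what it's worth, the garbled values above are exactly what you get if the raw bytes are run through an ascii conversion: bigint 65 ends in byte 0x41 ('A'), and int 10 is 0x0A (a newline), which is why (ivalue,...) prints a line break and (value,A) shows an 'A'. A minimal standalone check (illustration only):
{code}
import java.nio.ByteBuffer;

public class AsciiMisreadDemo
{
    public static void main(String[] args)
    {
        ByteBuffer buf = ByteBuffer.allocate(8).putLong(65L); // bigint 65 as stored
        System.out.println(buf.getLong(0));    // decoded as a long: 65
        System.out.println((char) buf.get(7)); // last byte decoded as ascii: 'A'
        System.out.println((char) 10);         // int 10 decoded as ascii: a newline
    }
}
{code}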





[jira] [Updated] (CASSANDRA-6068) Support login/password auth in cassandra-stress

2013-10-11 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-6068:
--

Reviewer: Aleksey Yeschenko

> Support login/password auth in cassandra-stress
> ---
>
> Key: CASSANDRA-6068
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6068
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Aleksey Yeschenko
>Assignee: Mikhail Stepura
>Priority: Trivial
> Fix For: 2.0.2
>
> Attachments: cassandra-2.0-6068.patch
>
>
> Support login/password auth in cassandra-stress





[jira] [Assigned] (CASSANDRA-6068) Support login/password auth in cassandra-stress

2013-10-11 Thread Mikhail Stepura (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Stepura reassigned CASSANDRA-6068:
--

Assignee: Mikhail Stepura

> Support login/password auth in cassandra-stress
> ---
>
> Key: CASSANDRA-6068
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6068
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Aleksey Yeschenko
>Assignee: Mikhail Stepura
>Priority: Trivial
> Fix For: 2.0.2
>
> Attachments: cassandra-2.0-6068.patch
>
>
> Support login/password auth in cassandra-stress





[jira] [Updated] (CASSANDRA-6068) Support login/password auth in cassandra-stress

2013-10-11 Thread Mikhail Stepura (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Stepura updated CASSANDRA-6068:
---

Attachment: cassandra-2.0-6068.patch

Adds {{un}} and {{pw}} options for authentication.
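
Presumably driven from the command line along these lines (hypothetical invocation; only the {{un}}/{{pw}} option names come from this patch, the other flags are the usual stress options):
{noformat}
tools/bin/cassandra-stress -d 127.0.0.1 -n 100000 -un stressuser -pw stresspass
{noformat}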

> Support login/password auth in cassandra-stress
> ---
>
> Key: CASSANDRA-6068
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6068
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Aleksey Yeschenko
>Priority: Trivial
> Fix For: 2.0.2
>
> Attachments: cassandra-2.0-6068.patch
>
>
> Support login/password auth in cassandra-stress





[jira] [Updated] (CASSANDRA-5695) Convert pig smoke tests into real PigUnit tests

2013-10-11 Thread Alex Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5695?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Liu updated CASSANDRA-5695:


Attachment: 5695-1.2-branch.txt

> Convert pig smoke tests into real PigUnit tests
> ---
>
> Key: CASSANDRA-5695
> URL: https://issues.apache.org/jira/browse/CASSANDRA-5695
> Project: Cassandra
>  Issue Type: Test
>  Components: Hadoop
>Reporter: Brandon Williams
>Assignee: Alex Liu
>Priority: Minor
> Fix For: 2.0.2
>
> Attachments: 5695-1.2-branch.txt
>
>
> Currently, we have some ghetto pig tests in examples/pig/test, but there's 
> currently no way to continuously integrate these since a human needs to check 
> that the output isn't wrong, not just that the tests ran successfully.  We've 
> had garbled output problems in the past, so it would be nice to formalize our 
> tests to catch this.  PigUnit appears to be a good choice for this.
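
As a rough sketch of what such a test could look like (hypothetical script, alias, and expected tuple, borrowing PigUnit's standard {{PigTest}} API): the assertion fails when the tuple contents are garbled, not merely when the job fails to run.
{code}
import org.apache.pig.pigunit.PigTest;

public class CassandraStorageOutputTest
{
    public void testDump() throws Exception
    {
        String[] script = {
            "rows = LOAD 'cassandra://keyspace/testc' USING CassandraStorage();",
        };
        PigTest test = new PigTest(script);
        // Compares the actual tuples for the 'rows' alias against expectations
        test.assertOutput("rows", new String[] { "(foo,(svalue,bar),(ivalue,10),(value,65),{})" });
    }
}
{code}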





[jira] [Updated] (CASSANDRA-5695) Convert pig smoke tests into real PigUnit tests

2013-10-11 Thread Alex Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5695?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Liu updated CASSANDRA-5695:


Attachment: (was: 5965-1.2-branch.txt)

> Convert pig smoke tests into real PigUnit tests
> ---
>
> Key: CASSANDRA-5695
> URL: https://issues.apache.org/jira/browse/CASSANDRA-5695
> Project: Cassandra
>  Issue Type: Test
>  Components: Hadoop
>Reporter: Brandon Williams
>Assignee: Alex Liu
>Priority: Minor
> Fix For: 2.0.2
>
>
> Currently, we have some ghetto pig tests in examples/pig/test, but there's 
> currently no way to continuously integrate these since a human needs to check 
> that the output isn't wrong, not just that the tests ran successfully.  We've 
> had garbled output problems in the past, so it would be nice to formalize our 
> tests to catch this.  PigUnit appears to be a good choice for this.





[jira] [Commented] (CASSANDRA-5695) Convert pig smoke tests into real PigUnit tests

2013-10-11 Thread Alex Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5695?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13793194#comment-13793194
 ] 

Alex Liu commented on CASSANDRA-5695:
-

Command to run the pig tests:
{code} ant pig-test {code}

> Convert pig smoke tests into real PigUnit tests
> ---
>
> Key: CASSANDRA-5695
> URL: https://issues.apache.org/jira/browse/CASSANDRA-5695
> Project: Cassandra
>  Issue Type: Test
>  Components: Hadoop
>Reporter: Brandon Williams
>Assignee: Alex Liu
>Priority: Minor
> Fix For: 2.0.2
>
> Attachments: 5965-1.2-branch.txt
>
>
> Currently, we have some ghetto pig tests in examples/pig/test, but there's 
> currently no way to continuously integrate these since a human needs to check 
> that the output isn't wrong, not just that the tests ran successfully.  We've 
> had garbled output problems in the past, so it would be nice to formalize our 
> tests to catch this.  PigUnit appears to be a good choice for this.





[jira] [Comment Edited] (CASSANDRA-6102) CassandraStorage broken for bigints and ints

2013-10-11 Thread Alex Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13793138#comment-13793138
 ] 

Alex Liu edited comment on CASSANDRA-6102 at 10/12/13 1:11 AM:
---

The 6102-2.0-branch.txt patch is on top of the cassandra-2.0 branch.




was (Author: alexliu68):
sure

> CassandraStorage broken for bigints and ints
> 
>
> Key: CASSANDRA-6102
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6102
> Project: Cassandra
>  Issue Type: Bug
>  Components: Hadoop
> Environment: Cassandra 1.2.9 & 1.2.10, Pig 0.11.1, OSX 10.8.x
>Reporter: Janne Jalkanen
>Assignee: Alex Liu
> Fix For: 1.2.11
>
> Attachments: 6102-1.2-branch.txt, 6102-2.0-branch.txt, 6102-v2.txt, 
> 6102-v3.txt, 6102-v4.txt, 6102-v5.txt
>
>
> I am seeing something rather strange in the way Cass 1.2 + Pig seem to handle 
> integer values.
> Setup: Cassandra 1.2.10, OSX 10.8, JDK 1.7u40, Pig 0.11.1.  Single node for 
> testing this. 
> First a table:
> {noformat}
> > CREATE TABLE testc (
>  key text PRIMARY KEY,
>  ivalue int,
>  svalue text,
>  value bigint
> ) WITH COMPACT STORAGE;
> > insert into testc (key,ivalue,svalue,value) values ('foo',10,'bar',65);
> > select * from testc;
>  key | ivalue | svalue | value
> -----+--------+--------+-------
>  foo |     10 |    bar |    65
> {noformat}
> For my Pig setup, I then use libraries from different C* versions to actually 
> talk to my database (which stays on 1.2.10 all the time).
> Cassandra 1.0.12 (using cassandra_storage.jar):
> {noformat}
> testc = LOAD 'cassandra://keyspace/testc' USING CassandraStorage();
> dump testc
> (foo,(svalue,bar),(ivalue,10),(value,65),{})
> {noformat}
> Cassandra 1.1.10:
> {noformat}
> testc = LOAD 'cassandra://keyspace/testc' USING CassandraStorage();
> dump testc
> (foo,(svalue,bar),(ivalue,10),(value,65),{})
> {noformat}
> Cassandra 1.2.10:
> {noformat}
> testc = LOAD 'cassandra://keyspace/testc' USING CassandraStorage();
> dump testc
> foo,{(ivalue,
> ),(svalue,bar),(value,A)})
> {noformat}
> To me it appears that ints and bigints are interpreted as ascii values in 
> cass 1.2.10.  Did something change for CassandraStorage, is there a 
> regression, or am I doing something wrong?  Quick perusal of the JIRA didn't 
> reveal anything that I could directly pin on this.
> Note that using compact storage does not seem to affect the issue, though it 
> obviously changes the resulting pig format.
> In addition, trying to use Pygmalion:
> {noformat}
> tf = foreach testc generate key, 
> flatten(FromCassandraBag('ivalue,svalue,value',columns)) as 
> (ivalue:int,svalue:chararray,lvalue:long);
> dump tf
> (foo,
> ,bar,A)
> {noformat}
> So no help there. Explicitly casting the values to (long) or (int) just 
> results in a ClassCastException.





[jira] [Updated] (CASSANDRA-6102) CassandraStorage broken for bigints and ints

2013-10-11 Thread Alex Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Liu updated CASSANDRA-6102:


Attachment: 6102-2.0-branch.txt

> CassandraStorage broken for bigints and ints
> 
>
> Key: CASSANDRA-6102
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6102
> Project: Cassandra
>  Issue Type: Bug
>  Components: Hadoop
> Environment: Cassandra 1.2.9 & 1.2.10, Pig 0.11.1, OSX 10.8.x
>Reporter: Janne Jalkanen
>Assignee: Alex Liu
> Fix For: 1.2.11
>
> Attachments: 6102-1.2-branch.txt, 6102-2.0-branch.txt, 6102-v2.txt, 
> 6102-v3.txt, 6102-v4.txt, 6102-v5.txt
>
>
> I am seeing something rather strange in the way Cass 1.2 + Pig seem to handle 
> integer values.
> Setup: Cassandra 1.2.10, OSX 10.8, JDK 1.7u40, Pig 0.11.1.  Single node for 
> testing this. 
> First a table:
> {noformat}
> > CREATE TABLE testc (
>  key text PRIMARY KEY,
>  ivalue int,
>  svalue text,
>  value bigint
> ) WITH COMPACT STORAGE;
> > insert into testc (key,ivalue,svalue,value) values ('foo',10,'bar',65);
> > select * from testc;
>  key | ivalue | svalue | value
> -----+--------+--------+-------
>  foo |     10 |    bar |    65
> {noformat}
> For my Pig setup, I then use libraries from different C* versions to actually 
> talk to my database (which stays on 1.2.10 all the time).
> Cassandra 1.0.12 (using cassandra_storage.jar):
> {noformat}
> testc = LOAD 'cassandra://keyspace/testc' USING CassandraStorage();
> dump testc
> (foo,(svalue,bar),(ivalue,10),(value,65),{})
> {noformat}
> Cassandra 1.1.10:
> {noformat}
> testc = LOAD 'cassandra://keyspace/testc' USING CassandraStorage();
> dump testc
> (foo,(svalue,bar),(ivalue,10),(value,65),{})
> {noformat}
> Cassandra 1.2.10:
> {noformat}
> testc = LOAD 'cassandra://keyspace/testc' USING CassandraStorage();
> dump testc
> foo,{(ivalue,
> ),(svalue,bar),(value,A)})
> {noformat}
> To me it appears that ints and bigints are interpreted as ascii values in 
> cass 1.2.10.  Did something change for CassandraStorage, is there a 
> regression, or am I doing something wrong?  Quick perusal of the JIRA didn't 
> reveal anything that I could directly pin on this.
> Note that using compact storage does not seem to affect the issue, though it 
> obviously changes the resulting pig format.
> In addition, trying to use Pygmalion:
> {noformat}
> tf = foreach testc generate key, 
> flatten(FromCassandraBag('ivalue,svalue,value',columns)) as 
> (ivalue:int,svalue:chararray,lvalue:long);
> dump tf
> (foo,
> ,bar,A)
> {noformat}
> So no help there. Explicitly casting the values to (long) or (int) just 
> results in a ClassCastException.





[jira] [Updated] (CASSANDRA-5695) Convert pig smoke tests into real PigUnit tests

2013-10-11 Thread Alex Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5695?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Liu updated CASSANDRA-5695:


Attachment: 5965-1.2-branch.txt

> Convert pig smoke tests into real PigUnit tests
> ---
>
> Key: CASSANDRA-5695
> URL: https://issues.apache.org/jira/browse/CASSANDRA-5695
> Project: Cassandra
>  Issue Type: Test
>  Components: Hadoop
>Reporter: Brandon Williams
>Assignee: Alex Liu
>Priority: Minor
> Fix For: 2.0.2
>
> Attachments: 5965-1.2-branch.txt
>
>
> Currently, we have some ghetto pig tests in examples/pig/test, but there's 
> currently no way to continuously integrate these since a human needs to check 
> that the output isn't wrong, not just that the tests ran successfully.  We've 
> had garbled output problems in the past, so it would be nice to formalize our 
> tests to catch this.  PigUnit appears to be a good choice for this.





[jira] [Commented] (CASSANDRA-5695) Convert pig smoke tests into real PigUnit tests

2013-10-11 Thread Alex Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5695?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13793173#comment-13793173
 ] 

Alex Liu commented on CASSANDRA-5695:
-

The 5965-1.2-branch.txt patch, against the 1.2 branch, is attached.


> Convert pig smoke tests into real PigUnit tests
> ---
>
> Key: CASSANDRA-5695
> URL: https://issues.apache.org/jira/browse/CASSANDRA-5695
> Project: Cassandra
>  Issue Type: Test
>  Components: Hadoop
>Reporter: Brandon Williams
>Assignee: Alex Liu
>Priority: Minor
> Fix For: 2.0.2
>
> Attachments: 5965-1.2-branch.txt
>
>
> Currently, we have some ghetto pig tests in examples/pig/test, but there's 
> currently no way to continuously integrate these since a human needs to check 
> that the output isn't wrong, not just that the tests ran successfully.  We've 
> had garbled output problems in the past, so it would be nice to formalize our 
> tests to catch this.  PigUnit appears to be a good choice for this.





[jira] [Commented] (CASSANDRA-6187) Cassandra-2.0 node seems to be pausing periodically under normal operations

2013-10-11 Thread Li Zou (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13793141#comment-13793141
 ] 

Li Zou commented on CASSANDRA-6187:
---

This issue has only been observed fairly recently, when testing Cassandra-2.0.

We have been using the same tool, the same configuration, and the same set of 
machines with Cassandra-1.2.4, and it has never shown any sign of this issue.

> Cassandra-2.0 node seems to be pausing periodically under normal operations
> ---
>
> Key: CASSANDRA-6187
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6187
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: One data center of 4 Cassandra-2.0 nodes with default 
> configuration parameters deployed on 4 separate machines.
> A testing app (with either Astyanax or DataStax client driver) interacts with 
> the Cassandra data center.
> A traffic generator is sending traffic to the testing app for testing purposes.
>Reporter: Li Zou
>
> Under normal operating conditions without any interruption, the traffic 
> generator will periodically see a couple of seconds of zero (or very few) 
> transactions, i.e. the throughput can be quite high for a while, then comes 
> down to zero (or very few) transactions for 10 ~ 20 seconds.
> The Cassandra system log also occasionally logs message drops, but these 
> message drop events do not correlate in time with the observed transaction drop 
> events.
> Example of message dropping log:
> {noformat}
>  INFO [ScheduledTasks:1] 2013-10-11 16:36:12,216 MessagingService.java (line 812) 1191 MUTATION messages dropped in last 5000ms
>  INFO [ScheduledTasks:1] 2013-10-11 16:36:12,217 MessagingService.java (line 812) 502 READ_REPAIR messages dropped in last 5000ms
>  INFO [ScheduledTasks:1] 2013-10-11 16:36:12,217 StatusLogger.java (line 55) Pool Name                    Active   Pending      Completed   Blocked  All Time Blocked
>  INFO [ScheduledTasks:1] 2013-10-11 16:36:12,246 StatusLogger.java (line 70) ReadStage                         0         0         845326         0                 0
>  INFO [ScheduledTasks:1] 2013-10-11 16:36:12,246 StatusLogger.java (line 70) RequestResponseStage              0         0        1643358         0                 0
>  INFO [ScheduledTasks:1] 2013-10-11 16:36:12,246 StatusLogger.java (line 70) ReadRepairStage                   0         0          61247         0                 0
>  INFO [ScheduledTasks:1] 2013-10-11 16:36:12,247 StatusLogger.java (line 70) MutationStage                     0         0        1155502         0                 0
>  INFO [ScheduledTasks:1] 2013-10-11 16:36:12,247 StatusLogger.java (line 70) ReplicateOnWriteStage             0         0              0         0                 0
>  INFO [ScheduledTasks:1] 2013-10-11 16:36:12,247 StatusLogger.java (line 70) GossipStage                       0         0           5391         0                 0
>  INFO [ScheduledTasks:1] 2013-10-11 16:36:12,248 StatusLogger.java (line 70) AntiEntropyStage                  0         0              0         0                 0
>  INFO [ScheduledTasks:1] 2013-10-11 16:36:12,248 StatusLogger.java (line 70) MigrationStage                    0         0             14         0                 0
>  INFO [ScheduledTasks:1] 2013-10-11 16:36:12,248 StatusLogger.java (line 70) MemtablePostFlusher               0         0             99         0                 0
>  INFO [ScheduledTasks:1] 2013-10-11 16:36:12,248 StatusLogger.java (line 70) MemoryMeter                       0         0             58         0                 0
>  INFO [ScheduledTasks:1] 2013-10-11 16:36:12,249 StatusLogger.java (line 70) FlushWriter                       0         0             45         0                 0
>  INFO [ScheduledTasks:1] 2013-10-11 16:36:12,249 StatusLogger.java (line 70) MiscStage                         0         0              0         0                 0
>  INFO [ScheduledTasks:1] 2013-10-11 16:36:12,249 StatusLogger.java (line 70) commitlog_archiver                0         0              0         0                 0
>  INFO [ScheduledTasks:1] 2013-10-11 16:36:12,250 StatusLogger.java (line 70) InternalResponseStage             0         0              3         0                 0
>  INFO [ScheduledTasks:1] 2013-10-11 16:36:12,250 StatusLogger.java (line 70) HintedHandoff                     0         0              7         0                 0
>  INFO [ScheduledTasks:1] 2013-10-11 16:36:12,250 StatusLogger.java (line 79) CompactionManager                 0         0
>  INFO [ScheduledTasks:1] 2013-10-11 16:36:12,250 StatusLogger.java (line 81) Commitlog                       n/a         0

[jira] [Commented] (CASSANDRA-6102) CassandraStorage broken for bigints and ints

2013-10-11 Thread Alex Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13793138#comment-13793138
 ] 

Alex Liu commented on CASSANDRA-6102:
-

sure

> CassandraStorage broken for bigints and ints
> 
>
> Key: CASSANDRA-6102
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6102
> Project: Cassandra
>  Issue Type: Bug
>  Components: Hadoop
> Environment: Cassandra 1.2.9 & 1.2.10, Pig 0.11.1, OSX 10.8.x
>Reporter: Janne Jalkanen
>Assignee: Alex Liu
> Fix For: 1.2.11
>
> Attachments: 6102-1.2-branch.txt, 6102-v2.txt, 6102-v3.txt, 
> 6102-v4.txt, 6102-v5.txt
>
>
> I am seeing something rather strange in the way Cass 1.2 + Pig seem to handle 
> integer values.
> Setup: Cassandra 1.2.10, OSX 10.8, JDK 1.7u40, Pig 0.11.1.  Single node for 
> testing this. 
> First a table:
> {noformat}
> > CREATE TABLE testc (
>  key text PRIMARY KEY,
>  ivalue int,
>  svalue text,
>  value bigint
> ) WITH COMPACT STORAGE;
> > insert into testc (key,ivalue,svalue,value) values ('foo',10,'bar',65);
> > select * from testc;
>  key | ivalue | svalue | value
> -----+--------+--------+-------
>  foo |     10 |    bar |    65
> {noformat}
> For my Pig setup, I then use libraries from different C* versions to actually 
> talk to my database (which stays on 1.2.10 all the time).
> Cassandra 1.0.12 (using cassandra_storage.jar):
> {noformat}
> testc = LOAD 'cassandra://keyspace/testc' USING CassandraStorage();
> dump testc
> (foo,(svalue,bar),(ivalue,10),(value,65),{})
> {noformat}
> Cassandra 1.1.10:
> {noformat}
> testc = LOAD 'cassandra://keyspace/testc' USING CassandraStorage();
> dump testc
> (foo,(svalue,bar),(ivalue,10),(value,65),{})
> {noformat}
> Cassandra 1.2.10:
> {noformat}
> testc = LOAD 'cassandra://keyspace/testc' USING CassandraStorage();
> dump testc
> foo,{(ivalue,
> ),(svalue,bar),(value,A)})
> {noformat}
> To me it appears that ints and bigints are interpreted as ascii values in 
> cass 1.2.10.  Did something change for CassandraStorage, is there a 
> regression, or am I doing something wrong?  Quick perusal of the JIRA didn't 
> reveal anything that I could directly pin on this.
> Note that using compact storage does not seem to affect the issue, though it 
> obviously changes the resulting pig format.
> In addition, trying to use Pygmalion:
> {noformat}
> tf = foreach testc generate key, 
> flatten(FromCassandraBag('ivalue,svalue,value',columns)) as 
> (ivalue:int,svalue:chararray,lvalue:long);
> dump tf
> (foo,
> ,bar,A)
> {noformat}
> So no help there. Explicitly casting the values to (long) or (int) just 
> results in a ClassCastException.





[jira] [Commented] (CASSANDRA-6109) Consider coldness in STCS compaction

2013-10-11 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13793127#comment-13793127
 ] 

Tyler Hobbs commented on CASSANDRA-6109:


I've spent some more time thinking about this and it seems like we either need 
a more sophisticated approach in order to handle the various corner cases or we 
need to disable this feature by default.

If we disable the feature by default, then using a hotness percentile or 
something similar might be okay.

If we want to enable the feature by default, I've got a couple of more 
sophisticated approaches:

The first approach is fairly simple and uses two parameters:
* SSTables which receive less than X% of the reads/sec per key of the hottest 
sstable (for the whole CF) will be considered cold.
* If the cold sstables make up more than Y% of the total reads/sec, don't 
consider the warmest of the cold sstables cold. (In other words, go through the 
"cold" bucket and remove the warmest sstables until the cold bucket makes up 
less than Y% of the total reads/sec.)

This solves one problem of basing coldness on the mean rate, which is that if 
you have almost all cold sstables, the mean will be very low.  Comparing 
against the max deals well with this.  The second parameter acts as a hedge for 
the case you brought up where a large number of cold sstables can collectively 
account for a high percentage of the total reads.
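
A sketch of how those two parameters could compose (hypothetical types and field names; assumes each sstable exposes its reads/sec and its reads/sec per key):
{code}
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

class ColdBucketSketch
{
    static class SSTableStats
    {
        double readsPerSecPerKey; // hotness
        double readsPerSec;       // contribution to total CF reads
    }

    static List<SSTableStats> coldSSTables(List<SSTableStats> all, double x, double y)
    {
        double maxHotness = 0, totalReads = 0;
        for (SSTableStats t : all)
        {
            maxHotness = Math.max(maxHotness, t.readsPerSecPerKey);
            totalReads += t.readsPerSec;
        }

        // Parameter 1: cold = per-key read rate below X% of the hottest sstable's
        List<SSTableStats> cold = new ArrayList<SSTableStats>();
        double coldReads = 0;
        for (SSTableStats t : all)
        {
            if (t.readsPerSecPerKey < x * maxHotness)
            {
                cold.add(t);
                coldReads += t.readsPerSec;
            }
        }

        // Parameter 2: drop the warmest "cold" sstables until the cold bucket
        // accounts for less than Y% of the total reads/sec
        Collections.sort(cold, new Comparator<SSTableStats>()
        {
            public int compare(SSTableStats a, SSTableStats b)
            {
                return Double.compare(a.readsPerSecPerKey, b.readsPerSecPerKey);
            }
        });
        while (!cold.isEmpty() && coldReads > y * totalReads)
            coldReads -= cold.remove(cold.size() - 1).readsPerSec;
        return cold;
    }
}
{code}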

The second approach is less hacky but more difficult to explain or tune; it's 
a bucket optimization measure that covers these concerns.  Ideally, we would 
optimize two things:
* Average sstable hotness of the bucket
* The percentage of the total CF reads that are included in the bucket

These two items are somewhat in opposition.  Optimizing only for the first 
measure would mean just compacting the two hottest sstables.  Optimizing only 
for the second would mean compacting all sstables.  We can combine the two 
measures with different weightings to get a pretty good bucket optimization 
measure.  I've played around with some different measures in python and have a 
script that makes approximately the same bucket choices I would.  However, as I 
mentioned, this would be pretty hard for operators to understand and tune 
intelligently, somewhat like phi_convict_threshold.  If you're still open to 
that, I can attach my script with some example runs.
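
For concreteness, a combined measure might look something like the following weighted score (purely illustrative weighting, not the actual python script):
{code}
class BucketScoreSketch
{
    // w in [0,1] trades hot-data focus against read coverage
    static double score(double avgBucketHotness, double maxHotness,
                        double bucketReadsPerSec, double totalReadsPerSec, double w)
    {
        double hotness  = avgBucketHotness / maxHotness;          // measure 1: bucket hotness
        double coverage = bucketReadsPerSec / totalReadsPerSec;   // measure 2: share of CF reads
        return w * hotness + (1 - w) * coverage;
    }
}
{code}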

> Consider coldness in STCS compaction
> 
>
> Key: CASSANDRA-6109
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6109
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Core
>Reporter: Jonathan Ellis
>Assignee: Tyler Hobbs
> Fix For: 2.0.2
>
> Attachments: 6109-v1.patch, 6109-v2.patch
>
>
> I see two options:
> # Don't compact cold sstables at all
> # Compact cold sstables only if there is nothing more important to compact
> The latter is better if you have cold data that may become hot again...  but 
> it's confusing if you have a workload such that you can't keep up with *all* 
> compaction, but you can keep up with the hot sstables.  (The compaction backlog 
> stat becomes useless since we fall increasingly behind.)





[jira] [Commented] (CASSANDRA-6186) Can't add index with a name prefixed with 'index'

2013-10-11 Thread Ben Sykes (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13793116#comment-13793116
 ] 

Ben Sykes commented on CASSANDRA-6186:
--

This works for me:

{code}
cqlsh> CREATE KEYSPACE test_add_index WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1} ;
cqlsh>
cqlsh> create table test_add_index.cf1 (a text PRIMARY KEY, b text , c text );
cqlsh> create index index1 on test_add_index.cf1 (c);
cqlsh> DESC KEYSPACE test_add_index;

CREATE KEYSPACE test_add_index WITH replication = {
  'class': 'SimpleStrategy',
  'replication_factor': '1'
};

USE test_add_index;

CREATE TABLE cf1 (
  a text,
  b text,
  c text,
  PRIMARY KEY (a)
) WITH
  bloom_filter_fp_chance=0.01 AND
  caching='KEYS_ONLY' AND
  comment='' AND
  dclocal_read_repair_chance=0.00 AND
  gc_grace_seconds=864000 AND
  index_interval=128 AND
  read_repair_chance=0.10 AND
  replicate_on_write='true' AND
  populate_io_cache_on_flush='false' AND
  default_time_to_live=0 AND
  speculative_retry='NONE' AND
  memtable_flush_period_in_ms=0 AND
  compaction={'class': 'SizeTieredCompactionStrategy'} AND
  compression={'sstable_compression': 'LZ4Compressor'};

CREATE INDEX index1 ON cf1 (c);

cqlsh>
{code}

> Can't add index with a name prefixed with 'index'
> -
>
> Key: CASSANDRA-6186
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6186
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Nick Bailey
> Fix For: 2.0.2
>
>
> cqlsh code:
> {noformat}
> cqlsh> drop keyspace test_add_index;
> cqlsh> CREATE KEYSPACE test_add_index WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1} ;
> cqlsh> create table test_add_index.cf1 (a text PRIMARY KEY, b text , c text );
> cqlsh> create index index1 on test_add_index.cf1 (c);
> Bad Request: Duplicate index name index1
> cqlsh> drop keyspace test_add_index;
> cqlsh> CREATE KEYSPACE test_add_index WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1} ;
> cqlsh> create table test_add_index.cf1 (a text PRIMARY KEY, b text , c text );
> cqlsh> create index blah on test_add_index.cf1 (c);
> cqlsh>
> {noformat}





[jira] [Reopened] (CASSANDRA-6102) CassandraStorage broken for bigints and ints

2013-10-11 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams reopened CASSANDRA-6102:
-


Can you post a version against 2.0 or trunk?  git is giving me all kinds of 
problems trying to merge this.

> CassandraStorage broken for bigints and ints
> 
>
> Key: CASSANDRA-6102
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6102
> Project: Cassandra
>  Issue Type: Bug
>  Components: Hadoop
> Environment: Cassandra 1.2.9 & 1.2.10, Pig 0.11.1, OSX 10.8.x
>Reporter: Janne Jalkanen
>Assignee: Alex Liu
> Fix For: 1.2.11
>
> Attachments: 6102-1.2-branch.txt, 6102-v2.txt, 6102-v3.txt, 
> 6102-v4.txt, 6102-v5.txt
>
>
> I am seeing something rather strange in the way Cass 1.2 + Pig seem to handle 
> integer values.
> Setup: Cassandra 1.2.10, OSX 10.8, JDK 1.7u40, Pig 0.11.1.  Single node for 
> testing this. 
> First a table:
> {noformat}
> > CREATE TABLE testc (
>  key text PRIMARY KEY,
>  ivalue int,
>  svalue text,
>  value bigint
> ) WITH COMPACT STORAGE;
> > insert into testc (key,ivalue,svalue,value) values ('foo',10,'bar',65);
> > select * from testc;
>  key | ivalue | svalue | value
> -----+--------+--------+-------
>  foo |     10 |    bar |    65
> {noformat}
> For my Pig setup, I then use libraries from different C* versions to actually 
> talk to my database (which stays on 1.2.10 all the time).
> Cassandra 1.0.12 (using cassandra_storage.jar):
> {noformat}
> testc = LOAD 'cassandra://keyspace/testc' USING CassandraStorage();
> dump testc
> (foo,(svalue,bar),(ivalue,10),(value,65),{})
> {noformat}
> Cassandra 1.1.10:
> {noformat}
> testc = LOAD 'cassandra://keyspace/testc' USING CassandraStorage();
> dump testc
> (foo,(svalue,bar),(ivalue,10),(value,65),{})
> {noformat}
> Cassandra 1.2.10:
> {noformat}
> testc = LOAD 'cassandra://keyspace/testc' USING CassandraStorage();
> dump testc
> foo,{(ivalue,
> ),(svalue,bar),(value,A)})
> {noformat}
> To me it appears that ints and bigints are interpreted as ascii values in 
> cass 1.2.10.  Did something change for CassandraStorage, is there a 
> regression, or am I doing something wrong?  Quick perusal of the JIRA didn't 
> reveal anything that I could directly pin on this.
> Note that using compact storage does not seem to affect the issue, though it 
> obviously changes the resulting pig format.
> In addition, trying to use Pygmalion:
> {noformat}
> tf = foreach testc generate key, 
> flatten(FromCassandraBag('ivalue,svalue,value',columns)) as 
> (ivalue:int,svalue:chararray,lvalue:long);
> dump tf
> (foo,
> ,bar,A)
> {noformat}
> So no help there. Explicitly casting the values to (long) or (int) just 
> results in a ClassCastException.





[5/5] git commit: Merge branch 'cassandra-2.0' into trunk

2013-10-11 Thread brandonwilliams
Merge branch 'cassandra-2.0' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b89cce9c
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b89cce9c
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b89cce9c

Branch: refs/heads/trunk
Commit: b89cce9c8583d869cb3ca6fd26b4a9b47d71f108
Parents: bc8e247 e5dba3c
Author: Brandon Williams 
Authored: Fri Oct 11 17:41:45 2013 -0500
Committer: Brandon Williams 
Committed: Fri Oct 11 17:41:45 2013 -0500

--

--




[2/5] git commit: Merge branch 'cassandra-1.2' into cassandra-2.0

2013-10-11 Thread brandonwilliams
Merge branch 'cassandra-1.2' into cassandra-2.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a3ad2e82
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a3ad2e82
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a3ad2e82

Branch: refs/heads/trunk
Commit: a3ad2e82249b88d4a05f24140948cdbc809d14f3
Parents: 8a506e6 639c01a
Author: Brandon Williams 
Authored: Fri Oct 11 15:30:27 2013 -0500
Committer: Brandon Williams 
Committed: Fri Oct 11 15:30:27 2013 -0500

--
 .../hadoop/pig/AbstractCassandraStorage.java| 97 ++--
 .../cassandra/hadoop/pig/CassandraStorage.java  | 55 +++
 .../apache/cassandra/hadoop/pig/CqlStorage.java |  8 +-
 3 files changed, 109 insertions(+), 51 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/a3ad2e82/src/java/org/apache/cassandra/hadoop/pig/AbstractCassandraStorage.java
--
diff --cc src/java/org/apache/cassandra/hadoop/pig/AbstractCassandraStorage.java
index c881734,dbebfb5..486c781
--- a/src/java/org/apache/cassandra/hadoop/pig/AbstractCassandraStorage.java
+++ b/src/java/org/apache/cassandra/hadoop/pig/AbstractCassandraStorage.java
@@@ -127,14 -128,36 +128,37 @@@ public abstract class AbstractCassandra
  setTupleValue(pair, 0, cassandraToObj(comparator, col.name()));
  
  // value
 -if (col instanceof Column)
 +Map<ByteBuffer, AbstractType> validators = getValidatorMap(cfDef);
 +if (validators.get(col.name()) == null)
  {
- Map<MarshallerType, AbstractType> marshallers = getDefaultMarshallers(cfDef);
- setTupleValue(pair, 1, cassandraToObj(marshallers.get(MarshallerType.DEFAULT_VALIDATOR), col.value()));
+ // standard
+ Map<ByteBuffer, AbstractType> validators = getValidatorMap(cfDef);
+ ByteBuffer colName;
+ if (cfInfo.cql3Table && !cfInfo.compactCqlTable)
+ {
+ ByteBuffer[] names = ((AbstractCompositeType) parseType(cfDef.comparator_type)).split(col.name());
+ colName = names[names.length-1];
+ }
+ else
+ colName = col.name();
+ if (validators.get(colName) == null)
+ {
+ Map<MarshallerType, AbstractType> marshallers = getDefaultMarshallers(cfDef);
+ setTupleValue(pair, 1, cassandraToObj(marshallers.get(MarshallerType.DEFAULT_VALIDATOR), col.value()));
+ }
+ else
+ setTupleValue(pair, 1, cassandraToObj(validators.get(colName), col.value()));
+ return pair;
  }
  else
- setTupleValue(pair, 1, cassandraToObj(validators.get(col.name()), col.value()));
+ {
+ // super
+ ArrayList<Tuple> subcols = new ArrayList<Tuple>();
+ for (IColumn subcol : col.getSubColumns())
+ subcols.add(columnToTuple(subcol, cfInfo, parseType(cfDef.getSubcomparator_type())));
+ 
+ pair.set(1, new DefaultDataBag(subcols));
+ }
  return pair;
  }
  

http://git-wip-us.apache.org/repos/asf/cassandra/blob/a3ad2e82/src/java/org/apache/cassandra/hadoop/pig/CassandraStorage.java
--
diff --cc src/java/org/apache/cassandra/hadoop/pig/CassandraStorage.java
index 4083236,a7cc1ad..d9c55a1
--- a/src/java/org/apache/cassandra/hadoop/pig/CassandraStorage.java
+++ b/src/java/org/apache/cassandra/hadoop/pig/CassandraStorage.java
@@@ -124,9 -123,9 +125,9 @@@ public class CassandraStorage extends A
  key = (ByteBuffer)reader.getCurrentKey();
  tuple = keyToTuple(key, cfDef, parseType(cfDef.getKey_validation_class()));
  }
 -for (Map.Entry<ByteBuffer, Column> entry : lastRow.entrySet())
 +for (Map.Entry<ByteBuffer, IColumn> entry : lastRow.entrySet())
  {
- bag.add(columnToTuple(entry.getValue(), cfDef, parseType(cfDef.getComparator_type())));
+ bag.add(columnToTuple(entry.getValue(), cfInfo, parseType(cfDef.getComparator_type())));
  }
  lastKey = null;
  lastRow = null;
@@@ -162,9 -161,9 +163,9 @@@
  tuple = keyToTuple(lastKey, cfDef, parseType(cfDef.getKey_validation_class()));
  else
  addKeyToTuple(tuple, lastKey, cfDef, parseType(cfDef.getKey_validation_class()));
 -for (Map.Entry<ByteBuffer, Column> entry : lastRow.entrySet())
 +for (Map.Entry<ByteBuffer, IColumn> entry : lastRow.entrySet())
  {
- bag.add(columnToTup

[1/5] git commit: Fix int/bigint in CassandraStorage Patch by Alex Liu, reviewed by brandonwilliams for CASSANDRA-6102

2013-10-11 Thread brandonwilliams
Updated Branches:
  refs/heads/cassandra-2.0 a3ad2e822 -> e5dba3c62
  refs/heads/trunk bc8e2475f -> b89cce9c8


Fix int/bigint in CassandraStorage
Patch by Alex Liu, reviewed by brandonwilliams for CASSANDRA-6102


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/639c01a3
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/639c01a3
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/639c01a3

Branch: refs/heads/trunk
Commit: 639c01a3504ba2e2a55061093651a9973ad68d11
Parents: eee485e
Author: Brandon Williams 
Authored: Fri Oct 11 15:22:35 2013 -0500
Committer: Brandon Williams 
Committed: Fri Oct 11 15:22:35 2013 -0500

--
 .../hadoop/pig/AbstractCassandraStorage.java| 82 +---
 .../cassandra/hadoop/pig/CassandraStorage.java  | 45 ++-
 .../apache/cassandra/hadoop/pig/CqlStorage.java |  8 +-
 3 files changed, 85 insertions(+), 50 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/639c01a3/src/java/org/apache/cassandra/hadoop/pig/AbstractCassandraStorage.java
--
diff --git a/src/java/org/apache/cassandra/hadoop/pig/AbstractCassandraStorage.java b/src/java/org/apache/cassandra/hadoop/pig/AbstractCassandraStorage.java
index 6ad4f9e..dbebfb5 100644
--- a/src/java/org/apache/cassandra/hadoop/pig/AbstractCassandraStorage.java
+++ b/src/java/org/apache/cassandra/hadoop/pig/AbstractCassandraStorage.java
@@ -97,7 +97,7 @@ public abstract class AbstractCassandraStorage extends LoadFunc implements Store
 protected String outputFormatClass;
 protected int splitSize = 64 * 1024;
 protected String partitionerClass;
-protected boolean usePartitionFilter = false; 
+protected boolean usePartitionFilter = false;
 
 public AbstractCassandraStorage()
 {
@@ -116,8 +116,9 @@ public abstract class AbstractCassandraStorage extends LoadFunc implements Store
 }
 
 /** convert a column to a tuple */
-protected Tuple columnToTuple(IColumn col, CfDef cfDef, AbstractType comparator) throws IOException
+protected Tuple columnToTuple(IColumn col, CfInfo cfInfo, AbstractType comparator) throws IOException
 {
+CfDef cfDef = cfInfo.cfDef;
 Tuple pair = TupleFactory.getInstance().newTuple(2);
 
 // name
@@ -131,13 +132,21 @@ public abstract class AbstractCassandraStorage extends LoadFunc implements Store
 {
 // standard
 Map<ByteBuffer, AbstractType> validators = getValidatorMap(cfDef);
-if (validators.get(col.name()) == null)
+ByteBuffer colName;
+if (cfInfo.cql3Table && !cfInfo.compactCqlTable)
+{
+ByteBuffer[] names = ((AbstractCompositeType) parseType(cfDef.comparator_type)).split(col.name());
+colName = names[names.length-1];
+}
+else
+colName = col.name();
+if (validators.get(colName) == null)
 {
 Map<MarshallerType, AbstractType> marshallers = getDefaultMarshallers(cfDef);
 setTupleValue(pair, 1, cassandraToObj(marshallers.get(MarshallerType.DEFAULT_VALIDATOR), col.value()));
 }
 else
-setTupleValue(pair, 1, cassandraToObj(validators.get(col.name()), col.value()));
+setTupleValue(pair, 1, cassandraToObj(validators.get(colName), col.value()));
 return pair;
 }
 else
@@ -145,7 +154,7 @@ public abstract class AbstractCassandraStorage extends LoadFunc implements Store
 // super
 ArrayList<Tuple> subcols = new ArrayList<Tuple>();
 for (IColumn subcol : col.getSubColumns())
-subcols.add(columnToTuple(subcol, cfDef, parseType(cfDef.getSubcomparator_type())));
+subcols.add(columnToTuple(subcol, cfInfo, parseType(cfDef.getSubcomparator_type())));
 
 pair.set(1, new DefaultDataBag(subcols));
 }
@@ -168,11 +177,16 @@ public abstract class AbstractCassandraStorage extends LoadFunc implements Store
 }
 
 /** get the columnfamily definition for the signature */
-protected CfDef getCfDef(String signature) throws IOException
+protected CfInfo getCfInfo(String signature) throws IOException
 {
 UDFContext context = UDFContext.getUDFContext();
 Properties property = context.getUDFProperties(AbstractCassandraStorage.class);
-return cfdefFromString(property.getProperty(signature));
+String prop = property.getProperty(signature);
+CfInfo cfInfo = new CfInfo();
+cfInfo.cfDef = cfdefFromString(prop.substring(2));
+cfInfo.compactCqlTable = prop.charAt(0) == '1' ? true : false;
+cfInfo.cql3Table = prop.charAt(1) == '1' ? true : false;
+return

[4/5] git commit: Revert "Merge branch 'cassandra-1.2' into cassandra-2.0"

2013-10-11 Thread brandonwilliams
Revert "Merge branch 'cassandra-1.2' into cassandra-2.0"

This reverts commit a3ad2e82249b88d4a05f24140948cdbc809d14f3, reversing
changes made to 8a506e66a66c004a7a253e3dd28845517da8a967.


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e5dba3c6
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e5dba3c6
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e5dba3c6

Branch: refs/heads/cassandra-2.0
Commit: e5dba3c6250eb7285777490a937b66a7092afa77
Parents: a3ad2e8
Author: Brandon Williams 
Authored: Fri Oct 11 17:41:22 2013 -0500
Committer: Brandon Williams 
Committed: Fri Oct 11 17:41:22 2013 -0500

--
 .../hadoop/pig/AbstractCassandraStorage.java| 97 ++--
 .../cassandra/hadoop/pig/CassandraStorage.java  | 55 ---
 .../apache/cassandra/hadoop/pig/CqlStorage.java |  8 +-
 3 files changed, 51 insertions(+), 109 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/e5dba3c6/src/java/org/apache/cassandra/hadoop/pig/AbstractCassandraStorage.java
--
diff --git a/src/java/org/apache/cassandra/hadoop/pig/AbstractCassandraStorage.java b/src/java/org/apache/cassandra/hadoop/pig/AbstractCassandraStorage.java
index 486c781..c881734 100644
--- a/src/java/org/apache/cassandra/hadoop/pig/AbstractCassandraStorage.java
+++ b/src/java/org/apache/cassandra/hadoop/pig/AbstractCassandraStorage.java
@@ -97,7 +97,7 @@ public abstract class AbstractCassandraStorage extends LoadFunc implements Store
 protected String outputFormatClass;
 protected int splitSize = 64 * 1024;
 protected String partitionerClass;
-protected boolean usePartitionFilter = false;
+protected boolean usePartitionFilter = false; 
 
 public AbstractCassandraStorage()
 {
@@ -116,9 +116,8 @@ public abstract class AbstractCassandraStorage extends LoadFunc implements Store
 }
 
 /** convert a column to a tuple */
-protected Tuple columnToTuple(IColumn col, CfInfo cfInfo, AbstractType comparator) throws IOException
+protected Tuple columnToTuple(Column col, CfDef cfDef, AbstractType comparator) throws IOException
 {
-CfDef cfDef = cfInfo.cfDef;
 Tuple pair = TupleFactory.getInstance().newTuple(2);
 
 // name
@@ -131,34 +130,11 @@ public abstract class AbstractCassandraStorage extends LoadFunc implements Store
 Map<ByteBuffer, AbstractType> validators = getValidatorMap(cfDef);
 if (validators.get(col.name()) == null)
 {
-// standard
-Map<ByteBuffer, AbstractType> validators = getValidatorMap(cfDef);
-ByteBuffer colName;
-if (cfInfo.cql3Table && !cfInfo.compactCqlTable)
-{
-ByteBuffer[] names = ((AbstractCompositeType) parseType(cfDef.comparator_type)).split(col.name());
-colName = names[names.length-1];
-}
-else
-colName = col.name();
-if (validators.get(colName) == null)
-{
-Map<MarshallerType, AbstractType> marshallers = getDefaultMarshallers(cfDef);
-setTupleValue(pair, 1, cassandraToObj(marshallers.get(MarshallerType.DEFAULT_VALIDATOR), col.value()));
-}
-else
-setTupleValue(pair, 1, cassandraToObj(validators.get(colName), col.value()));
-return pair;
+Map<MarshallerType, AbstractType> marshallers = getDefaultMarshallers(cfDef);
+setTupleValue(pair, 1, cassandraToObj(marshallers.get(MarshallerType.DEFAULT_VALIDATOR), col.value()));
 }
 else
-{
-// super
-ArrayList<Tuple> subcols = new ArrayList<Tuple>();
-for (IColumn subcol : col.getSubColumns())
-subcols.add(columnToTuple(subcol, cfInfo, parseType(cfDef.getSubcomparator_type())));
-
-pair.set(1, new DefaultDataBag(subcols));
-}
+setTupleValue(pair, 1, cassandraToObj(validators.get(col.name()), col.value()));
 return pair;
 }
 
@@ -178,16 +154,11 @@ public abstract class AbstractCassandraStorage extends LoadFunc implements Store
 }
 
 /** get the columnfamily definition for the signature */
-protected CfInfo getCfInfo(String signature) throws IOException
+protected CfDef getCfDef(String signature) throws IOException
 {
 UDFContext context = UDFContext.getUDFContext();
 Properties property = context.getUDFProperties(AbstractCassandraStorage.class);
-String prop = property.getProperty(signature);
-CfInfo cfInfo = new CfInfo();
-cfInfo.cfDef = cfdefFromString(prop.substring(2));
-cfInfo.compactCqlTable = prop.charAt(0) == '1' ? true : false;
-cfInfo.cql3Table = prop.charAt(1) == '1' ? true : false;
-return cfInfo;
+return

[3/5] git commit: Revert "Merge branch 'cassandra-1.2' into cassandra-2.0"

2013-10-11 Thread brandonwilliams
Revert "Merge branch 'cassandra-1.2' into cassandra-2.0"

This reverts commit a3ad2e82249b88d4a05f24140948cdbc809d14f3, reversing
changes made to 8a506e66a66c004a7a253e3dd28845517da8a967.


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e5dba3c6
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e5dba3c6
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e5dba3c6

Branch: refs/heads/trunk
Commit: e5dba3c6250eb7285777490a937b66a7092afa77
Parents: a3ad2e8
Author: Brandon Williams 
Authored: Fri Oct 11 17:41:22 2013 -0500
Committer: Brandon Williams 
Committed: Fri Oct 11 17:41:22 2013 -0500

--
 .../hadoop/pig/AbstractCassandraStorage.java| 97 ++--
 .../cassandra/hadoop/pig/CassandraStorage.java  | 55 ---
 .../apache/cassandra/hadoop/pig/CqlStorage.java |  8 +-
 3 files changed, 51 insertions(+), 109 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/e5dba3c6/src/java/org/apache/cassandra/hadoop/pig/AbstractCassandraStorage.java
--
diff --git a/src/java/org/apache/cassandra/hadoop/pig/AbstractCassandraStorage.java b/src/java/org/apache/cassandra/hadoop/pig/AbstractCassandraStorage.java
index 486c781..c881734 100644
--- a/src/java/org/apache/cassandra/hadoop/pig/AbstractCassandraStorage.java
+++ b/src/java/org/apache/cassandra/hadoop/pig/AbstractCassandraStorage.java
@@ -97,7 +97,7 @@ public abstract class AbstractCassandraStorage extends LoadFunc implements Store
 protected String outputFormatClass;
 protected int splitSize = 64 * 1024;
 protected String partitionerClass;
-protected boolean usePartitionFilter = false;
+protected boolean usePartitionFilter = false; 
 
 public AbstractCassandraStorage()
 {
@@ -116,9 +116,8 @@ public abstract class AbstractCassandraStorage extends LoadFunc implements Store
 }
 
 /** convert a column to a tuple */
-protected Tuple columnToTuple(IColumn col, CfInfo cfInfo, AbstractType comparator) throws IOException
+protected Tuple columnToTuple(Column col, CfDef cfDef, AbstractType comparator) throws IOException
 {
-CfDef cfDef = cfInfo.cfDef;
 Tuple pair = TupleFactory.getInstance().newTuple(2);
 
 // name
@@ -131,34 +130,11 @@ public abstract class AbstractCassandraStorage extends LoadFunc implements Store
 Map<ByteBuffer, AbstractType> validators = getValidatorMap(cfDef);
 if (validators.get(col.name()) == null)
 {
-// standard
-Map<ByteBuffer, AbstractType> validators = getValidatorMap(cfDef);
-ByteBuffer colName;
-if (cfInfo.cql3Table && !cfInfo.compactCqlTable)
-{
-ByteBuffer[] names = ((AbstractCompositeType) parseType(cfDef.comparator_type)).split(col.name());
-colName = names[names.length-1];
-}
-else
-colName = col.name();
-if (validators.get(colName) == null)
-{
-Map<MarshallerType, AbstractType> marshallers = getDefaultMarshallers(cfDef);
-setTupleValue(pair, 1, cassandraToObj(marshallers.get(MarshallerType.DEFAULT_VALIDATOR), col.value()));
-}
-else
-setTupleValue(pair, 1, cassandraToObj(validators.get(colName), col.value()));
-return pair;
+Map<MarshallerType, AbstractType> marshallers = getDefaultMarshallers(cfDef);
+setTupleValue(pair, 1, cassandraToObj(marshallers.get(MarshallerType.DEFAULT_VALIDATOR), col.value()));
 }
 else
-{
-// super
-ArrayList<Tuple> subcols = new ArrayList<Tuple>();
-for (IColumn subcol : col.getSubColumns())
-subcols.add(columnToTuple(subcol, cfInfo, parseType(cfDef.getSubcomparator_type())));
-
-pair.set(1, new DefaultDataBag(subcols));
-}
+setTupleValue(pair, 1, cassandraToObj(validators.get(col.name()), col.value()));
 return pair;
 }
 
@@ -178,16 +154,11 @@ public abstract class AbstractCassandraStorage extends LoadFunc implements Store
 }
 
 /** get the columnfamily definition for the signature */
-protected CfInfo getCfInfo(String signature) throws IOException
+protected CfDef getCfDef(String signature) throws IOException
 {
 UDFContext context = UDFContext.getUDFContext();
 Properties property = context.getUDFProperties(AbstractCassandraStorage.class);
-String prop = property.getProperty(signature);
-CfInfo cfInfo = new CfInfo();
-cfInfo.cfDef = cfdefFromString(prop.substring(2));
-cfInfo.compactCqlTable = prop.charAt(0) == '1' ? true : false;
-cfInfo.cql3Table = prop.charAt(1) == '1' ? true : false;
-return cfdefFr

[jira] [Assigned] (CASSANDRA-6188) The JMX stats for Speculative Retry stops moving during a node failure outage period

2013-10-11 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis reassigned CASSANDRA-6188:
-

Assignee: Ryan McGuire

Ryan, does SR still work in the latest 2.0 branch?

> The JMX stats for Speculative Retry stops moving during a node failure outage 
> period
> 
>
> Key: CASSANDRA-6188
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6188
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: One data center of 4 Cassandra-2.0 nodes with default 
> configuration parameters deployed on 4 separate machines. A testing app (with 
> either Astyanax or DataStax client driver) interacts with the Cassandra data 
> center. A traffic generator is sending traffic to the testing app for testing 
> purposes.
>Reporter: Li Zou
>Assignee: Ryan McGuire
>
> Under normal testing traffic levels with the default Cassandra Speculative Retry 
> configuration for each table (i.e. 99th percentile), JConsole shows that the 
> JMX stats for Speculative Retry increment slowly. However, during the node 
> failure outage period (i.e. immediately after the node was killed and before 
> the gossip figures out that the node is down), JConsole shows that the JMX 
> stats for Speculative Retry stop moving. That is, for around 20 seconds, the 
> JMX stats for Speculative Retry do not move.
> This is true for all of the other Speculative Retry options. 





[jira] [Resolved] (CASSANDRA-6187) Cassandra-2.0 node seems to be pausing periodically under normal operations

2013-10-11 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6187?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-6187.
---

Resolution: Not A Problem

Messages dropped means Cassandra is load shedding to keep from falling over.  
You should add capacity or throttle your client.

> Cassandra-2.0 node seems to be pausing periodically under normal operations
> ---
>
> Key: CASSANDRA-6187
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6187
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: One data center of 4 Cassandra-2.0 nodes with default 
> configuration parameters deployed on 4 separate machines.
> A testing app (with either Astyanax or DataStax client driver) interacts with 
> the Cassandra data center.
> A traffic generator is sending traffic to the testing app for testing purposes.
>Reporter: Li Zou
>
> Under normal operating conditions without any interruption, the traffic 
> generator will periodically see a couple of seconds of zero (or very few) 
> transactions, i.e. the throughput can be quite high for a while, then comes 
> down to zero (or very few) transactions for 10 ~ 20 seconds.
> The Cassandra system log also occasionally logs message drops, but these 
> message drop events do not correlate in time with the observed transaction drop 
> events.
> Example of message dropping log:
> {noformat}
>  INFO [ScheduledTasks:1] 2013-10-11 16:36:12,216 MessagingService.java (line 812) 1191 MUTATION messages dropped in last 5000ms
>  INFO [ScheduledTasks:1] 2013-10-11 16:36:12,217 MessagingService.java (line 812) 502 READ_REPAIR messages dropped in last 5000ms
>  INFO [ScheduledTasks:1] 2013-10-11 16:36:12,217 StatusLogger.java (line 55) Pool Name                    Active   Pending      Completed   Blocked  All Time Blocked
>  INFO [ScheduledTasks:1] 2013-10-11 16:36:12,246 StatusLogger.java (line 70) ReadStage                         0         0         845326         0                 0
>  INFO [ScheduledTasks:1] 2013-10-11 16:36:12,246 StatusLogger.java (line 70) RequestResponseStage              0         0        1643358         0                 0
>  INFO [ScheduledTasks:1] 2013-10-11 16:36:12,246 StatusLogger.java (line 70) ReadRepairStage                   0         0          61247         0                 0
>  INFO [ScheduledTasks:1] 2013-10-11 16:36:12,247 StatusLogger.java (line 70) MutationStage                     0         0        1155502         0                 0
>  INFO [ScheduledTasks:1] 2013-10-11 16:36:12,247 StatusLogger.java (line 70) ReplicateOnWriteStage             0         0              0         0                 0
>  INFO [ScheduledTasks:1] 2013-10-11 16:36:12,247 StatusLogger.java (line 70) GossipStage                       0         0           5391         0                 0
>  INFO [ScheduledTasks:1] 2013-10-11 16:36:12,248 StatusLogger.java (line 70) AntiEntropyStage                  0         0              0         0                 0
>  INFO [ScheduledTasks:1] 2013-10-11 16:36:12,248 StatusLogger.java (line 70) MigrationStage                    0         0             14         0                 0
>  INFO [ScheduledTasks:1] 2013-10-11 16:36:12,248 StatusLogger.java (line 70) MemtablePostFlusher               0         0             99         0                 0
>  INFO [ScheduledTasks:1] 2013-10-11 16:36:12,248 StatusLogger.java (line 70) MemoryMeter                       0         0             58         0                 0
>  INFO [ScheduledTasks:1] 2013-10-11 16:36:12,249 StatusLogger.java (line 70) FlushWriter                       0         0             45         0                 0
>  INFO [ScheduledTasks:1] 2013-10-11 16:36:12,249 StatusLogger.java (line 70) MiscStage                         0         0              0         0                 0
>  INFO [ScheduledTasks:1] 2013-10-11 16:36:12,249 StatusLogger.java (line 70) commitlog_archiver                0         0              0         0                 0
>  INFO [ScheduledTasks:1] 2013-10-11 16:36:12,250 StatusLogger.java (line 70) InternalResponseStage             0         0              3         0                 0
>  INFO [ScheduledTasks:1] 2013-10-11 16:36:12,250 StatusLogger.java (line 70) HintedHandoff                     0         0              7         0                 0
>  INFO [ScheduledTasks:1] 2013-10-11 16:36:12,250 StatusLogger.java (line 79) CompactionManager                 0         0
>  INFO [ScheduledTasks:1] 2013-10-11 16:36:12,250 StatusLogger.java (line 81) Commitlog                       n/a         0
>  INFO [ScheduledTasks:1] 2013-10-11 16:36:12,251 StatusLogger.java (line 93) 

[jira] [Created] (CASSANDRA-6188) The JMX stats for Speculative Retry stops moving during a node failure outage period

2013-10-11 Thread Li Zou (JIRA)
Li Zou created CASSANDRA-6188:
-

 Summary: The JMX stats for Speculative Retry stops moving during a 
node failure outage period
 Key: CASSANDRA-6188
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6188
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: One data center of 4 Cassandra-2.0 nodes with default 
configuration parameters deployed on 4 separate machines. A testing app (with 
either Astyanax or DataStax client driver) interacts with the Cassandra data 
center. A traffic generator is sending traffic to the testing app for testing 
purposes.
Reporter: Li Zou


Under normal testing traffic with the default Cassandra Speculative Retry configuration for each table (i.e. the 99th percentile), JConsole shows that the JMX stats for Speculative Retry increment slowly. However, during the node failure outage period (i.e. immediately after the node was killed and before gossip figures out that the node is down), JConsole shows that the JMX stats for Speculative Retry stop moving. That is, for around 20 seconds, the JMX stats for Speculative Retry do not move.

This is true for all of the other Speculative Retry options as well.
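
For anyone who wants to sample the counter outside JConsole during the kill window, here is a minimal JMX polling sketch (the MBean name is an assumption based on the 2.0 ColumnFamily metrics layout; substitute the keyspace and table under test):

{noformat}
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class SpecRetryWatcher {
    public static void main(String[] args) throws Exception {
        // Cassandra exposes JMX on port 7199 by default
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://127.0.0.1:7199/jmxrmi");
        JMXConnector jmxc = JMXConnectorFactory.connect(url);
        MBeanServerConnection mbs = jmxc.getMBeanServerConnection();
        // assumed metric name; adjust keyspace and scope to the table under test
        ObjectName name = new ObjectName(
                "org.apache.cassandra.metrics:type=ColumnFamily,keyspace=ks1,scope=t1,name=SpeculativeRetries");
        // one sample per second, long enough to cover the ~20 second outage window
        for (int i = 0; i < 60; i++) {
            System.out.println(System.currentTimeMillis() + " " + mbs.getAttribute(name, "Count"));
            Thread.sleep(1000);
        }
        jmxc.close();
    }
}
{noformat}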




--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (CASSANDRA-6187) Cassandra-2.0 node seems to be pausing periodically under normal operations

2013-10-11 Thread Li Zou (JIRA)
Li Zou created CASSANDRA-6187:
-

 Summary: Cassandra-2.0 node seems to be pausing periodically under 
normal operations
 Key: CASSANDRA-6187
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6187
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: One data center of 4 Cassandra-2.0 nodes with default 
configuration parameters deployed on 4 separate machines.

A testing app (with either the Astyanax or the DataStax client driver) interacts with the Cassandra data center.

A traffic generator sends traffic to the testing app for testing purposes.
Reporter: Li Zou


Under normal operating conditions without any interruption, the traffic generator will periodically see a couple of seconds of zero (or very few) transactions; i.e., the throughput can be quite high for a while, then drops to zero (or nearly zero) for 10 ~ 20 seconds.

The Cassandra system log also occasionally records message drops, but these message-drop events do not correlate in time with the observed transaction drops.

An example of the message-drop log:

{noformat}
 INFO [ScheduledTasks:1] 2013-10-11 16:36:12,216 MessagingService.java (line 812) 1191 MUTATION messages dropped in last 5000ms
 INFO [ScheduledTasks:1] 2013-10-11 16:36:12,217 MessagingService.java (line 812) 502 READ_REPAIR messages dropped in last 5000ms
 INFO [ScheduledTasks:1] 2013-10-11 16:36:12,217 StatusLogger.java (line 55) Pool Name                    Active   Pending      Completed   Blocked  All Time Blocked
 INFO [ScheduledTasks:1] 2013-10-11 16:36:12,246 StatusLogger.java (line 70) ReadStage                         0         0         845326         0                 0
 INFO [ScheduledTasks:1] 2013-10-11 16:36:12,246 StatusLogger.java (line 70) RequestResponseStage              0         0        1643358         0                 0
 INFO [ScheduledTasks:1] 2013-10-11 16:36:12,246 StatusLogger.java (line 70) ReadRepairStage                   0         0          61247         0                 0
 INFO [ScheduledTasks:1] 2013-10-11 16:36:12,247 StatusLogger.java (line 70) MutationStage                     0         0        1155502         0                 0
 INFO [ScheduledTasks:1] 2013-10-11 16:36:12,247 StatusLogger.java (line 70) ReplicateOnWriteStage             0         0              0         0                 0
 INFO [ScheduledTasks:1] 2013-10-11 16:36:12,247 StatusLogger.java (line 70) GossipStage                       0         0           5391         0                 0
 INFO [ScheduledTasks:1] 2013-10-11 16:36:12,248 StatusLogger.java (line 70) AntiEntropyStage                  0         0              0         0                 0
 INFO [ScheduledTasks:1] 2013-10-11 16:36:12,248 StatusLogger.java (line 70) MigrationStage                    0         0             14         0                 0
 INFO [ScheduledTasks:1] 2013-10-11 16:36:12,248 StatusLogger.java (line 70) MemtablePostFlusher               0         0             99         0                 0
 INFO [ScheduledTasks:1] 2013-10-11 16:36:12,248 StatusLogger.java (line 70) MemoryMeter                       0         0             58         0                 0
 INFO [ScheduledTasks:1] 2013-10-11 16:36:12,249 StatusLogger.java (line 70) FlushWriter                       0         0             45         0                 0
 INFO [ScheduledTasks:1] 2013-10-11 16:36:12,249 StatusLogger.java (line 70) MiscStage                         0         0              0         0                 0
 INFO [ScheduledTasks:1] 2013-10-11 16:36:12,249 StatusLogger.java (line 70) commitlog_archiver                0         0              0         0                 0
 INFO [ScheduledTasks:1] 2013-10-11 16:36:12,250 StatusLogger.java (line 70) InternalResponseStage             0         0              3         0                 0
 INFO [ScheduledTasks:1] 2013-10-11 16:36:12,250 StatusLogger.java (line 70) HintedHandoff                     0         0              7         0                 0
 INFO [ScheduledTasks:1] 2013-10-11 16:36:12,250 StatusLogger.java (line 79) CompactionManager                 0         0
 INFO [ScheduledTasks:1] 2013-10-11 16:36:12,250 StatusLogger.java (line 81) Commitlog                       n/a         0
 INFO [ScheduledTasks:1] 2013-10-11 16:36:12,251 StatusLogger.java (line 93) MessagingService                n/a       0,0
 INFO [ScheduledTasks:1] 2013-10-11 16:36:12,251 StatusLogger.java (line 103) Cache Type                     Size                 Capacity               KeysToSave
 INFO [ScheduledTasks:1] 2013-10-11 16:36:12,251 StatusLogger.java (line 105) KeyCache                      74140                104857600                      all
 INFO [ScheduledTasks:1] 2013-10-11 16:36:12,251 StatusLogger.java (line 111) RowCache                          0
{noformat}

[jira] [Commented] (CASSANDRA-6152) Assertion error in 2.0.1 at db.ColumnSerializer.serialize(ColumnSerializer.java:56)

2013-10-11 Thread Donald Smith (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13793070#comment-13793070
 ] 

Donald Smith commented on CASSANDRA-6152:
-

I have a hunch that this bug bites when the column name is "" and the Memtable flushes to an SSTable. I notice it happens at about the same iteration of the *for* loop in BugMain.java.
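
A quick way to test that hunch, assuming the bug.bug_table schema from my repro comment on this ticket (sketch): insert a single row whose clustering value is the empty string, then force the Memtable to flush and watch the system log for the AssertionError.

{noformat}
cqlsh> INSERT INTO bug.bug_table (report_id, item_name, item_value) VALUES (4d331c44-f018-302b-91c2-2dcf94c4bfad, '', 'x');

$ nodetool flush bug bug_table
$ tail -f /var/log/cassandra/system.log
{noformat}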

> Assertion error in 2.0.1 at 
> db.ColumnSerializer.serialize(ColumnSerializer.java:56)
> ---
>
> Key: CASSANDRA-6152
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6152
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: CentOS release 6.2 (Final)
> With default set up on single node.
> I also saw this exception in 2.0.0 on a three node cluster.
>Reporter: Donald Smith
>Assignee: Sylvain Lebresne
> Fix For: 2.0.2
>
>
> {noformat}
> ERROR [COMMIT-LOG-WRITER] 2013-10-06 12:12:36,845 CassandraDaemon.java (line 
> 185) Exception in thread Thread[COMMIT-LOG-WRITER,5,main]
> java.lang.AssertionError
> at 
> org.apache.cassandra.db.ColumnSerializer.serialize(ColumnSerializer.java:56)
> at 
> org.apache.cassandra.db.ColumnFamilySerializer.serialize(ColumnFamilySerializer.java:77)
> at 
> org.apache.cassandra.db.RowMutation$RowMutationSerializer.serialize(RowMutation.java:268)
> at 
> org.apache.cassandra.db.commitlog.CommitLogSegment.write(CommitLogSegment.java:229)
> at 
> org.apache.cassandra.db.commitlog.CommitLog$LogRecordAdder.run(CommitLog.java:352)
> at 
> org.apache.cassandra.db.commitlog.PeriodicCommitLogExecutorService$1.runMayThrow(PeriodicCommitLogExecutorService.java:48)
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
> at java.lang.Thread.run(Thread.java:722)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (CASSANDRA-6186) Can't add index with a name prefixed with 'index'

2013-10-11 Thread Nick Bailey (JIRA)
Nick Bailey created CASSANDRA-6186:
--

 Summary: Can't add index with a name prefixed with 'index'
 Key: CASSANDRA-6186
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6186
 Project: Cassandra
  Issue Type: Bug
Reporter: Nick Bailey
 Fix For: 2.0.2


cqlsh code:

{noformat}
cqlsh> drop keyspace test_add_index;
cqlsh> CREATE KEYSPACE test_add_index WITH replication = {'class': 
'SimpleStrategy', 'replication_factor': 1} ;
cqlsh> create table test_add_index.cf1 (a text PRIMARY KEY, b text , c text );
cqlsh> create index index1 on test_add_index.cf1 (c);
Bad Request: Duplicate index name index1
cqlsh> drop keyspace test_add_index;
cqlsh> CREATE KEYSPACE test_add_index WITH replication = {'class': 
'SimpleStrategy', 'replication_factor': 1} ;
cqlsh> create table test_add_index.cf1 (a text PRIMARY KEY, b text , c text );
cqlsh> create index blah on test_add_index.cf1 (c);
cqlsh>
{noformat}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Assigned] (CASSANDRA-6185) Can't update int column to blob type.

2013-10-11 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams reassigned CASSANDRA-6185:
---

Assignee: Sylvain Lebresne

> Can't update int column to blob type.
> -
>
> Key: CASSANDRA-6185
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6185
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Nick Bailey
>Assignee: Sylvain Lebresne
> Fix For: 1.2.11, 2.0.2
>
>
> Patch for dtests:
> {noformat}
> diff --git a/cql_tests.py b/cql_tests.py
> index 11461e4..405c998 100644
> --- a/cql_tests.py
> +++ b/cql_tests.py
> @@ -1547,35 +1547,35 @@ class TestCQL(Tester):
>  CREATE TABLE test (
>  k text,
>  c text,
> -v text,
> +v int,
>  PRIMARY KEY (k, c)
>  )
>  """)
> -req = "INSERT INTO test (k, c, v) VALUES ('%s', '%s', '%s')"
> +req = "INSERT INTO test (k, c, v) VALUES ('%s', '%s', %d)"
>  # using utf8 character so that we can see the transition to BytesType
> -cursor.execute(req % ('ɸ', 'ɸ', 'ɸ'))
> +cursor.execute(req % ('ɸ', 'ɸ', 1))
>  cursor.execute("SELECT * FROM test")
>  cursor.execute("SELECT * FROM test")
>  res = cursor.fetchall()
> -assert res == [[u'ɸ', u'ɸ', u'ɸ']], res
> +assert res == [[u'ɸ', u'ɸ', 1]], res
>  cursor.execute("ALTER TABLE test ALTER v TYPE blob")
>  cursor.execute("SELECT * FROM test")
>  res = cursor.fetchall()
>  # the last should not be utf8 but a raw string
> -assert res == [[u'ɸ', u'ɸ', 'ɸ']], res
> +assert res == [[u'ɸ', u'ɸ', '\x00\x00\x00\x01']], res
>  cursor.execute("ALTER TABLE test ALTER k TYPE blob")
>  cursor.execute("SELECT * FROM test")
>  res = cursor.fetchall()
> -assert res == [['ɸ', u'ɸ', 'ɸ']], res
> +assert res == [['ɸ', u'ɸ', '\x00\x00\x00\x01']], res
>  cursor.execute("ALTER TABLE test ALTER c TYPE blob")
>  cursor.execute("SELECT * FROM test")
>  res = cursor.fetchall()
> -assert res == [['ɸ', 'ɸ', 'ɸ']], res
> +assert res == [['ɸ', 'ɸ', '\x00\x00\x00\x01']], res
>  @since('1.2')
>  def composite_row_key_test(self):
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6185) Can't update int column to blob type.

2013-10-11 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13793041#comment-13793041
 ] 

Brandon Williams commented on CASSANDRA-6185:
-

CASSANDRA-5882 is responsible.

> Can't update int column to blob type.
> -
>
> Key: CASSANDRA-6185
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6185
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Nick Bailey
> Fix For: 1.2.11, 2.0.2
>
>
> Patch for dtests:
> {noformat}
> diff --git a/cql_tests.py b/cql_tests.py
> index 11461e4..405c998 100644
> --- a/cql_tests.py
> +++ b/cql_tests.py
> @@ -1547,35 +1547,35 @@ class TestCQL(Tester):
>  CREATE TABLE test (
>  k text,
>  c text,
> -v text,
> +v int,
>  PRIMARY KEY (k, c)
>  )
>  """)
> -req = "INSERT INTO test (k, c, v) VALUES ('%s', '%s', '%s')"
> +req = "INSERT INTO test (k, c, v) VALUES ('%s', '%s', %d)"
>  # using utf8 character so that we can see the transition to BytesType
> -cursor.execute(req % ('ɸ', 'ɸ', 'ɸ'))
> +cursor.execute(req % ('ɸ', 'ɸ', 1))
>  cursor.execute("SELECT * FROM test")
>  cursor.execute("SELECT * FROM test")
>  res = cursor.fetchall()
> -assert res == [[u'ɸ', u'ɸ', u'ɸ']], res
> +assert res == [[u'ɸ', u'ɸ', 1]], res
>  cursor.execute("ALTER TABLE test ALTER v TYPE blob")
>  cursor.execute("SELECT * FROM test")
>  res = cursor.fetchall()
>  # the last should not be utf8 but a raw string
> -assert res == [[u'ɸ', u'ɸ', 'ɸ']], res
> +assert res == [[u'ɸ', u'ɸ', '\x00\x00\x00\x01']], res
>  cursor.execute("ALTER TABLE test ALTER k TYPE blob")
>  cursor.execute("SELECT * FROM test")
>  res = cursor.fetchall()
> -assert res == [['ɸ', u'ɸ', 'ɸ']], res
> +assert res == [['ɸ', u'ɸ', '\x00\x00\x00\x01']], res
>  cursor.execute("ALTER TABLE test ALTER c TYPE blob")
>  cursor.execute("SELECT * FROM test")
>  res = cursor.fetchall()
> -assert res == [['ɸ', 'ɸ', 'ɸ']], res
> +assert res == [['ɸ', 'ɸ', '\x00\x00\x00\x01']], res
>  @since('1.2')
>  def composite_row_key_test(self):
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (CASSANDRA-6185) Can't update int column to blob type.

2013-10-11 Thread Nick Bailey (JIRA)
Nick Bailey created CASSANDRA-6185:
--

 Summary: Can't update int column to blob type.
 Key: CASSANDRA-6185
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6185
 Project: Cassandra
  Issue Type: Bug
Reporter: Nick Bailey
 Fix For: 1.2.11, 2.0.2


Patch for dtests:

{noformat}
diff --git a/cql_tests.py b/cql_tests.py
index 11461e4..405c998 100644
--- a/cql_tests.py
+++ b/cql_tests.py
@@ -1547,35 +1547,35 @@ class TestCQL(Tester):
             CREATE TABLE test (
                 k text,
                 c text,
-                v text,
+                v int,
                 PRIMARY KEY (k, c)
             )
         """)

-        req = "INSERT INTO test (k, c, v) VALUES ('%s', '%s', '%s')"
+        req = "INSERT INTO test (k, c, v) VALUES ('%s', '%s', %d)"
         # using utf8 character so that we can see the transition to BytesType
-        cursor.execute(req % ('ɸ', 'ɸ', 'ɸ'))
+        cursor.execute(req % ('ɸ', 'ɸ', 1))

         cursor.execute("SELECT * FROM test")
         cursor.execute("SELECT * FROM test")
         res = cursor.fetchall()
-        assert res == [[u'ɸ', u'ɸ', u'ɸ']], res
+        assert res == [[u'ɸ', u'ɸ', 1]], res

         cursor.execute("ALTER TABLE test ALTER v TYPE blob")
         cursor.execute("SELECT * FROM test")
         res = cursor.fetchall()
         # the last should not be utf8 but a raw string
-        assert res == [[u'ɸ', u'ɸ', 'ɸ']], res
+        assert res == [[u'ɸ', u'ɸ', '\x00\x00\x00\x01']], res

         cursor.execute("ALTER TABLE test ALTER k TYPE blob")
         cursor.execute("SELECT * FROM test")
         res = cursor.fetchall()
-        assert res == [['ɸ', u'ɸ', 'ɸ']], res
+        assert res == [['ɸ', u'ɸ', '\x00\x00\x00\x01']], res

         cursor.execute("ALTER TABLE test ALTER c TYPE blob")
         cursor.execute("SELECT * FROM test")
         res = cursor.fetchall()
-        assert res == [['ɸ', 'ɸ', 'ɸ']], res
+        assert res == [['ɸ', 'ɸ', '\x00\x00\x00\x01']], res

     @since('1.2')
     def composite_row_key_test(self):
{noformat}
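
For reference, the regression can also be driven by hand in cqlsh (a minimal sketch distilled from the dtest above; the keyspace name is arbitrary). The ALTER on the int column should succeed and reinterpret the stored value as a 4-byte blob, but currently errors out:

{noformat}
cqlsh> CREATE TABLE ks1.alter_bug (k text, c text, v int, PRIMARY KEY (k, c));
cqlsh> INSERT INTO ks1.alter_bug (k, c, v) VALUES ('a', 'b', 1);
cqlsh> ALTER TABLE ks1.alter_bug ALTER v TYPE blob;
{noformat}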





--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (CASSANDRA-6152) Assertion error in 2.0.1 at db.ColumnSerializer.serialize(ColumnSerializer.java:56)

2013-10-11 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6152?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-6152:
--

  Description: 
{noformat}
ERROR [COMMIT-LOG-WRITER] 2013-10-06 12:12:36,845 CassandraDaemon.java (line 
185) Exception in thread Thread[COMMIT-LOG-WRITER,5,main]
java.lang.AssertionError
at 
org.apache.cassandra.db.ColumnSerializer.serialize(ColumnSerializer.java:56)
at 
org.apache.cassandra.db.ColumnFamilySerializer.serialize(ColumnFamilySerializer.java:77)
at 
org.apache.cassandra.db.RowMutation$RowMutationSerializer.serialize(RowMutation.java:268)
at 
org.apache.cassandra.db.commitlog.CommitLogSegment.write(CommitLogSegment.java:229)
at 
org.apache.cassandra.db.commitlog.CommitLog$LogRecordAdder.run(CommitLog.java:352)
at 
org.apache.cassandra.db.commitlog.PeriodicCommitLogExecutorService$1.runMayThrow(PeriodicCommitLogExecutorService.java:48)
at 
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
at java.lang.Thread.run(Thread.java:722)
{noformat}

  was:

{noformat}
ERROR [COMMIT-LOG-WRITER] 2013-10-06 12:12:36,845 CassandraDaemon.java (line 
185) Exception in thread Thread[COMMIT-LOG-WRITER,5,main]
java.lang.AssertionError
at 
org.apache.cassandra.db.ColumnSerializer.serialize(ColumnSerializer.java:56)
at 
org.apache.cassandra.db.ColumnFamilySerializer.serialize(ColumnFamilySerializer.java:77)
at 
org.apache.cassandra.db.RowMutation$RowMutationSerializer.serialize(RowMutation.java:268)
at 
org.apache.cassandra.db.commitlog.CommitLogSegment.write(CommitLogSegment.java:229)
at 
org.apache.cassandra.db.commitlog.CommitLog$LogRecordAdder.run(CommitLog.java:352)
at 
org.apache.cassandra.db.commitlog.PeriodicCommitLogExecutorService$1.runMayThrow(PeriodicCommitLogExecutorService.java:48)
at 
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
at java.lang.Thread.run(Thread.java:722)
{noformat}

Fix Version/s: 2.0.2

> Assertion error in 2.0.1 at 
> db.ColumnSerializer.serialize(ColumnSerializer.java:56)
> ---
>
> Key: CASSANDRA-6152
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6152
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: CentOS release 6.2 (Final)
> With default set up on single node.
> I also saw this exception in 2.0.0 on a three node cluster.
>Reporter: Donald Smith
>Assignee: Sylvain Lebresne
> Fix For: 2.0.2
>
>
> {noformat}
> ERROR [COMMIT-LOG-WRITER] 2013-10-06 12:12:36,845 CassandraDaemon.java (line 
> 185) Exception in thread Thread[COMMIT-LOG-WRITER,5,main]
> java.lang.AssertionError
> at 
> org.apache.cassandra.db.ColumnSerializer.serialize(ColumnSerializer.java:56)
> at 
> org.apache.cassandra.db.ColumnFamilySerializer.serialize(ColumnFamilySerializer.java:77)
> at 
> org.apache.cassandra.db.RowMutation$RowMutationSerializer.serialize(RowMutation.java:268)
> at 
> org.apache.cassandra.db.commitlog.CommitLogSegment.write(CommitLogSegment.java:229)
> at 
> org.apache.cassandra.db.commitlog.CommitLog$LogRecordAdder.run(CommitLog.java:352)
> at 
> org.apache.cassandra.db.commitlog.PeriodicCommitLogExecutorService$1.runMayThrow(PeriodicCommitLogExecutorService.java:48)
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
> at java.lang.Thread.run(Thread.java:722)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Assigned] (CASSANDRA-6152) Assertion error in 2.0.1 at db.ColumnSerializer.serialize(ColumnSerializer.java:56)

2013-10-11 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6152?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis reassigned CASSANDRA-6152:
-

Assignee: Sylvain Lebresne

Thanks, we'll have a look.

> Assertion error in 2.0.1 at 
> db.ColumnSerializer.serialize(ColumnSerializer.java:56)
> ---
>
> Key: CASSANDRA-6152
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6152
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: CentOS release 6.2 (Final)
> With default set up on single node.
> I also saw this exception in 2.0.0 on a three node cluster.
>Reporter: Donald Smith
>Assignee: Sylvain Lebresne
> Fix For: 2.0.2
>
>
> {noformat}
> ERROR [COMMIT-LOG-WRITER] 2013-10-06 12:12:36,845 CassandraDaemon.java (line 
> 185) Exception in thread Thread[COMMIT-LOG-WRITER,5,main]
> java.lang.AssertionError
> at 
> org.apache.cassandra.db.ColumnSerializer.serialize(ColumnSerializer.java:56)
> at 
> org.apache.cassandra.db.ColumnFamilySerializer.serialize(ColumnFamilySerializer.java:77)
> at 
> org.apache.cassandra.db.RowMutation$RowMutationSerializer.serialize(RowMutation.java:268)
> at 
> org.apache.cassandra.db.commitlog.CommitLogSegment.write(CommitLogSegment.java:229)
> at 
> org.apache.cassandra.db.commitlog.CommitLog$LogRecordAdder.run(CommitLog.java:352)
> at 
> org.apache.cassandra.db.commitlog.PeriodicCommitLogExecutorService$1.runMayThrow(PeriodicCommitLogExecutorService.java:48)
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
> at java.lang.Thread.run(Thread.java:722)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6152) Assertion error in 2.0.1 at db.ColumnSerializer.serialize(ColumnSerializer.java:56)

2013-10-11 Thread Donald Smith (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13793025#comment-13793025
 ] 

Donald Smith commented on CASSANDRA-6152:
-

I found a *simple* example of the bug.  

If I insert an empty string ("") into the table, it causes the AssertionError. If I insert a non-empty string, there is no AssertionError!

{noformat}
create keyspace if not exists bug with replication = {'class':'SimpleStrategy', 
'replication_factor':1};


create table if not exists bug.bug_table ( -- compact; column values are ordered by item_name
report_id   uuid,
item_name   text,
item_value  text,
primary key (report_id, item_name)) with compact storage;
{noformat}

BugMain.java:
{noformat}
package bug;

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class BugMain {
    private static String CASSANDRA_HOST =
        System.getProperty("cassandraServer", "172.17.1.169"); // "donalds01lx.uscorp.audsci.com";
    private static BugInterface dao = new BugImpl(CASSANDRA_HOST);

    public static void bug() throws IOException {
        List<BugItem> items = new ArrayList<BugItem>();
        items.add(new BugItem("", 1, 2, 3));   // if you change the empty string "" to a non-empty string, the AssertionError goes away!
        items.add(new BugItem("twp", 2, 2, 3));
        items.add(new BugItem("three", 3, 2, 3));
        items.add(new BugItem("four", 4, 2, 3));
        dao.saveReport(items);
    }

    public static void main(String[] args) throws IOException {
        try {
            for (int i = 0; i < 1000; i++) {
                System.out.println("\ndas: iteration " + i + "\n");
                bug();
            }
        } finally {
            dao.shutdown();
        }
    }
}
{noformat}
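
To point the repro at a different node, pass the host through the "cassandraServer" system property read above, e.g. (sketch; the classpath needs the DataStax Java driver and log4j):

{noformat}
java -DcassandraServer=10.0.0.5 -cp <your-classpath> bug.BugMain
{noformat}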

BugItem.java:
{noformat}
package bug;

public class BugItem {
    public String name;
    public long long1;
    public long long2;
    public long long3;

    public BugItem(String string, long i, long j, long k) {
        name = string;
        long1 = i;
        long2 = j;
        long3 = k;
    }

    public String toString() {
        return "Item with name = " + name + ", long1 = " + long1 + ", long2 = " + long2 + ", long3 = " + long3;
    }
}
{noformat}

BugInterface.java:
{noformat}
package bug;

import java.util.List;


public interface BugInterface {
    public static final String VALUE_DELIMITER = ":";
    public static final String HIERARCHY_DELIMITER = " > ";

    void saveReport(List<BugItem> items);

    void connect();
    void shutdown();
}
{noformat}

BugImpl.java:
{noformat}
package bug;

import java.text.NumberFormat;
import java.util.List;
import java.util.UUID;

import org.apache.log4j.Logger;

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.PreparedStatement;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.querybuilder.Insert;
import com.datastax.driver.core.querybuilder.QueryBuilder;

public class BugImpl implements BugInterface {
    private static final String CASSANDRA_NODE_PROPERTY = "CASSANDRA_NODE";
    private static final Logger L = Logger.getLogger(new Throwable().getStackTrace()[0].getClassName());
    private static final String KEYSPACE_NAME = "bug";
    private static final String REPORT_DATA_TABLE_NAME = "bug_table";
    private static NumberFormat numberFormat = NumberFormat.getInstance();
    private Cluster m_cluster;
    private Session m_session;
    private int m_writeBatchSize = 64;
    private String m_cassandraNode = "";

    static {
        numberFormat.setMaximumFractionDigits(1);
    }

    public BugImpl() {
        m_cassandraNode = System.getProperty(CASSANDRA_NODE_PROPERTY, m_cassandraNode); // Get from command line
    }

    public BugImpl(String cassandraNode) {
        m_cassandraNode = cassandraNode;
    }

    @Override
    public void shutdown() {
        if (m_session != null) { m_session.shutdown(); }
        if (m_cluster != null) { m_cluster.shutdown(); }
    }

    @Override
    public void connect() {
        m_cluster = Cluster.builder().addContactPoint(m_cassandraNode).build();
        m_session = m_cluster.connect();
    }

    // -------------------------------------------------------------------
    @Override
    public void saveReport(List<BugItem> items) {
        final long time1 = System.currentTimeMillis();
        if (m_session == null) {
            connect();
        }
        UUID reportId = UUID.randomUUID();
        saveReportAux(items, reportId);
        final long time2 = System.currentTimeMillis();
        L.info("saveReport: t=" + numberFormat.format((double)(time2 - time1) * 0.001) + " seconds");
    }

    public void 

git commit: Fix int/bigint in CassandraStorage Patch by Alex Liu, reviewed by brandonwilliams for CASSANDRA-6102

2013-10-11 Thread brandonwilliams
Updated Branches:
  refs/heads/trunk 6e070e1ce -> bc8e2475f


Fix int/bigint in CassandraStorage
Patch by Alex Liu, reviewed by brandonwilliams for CASSANDRA-6102


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/bc8e2475
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/bc8e2475
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/bc8e2475

Branch: refs/heads/trunk
Commit: bc8e2475fa71f4bbbf95d4294d78b96a1aa1211c
Parents: 6e070e1c
Author: Brandon Williams 
Authored: Fri Oct 11 15:22:35 2013 -0500
Committer: Brandon Williams 
Committed: Fri Oct 11 15:34:47 2013 -0500

--
 .../hadoop/pig/AbstractCassandraStorage.java| 97 ++--
 .../cassandra/hadoop/pig/CassandraStorage.java  | 55 +++
 .../apache/cassandra/hadoop/pig/CqlStorage.java |  8 +-
 3 files changed, 109 insertions(+), 51 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/bc8e2475/src/java/org/apache/cassandra/hadoop/pig/AbstractCassandraStorage.java
--
diff --git 
a/src/java/org/apache/cassandra/hadoop/pig/AbstractCassandraStorage.java 
b/src/java/org/apache/cassandra/hadoop/pig/AbstractCassandraStorage.java
index c881734..486c781 100644
--- a/src/java/org/apache/cassandra/hadoop/pig/AbstractCassandraStorage.java
+++ b/src/java/org/apache/cassandra/hadoop/pig/AbstractCassandraStorage.java
@@ -97,7 +97,7 @@ public abstract class AbstractCassandraStorage extends 
LoadFunc implements Store
 protected String outputFormatClass;
 protected int splitSize = 64 * 1024;
 protected String partitionerClass;
-protected boolean usePartitionFilter = false; 
+protected boolean usePartitionFilter = false;
 
 public AbstractCassandraStorage()
 {
@@ -116,8 +116,9 @@ public abstract class AbstractCassandraStorage extends 
LoadFunc implements Store
 }
 
 /** convert a column to a tuple */
-protected Tuple columnToTuple(Column col, CfDef cfDef, AbstractType 
comparator) throws IOException
+protected Tuple columnToTuple(IColumn col, CfInfo cfInfo, AbstractType 
comparator) throws IOException
 {
+CfDef cfDef = cfInfo.cfDef;
 Tuple pair = TupleFactory.getInstance().newTuple(2);
 
 // name
@@ -130,11 +131,34 @@ public abstract class AbstractCassandraStorage extends 
LoadFunc implements Store
 Map validators = getValidatorMap(cfDef);
 if (validators.get(col.name()) == null)
 {
-Map marshallers = 
getDefaultMarshallers(cfDef);
-setTupleValue(pair, 1, 
cassandraToObj(marshallers.get(MarshallerType.DEFAULT_VALIDATOR), col.value()));
+// standard
+Map validators = getValidatorMap(cfDef);
+ByteBuffer colName;
+if (cfInfo.cql3Table && !cfInfo.compactCqlTable)
+{
+ByteBuffer[] names = ((AbstractCompositeType) 
parseType(cfDef.comparator_type)).split(col.name());
+colName = names[names.length-1];
+}
+else
+colName = col.name();
+if (validators.get(colName) == null)
+{
+Map marshallers = 
getDefaultMarshallers(cfDef);
+setTupleValue(pair, 1, 
cassandraToObj(marshallers.get(MarshallerType.DEFAULT_VALIDATOR), col.value()));
+}
+else
+setTupleValue(pair, 1, cassandraToObj(validators.get(colName), 
col.value()));
+return pair;
 }
 else
-setTupleValue(pair, 1, cassandraToObj(validators.get(col.name()), 
col.value()));
+{
+// super
+ArrayList subcols = new ArrayList();
+for (IColumn subcol : col.getSubColumns())
+subcols.add(columnToTuple(subcol, cfInfo, parseType(cfDef.getSubcomparator_type())));
+
+pair.set(1, new DefaultDataBag(subcols));
+}
 return pair;
 }
 
@@ -154,11 +178,16 @@ public abstract class AbstractCassandraStorage extends 
LoadFunc implements Store
 }
 
 /** get the columnfamily definition for the signature */
-protected CfDef getCfDef(String signature) throws IOException
+protected CfInfo getCfInfo(String signature) throws IOException
 {
 UDFContext context = UDFContext.getUDFContext();
 Properties property = 
context.getUDFProperties(AbstractCassandraStorage.class);
-return cfdefFromString(property.getProperty(signature));
+String prop = property.getProperty(signature);
+CfInfo cfInfo = new CfInfo();
+cfInfo.cfDef = cfdefFromString(prop.substring(2));
+cfInfo.compactCqlTable = prop.charAt(0) == '1' ? true : false;
+cfInfo.cql3Table = prop.charAt(1) == '1' ? true : false;
+

[jira] [Updated] (CASSANDRA-5957) Cannot drop keyspace Keyspace1 after running cassandra-stress

2013-10-11 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-5957:
---

Attachment: 5957-1.2-v2.patch

5957-1.2-v2.patch (and 
[branch|https://github.com/thobbs/cassandra/tree/CASSANDRA-5957-v2]) disables 
the "was not marked compacted" assertion when cleaning up sstables for a 
dropped table and unreferences sstables when compaction tasks finish (or are 
interrupted) if the table has been dropped.

Regarding flushing, a blocking flush is forced after unregistering the CF but 
prior to the CFS being invalidated (which is when sstables get unreferenced), 
so we shouldn't see a similar problem there (and I haven't seen one while 
testing this).

> Cannot drop keyspace Keyspace1 after running cassandra-stress
> -
>
> Key: CASSANDRA-5957
> URL: https://issues.apache.org/jira/browse/CASSANDRA-5957
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: Cassandra 1.2.9 freshly built from cassandra-1.2 branch 
> (f5b224cf9aa0f319d51078ef4b78d55e36613963)
>Reporter: Piotr Kołaczkowski
>Assignee: Tyler Hobbs
>Priority: Minor
> Fix For: 1.2.11
>
> Attachments: 5957-1.2-v1.patch, 5957-1.2-v2.patch, system.log
>
>
> Steps to reproduce:
> # Set MAX_HEAP="2G", HEAP_NEWSIZE="400M"
> # Run ./cassandra-stress -n 5 -c 400 -S 256
> # The test should complete despite several warnings about low heap memory.
> # Try to drop keyspace:
> {noformat}
> cqlsh> drop keyspace Keyspace1;
> TSocket read 0 bytes
> {noformat}
> system.log:
> {noformat}
>  INFO 15:10:46,516 Enqueuing flush of 
> Memtable-schema_columnfamilies@2127258371(0/0 serialized/live bytes, 1 ops)
>  INFO 15:10:46,516 Writing Memtable-schema_columnfamilies@2127258371(0/0 
> serialized/live bytes, 1 ops)
>  INFO 15:10:46,690 Completed flushing 
> /var/lib/cassandra/data/system/schema_columnfamilies/system-schema_columnfamilies-ic-6-Data.db
>  (38 bytes) for commitlog position ReplayPosition(segmentId=1377867520699, 
> position=19794574)
>  INFO 15:10:46,692 Enqueuing flush of Memtable-schema_columns@1997964959(0/0 
> serialized/live bytes, 1 ops)
>  INFO 15:10:46,693 Writing Memtable-schema_columns@1997964959(0/0 
> serialized/live bytes, 1 ops)
>  INFO 15:10:46,857 Completed flushing 
> /var/lib/cassandra/data/system/schema_columns/system-schema_columns-ic-6-Data.db
>  (38 bytes) for commitlog position ReplayPosition(segmentId=1377867520699, 
> position=19794574)
>  INFO 15:10:46,897 Enqueuing flush of Memtable-local@1366216652(98/98 
> serialized/live bytes, 3 ops)
>  INFO 15:10:46,898 Writing Memtable-local@1366216652(98/98 serialized/live 
> bytes, 3 ops)
>  INFO 15:10:47,064 Completed flushing 
> /var/lib/cassandra/data/system/local/system-local-ic-12-Data.db (139 bytes) 
> for commitlog position ReplayPosition(segmentId=1377867520699, 
> position=19794845)
>  INFO 15:10:48,956 Enqueuing flush of Memtable-local@432522279(46/46 
> serialized/live bytes, 1 ops)
>  INFO 15:10:48,957 Writing Memtable-local@432522279(46/46 serialized/live 
> bytes, 1 ops)
>  INFO 15:10:49,132 Compaction interrupted: 
> Compaction@4d331c44-f018-302b-91c2-2dcf94c4bfad(Keyspace1, Standard1, 
> 400882073/1094043713)bytes
>  INFO 15:10:49,175 Compaction interrupted: 
> Compaction@4d331c44-f018-302b-91c2-2dcf94c4bfad(Keyspace1, Standard1, 
> 147514075/645675954)bytes
>  INFO 15:10:49,185 Compaction interrupted: 
> Compaction@4d331c44-f018-302b-91c2-2dcf94c4bfad(Keyspace1, Standard1, 
> 223249644/609072261)bytes
>  INFO 15:10:49,202 Compaction interrupted: 
> Compaction@4d331c44-f018-302b-91c2-2dcf94c4bfad(Keyspace1, Standard1, 
> 346471085/990388210)bytes
>  INFO 15:10:49,215 Compaction interrupted: 
> Compaction@4d331c44-f018-302b-91c2-2dcf94c4bfad(Keyspace1, Standard1, 
> 294748503/2092376617)bytes
>  INFO 15:10:49,257 Compaction interrupted: 
> Compaction@4d331c44-f018-302b-91c2-2dcf94c4bfad(Keyspace1, Standard1, 
> 692722235/739328646)bytes
>  INFO 15:10:49,285 Completed flushing 
> /var/lib/cassandra/data/system/local/system-local-ic-13-Data.db (82 bytes) 
> for commitlog position ReplayPosition(segmentId=1377867520699, 
> position=19794974)
>  INFO 15:10:49,286 Compacting 
> [SSTableReader(path='/var/lib/cassandra/data/system/local/system-local-ic-10-Data.db'),
>  
> SSTableReader(path='/var/lib/cassandra/data/system/local/system-local-ic-13-Data.db'),
>  
> SSTableReader(path='/var/lib/cassandra/data/system/local/system-local-ic-12-Data.db'),
>  
> SSTableReader(path='/var/lib/cassandra/data/system/local/system-local-ic-11-Data.db')]
> ERROR 15:10:49,287 Error occurred during processing of message.
> java.lang.RuntimeException: java.util.concurrent.ExecutionException: 
> java.lang.AssertionError: 
> SSTableReader(path='/var/lib/cassandra/data/Key

[3/3] git commit: Merge branch 'cassandra-1.2' into cassandra-2.0

2013-10-11 Thread brandonwilliams
Merge branch 'cassandra-1.2' into cassandra-2.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a3ad2e82
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a3ad2e82
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a3ad2e82

Branch: refs/heads/cassandra-2.0
Commit: a3ad2e82249b88d4a05f24140948cdbc809d14f3
Parents: 8a506e6 639c01a
Author: Brandon Williams 
Authored: Fri Oct 11 15:30:27 2013 -0500
Committer: Brandon Williams 
Committed: Fri Oct 11 15:30:27 2013 -0500

--
 .../hadoop/pig/AbstractCassandraStorage.java| 97 ++--
 .../cassandra/hadoop/pig/CassandraStorage.java  | 55 +++
 .../apache/cassandra/hadoop/pig/CqlStorage.java |  8 +-
 3 files changed, 109 insertions(+), 51 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/a3ad2e82/src/java/org/apache/cassandra/hadoop/pig/AbstractCassandraStorage.java
--
diff --cc src/java/org/apache/cassandra/hadoop/pig/AbstractCassandraStorage.java
index c881734,dbebfb5..486c781
--- a/src/java/org/apache/cassandra/hadoop/pig/AbstractCassandraStorage.java
+++ b/src/java/org/apache/cassandra/hadoop/pig/AbstractCassandraStorage.java
@@@ -127,14 -128,36 +128,37 @@@ public abstract class AbstractCassandra
  setTupleValue(pair, 0, cassandraToObj(comparator, col.name()));
  
  // value
 -if (col instanceof Column)
 +Map validators = getValidatorMap(cfDef);
 +if (validators.get(col.name()) == null)
  {
- Map marshallers = 
getDefaultMarshallers(cfDef);
- setTupleValue(pair, 1, 
cassandraToObj(marshallers.get(MarshallerType.DEFAULT_VALIDATOR), col.value()));
+ // standard
+ Map validators = getValidatorMap(cfDef);
+ ByteBuffer colName;
+ if (cfInfo.cql3Table && !cfInfo.compactCqlTable)
+ {
+ ByteBuffer[] names = ((AbstractCompositeType) 
parseType(cfDef.comparator_type)).split(col.name());
+ colName = names[names.length-1];
+ }
+ else
+ colName = col.name();
+ if (validators.get(colName) == null)
+ {
+ Map marshallers = 
getDefaultMarshallers(cfDef);
+ setTupleValue(pair, 1, 
cassandraToObj(marshallers.get(MarshallerType.DEFAULT_VALIDATOR), col.value()));
+ }
+ else
+ setTupleValue(pair, 1, 
cassandraToObj(validators.get(colName), col.value()));
+ return pair;
  }
  else
- setTupleValue(pair, 1, cassandraToObj(validators.get(col.name()), 
col.value()));
+ {
+ // super
+ ArrayList subcols = new ArrayList();
+ for (IColumn subcol : col.getSubColumns())
+ subcols.add(columnToTuple(subcol, cfInfo, parseType(cfDef.getSubcomparator_type())));
+ 
+ pair.set(1, new DefaultDataBag(subcols));
+ }
  return pair;
  }
  

http://git-wip-us.apache.org/repos/asf/cassandra/blob/a3ad2e82/src/java/org/apache/cassandra/hadoop/pig/CassandraStorage.java
--
diff --cc src/java/org/apache/cassandra/hadoop/pig/CassandraStorage.java
index 4083236,a7cc1ad..d9c55a1
--- a/src/java/org/apache/cassandra/hadoop/pig/CassandraStorage.java
+++ b/src/java/org/apache/cassandra/hadoop/pig/CassandraStorage.java
@@@ -124,9 -123,9 +125,9 @@@ public class CassandraStorage extends A
  key = (ByteBuffer)reader.getCurrentKey();
  tuple = keyToTuple(key, cfDef, 
parseType(cfDef.getKey_validation_class()));
  }
 -for (Map.Entry entry : 
lastRow.entrySet())
 +for (Map.Entry entry : 
lastRow.entrySet())
  {
- bag.add(columnToTuple(entry.getValue(), cfDef, parseType(cfDef.getComparator_type())));
+ bag.add(columnToTuple(entry.getValue(), cfInfo, parseType(cfDef.getComparator_type())));
  }
  lastKey = null;
  lastRow = null;
@@@ -162,9 -161,9 +163,9 @@@
  tuple = keyToTuple(lastKey, cfDef, 
parseType(cfDef.getKey_validation_class()));
  else
  addKeyToTuple(tuple, lastKey, cfDef, 
parseType(cfDef.getKey_validation_class()));
 -for (Map.Entry entry : 
lastRow.entrySet())
 +for (Map.Entry entry : 
lastRow.entrySet())
  {
- bag.add(col

[1/3] git commit: Fix int/bigint in CassandraStorage Patch by Alex Liu, reviewed by brandonwilliams for CASSANDRA-6102

2013-10-11 Thread brandonwilliams
Updated Branches:
  refs/heads/cassandra-1.2 eee485eb6 -> 639c01a35
  refs/heads/cassandra-2.0 8a506e66a -> a3ad2e822


Fix int/bigint in CassandraStorage
Patch by Alex Liu, reviewed by brandonwilliams for CASSANDRA-6102


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/639c01a3
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/639c01a3
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/639c01a3

Branch: refs/heads/cassandra-1.2
Commit: 639c01a3504ba2e2a55061093651a9973ad68d11
Parents: eee485e
Author: Brandon Williams 
Authored: Fri Oct 11 15:22:35 2013 -0500
Committer: Brandon Williams 
Committed: Fri Oct 11 15:22:35 2013 -0500

--
 .../hadoop/pig/AbstractCassandraStorage.java| 82 +---
 .../cassandra/hadoop/pig/CassandraStorage.java  | 45 ++-
 .../apache/cassandra/hadoop/pig/CqlStorage.java |  8 +-
 3 files changed, 85 insertions(+), 50 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/639c01a3/src/java/org/apache/cassandra/hadoop/pig/AbstractCassandraStorage.java
--
diff --git 
a/src/java/org/apache/cassandra/hadoop/pig/AbstractCassandraStorage.java 
b/src/java/org/apache/cassandra/hadoop/pig/AbstractCassandraStorage.java
index 6ad4f9e..dbebfb5 100644
--- a/src/java/org/apache/cassandra/hadoop/pig/AbstractCassandraStorage.java
+++ b/src/java/org/apache/cassandra/hadoop/pig/AbstractCassandraStorage.java
@@ -97,7 +97,7 @@ public abstract class AbstractCassandraStorage extends 
LoadFunc implements Store
 protected String outputFormatClass;
 protected int splitSize = 64 * 1024;
 protected String partitionerClass;
-protected boolean usePartitionFilter = false; 
+protected boolean usePartitionFilter = false;
 
 public AbstractCassandraStorage()
 {
@@ -116,8 +116,9 @@ public abstract class AbstractCassandraStorage extends 
LoadFunc implements Store
 }
 
 /** convert a column to a tuple */
-protected Tuple columnToTuple(IColumn col, CfDef cfDef, AbstractType 
comparator) throws IOException
+protected Tuple columnToTuple(IColumn col, CfInfo cfInfo, AbstractType 
comparator) throws IOException
 {
+CfDef cfDef = cfInfo.cfDef;
 Tuple pair = TupleFactory.getInstance().newTuple(2);
 
 // name
@@ -131,13 +132,21 @@ public abstract class AbstractCassandraStorage extends 
LoadFunc implements Store
 {
 // standard
 Map validators = getValidatorMap(cfDef);
-if (validators.get(col.name()) == null)
+ByteBuffer colName;
+if (cfInfo.cql3Table && !cfInfo.compactCqlTable)
+{
+ByteBuffer[] names = ((AbstractCompositeType) 
parseType(cfDef.comparator_type)).split(col.name());
+colName = names[names.length-1];
+}
+else
+colName = col.name();
+if (validators.get(colName) == null)
 {
 Map marshallers = 
getDefaultMarshallers(cfDef);
 setTupleValue(pair, 1, 
cassandraToObj(marshallers.get(MarshallerType.DEFAULT_VALIDATOR), col.value()));
 }
 else
-setTupleValue(pair, 1, 
cassandraToObj(validators.get(col.name()), col.value()));
+setTupleValue(pair, 1, cassandraToObj(validators.get(colName), 
col.value()));
 return pair;
 }
 else
@@ -145,7 +154,7 @@ public abstract class AbstractCassandraStorage extends 
LoadFunc implements Store
 // super
 ArrayList subcols = new ArrayList();
 for (IColumn subcol : col.getSubColumns())
-subcols.add(columnToTuple(subcol, cfDef, parseType(cfDef.getSubcomparator_type())));
+subcols.add(columnToTuple(subcol, cfInfo, parseType(cfDef.getSubcomparator_type())));
 
 pair.set(1, new DefaultDataBag(subcols));
 }
@@ -168,11 +177,16 @@ public abstract class AbstractCassandraStorage extends 
LoadFunc implements Store
 }
 
 /** get the columnfamily definition for the signature */
-protected CfDef getCfDef(String signature) throws IOException
+protected CfInfo getCfInfo(String signature) throws IOException
 {
 UDFContext context = UDFContext.getUDFContext();
 Properties property = 
context.getUDFProperties(AbstractCassandraStorage.class);
-return cfdefFromString(property.getProperty(signature));
+String prop = property.getProperty(signature);
+CfInfo cfInfo = new CfInfo();
+cfInfo.cfDef = cfdefFromString(prop.substring(2));
+cfInfo.compactCqlTable = prop.charAt(0) == '1' ? true : false;
+cfInfo.cql3Table = prop.charAt(1) == '1' ? true : false;

[2/3] git commit: Fix int/bigint in CassandraStorage Patch by Alex Liu, reviewed by brandonwilliams for CASSANDRA-6102

2013-10-11 Thread brandonwilliams
Fix int/bigint in CassandraStorage
Patch by Alex Liu, reviewed by brandonwilliams for CASSANDRA-6102


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/639c01a3
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/639c01a3
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/639c01a3

Branch: refs/heads/cassandra-2.0
Commit: 639c01a3504ba2e2a55061093651a9973ad68d11
Parents: eee485e
Author: Brandon Williams 
Authored: Fri Oct 11 15:22:35 2013 -0500
Committer: Brandon Williams 
Committed: Fri Oct 11 15:22:35 2013 -0500

--
 .../hadoop/pig/AbstractCassandraStorage.java| 82 +---
 .../cassandra/hadoop/pig/CassandraStorage.java  | 45 ++-
 .../apache/cassandra/hadoop/pig/CqlStorage.java |  8 +-
 3 files changed, 85 insertions(+), 50 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/639c01a3/src/java/org/apache/cassandra/hadoop/pig/AbstractCassandraStorage.java
--
diff --git 
a/src/java/org/apache/cassandra/hadoop/pig/AbstractCassandraStorage.java 
b/src/java/org/apache/cassandra/hadoop/pig/AbstractCassandraStorage.java
index 6ad4f9e..dbebfb5 100644
--- a/src/java/org/apache/cassandra/hadoop/pig/AbstractCassandraStorage.java
+++ b/src/java/org/apache/cassandra/hadoop/pig/AbstractCassandraStorage.java
@@ -97,7 +97,7 @@ public abstract class AbstractCassandraStorage extends 
LoadFunc implements Store
 protected String outputFormatClass;
 protected int splitSize = 64 * 1024;
 protected String partitionerClass;
-protected boolean usePartitionFilter = false; 
+protected boolean usePartitionFilter = false;
 
 public AbstractCassandraStorage()
 {
@@ -116,8 +116,9 @@ public abstract class AbstractCassandraStorage extends 
LoadFunc implements Store
 }
 
 /** convert a column to a tuple */
-protected Tuple columnToTuple(IColumn col, CfDef cfDef, AbstractType 
comparator) throws IOException
+protected Tuple columnToTuple(IColumn col, CfInfo cfInfo, AbstractType 
comparator) throws IOException
 {
+CfDef cfDef = cfInfo.cfDef;
 Tuple pair = TupleFactory.getInstance().newTuple(2);
 
 // name
@@ -131,13 +132,21 @@ public abstract class AbstractCassandraStorage extends 
LoadFunc implements Store
 {
 // standard
 Map validators = getValidatorMap(cfDef);
-if (validators.get(col.name()) == null)
+ByteBuffer colName;
+if (cfInfo.cql3Table && !cfInfo.compactCqlTable)
+{
+ByteBuffer[] names = ((AbstractCompositeType) 
parseType(cfDef.comparator_type)).split(col.name());
+colName = names[names.length-1];
+}
+else
+colName = col.name();
+if (validators.get(colName) == null)
 {
 Map marshallers = 
getDefaultMarshallers(cfDef);
 setTupleValue(pair, 1, 
cassandraToObj(marshallers.get(MarshallerType.DEFAULT_VALIDATOR), col.value()));
 }
 else
-setTupleValue(pair, 1, 
cassandraToObj(validators.get(col.name()), col.value()));
+setTupleValue(pair, 1, cassandraToObj(validators.get(colName), 
col.value()));
 return pair;
 }
 else
@@ -145,7 +154,7 @@ public abstract class AbstractCassandraStorage extends 
LoadFunc implements Store
 // super
 ArrayList subcols = new ArrayList();
 for (IColumn subcol : col.getSubColumns())
-subcols.add(columnToTuple(subcol, cfDef, parseType(cfDef.getSubcomparator_type())));
+subcols.add(columnToTuple(subcol, cfInfo, parseType(cfDef.getSubcomparator_type())));
 
 pair.set(1, new DefaultDataBag(subcols));
 }
@@ -168,11 +177,16 @@ public abstract class AbstractCassandraStorage extends 
LoadFunc implements Store
 }
 
 /** get the columnfamily definition for the signature */
-protected CfDef getCfDef(String signature) throws IOException
+protected CfInfo getCfInfo(String signature) throws IOException
 {
 UDFContext context = UDFContext.getUDFContext();
 Properties property = 
context.getUDFProperties(AbstractCassandraStorage.class);
-return cfdefFromString(property.getProperty(signature));
+String prop = property.getProperty(signature);
+CfInfo cfInfo = new CfInfo();
+cfInfo.cfDef = cfdefFromString(prop.substring(2));
+cfInfo.compactCqlTable = prop.charAt(0) == '1' ? true : false;
+cfInfo.cql3Table = prop.charAt(1) == '1' ? true : false;
+return cfInfo;
 }
 
 /** construct a map to store the marshaller type to cassandra data type mapping *

[jira] [Commented] (CASSANDRA-6092) Leveled Compaction after ALTER TABLE creates pending but does not actually begin

2013-10-11 Thread Tomas Salfischberger (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13793012#comment-13793012
 ] 

Tomas Salfischberger commented on CASSANDRA-6092:
-

Update: The workaround of calling "nodetool compact" only works when there is at least some activity, so that there is more than one sstable. I've just tested with a cluster that was receiving no reads and no writes; in that case calling "nodetool compact" does not do anything, so there is no way to work around this when there are no writes at all. Maybe it's worth adding a special case that ignores the tombstone-ratio threshold for single-sstable compaction when nodetool compact is called explicitly?
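
In other words, the workaround currently requires manufacturing a second sstable first, e.g. (sketch; substitute your own keyspace/table and a throwaway row):

{noformat}
cqlsh> INSERT INTO "ProductGenomeDev"."Node" (key, column1, value) VALUES ('dummy', 'dummy', 'dummy');
cqlsh> DELETE FROM "ProductGenomeDev"."Node" WHERE key = 'dummy';

$ nodetool flush ProductGenomeDev Node
$ nodetool compact ProductGenomeDev Node
{noformat}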

> Leveled Compaction after ALTER TABLE creates pending but does not actually 
> begin
> 
>
> Key: CASSANDRA-6092
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6092
> Project: Cassandra
>  Issue Type: Bug
> Environment: Cassandra 1.2.10
> Oracle Java 1.7.0_u40
> RHEL6.4
>Reporter: Karl Mueller
>Assignee: Daniel Meyer
>
> Running Cassandra 1.2.10.  N=5, RF=3
> On this Column Family (ProductGenomeDev/Node), it's been major compacted into 
> a single, large sstable.
> There's no activity on the table at the time of the ALTER command. I changed 
> it to Leveled Compaction with the command below.
> cqlsh:ProductGenomeDev> alter table "Node" with compaction = { 'class' : 
> 'LeveledCompactionStrategy', 'sstable_size_in_mb' : 160 };
> Log entries confirm the change happened.
> [...]column_metadata={},compactionStrategyClass=class 
> org.apache.cassandra.db.compaction.LeveledCompactionStrategy,compactionStrategyOptions={sstable_size_in_mb=160}
>  [...]
> nodetool compactionstats shows pending compactions, but there's no activity:
> pending tasks: 750
> 12 hours later, nothing has still happened, same number pending. The 
> expectation would be that compactions would proceed immediately to convert 
> everything to Leveled Compaction as soon as the ALTER TABLE command goes.
> I try a simple write into the CF, and then flush the nodes. This kicks off 
> compaction on 3 nodes. (RF=3)
> cqlsh:ProductGenomeDev> insert into "Node" (key, column1, value) values 
> ('test123', 'test123', 'test123');
> cqlsh:ProductGenomeDev> select * from "Node" where key = 'test123';
>  key | column1 | value
> -+-+-
>  test123 | test123 | test123
> cqlsh:ProductGenomeDev> delete from "Node" where key = 'test123';
> After a flush on every node, now I see:
> [cassandra@dev-cass00 ~]$ cas exec nt compactionstats
> *** dev-cass00 (0) ***
> pending tasks: 750
> Active compaction remaining time :n/a
> *** dev-cass04 (0) ***
> pending tasks: 752
>   compaction typekeyspace   column family   completed 
>   total  unit  progress
>CompactionProductGenomeDevNode  341881
> 643290447928 bytes 0.53%
> Active compaction remaining time :n/a
> *** dev-cass01 (0) ***
> pending tasks: 750
> Active compaction remaining time :n/a
> *** dev-cass02 (0) ***
> pending tasks: 751
>   compaction typekeyspace   column family   completed 
>   total  unit  progress
>CompactionProductGenomeDevNode  3374975141
> 642764512481 bytes 0.53%
> Active compaction remaining time :n/a
> *** dev-cass03 (0) ***
> pending tasks: 751
>   compaction typekeyspace   column family   completed 
>   total  unit  progress
>CompactionProductGenomeDevNode  3591320948
> 643017643573 bytes 0.56%
> Active compaction remaining time :n/a
> After inserting and deleting more columns, enough that all nodes have new 
> data, and flushing, now compactions are proceeding on all nodes.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (CASSANDRA-6102) CassandraStorage broken for bigints and ints

2013-10-11 Thread Alex Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Liu updated CASSANDRA-6102:


Attachment: 6102-v5.txt

> CassandraStorage broken for bigints and ints
> 
>
> Key: CASSANDRA-6102
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6102
> Project: Cassandra
>  Issue Type: Bug
>  Components: Hadoop
> Environment: Cassandra 1.2.9 & 1.2.10, Pig 0.11.1, OSX 10.8.x
>Reporter: Janne Jalkanen
>Assignee: Alex Liu
> Attachments: 6102-1.2-branch.txt, 6102-v2.txt, 6102-v3.txt, 
> 6102-v4.txt, 6102-v5.txt
>
>
> I am seeing something rather strange in the way Cass 1.2 + Pig seem to handle 
> integer values.
> Setup: Cassandra 1.2.10, OSX 10.8, JDK 1.7u40, Pig 0.11.1.  Single node for 
> testing this. 
> First a table:
> {noformat}
> > CREATE TABLE testc (
>  key text PRIMARY KEY,
>  ivalue int,
>  svalue text,
>  value bigint
> ) WITH COMPACT STORAGE;
> > insert into testc (key,ivalue,svalue,value) values ('foo',10,'bar',65);
> > select * from testc;
> key | ivalue | svalue | value
> -+++---
> foo | 10 |bar | 65
> {noformat}
> For my Pig setup, I then use libraries from different C* versions to actually 
> talk to my database (which stays on 1.2.10 all the time).
> Cassandra 1.0.12 (using cassandra_storage.jar):
> {noformat}
> testc = LOAD 'cassandra://keyspace/testc' USING CassandraStorage();
> dump testc
> (foo,(svalue,bar),(ivalue,10),(value,65),{})
> {noformat}
> Cassandra 1.1.10:
> {noformat}
> testc = LOAD 'cassandra://keyspace/testc' USING CassandraStorage();
> dump testc
> (foo,(svalue,bar),(ivalue,10),(value,65),{})
> {noformat}
> Cassandra 1.2.10:
> {noformat}
> (testc = LOAD 'cassandra://keyspace/testc' USING CassandraStorage();
> dump testc
> foo,{(ivalue,
> ),(svalue,bar),(value,A)})
> {noformat}
> To me it appears that ints and bigints are interpreted as ascii values in 
> cass 1.2.10.  Did something change for CassandraStorage, is there a 
> regression, or am I doing something wrong?  Quick perusal of the JIRA didn't 
> reveal anything that I could directly pin on this.
> Note that using compact storage does not seem to affect the issue, though it 
> obviously changes the resulting pig format.
> In addition, trying to use Pygmalion 
> {noformat}
> tf = foreach testc generate key, 
> flatten(FromCassandraBag('ivalue,svalue,value',columns)) as 
> (ivalue:int,svalue:chararray,lvalue:long);
> dump tf
> (foo,
> ,bar,A)
> {noformat}
> So no help there. Explicitly casting the values to (long) or (int) just 
> results in a ClassCastException.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6102) CassandraStorage broken for bigints and ints

2013-10-11 Thread Alex Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13792987#comment-13792987
 ] 

Alex Liu commented on CASSANDRA-6102:
-

6102-v3.txt patch is attached, which reverts the UUIDType mappings

> CassandraStorage broken for bigints and ints
> 
>
> Key: CASSANDRA-6102
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6102
> Project: Cassandra
>  Issue Type: Bug
>  Components: Hadoop
> Environment: Cassandra 1.2.9 & 1.2.10, Pig 0.11.1, OSX 10.8.x
>Reporter: Janne Jalkanen
>Assignee: Alex Liu
> Attachments: 6102-1.2-branch.txt, 6102-v2.txt, 6102-v3.txt, 
> 6102-v4.txt
>
>
> I am seeing something rather strange in the way Cass 1.2 + Pig seem to handle 
> integer values.
> Setup: Cassandra 1.2.10, OSX 10.8, JDK 1.7u40, Pig 0.11.1.  Single node for 
> testing this. 
> First a table:
> {noformat}
> > CREATE TABLE testc (
>  key text PRIMARY KEY,
>  ivalue int,
>  svalue text,
>  value bigint
> ) WITH COMPACT STORAGE;
> > insert into testc (key,ivalue,svalue,value) values ('foo',10,'bar',65);
> > select * from testc;
> key | ivalue | svalue | value
> -+++---
> foo | 10 |bar | 65
> {noformat}
> For my Pig setup, I then use libraries from different C* versions to actually 
> talk to my database (which stays on 1.2.10 all the time).
> Cassandra 1.0.12 (using cassandra_storage.jar):
> {noformat}
> testc = LOAD 'cassandra://keyspace/testc' USING CassandraStorage();
> dump testc
> (foo,(svalue,bar),(ivalue,10),(value,65),{})
> {noformat}
> Cassandra 1.1.10:
> {noformat}
> testc = LOAD 'cassandra://keyspace/testc' USING CassandraStorage();
> dump testc
> (foo,(svalue,bar),(ivalue,10),(value,65),{})
> {noformat}
> Cassandra 1.2.10:
> {noformat}
> (testc = LOAD 'cassandra://keyspace/testc' USING CassandraStorage();
> dump testc
> foo,{(ivalue,
> ),(svalue,bar),(value,A)})
> {noformat}
> To me it appears that ints and bigints are interpreted as ascii values in 
> cass 1.2.10.  Did something change for CassandraStorage, is there a 
> regression, or am I doing something wrong?  Quick perusal of the JIRA didn't 
> reveal anything that I could directly pin on this.
> Note that using compact storage does not seem to affect the issue, though it 
> obviously changes the resulting pig format.
> In addition, trying to use Pygmalion 
> {noformat}
> tf = foreach testc generate key, 
> flatten(FromCassandraBag('ivalue,svalue,value',columns)) as 
> (ivalue:int,svalue:chararray,lvalue:long);
> dump tf
> (foo,
> ,bar,A)
> {noformat}
> So no help there. Explicitly casting the values to (long) or (int) just 
> results in a ClassCastException.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Comment Edited] (CASSANDRA-6102) CassandraStorage broken for bigints and ints

2013-10-11 Thread Alex Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13792987#comment-13792987
 ] 

Alex Liu edited comment on CASSANDRA-6102 at 10/11/13 7:46 PM:
---

6102-v4.txt patch is attached, which reverts the UUIDType mappings


was (Author: alexliu68):
6102-v3.txt patch is attached which reverts back the UUIDType mappings



[jira] [Updated] (CASSANDRA-6102) CassandraStorage broken for bigints and ints

2013-10-11 Thread Alex Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Liu updated CASSANDRA-6102:


Attachment: 6102-v4.txt



[jira] [Reopened] (CASSANDRA-6184) Need a nodetool command to purge hints

2013-10-11 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams reopened CASSANDRA-6184:
-


> Need a nodetool command to purge hints
> --
>
> Key: CASSANDRA-6184
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6184
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
> Environment: Production
>Reporter: Vishy Kasar
> Fix For: 1.2.11
>
>
> Our operation requires us to purge the hints. The only way to do this 
> currently is to truncate the system/HintsColumnFamily across our entire 
> cluster. That is error prone. We want a purgehints [keyspace] [cfnames] 
> option on the nodetool command.  
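
Until such a nodetool option exists, the closest client-side equivalent is a 
JMX call against the hinted-handoff MBean. A hedged sketch: the object name 
follows the 1.2-era convention, but the "deleteAllHints" operation name is an 
assumption for illustration, standing in for whatever the eventual command 
invokes.

{code}
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class PurgeHints
{
    public static void main(String[] args) throws Exception
    {
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:7199/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url);
        try
        {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();
            ObjectName hhm = new ObjectName(
                    "org.apache.cassandra.db:type=HintedHandoffManager");
            // Invoked generically so the sketch does not assume a typed
            // MBean interface; the operation name is hypothetical.
            mbs.invoke(hhm, "deleteAllHints", new Object[0], new String[0]);
        }
        finally
        {
            connector.close();
        }
    }
}
{code}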





[jira] [Resolved] (CASSANDRA-6184) Need a nodetool command to purge hints

2013-10-11 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams resolved CASSANDRA-6184.
-

Resolution: Duplicate



[jira] [Resolved] (CASSANDRA-6184) Need a nodetool command to purge hints

2013-10-11 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams resolved CASSANDRA-6184.
-

Resolution: Fixed

Dupe of 6158.



[jira] [Created] (CASSANDRA-6184) Need a nodetool command to purge hints

2013-10-11 Thread Vishy Kasar (JIRA)
Vishy Kasar created CASSANDRA-6184:
--

 Summary: Need a nodetool command to purge hints
 Key: CASSANDRA-6184
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6184
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
 Environment: Production
Reporter: Vishy Kasar
 Fix For: 1.2.11





[jira] [Commented] (CASSANDRA-5543) Ant issues when building gen-cql2-grammar

2013-10-11 Thread Will Oberman (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13792956#comment-13792956
 ] 

Will Oberman commented on CASSANDRA-5543:
-

For me, it succeeds consistently on 2.4.1 (it has never failed there).  Are 
there other environment issues I should check when it comes to compiling 
Cassandra?

And for projects that ship against two major Java versions (6 vs. 7), it's 
*possible* that the same severe bug was patched into both branches, causing 
the same regression.
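
One cheap way to answer that question is to diff the JVM properties between the 
two build hosts; a trivial, Cassandra-agnostic sketch:

{code}
// Print the toolchain-relevant JVM properties so two build environments
// can be diffed line by line.
public class EnvDump
{
    public static void main(String[] args)
    {
        String[] keys = { "java.version", "java.vm.version",
                          "java.vm.vendor", "java.home", "os.name" };
        for (String key : keys)
            System.out.println(key + " = " + System.getProperty(key));
    }
}
{code}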

> Ant issues when building gen-cql2-grammar
> -
>
> Key: CASSANDRA-5543
> URL: https://issues.apache.org/jira/browse/CASSANDRA-5543
> Project: Cassandra
>  Issue Type: Bug
>Affects Versions: 1.2.3
>Reporter: Joaquin Casares
>Assignee: Dave Brosius
>Priority: Trivial
>
> Below are the commands and outputs that were returned.
> The first `ant` command fails on gen-cql2-grammar, but if I don't run `ant 
> realclean` then it works fine after a second pass.
> {CODE}
> ubuntu@ip-10-196-153-29:~/.ccm/repository/1.2.3$ ant realclean
> Buildfile: /home/ubuntu/.ccm/repository/1.2.3/build.xml
> clean:
>[delete] Deleting directory /home/ubuntu/.ccm/repository/1.2.3/build/test
>[delete] Deleting directory 
> /home/ubuntu/.ccm/repository/1.2.3/build/classes
>[delete] Deleting directory /home/ubuntu/.ccm/repository/1.2.3/src/gen-java
>[delete] Deleting: /home/ubuntu/.ccm/repository/1.2.3/build/internode.avpr
> realclean:
>[delete] Deleting directory /home/ubuntu/.ccm/repository/1.2.3/build
> BUILD SUCCESSFUL
> Total time: 0 seconds
> {CODE}
> {CODE}
> ubuntu@ip-10-196-153-29:~/.ccm/repository/1.2.3$ ant
> Buildfile: /home/ubuntu/.ccm/repository/1.2.3/build.xml
> maven-ant-tasks-localrepo:
> maven-ant-tasks-download:
>  [echo] Downloading Maven ANT Tasks...
> [mkdir] Created dir: /home/ubuntu/.ccm/repository/1.2.3/build
>   [get] Getting: 
> http://repo2.maven.org/maven2/org/apache/maven/maven-ant-tasks/2.1.3/maven-ant-tasks-2.1.3.jar
>   [get] To: 
> /home/ubuntu/.ccm/repository/1.2.3/build/maven-ant-tasks-2.1.3.jar
> maven-ant-tasks-init:
> [mkdir] Created dir: /home/ubuntu/.ccm/repository/1.2.3/build/lib
> maven-declare-dependencies:
> maven-ant-tasks-retrieve-build:
> [artifact:dependencies] Downloading: asm/asm/3.2/asm-3.2-sources.jar from 
> repository central at http://repo1.maven.org/maven2
> 
> [artifact:dependencies] [INFO] Unable to find resource 
> 'hsqldb:hsqldb:java-source:sources:1.8.0.10' in repository java.net2 
> (http://download.java.net/maven/2)
> [artifact:dependencies] Building ant file: 
> /home/ubuntu/.ccm/repository/1.2.3/build/build-dependencies.xml
>  [copy] Copying 45 files to 
> /home/ubuntu/.ccm/repository/1.2.3/build/lib/jars
>  [copy] Copying 35 files to 
> /home/ubuntu/.ccm/repository/1.2.3/build/lib/sources
> init:
> [mkdir] Created dir: /home/ubuntu/.ccm/repository/1.2.3/build/classes/main
> [mkdir] Created dir: 
> /home/ubuntu/.ccm/repository/1.2.3/build/classes/thrift
> [mkdir] Created dir: /home/ubuntu/.ccm/repository/1.2.3/build/test/lib
> [mkdir] Created dir: /home/ubuntu/.ccm/repository/1.2.3/build/test/classes
> [mkdir] Created dir: /home/ubuntu/.ccm/repository/1.2.3/src/gen-java
> check-avro-generate:
> avro-interface-generate-internode:
>  [echo] Generating Avro internode code...
> avro-generate:
> build-subprojects:
> check-gen-cli-grammar:
> gen-cli-grammar:
>  [echo] Building Grammar 
> /home/ubuntu/.ccm/repository/1.2.3/src/java/org/apache/cassandra/cli/Cli.g  
> 
> check-gen-cql2-grammar:
> gen-cql2-grammar:
>  [echo] Building Grammar 
> /home/ubuntu/.ccm/repository/1.2.3/src/java/org/apache/cassandra/cql/Cql.g  
> ...
>  [java] warning(200): 
> /home/ubuntu/.ccm/repository/1.2.3/src/java/org/apache/cassandra/cql/Cql.g:479:20:
>  Decision can match input such as "IDENT" using multiple alternatives: 1, 2
>  [java] As a result, alternative(s) 2 were disabled for that input
>  [java] warning(200): 
> /home/ubuntu/.ccm/repository/1.2.3/src/java/org/apache/cassandra/cql/Cql.g:479:20:
>  Decision can match input such as "K_KEY" using multiple alternatives: 1, 2
>  [java] As a result, alternative(s) 2 were disabled for that input
>  [java] warning(200): 
> /home/ubuntu/.ccm/repository/1.2.3/src/java/org/apache/cassandra/cql/Cql.g:479:20:
>  Decision can match input such as "QMARK" using multiple alternatives: 1, 2
>  [java] As a result, alternative(s) 2 were disabled for that input
>  [java] warning(200): 
> /home/ubuntu/.ccm/repository/1.2.3/src/java/org/apache/cassandra/cql/Cql.g:479:20:
>  Decision can match input such as "FLOAT" using multiple alternatives: 1, 2
>  [java] As a result, alternative(s) 2 were disabled for

[jira] [Commented] (CASSANDRA-5543) Ant issues when building gen-cql2-grammar

2013-10-11 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13792937#comment-13792937
 ] 

Brandon Williams commented on CASSANDRA-5543:
-

I've had this problem on both java6 and java7, so I don't think that's it. :(


[jira] [Comment Edited] (CASSANDRA-6137) CQL3 SELECT IN CLAUSE inconsistent

2013-10-11 Thread Constance Eustace (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13791607#comment-13791607
 ] 

Constance Eustace edited comment on CASSANDRA-6137 at 10/11/13 6:40 PM:


It is now occurring in prod for other columns. There appear to be some 
hash-key effects at work here...

[10/10/13 12:13:19 AM] AGaal: wasn't a regression; DB corruption on a product 
entity
[10/10/13 12:13:20 AM] AGaal: cqlsh> SELECT 
e_entid,e_entname,e_enttype,p_prop,p_flags,p_propid,p_val,p_vallinks,p_vars 
FROM internal_submission.Entity_Product WHERE e_entid = 
'0d5acd67-3131-11e3-85d7-126aad0075d4-PROD'  AND p_prop IN 
('__CPSYS_type','__CPSYS_name','urn:bby:pcm:job:id');

(0 rows)

cqlsh> SELECT 
e_entid,e_entname,e_enttype,p_prop,p_flags,p_propid,p_val,p_vallinks,p_vars 
FROM internal_submission.Entity_Product WHERE e_entid = 
'0d5acd67-3131-11e3-85d7-126aad0075d4-PROD'  AND p_prop IN 
('__CPSYS_type','__CPSYS_name');

 e_entid   | e_entname  
| e_enttype   | p_prop  
 | p_flags | p_propid | p_val | p_vallinks | p_vars
---++-+--+-+--+---++
 0d5acd67-3131-11e3-85d7-126aad0075d4-PROD | 1 ft Cat5e Non Booted UTP 
Unshielded Network Patch Cable  757120254621|NEW |null | 
__CPSYS_name |null | null |  null |   null |   null
 0d5acd67-3131-11e3-85d7-126aad0075d4-PROD |
   null | urn:bby:pcm:product | 
__CPSYS_type |null | null |  null |   null |   null

(2 rows)
[10/10/13 12:20:12 AM] AGaal: cqlsh> SELECT 
e_entid,e_entname,e_enttype,p_prop,p_flags,p_propid,p_val,p_vallinks,p_vars 
FROM internal_submission.Entity_Product WHERE e_entid = 
'0d5acd67-3131-11e3-85d7-126aad0075d4-PROD'  AND p_prop IN 
('urn:bby:pcm:job:id');

(0 rows)
[10/10/13 12:20:12 AM] AGaal: note that in this example 'urn:bby:cpm:job:id' 
does not exist yet, so asking just for that correctly returns 0 rows:
[10/10/13 12:20:42 AM] AGaal: but if it's included in a where in() with 2 other 
properties that do exist, then 0 rows are also returned there too, which is bad
[10/10/13 12:26:50 AM] AGaal: another work-around for where in() might be to do 
a select for each desired property, so in this case there would have been 3 
selects; could this be faster / more efficient than selecting all?
[10/10/13 12:37:51 AM] AGaal: we might be able to get some traction here by 
enabling some cassandra logging and playing with the query
[10/10/13 12:38:29 AM] AGaal: like if the property name is shortened to 
'urn:bby:pcm:', it returns the expected 2 rows
[10/10/13 12:39:08 AM] AGaal: but if it's 'urn:bby:pcm:j' or ''urn:bby:pcm:d ' 
it finds 0
[10/10/13 12:42:41 AM] AGaal: and if the last letter after urn:bby:cpm: is an 
'a' or 'b' or 'c' it also returns 2…. and it's consistent with this.  So it's 
finding some sort of match in certain strings… like via a hash or startsWith or 
something
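
The per-property workaround mentioned in the log above is straightforward to 
script. A hedged sketch over plain JDBC (the ticket says the behavior 
reproduces under the JDBC driver; the connection URL format is an assumption):

{code}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class PerPropertySelect
{
    public static void main(String[] args) throws Exception
    {
        String entId = "0d5acd67-3131-11e3-85d7-126aad0075d4-PROD";
        String[] props = { "__CPSYS_type", "__CPSYS_name", "urn:bby:pcm:job:id" };

        Connection conn = DriverManager.getConnection(
                "jdbc:cassandra://localhost:9160/internal_submission");
        // One SELECT per property instead of a single IN (...) clause.
        PreparedStatement ps = conn.prepareStatement(
                "SELECT e_entid, p_prop, p_val FROM Entity_Product"
                + " WHERE e_entid = ? AND p_prop = ?");
        for (String prop : props)
        {
            ps.setString(1, entId);
            ps.setString(2, prop);
            ResultSet rs = ps.executeQuery();
            while (rs.next())
                System.out.println(rs.getString("p_prop") + " -> " + rs.getString("p_val"));
            rs.close();
        }
        ps.close();
        conn.close();
    }
}
{code}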



[jira] [Commented] (CASSANDRA-6181) Replaying a commit led to java.lang.StackOverflowError and node crash

2013-10-11 Thread Jeffrey Damick (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13792916#comment-13792916
 ] 

Jeffrey Damick commented on CASSANDRA-6181:
---

I sent you a link in email @ datastax.

> Replaying a commit led to java.lang.StackOverflowError and node crash
> -
>
> Key: CASSANDRA-6181
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6181
> Project: Cassandra
>  Issue Type: Bug
> Environment: 1.2.8 & 1.2.10 - ubuntu 12.04
>Reporter: Jeffrey Damick
>Assignee: Sylvain Lebresne
>Priority: Critical
>
> 2 of our nodes died after attempting to replay a commit.  I can attach the 
> commit log file if that helps.
> It was occurring on 1.2.8, after several failed attempts to start, we 
> attempted startup with 1.2.10.  This also yielded the same issue (below).  
> The only resolution was to physically move the commit log file out of the way 
> and then the nodes were able to start...  
> The replication factor was 3 so I'm hoping there was no data loss...
> {code}
>  INFO [main] 2013-10-11 14:50:35,891 CommitLogReplayer.java (line 119) 
> Replaying /ebs/cassandra/commitlog/CommitLog-2-1377542389560.log
> ERROR [MutationStage:18] 2013-10-11 14:50:37,387 CassandraDaemon.java (line 
> 191) Exception in thread Thread[MutationStage:18,5,main]
> java.lang.StackOverflowError
> at 
> org.apache.cassandra.db.marshal.TimeUUIDType.compareTimestampBytes(TimeUUIDType.java:68)
> at 
> org.apache.cassandra.db.marshal.TimeUUIDType.compare(TimeUUIDType.java:57)
> at 
> org.apache.cassandra.db.marshal.TimeUUIDType.compare(TimeUUIDType.java:29)
> at 
> org.apache.cassandra.db.marshal.AbstractType.compareCollectionMembers(AbstractType.java:229)
> at 
> org.apache.cassandra.db.marshal.AbstractCompositeType.compare(AbstractCompositeType.java:81)
> at 
> org.apache.cassandra.db.marshal.AbstractCompositeType.compare(AbstractCompositeType.java:31)
> at 
> org.apache.cassandra.db.RangeTombstoneList.insertAfter(RangeTombstoneList.java:439)
> at 
> org.apache.cassandra.db.RangeTombstoneList.insertFrom(RangeTombstoneList.java:405)
> at 
> org.apache.cassandra.db.RangeTombstoneList.weakInsertFrom(RangeTombstoneList.java:472)
> at 
> org.apache.cassandra.db.RangeTombstoneList.insertAfter(RangeTombstoneList.java:456)
> at 
> org.apache.cassandra.db.RangeTombstoneList.insertFrom(RangeTombstoneList.java:405)
> at 
> org.apache.cassandra.db.RangeTombstoneList.weakInsertFrom(RangeTombstoneList.java:472)
> at 
> org.apache.cassandra.db.RangeTombstoneList.insertAfter(RangeTombstoneList.java:456)
> at 
> org.apache.cassandra.db.RangeTombstoneList.insertFrom(RangeTombstoneList.java:405)
> at 
> org.apache.cassandra.db.RangeTombstoneList.weakInsertFrom(RangeTombstoneList.java:472)
>  etc over and over until 
> at 
> org.apache.cassandra.db.RangeTombstoneList.weakInsertFrom(RangeTombstoneList.java:472)
> at 
> org.apache.cassandra.db.RangeTombstoneList.insertAfter(RangeTombstoneList.java:456)
> at 
> org.apache.cassandra.db.RangeTombstoneList.insertFrom(RangeTombstoneList.java:405)
> at 
> org.apache.cassandra.db.RangeTombstoneList.add(RangeTombstoneList.java:144)
> at 
> org.apache.cassandra.db.RangeTombstoneList.addAll(RangeTombstoneList.java:186)
> at org.apache.cassandra.db.DeletionInfo.add(DeletionInfo.java:180)
> at 
> org.apache.cassandra.db.AtomicSortedColumns.addAllWithSizeDelta(AtomicSortedColumns.java:197)
> at 
> org.apache.cassandra.db.AbstractColumnContainer.addAllWithSizeDelta(AbstractColumnContainer.java:99)
> at org.apache.cassandra.db.Memtable.resolve(Memtable.java:207)
> at org.apache.cassandra.db.Memtable.put(Memtable.java:170)
> at 
> org.apache.cassandra.db.ColumnFamilyStore.apply(ColumnFamilyStore.java:745)
> at org.apache.cassandra.db.Table.apply(Table.java:388)
> at org.apache.cassandra.db.Table.apply(Table.java:353)
> at 
> org.apache.cassandra.db.commitlog.CommitLogReplayer$1.runMayThrow(CommitLogReplayer.java:258)
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
> at java.util.concurrent.FutureTask.run(FutureTask.java:166)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:724)
> {code}
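
The trace shows insertAfter, insertFrom and weakInsertFrom cycling through one 
another, one stack frame per range, which is what overflows on a long run of 
tombstones. A generic illustration of the failure mode and the usual fix 
(convert the cycle into a loop); this is not the actual Cassandra code:

{code}
public class MutualRecursionDemo
{
    // Recursive shape, analogous to the cycle in the trace: each step
    // costs a stack frame, so depth grows with input size.
    static int insertFrom(int i, int n)
    {
        if (i >= n) return i;
        return weakInsertFrom(i + 1, n);
    }

    static int weakInsertFrom(int i, int n)
    {
        if (i >= n) return i;
        return insertFrom(i + 1, n);
    }

    // Iterative shape: same control flow, constant stack depth.
    static int insertFromIterative(int i, int n)
    {
        while (i < n)
            i++; // one "insert step" per iteration
        return i;
    }

    public static void main(String[] args)
    {
        System.out.println(insertFromIterative(0, 10000000)); // fine
        // insertFrom(0, 10000000) would throw StackOverflowError here.
    }
}
{code}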




[jira] [Commented] (CASSANDRA-5543) Ant issues when building gen-cql2-grammar

2013-10-11 Thread Will Oberman (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13792879#comment-13792879
 ] 

Will Oberman commented on CASSANDRA-5543:
-

This issue just started for me.  I'm using AWS's (Amazon Web Services) EMR 
(Elastic MapReduce, Hadoop as a service), and I compile Cassandra inside that 
environment so that I can ETL the results into my Cassandra cluster using the 
Pig integration.

On AWS EMR AMI (Amazon Machine Image) 2.4.1 the compile always works.  On 2.4.2 
it only "sometimes" works.  The only difference I see is the installed Java: 
2.4.1 = javac 1.7.0_25 
2.4.2 = javac 1.7.0_40

Hope this helps solve the mystery.


[jira] [Comment Edited] (CASSANDRA-6152) Assertion error in 2.0.1 at db.ColumnSerializer.serialize(ColumnSerializer.java:56)

2013-10-11 Thread Donald Smith (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13792877#comment-13792877
 ] 

Donald Smith edited comment on CASSANDRA-6152 at 10/11/13 5:56 PM:
---

No, the test suite does *not* drop and create new tables (i.e., it does not 
call "DROP TABLE" and "CREATE TABLE").  It deletes rows from tables and 
re-inserts. I'm working right now on submitting a focused example that 
reproduces the bug.


was (Author: thinkerfeeler):
No, the test suite does *not* drop and create new tables (i.e., it does not 
call "DROP TABLE" and "CREATE TABLE").  It deletes tables and re-inserts. I'm 
working right now on submitting a focused example that reproduces the bug.
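
Until that focused example is posted, the workload described has roughly this 
shape; the table name, schema, and connection URL below are placeholders, not 
the reporter's actual suite:

{code}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class DeleteReinsertLoop
{
    public static void main(String[] args) throws Exception
    {
        Connection conn = DriverManager.getConnection(
                "jdbc:cassandra://localhost:9160/ks1");
        Statement st = conn.createStatement();
        // Repeatedly delete rows and re-insert them, without ever
        // dropping or recreating the table.
        for (int i = 0; i < 100000; i++)
        {
            String key = "row-" + (i % 100);
            st.execute("DELETE FROM t1 WHERE k = '" + key + "'");
            st.execute("INSERT INTO t1 (k, v) VALUES ('" + key + "', " + i + ")");
        }
        st.close();
        conn.close();
    }
}
{code}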

> Assertion error in 2.0.1 at 
> db.ColumnSerializer.serialize(ColumnSerializer.java:56)
> ---
>
> Key: CASSANDRA-6152
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6152
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: CentOS release 6.2 (Final)
> With default set up on single node.
> I also saw this exception in 2.0.0 on a three node cluster.
>Reporter: Donald Smith
>
> {noformat}
> ERROR [COMMIT-LOG-WRITER] 2013-10-06 12:12:36,845 CassandraDaemon.java (line 
> 185) Exception in thread Thread[COMMIT-LOG-WRITER,5,main]
> java.lang.AssertionError
> at 
> org.apache.cassandra.db.ColumnSerializer.serialize(ColumnSerializer.java:56)
> at 
> org.apache.cassandra.db.ColumnFamilySerializer.serialize(ColumnFamilySerializer.java:77)
> at 
> org.apache.cassandra.db.RowMutation$RowMutationSerializer.serialize(RowMutation.java:268)
> at 
> org.apache.cassandra.db.commitlog.CommitLogSegment.write(CommitLogSegment.java:229)
> at 
> org.apache.cassandra.db.commitlog.CommitLog$LogRecordAdder.run(CommitLog.java:352)
> at 
> org.apache.cassandra.db.commitlog.PeriodicCommitLogExecutorService$1.runMayThrow(PeriodicCommitLogExecutorService.java:48)
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
> at java.lang.Thread.run(Thread.java:722)
> {noformat}





[jira] [Commented] (CASSANDRA-6152) Assertion error in 2.0.1 at db.ColumnSerializer.serialize(ColumnSerializer.java:56)

2013-10-11 Thread Donald Smith (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13792877#comment-13792877
 ] 

Donald Smith commented on CASSANDRA-6152:
-

No, the test suite does *not* drop and create new tables (i.e., it does not 
call "DROP TABLE" and "CREATE TABLE").  It deletes tables and re-inserts. I'm 
working right now on submitting a focused example that reproduces the bug.



[jira] [Updated] (CASSANDRA-4549) Update the pig examples to include more recent pig/cassandra features

2013-10-11 Thread Alex Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Liu updated CASSANDRA-4549:


Attachment: 4549-v1.txt

> Update the pig examples to include more recent pig/cassandra features
> -
>
> Key: CASSANDRA-4549
> URL: https://issues.apache.org/jira/browse/CASSANDRA-4549
> Project: Cassandra
>  Issue Type: Task
>  Components: Hadoop
>Reporter: Jeremy Hanna
>Assignee: Alex Liu
>Priority: Minor
> Attachments: 4549-v1.txt
>
>
> Now that there is support for a variety of Cassandra features from Pig (esp. 
> 1.1+), it would be great to have some of them in the examples so that people 
> can see how to use them.





[jira] [Updated] (CASSANDRA-4549) Update the pig examples to include more recent pig/cassandra features

2013-10-11 Thread Alex Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Liu updated CASSANDRA-4549:


Attachment: 4549-v1.txt



[jira] [Updated] (CASSANDRA-4549) Update the pig examples to include more recent pig/cassandra features

2013-10-11 Thread Alex Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Liu updated CASSANDRA-4549:


Attachment: (was: 4549-v1.txt)



[jira] [Updated] (CASSANDRA-4549) Update the pig examples to include more recent pig/cassandra features

2013-10-11 Thread Alex Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Liu updated CASSANDRA-4549:


Attachment: (was: 4549-v1.txt)



[jira] [Updated] (CASSANDRA-4549) Update the pig examples to include more recent pig/cassandra features

2013-10-11 Thread Alex Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Liu updated CASSANDRA-4549:


Attachment: 4549-v1.txt



[jira] [Updated] (CASSANDRA-6183) Skip mutations that pass CRC but fail to deserialize

2013-10-11 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-6183:
--

Attachment: 6183.txt

The patch saves skipped mutations out to separate files in case the user wishes 
to perform forensics.
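
The mechanics are simple to picture; a hedged sketch of the idea (names and 
file layout illustrative, not the attached patch):

{code}
import java.io.DataOutputStream;
import java.io.FileOutputStream;
import java.io.IOException;

public class SkippedMutationSaver
{
    private int skipped = 0;

    // Called with the raw, CRC-verified bytes of a mutation that failed
    // to deserialize; replay continues with the next entry afterwards.
    public void save(byte[] raw, String segmentName) throws IOException
    {
        String path = String.format("mutation-%s-%d.bin", segmentName, skipped++);
        DataOutputStream out = new DataOutputStream(new FileOutputStream(path));
        try
        {
            out.write(raw); // preserve the bytes verbatim for forensics
        }
        finally
        {
            out.close();
        }
    }
}
{code}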

> Skip mutations that pass CRC but fail to deserialize
> 
>
> Key: CASSANDRA-6183
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6183
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Jonathan Ellis
>Assignee: Jonathan Ellis
>Priority: Minor
> Fix For: 1.2.11
>
> Attachments: 6183.txt
>
>
> We've had a couple reports of CL replay failure that appear to be caused by 
> dropping and recreating the same table with a different schema, e.g. 
> CASSANDRA-5905.  While CASSANDRA-5202 is the "right" fix for this, it's too 
> involved for 2.0 let alone 1.2, so we need a stopgap until then.





[jira] [Created] (CASSANDRA-6183) Skip mutations that pass CRC but fail to deserialize

2013-10-11 Thread Jonathan Ellis (JIRA)
Jonathan Ellis created CASSANDRA-6183:
-

 Summary: Skip mutations that pass CRC but fail to deserialize
 Key: CASSANDRA-6183
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6183
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Jonathan Ellis
Assignee: Jonathan Ellis
Priority: Minor
 Fix For: 1.2.11




[jira] [Commented] (CASSANDRA-6152) Assertion error in 2.0.1 at db.ColumnSerializer.serialize(ColumnSerializer.java:56)

2013-10-11 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13792829#comment-13792829
 ] 

Jonathan Ellis commented on CASSANDRA-6152:
---

I assume your test suite involves dropping and recreating the same tables?



[jira] [Created] (CASSANDRA-6182) Unable to modify column_metadata via thrift

2013-10-11 Thread Nick Bailey (JIRA)
Nick Bailey created CASSANDRA-6182:
--

 Summary: Unable to modify column_metadata via thrift
 Key: CASSANDRA-6182
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6182
 Project: Cassandra
  Issue Type: Bug
Reporter: Nick Bailey
 Fix For: 2.0.2


Reproduced on 2.0 HEAD

{noformat}
[default@unknown] use opscenter;
Authenticated to keyspace: OpsCenter
[default@OpsCenter] create column family test with column_metadata = 
[{column_name: '', validation_class: LongType}];
637fffa1-a10f-3d89-8be6-8a316af05dd2
[default@OpsCenter] update column family test with column_metadata=[];
e49e435b-ba2a-3a08-8af0-32b897b872b8
[default@OpsCenter] show schema;



create column family test
  with column_type = 'Standard'
  and comparator = 'BytesType'
  and default_validation_class = 'BytesType'
  and key_validation_class = 'BytesType'
  and read_repair_chance = 0.1
  and dclocal_read_repair_chance = 0.0
  and populate_io_cache_on_flush = false
  and gc_grace = 864000
  and min_compaction_threshold = 4
  and max_compaction_threshold = 32
  and replicate_on_write = true
  and compaction_strategy = 
'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy'
  and caching = 'KEYS_ONLY'
  and default_time_to_live = 0
  and speculative_retry = 'NONE'
  and column_metadata = [
{column_name : '',
validation_class : LongType}]
  and compression_options = {'sstable_compression' : 
'org.apache.cassandra.io.compress.LZ4Compressor'}
  and index_interval = 128;
{noformat}
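
For reference, the same modification attempted directly through the generated 
Thrift client; a hedged sketch (class and method names as recalled from the 
1.2/2.0-era interface; verify against the cassandra-thrift jar you build 
against):

{code}
import java.util.ArrayList;
import org.apache.cassandra.thrift.Cassandra;
import org.apache.cassandra.thrift.CfDef;
import org.apache.cassandra.thrift.ColumnDef;
import org.apache.thrift.protocol.TBinaryProtocol;
import org.apache.thrift.transport.TFramedTransport;
import org.apache.thrift.transport.TSocket;

public class ClearColumnMetadata
{
    public static void main(String[] args) throws Exception
    {
        TFramedTransport transport = new TFramedTransport(new TSocket("localhost", 9160));
        transport.open();
        Cassandra.Client client = new Cassandra.Client(new TBinaryProtocol(transport));
        client.set_keyspace("OpsCenter");

        // A real client would fetch the current CfDef via describe_keyspace()
        // and clear only column_metadata; elided here for brevity.
        CfDef cf = new CfDef("OpsCenter", "test");
        cf.setColumn_metadata(new ArrayList<ColumnDef>()); // empty = drop all metadata
        client.system_update_column_family(cf); // per this bug, the change is ignored

        transport.close();
    }
}
{code}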






[jira] [Commented] (CASSANDRA-6181) Replaying a commit led to java.lang.StackOverflowError and node crash

2013-10-11 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13792806#comment-13792806
 ] 

Sylvain Lebresne commented on CASSANDRA-6181:
-

[~jdamick] Actually, the commit log would be useful if you have it, though 
ideally I'd need the schema too. Feel free to send that to me in private if you 
prefer.


[jira] [Commented] (CASSANDRA-4338) Experiment with direct buffer in SequentialWriter

2013-10-11 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13792787#comment-13792787
 ] 

Benedict commented on CASSANDRA-4338:
-

I've not deliberately tested my patch on writes, but I wouldn't expect as 
dramatic an improvement in consistency once I/O starts entering the picture. 
It might well make some difference, though. For the read run, I'm not sure what 
happened there on the Marcus branch. It looks to me like (maybe) some of the 
stress workers get ahead and finish first, leaving the cache less polluted for 
the remaining workers. An inconsistent worker count was the cause of persistent 
drops in performance in my read tests (though here it could explain peaks). If 
so, my patch will "fix" that, though you could also try running with a lower 
thread count to confirm.

If you want to try with my patch (which maintains the same thread count 
throughout), any of the linked repos in ticket 
[CASSANDRA-4718|https://issues.apache.org/jira/browse/CASSANDRA-4718] will do.

Btw, have we considered benchmarking these snappy changes for messaging-service 
connections? They might well reduce the software side of the network overhead, 
although not as dramatically. I do see most of the connection CPU being spent 
in Snappy's native arrayCopy.


> Experiment with direct buffer in SequentialWriter
> -
>
> Key: CASSANDRA-4338
> URL: https://issues.apache.org/jira/browse/CASSANDRA-4338
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Jonathan Ellis
>Assignee: Marcus Eriksson
>Priority: Minor
>  Labels: performance
> Fix For: 2.1
>
> Attachments: 4338.benchmark.png, 4338.benchmark.snappycompressor.png, 
> 4338-gc.tar.gz, 4338.single_node.read.png, 4338.single_node.write.png, 
> gc-4338-patched.png, gc-trunk-me.png, gc-trunk.png, gc-with-patch-me.png
>
>
> Using a direct buffer instead of a heap-based byte[] should let us avoid a 
> copy into native memory when we flush the buffer.
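
A minimal illustration of the idea, not the patch: when a heap-backed buffer is 
written through a FileChannel, the JDK first copies its contents into a 
temporary native buffer, while a direct ByteBuffer can be handed to the OS 
as-is.

{code}
import java.io.FileOutputStream;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;

public class DirectBufferWrite
{
    public static void main(String[] args) throws Exception
    {
        FileChannel channel = new FileOutputStream("out.bin").getChannel();

        // Heap-backed: write() copies these bytes into native memory
        // behind the scenes before the actual I/O.
        ByteBuffer heap = ByteBuffer.wrap(new byte[64 * 1024]);
        channel.write(heap);

        // Direct: already in native memory, so no extra copy on flush.
        ByteBuffer direct = ByteBuffer.allocateDirect(64 * 1024);
        channel.write(direct);

        channel.close();
    }
}
{code}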





[jira] [Updated] (CASSANDRA-6181) Replaying a commit led to java.lang.StackOverflowError and node crash

2013-10-11 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-6181:
--

Assignee: Sylvain Lebresne

Pattern matching {{RangeTombstoneList}} to Sylvain.


[jira] [Commented] (CASSANDRA-4338) Experiment with direct buffer in SequentialWriter

2013-10-11 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13792732#comment-13792732
 ] 

Jonathan Ellis commented on CASSANDRA-4338:
---

Maybe we need those stress improvements [~benedict] was working on.



[jira] [Comment Edited] (CASSANDRA-6137) CQL3 SELECT IN CLAUSE inconsistent

2013-10-11 Thread Constance Eustace (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13792715#comment-13792715
 ] 

Constance Eustace edited comment on CASSANDRA-6137 at 10/11/13 3:26 PM:


We have row caching off, but key caching is on.

So in general I believe:

- There are niggling character-escaping bugs: the key caching does not work 
correctly with keys that have ':' characters in them. 

- Those bugs seem to lead to cache inconsistency, either because compaction or 
another underlying housekeeping process changes the key locations, or because 
the additional colons in the composite keys confuse the code that would keep 
the key cache valid.

So queries that request partial keysets from a row try to utilize the cache 
(SELECT columnlist FROM table WHERE rowkey = ? AND columnkey IN (?,?,?,?)) and 
become inconsistent when one of the column key's composite columns is updated...

But queries that request all keys just read the data and bypass the cache, I'd 
guess, so they work. 

It appears that eventually the inconsistent results correct themselves, so 
again, perhaps some cache coherency process makes things consistent... wait for 
it, wait for it... eventually.

I'd guess that compaction is the most likely culprit. 

We now have time to examine the source code; would anyone from DSE or the 
mainline committers recommend anyplace to look, given what is described? 
Anyone? Bueller? Bueller?
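
For concreteness, a minimal sketch of the query shapes being described 
(keyspace, table, and column names here are hypothetical):

{code}
-- Partial-key read through an IN clause: the kind of query that
-- reportedly goes stale after one of the named columns is updated.
SELECT val FROM ks.tbl
 WHERE rowkey = 'part:with:colons'
   AND columnkey IN ('col:a', 'col:b');

-- Full-row read: the kind of query that reportedly stays correct.
SELECT val FROM ks.tbl WHERE rowkey = 'part:with:colons';
{code}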


was (Author: cowardlydragon):
We have row caching off, but key caching is on.

So in general I believe:

- There are niggling character-escaping bugs: the key caching does not work 
correctly with keys that have ':' characters in them. 

- Those bugs seem to lead to cache inconsistency because compaction or another 
underlying housekeeping process changes the key locations.

So queries that request partial keysets from a row try to utilize the cache 
(SELECT columnlist FROM table WHERE rowkey = ? AND columnkey IN (?,?,?,?)) and 
become inconsistent when one of the column key's composite columns is updated...

But queries that request all keys just read the data and bypass the cache, I'd 
guess, so they work. 

It appears that eventually the inconsistent results correct themselves, so 
again, perhaps some cache coherency process makes things consistent... wait for 
it, wait for it... eventually.

I'd guess that compaction is the most likely culprit. 

We now have time to examine the source code; would anyone from DSE or the 
mainline committers recommend anyplace to look, given what is described? 
Anyone? Bueller? Bueller?

> CQL3 SELECT IN CLAUSE inconsistent
> --
>
> Key: CASSANDRA-6137
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6137
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: Ubuntu AWS Cassandra 2.0.1 SINGLE NODE
>Reporter: Constance Eustace
> Fix For: 2.0.1
>
>
> We are encountering inconsistent results from CQL3 queries with column keys 
> using IN clause in WHERE. This has been reproduced in cqlsh and the jdbc 
> driver.
> Rowkey is e_entid
> Column key is p_prop
> This returns roughly 21 rows for 21 column keys that match p_prop.
> cqlsh> SELECT 
> e_entid,e_entname,e_enttype,p_prop,p_flags,p_propid,e_entlinks,p_proplinks,p_subents,p_val,p_vallinks,p_vars
>  FROM internal_submission.Entity_Job WHERE e_entid = 
> '845b38f1-2b91-11e3-854d-126aad0075d4-CJOB';
> These three queries each return one row for the requested single column key 
> in the IN clause:
> SELECT 
> e_entid,e_entname,e_enttype,p_prop,p_flags,p_propid,e_entlinks,p_proplinks,p_subents,p_val,p_vallinks,p_vars
>  FROM internal_submission.Entity_Job WHERE e_entid = 
> '845b38f1-2b91-11e3-854d-126aad0075d4-CJOB'  AND p_prop in 
> ('urn:bby:pcm:job:ingest:content:complete:count');
> SELECT 
> e_entid,e_entname,e_enttype,p_prop,p_flags,p_propid,e_entlinks,p_proplinks,p_subents,p_val,p_vallinks,p_vars
>  FROM internal_submission.Entity_Job WHERE e_entid = 
> '845b38f1-2b91-11e3-854d-126aad0075d4-CJOB'  AND p_prop in 
> ('urn:bby:pcm:job:ingest:content:all:count');
> SELECT 
> e_entid,e_entname,e_enttype,p_prop,p_flags,p_propid,e_entlinks,p_proplinks,p_subents,p_val,p_vallinks,p_vars
>  FROM internal_submission.Entity_Job WHERE e_entid = 
> '845b38f1-2b91-11e3-854d-126aad0075d4-CJOB'  AND p_prop in 
> ('urn:bby:pcm:job:ingest:content:fail:count');
> This query returns ONLY ONE ROW (one column key), not three as I would expect 
> from the three-column-key IN clause:
> cqlsh> SELECT 
> e_entid,e_entname,e_enttype,p_prop,p_flags,p_propid,e_entlinks,p_proplinks,p_subents,p_val,p_vallinks,p_vars
>  FROM internal_submission.Entity_Job WHERE e_entid = 
> '845b38f1-2b91-11e3-854d-126aad0075d4-CJOB'  AND p_prop in 
> ('urn:bby:pcm:job:ingest:content:complete:count','urn:bby:pcm:job:ingest:content:all:count','urn:bby:pcm:job:ingest:content:fail:count');

[jira] [Commented] (CASSANDRA-6137) CQL3 SELECT IN CLAUSE inconsistent

2013-10-11 Thread Constance Eustace (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13792715#comment-13792715
 ] 

Constance Eustace commented on CASSANDRA-6137:
--

We have row caching off, but key caching is on.

So in general I believe:

- There are niggling character-escaping bugs: the key caching does not work 
correctly with keys that have ':' characters in them. 

- Those bugs seem to lead to cache inconsistency because compaction or another 
underlying housekeeping process changes the key locations.

So queries that request partial keysets from a row try to utilize the cache 
(SELECT columnlist FROM table WHERE rowkey = ? AND columnkey IN (?,?,?,?)) and 
become inconsistent when one of the column key's composite columns is updated...

But queries that request all keys just read the data and bypass the cache, I'd 
guess, so they work. 

It appears that eventually the inconsistent results correct themselves, so 
again, perhaps some cache coherency process makes things consistent... wait for 
it, wait for it... eventually.

I'd guess that compaction is the most likely culprit. 

We now have time to examine the source code; would anyone from DSE or the 
mainline committers recommend anyplace to look, given what is described? 
Anyone? Bueller? Bueller?

> CQL3 SELECT IN CLAUSE inconsistent
> --
>
> Key: CASSANDRA-6137
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6137
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: Ubuntu AWS Cassandra 2.0.1 SINGLE NODE
>Reporter: Constance Eustace
> Fix For: 2.0.1
>
>
> We are encountering inconsistent results from CQL3 queries with column keys 
> using IN clause in WHERE. This has been reproduced in cqlsh and the jdbc 
> driver.
> Rowkey is e_entid
> Column key is p_prop
> This returns roughly 21 rows for 21 column keys that match p_prop.
> cqlsh> SELECT 
> e_entid,e_entname,e_enttype,p_prop,p_flags,p_propid,e_entlinks,p_proplinks,p_subents,p_val,p_vallinks,p_vars
>  FROM internal_submission.Entity_Job WHERE e_entid = 
> '845b38f1-2b91-11e3-854d-126aad0075d4-CJOB';
> These three queries each return one row for the requested single column key 
> in the IN clause:
> SELECT 
> e_entid,e_entname,e_enttype,p_prop,p_flags,p_propid,e_entlinks,p_proplinks,p_subents,p_val,p_vallinks,p_vars
>  FROM internal_submission.Entity_Job WHERE e_entid = 
> '845b38f1-2b91-11e3-854d-126aad0075d4-CJOB'  AND p_prop in 
> ('urn:bby:pcm:job:ingest:content:complete:count');
> SELECT 
> e_entid,e_entname,e_enttype,p_prop,p_flags,p_propid,e_entlinks,p_proplinks,p_subents,p_val,p_vallinks,p_vars
>  FROM internal_submission.Entity_Job WHERE e_entid = 
> '845b38f1-2b91-11e3-854d-126aad0075d4-CJOB'  AND p_prop in 
> ('urn:bby:pcm:job:ingest:content:all:count');
> SELECT 
> e_entid,e_entname,e_enttype,p_prop,p_flags,p_propid,e_entlinks,p_proplinks,p_subents,p_val,p_vallinks,p_vars
>  FROM internal_submission.Entity_Job WHERE e_entid = 
> '845b38f1-2b91-11e3-854d-126aad0075d4-CJOB'  AND p_prop in 
> ('urn:bby:pcm:job:ingest:content:fail:count');
> This query returns ONLY ONE ROW (one column key), not three as I would expect 
> from the three-column-key IN clause:
> cqlsh> SELECT 
> e_entid,e_entname,e_enttype,p_prop,p_flags,p_propid,e_entlinks,p_proplinks,p_subents,p_val,p_vallinks,p_vars
>  FROM internal_submission.Entity_Job WHERE e_entid = 
> '845b38f1-2b91-11e3-854d-126aad0075d4-CJOB'  AND p_prop in 
> ('urn:bby:pcm:job:ingest:content:complete:count','urn:bby:pcm:job:ingest:content:all:count','urn:bby:pcm:job:ingest:content:fail:count');
> This query does return two rows however for the requested two column keys:
> cqlsh> SELECT 
> e_entid,e_entname,e_enttype,p_prop,p_flags,p_propid,e_entlinks,p_proplinks,p_subents,p_val,p_vallinks,p_vars
>  FROM internal_submission.Entity_Job WHERE e_entid = 
> '845b38f1-2b91-11e3-854d-126aad0075d4-CJOB'  AND p_prop in 
> ('urn:bby:pcm:job:ingest:content:all:count','urn:bby:pcm:job:ingest:content:fail:count');
> cqlsh> describe table internal_submission.entity_job;
> CREATE TABLE entity_job (
>   e_entid text,
>   p_prop text,
>   describes text,
>   dndcondition text,
>   e_entlinks text,
>   e_entname text,
>   e_enttype text,
>   ingeststatus text,
>   ingeststatusdetail text,
>   p_flags text,
>   p_propid text,
>   p_proplinks text,
>   p_storage text,
>   p_subents text,
>   p_val text,
>   p_vallang text,
>   p_vallinks text,
>   p_valtype text,
>   p_valunit text,
>   p_vars text,
>   partnerid text,
>   referenceid text,
>   size int,
>   sourceip text,
>   submitdate bigint,
>   submitevent text,
>   userid text,
>   version text,
>   PRIMARY KEY (e_entid, p_prop)
> ) WITH
>   bloom_filter_fp_chance=0.01 AND
>   caching='KEYS_ONLY' AND
>   co

[jira] [Updated] (CASSANDRA-6181) Replaying a commit led to java.lang.StackOverflowError and node crash

2013-10-11 Thread Jeffrey Damick (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeffrey Damick updated CASSANDRA-6181:
--

Description: 
2 of our nodes died after attempting to replay a commit log.  I can attach the 
commit log file if that helps.
It first occurred on 1.2.8; after several failed attempts to start, we 
attempted startup with 1.2.10, which yielded the same issue (below).  The only 
resolution was to physically move the commit log file out of the way, after 
which the nodes were able to start...  

The replication factor was 3, so I'm hoping there was no data loss...

{code}
 INFO [main] 2013-10-11 14:50:35,891 CommitLogReplayer.java (line 119) 
Replaying /ebs/cassandra/commitlog/CommitLog-2-1377542389560.log
ERROR [MutationStage:18] 2013-10-11 14:50:37,387 CassandraDaemon.java (line 
191) Exception in thread Thread[MutationStage:18,5,main]
java.lang.StackOverflowError
at 
org.apache.cassandra.db.marshal.TimeUUIDType.compareTimestampBytes(TimeUUIDType.java:68)
at 
org.apache.cassandra.db.marshal.TimeUUIDType.compare(TimeUUIDType.java:57)
at 
org.apache.cassandra.db.marshal.TimeUUIDType.compare(TimeUUIDType.java:29)
at 
org.apache.cassandra.db.marshal.AbstractType.compareCollectionMembers(AbstractType.java:229)
at 
org.apache.cassandra.db.marshal.AbstractCompositeType.compare(AbstractCompositeType.java:81)
at 
org.apache.cassandra.db.marshal.AbstractCompositeType.compare(AbstractCompositeType.java:31)
at 
org.apache.cassandra.db.RangeTombstoneList.insertAfter(RangeTombstoneList.java:439)
at 
org.apache.cassandra.db.RangeTombstoneList.insertFrom(RangeTombstoneList.java:405)
at 
org.apache.cassandra.db.RangeTombstoneList.weakInsertFrom(RangeTombstoneList.java:472)
at 
org.apache.cassandra.db.RangeTombstoneList.insertAfter(RangeTombstoneList.java:456)
at 
org.apache.cassandra.db.RangeTombstoneList.insertFrom(RangeTombstoneList.java:405)
at 
org.apache.cassandra.db.RangeTombstoneList.weakInsertFrom(RangeTombstoneList.java:472)
at 
org.apache.cassandra.db.RangeTombstoneList.insertAfter(RangeTombstoneList.java:456)
at 
org.apache.cassandra.db.RangeTombstoneList.insertFrom(RangeTombstoneList.java:405)
at 
org.apache.cassandra.db.RangeTombstoneList.weakInsertFrom(RangeTombstoneList.java:472)

 etc over and over until 

at 
org.apache.cassandra.db.RangeTombstoneList.weakInsertFrom(RangeTombstoneList.java:472)
at 
org.apache.cassandra.db.RangeTombstoneList.insertAfter(RangeTombstoneList.java:456)
at 
org.apache.cassandra.db.RangeTombstoneList.insertFrom(RangeTombstoneList.java:405)
at 
org.apache.cassandra.db.RangeTombstoneList.add(RangeTombstoneList.java:144)
at 
org.apache.cassandra.db.RangeTombstoneList.addAll(RangeTombstoneList.java:186)
at org.apache.cassandra.db.DeletionInfo.add(DeletionInfo.java:180)
at 
org.apache.cassandra.db.AtomicSortedColumns.addAllWithSizeDelta(AtomicSortedColumns.java:197)
at 
org.apache.cassandra.db.AbstractColumnContainer.addAllWithSizeDelta(AbstractColumnContainer.java:99)
at org.apache.cassandra.db.Memtable.resolve(Memtable.java:207)
at org.apache.cassandra.db.Memtable.put(Memtable.java:170)
at 
org.apache.cassandra.db.ColumnFamilyStore.apply(ColumnFamilyStore.java:745)
at org.apache.cassandra.db.Table.apply(Table.java:388)
at org.apache.cassandra.db.Table.apply(Table.java:353)
at 
org.apache.cassandra.db.commitlog.CommitLogReplayer$1.runMayThrow(CommitLogReplayer.java:258)
at 
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
at java.util.concurrent.FutureTask.run(FutureTask.java:166)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:724)
{code}



  was:
2 of our nodes died after attempting to replay a commit log.  I can attach the 
commit log file if that helps.
It first occurred on 1.2.8; after several failed attempts to start, we 
attempted startup with 1.2.10, which yielded the same issue (below).  The only 
resolution was to physically move the commit log file out of the way, after 
which the nodes were able to start...  

The replication factor was 3, so I'm hoping there was no data loss...

{code}
 INFO [main] 2013-10-11 14:50:35,891 CommitLogReplayer.java (line 119) 
Replaying /ebs/cassandra/commitlog/CommitLog-2-1377542389560.log
ERROR [MutationStage:18] 2013-10-11 14:50:37,387 CassandraDaemon.java (line 
191) Exception in thread Thread[MutationStage:18,5,main]
java.lang.StackOverflowError

[jira] [Commented] (CASSANDRA-4338) Experiment with direct buffer in SequentialWriter

2013-10-11 Thread Ryan McGuire (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13792686#comment-13792686
 ] 

Ryan McGuire commented on CASSANDRA-4338:
-

Hmm, reading from a single node may not have very high statistical 
significance:

[data from second 
attempt|http://ryanmcguire.info/ds/graph/graph.html?stats=stats.4338.CompressedSequentialWriter.single_node.2.json&metric=interval_op_rate&operation=stress-read&smoothing=4]

> Experiment with direct buffer in SequentialWriter
> -
>
> Key: CASSANDRA-4338
> URL: https://issues.apache.org/jira/browse/CASSANDRA-4338
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Jonathan Ellis
>Assignee: Marcus Eriksson
>Priority: Minor
>  Labels: performance
> Fix For: 2.1
>
> Attachments: 4338.benchmark.png, 4338.benchmark.snappycompressor.png, 
> 4338-gc.tar.gz, 4338.single_node.read.png, 4338.single_node.write.png, 
> gc-4338-patched.png, gc-trunk-me.png, gc-trunk.png, gc-with-patch-me.png
>
>
> Using a direct buffer instead of a heap-based byte[] should let us avoid a 
> copy into native memory when we flush the buffer.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (CASSANDRA-6181) Replaying a commit led to java.lang.StackOverflowError and node crash

2013-10-11 Thread Jeffrey Damick (JIRA)
Jeffrey Damick created CASSANDRA-6181:
-

 Summary: Replaying a commit led to java.lang.StackOverflowError 
and node crash
 Key: CASSANDRA-6181
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6181
 Project: Cassandra
  Issue Type: Bug
 Environment: 1.2.8 & 1.2.10 - ubuntu 12.04
Reporter: Jeffrey Damick
Priority: Critical


2 of our nodes died after attempting to replay a commit log.  I can attach the 
commit log file if that helps.
It first occurred on 1.2.8; after several failed attempts to start, we 
attempted startup with 1.2.10, which yielded the same issue (below).  The only 
resolution was to physically move the commit log file out of the way, after 
which the nodes were able to start...  

The replication factor was 3, so I'm hoping there was no data loss...

{code}
 INFO [main] 2013-10-11 14:50:35,891 CommitLogReplayer.java (line 119) 
Replaying /ebs/cassandra/commitlog/CommitLog-2-1377542389560.log
ERROR [MutationStage:18] 2013-10-11 14:50:37,387 CassandraDaemon.java (line 
191) Exception in thread Thread[MutationStage:18,5,main]
java.lang.StackOverflowError
at 
org.apache.cassandra.db.marshal.TimeUUIDType.compareTimestampBytes(TimeUUIDType.java:68)
at 
org.apache.cassandra.db.marshal.TimeUUIDType.compare(TimeUUIDType.java:57)
at 
org.apache.cassandra.db.marshal.TimeUUIDType.compare(TimeUUIDType.java:29)
at 
org.apache.cassandra.db.marshal.AbstractType.compareCollectionMembers(AbstractType.java:229)
at 
org.apache.cassandra.db.marshal.AbstractCompositeType.compare(AbstractCompositeType.java:81)
at 
org.apache.cassandra.db.marshal.AbstractCompositeType.compare(AbstractCompositeType.java:31)
at 
org.apache.cassandra.db.RangeTombstoneList.insertAfter(RangeTombstoneList.java:439)
at 
org.apache.cassandra.db.RangeTombstoneList.insertFrom(RangeTombstoneList.java:405)
at 
org.apache.cassandra.db.RangeTombstoneList.weakInsertFrom(RangeTombstoneList.java:472)
at 
org.apache.cassandra.db.RangeTombstoneList.insertAfter(RangeTombstoneList.java:456)
at 
org.apache.cassandra.db.RangeTombstoneList.insertFrom(RangeTombstoneList.java:405)
at 
org.apache.cassandra.db.RangeTombstoneList.weakInsertFrom(RangeTombstoneList.java:472)
at 
org.apache.cassandra.db.RangeTombstoneList.insertAfter(RangeTombstoneList.java:456)
at 
org.apache.cassandra.db.RangeTombstoneList.insertFrom(RangeTombstoneList.java:405)
at 
org.apache.cassandra.db.RangeTombstoneList.weakInsertFrom(RangeTombstoneList.java:472)
 etc
{code}





--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6178) Consider allowing timestamp at the protocol level ... and deprecating server side timestamps

2013-10-11 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13792631#comment-13792631
 ] 

Sylvain Lebresne commented on CASSANDRA-6178:
-

bq. Let's not forget that "Bad Things happen if not just your server clocks but 
also your app server clocks are not synchronized" confuses people too.

I don't really buy that "you need to synchronize your server clocks" somehow 
makes perfect sense while doing the same for the clients would be highly 
confusing.  And using server-side timestamps solves in *no* way whatsoever the 
situation that this stackoverflow user describes in particular.

I'm not really convinced that server-side timestamps make much of anything 
simpler from a user's point of view (versus having the timestamp set by the 
driver, that is).  Yes, you need to run ntpd on your clients too, but that 
hardly sounds like a big deal to me. In fact, provided you have any 
timeseries-type usage, it's probably a good idea to synchronize your clients' 
clocks anyway for the sake of your (time-based) column names.

bq. What if we just made the client default to a single server per Session and 
only failover when necessary?

To be honest, I think that's a really bad idea.

First, it does not really work: when you fail over, you'll still potentially 
break the "sequential updates made on the same client thread will be applied 
sequentially" guarantee.

And secondly, it heavily constrains the architecture of clients. Concretely, 
it goes against the basic architecture of the java driver, for instance: it 
breaks pretty much all of the load balancing policies provided, including 
things like token-aware and latency-based policies. And the driver does not 
distinguish between reads and writes, because one of our most fundamental 
design choices from day one has been "we do not parse the query client side". 
It also makes it harder to manage your server connections efficiently (in 
particular, I don't know how to even code this with the current java driver 
API without relying on thread IDs, and there's no easy way to know when a 
thread dies, so that would be a mess).

And all this for what? Because we're afraid that asking people to set up ntpd 
client side is too much?


> Consider allowing timestamp at the protocol level ... and deprecating server 
> side timestamps
> 
>
> Key: CASSANDRA-6178
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6178
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Sylvain Lebresne
>Assignee: Sylvain Lebresne
>
> Generating timestamps server side by default for CQL has been done for 
> convenience, so that end-users don't have to provide one with every query.  
> However, doing it server side has the downside that updates made sequentially 
> by one single client (thread) are not guaranteed to have sequentially 
> increasing timestamps. Unless a client thread is always pinned to one 
> specific server connection that is, but no good client driver out there 
> (including the thrift drivers) does that, because it's contradictory to 
> abstracting fault tolerance from the driver user (and goes against most sane 
> load balancing strategies).
> Very concretely, this means that if you write a very trivial test program 
> that sequentially inserts a value and then erases it (or overwrites it), 
> then, if you let CQL pick timestamps server side, the deletion might not 
> erase the just-inserted value (because the delete might reach a different 
> coordinator than the insert and thus get a lower timestamp). From the user's 
> point of view, this is a very confusing behavior, and understandably so: if 
> timestamps are optional, you'd hope that they at least respect the 
> sequentiality of operations from a single client thread.
> Of course we do support client-side assigned timestamps, so it's not like the 
> test above is not fixable. And you could argue that it's not a bug per se.  
> Still, it's a very confusing "default" behavior for something very simple, 
> which suggests it's not the best default.
> You could also argue that inserting a value and deleting/overwriting it right 
> away (in the same thread) is not something real programs often do. And 
> indeed, it's likely that in practice server-side timestamps work fine for 
> most real applications. Still, it's too easy to get counter-intuitive 
> behavior with server-side timestamps, and I think we should consider moving 
> away from them.
> So what I'd suggest is that we push back the job of providing timestamps 
> client side. But to make it easy for the driver to generate them (rather than 
> the end user), we should allow providing said timestamp at the protocol level.
> As a side note, letting the client provide the timestamp would also have the 
> advantage of making it easy for the driver to retry failed operations with 
> their initial timestamp, so that retries are truly idempotent.

[jira] [Commented] (CASSANDRA-6178) Consider allowing timestamp at the protocol level ... and deprecating server side timestamps

2013-10-11 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13792588#comment-13792588
 ] 

Jonathan Ellis commented on CASSANDRA-6178:
---

I understand the problem, but I still think the cure may be worse than the 
disease.  Let's not forget that "Bad Things happen if not just your server 
clocks but also your app server clocks are not synchronized" confuses people 
too.  (For a recent example, 
http://stackoverflow.com/questions/19239633/why-cassandra-so-depend-on-client-local-timestamp.)

What if we just made the client default to a single server per Session and 
only fail over when necessary?  We could make that configurable separately for 
reads and writes; giving up token-aware routing for writes is less of a 
performance hit than for CL.ONE reads.

> Consider allowing timestamp at the protocol level ... and deprecating server 
> side timestamps
> 
>
> Key: CASSANDRA-6178
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6178
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Sylvain Lebresne
>Assignee: Sylvain Lebresne
>
> Generating timestamps server side by default for CQL has been done for 
> convenience, so that end-users don't have to provide one with every query.  
> However, doing it server side has the downside that updates made sequentially 
> by one single client (thread) are not guaranteed to have sequentially 
> increasing timestamps. Unless a client thread is always pinned to one 
> specific server connection that is, but no good client driver out there 
> (including the thrift drivers) does that, because it's contradictory to 
> abstracting fault tolerance from the driver user (and goes against most sane 
> load balancing strategies).
> Very concretely, this means that if you write a very trivial test program 
> that sequentially inserts a value and then erases it (or overwrites it), 
> then, if you let CQL pick timestamps server side, the deletion might not 
> erase the just-inserted value (because the delete might reach a different 
> coordinator than the insert and thus get a lower timestamp). From the user's 
> point of view, this is a very confusing behavior, and understandably so: if 
> timestamps are optional, you'd hope that they at least respect the 
> sequentiality of operations from a single client thread.
> Of course we do support client-side assigned timestamps, so it's not like the 
> test above is not fixable. And you could argue that it's not a bug per se.  
> Still, it's a very confusing "default" behavior for something very simple, 
> which suggests it's not the best default.
> You could also argue that inserting a value and deleting/overwriting it right 
> away (in the same thread) is not something real programs often do. And 
> indeed, it's likely that in practice server-side timestamps work fine for 
> most real applications. Still, it's too easy to get counter-intuitive 
> behavior with server-side timestamps, and I think we should consider moving 
> away from them.
> So what I'd suggest is that we push back the job of providing timestamps 
> client side. But to make it easy for the driver to generate them (rather than 
> the end user), we should allow providing said timestamp at the protocol level.
> As a side note, letting the client provide the timestamp would also have the 
> advantage of making it easy for the driver to retry failed operations with 
> their initial timestamp, so that retries are truly idempotent.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-5201) Cassandra/Hadoop does not support current Hadoop releases

2013-10-11 Thread Mck SembWever (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13792563#comment-13792563
 ] 

Mck SembWever commented on CASSANDRA-5201:
--

I've updated the github project to be a patch off the InputFormat and 
OutputFormat classes as found in cassandra-1.2.10.
It works against hadoop-0.22.0.

> Cassandra/Hadoop does not support current Hadoop releases
> -
>
> Key: CASSANDRA-5201
> URL: https://issues.apache.org/jira/browse/CASSANDRA-5201
> Project: Cassandra
>  Issue Type: Bug
>  Components: Hadoop
>Affects Versions: 1.2.0
>Reporter: Brian Jeltema
>Assignee: Dave Brosius
> Attachments: 5201_a.txt
>
>
> Using Hadoop 0.22.0 with Cassandra results in the stack trace below.
> It appears that version 0.21+ changed org.apache.hadoop.mapreduce.JobContext
> from a class to an interface.
> Exception in thread "main" java.lang.IncompatibleClassChangeError: Found 
> interface org.apache.hadoop.mapreduce.JobContext, but class was expected
>   at 
> org.apache.cassandra.hadoop.ColumnFamilyInputFormat.getSplits(ColumnFamilyInputFormat.java:103)
>   at 
> org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:445)
>   at 
> org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:462)
>   at 
> org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:357)
>   at org.apache.hadoop.mapreduce.Job$2.run(Job.java:1045)
>   at org.apache.hadoop.mapreduce.Job$2.run(Job.java:1042)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1153)
>   at org.apache.hadoop.mapreduce.Job.submit(Job.java:1042)
>   at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1062)
>   at MyHadoopApp.run(MyHadoopApp.java:163)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:69)
>   at MyHadoopApp.main(MyHadoopApp.java:82)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:601)
>   at org.apache.hadoop.util.RunJar.main(RunJar.java:192)
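
A common workaround for this kind of class-to-interface change is to avoid 
baking the method linkage into the compiled bytecode and resolve it 
reflectively instead. A minimal sketch of that approach (the helper class and 
method names are hypothetical; this is not the attached patch):

{code}
import java.lang.reflect.Method;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.JobContext;

public final class HadoopCompat
{
    // Resolving getConfiguration() reflectively avoids hard-coding an
    // invokevirtual-vs-invokeinterface call in the bytecode, which is what
    // raises IncompatibleClassChangeError once JobContext became an
    // interface in Hadoop 0.21+.
    public static Configuration getConfiguration(JobContext context)
    {
        try
        {
            Method m = JobContext.class.getMethod("getConfiguration");
            return (Configuration) m.invoke(context);
        }
        catch (Exception e)
        {
            throw new RuntimeException(e);
        }
    }
}
{code}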



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (CASSANDRA-6180) NPE in CqlRecordWriter: Related to AbstractCassandraStorage handling null values

2013-10-11 Thread Henning Kropp (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Henning Kropp updated CASSANDRA-6180:
-

Description: 
I encountered an issue with {{CqlStorage}} and its handling of null values. 
The {{CqlRecordWriter}} throws an NPE when a value is null. I found a related 
ticket, CASSANDRA-5885, and applied the fix stated there to 
{{AbstractCassandraStorage}}.
Instead of converting {{null}} values to {{ByteBuffer.wrap(new byte[0])}}, 
{{AbstractCassandraStorage}} returns {{(ByteBuffer)null}}.

This issue can be reproduced with the attached files: {{test_null.cql}}, 
{{test_null_data}}, {{null_test.pig}}

A fix can be found in the attached patch.

{code}
java.io.IOException: java.lang.NullPointerException
at 
org.apache.cassandra.hadoop.cql3.CqlRecordWriter$RangeClient.run(CqlRecordWriter.java:248)
Caused by: java.lang.NullPointerException
at 
org.apache.thrift.protocol.TBinaryProtocol.writeBinary(TBinaryProtocol.java:194)
at 
org.apache.cassandra.thrift.Cassandra$execute_prepared_cql3_query_args.write(Cassandra.java:41253)
at org.apache.thrift.TServiceClient.sendBase(TServiceClient.java:63)
at 
org.apache.cassandra.thrift.Cassandra$Client.send_execute_prepared_cql3_query(Cassandra.java:1683)
at 
org.apache.cassandra.thrift.Cassandra$Client.execute_prepared_cql3_query(Cassandra.java:1673)
at 
org.apache.cassandra.hadoop.cql3.CqlRecordWriter$RangeClient.run(CqlRecordWriter.java:232)
{code}
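
For illustration, a minimal sketch of the kind of guard such a fix applies 
(the helper name is hypothetical; the actual change is in the attached patch):

{code}
import java.nio.ByteBuffer;

// Mirror the CASSANDRA-5885 approach: substitute an empty buffer for null
// so that TBinaryProtocol.writeBinary() never receives a null value.
static ByteBuffer nullToEmpty(ByteBuffer value)
{
    return value == null ? ByteBuffer.wrap(new byte[0]) : value;
}
{code}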

  was:
I encountered an issue with {{CqlStorage}} and its handling of null values. 
The {{CqlRecordWriter}} throws an NPE when a value is null. I found a related 
ticket, CASSANDRA-5885, and applied the fix stated there to 
{{AbstractCassandraStorage}}.
Instead of converting {{null}} values to {{ByteBuffer.wrap(new byte[0])}}, 
{{AbstractCassandraStorage}} returns {{(ByteBuffer)null}}.

This issue can be reproduced with the attached files: {{test_null.cql}}, 
{{test_null_data}}, {{null_test.pig}}

A fix can be found in the attached patch.

{code}
java.io.IOException: java.lang.NullPointerException
at 
org.apache.cassandra.hadoop.cql3.CqlRecordWriter$RangeClient.run(CqlRecordWriter.java:248)
Caused by: java.lang.NullPointerException
at 
org.apache.thrift.protocol.TBinaryProtocol.writeBinary(TBinaryProtocol.java:194)
at 
org.apache.cassandra.thrift.Cassandra$execute_prepared_cql3_query_args.write(Cassandra.java:41253)
at org.apache.thrift.TServiceClient.sendBase(TServiceClient.java:63)
at 
org.apache.cassandra.thrift.Cassandra$Client.send_execute_prepared_cql3_query(Cassandra.java:1683)
at 
org.apache.cassandra.thrift.Cassandra$Client.execute_prepared_cql3_query(Cassandra.java:1673)
at 
org.apache.cassandra.hadoop.cql3.CqlRecordWriter$RangeClient.run(CqlRecordWriter.java:232)
{code}


> NPE in CqlRecordWriter: Related to AbstractCassandraStorage handling null 
> values
> 
>
> Key: CASSANDRA-6180
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6180
> Project: Cassandra
>  Issue Type: Bug
>  Components: Hadoop
> Environment: Pig, CqlStorage
>Reporter: Henning Kropp
> Attachments: null_test.pig, patch.txt, test_null.cql, test_null_data
>
>
> I encountered an issue with {{CqlStorage}} and its handling of null values. 
> The {{CqlRecordWriter}} throws an NPE when a value is null. I found a related 
> ticket, CASSANDRA-5885, and applied the fix stated there to 
> {{AbstractCassandraStorage}}.
> Instead of converting {{null}} values to {{ByteBuffer.wrap(new byte[0])}}, 
> {{AbstractCassandraStorage}} returns {{(ByteBuffer)null}}.
> This issue can be reproduced with the attached files: {{test_null.cql}}, 
> {{test_null_data}}, {{null_test.pig}}
> A fix can be found in the attached patch.
> {code}
> java.io.IOException: java.lang.NullPointerException
>   at 
> org.apache.cassandra.hadoop.cql3.CqlRecordWriter$RangeClient.run(CqlRecordWriter.java:248)
> Caused by: java.lang.NullPointerException
>   at 
> org.apache.thrift.protocol.TBinaryProtocol.writeBinary(TBinaryProtocol.java:194)
>   at 
> org.apache.cassandra.thrift.Cassandra$execute_prepared_cql3_query_args.write(Cassandra.java:41253)
>   at org.apache.thrift.TServiceClient.sendBase(TServiceClient.java:63)
>   at 
> org.apache.cassandra.thrift.Cassandra$Client.send_execute_prepared_cql3_query(Cassandra.java:1683)
>   at 
> org.apache.cassandra.thrift.Cassandra$Client.execute_prepared_cql3_query(Cassandra.java:1673)
>   at 
> org.apache.cassandra.hadoop.cql3.CqlRecordWriter$RangeClient.run(CqlRecordWriter.java:232)
> {code}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (CASSANDRA-6180) NPE in CqlRecordWriter: Related to AbstractCassandraStorage handling null values

2013-10-11 Thread Henning Kropp (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Henning Kropp updated CASSANDRA-6180:
-

Attachment: patch.txt
test_null_data
test_null.cql
null_test.pig

Files to reproduce this issue and a patch.

> NPE in CqlRecordWriter: Related to AbstractCassandraStorage handling null 
> values
> 
>
> Key: CASSANDRA-6180
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6180
> Project: Cassandra
>  Issue Type: Bug
>  Components: Hadoop
> Environment: Pig, CqlStorage
>Reporter: Henning Kropp
> Attachments: null_test.pig, patch.txt, test_null.cql, test_null_data
>
>
> I encountered an issue with {{CqlStorage}} and its handling of null values. 
> The {{CqlRecordWriter}} throws an NPE when a value is null. I found a related 
> ticket, CASSANDRA-5885, and applied the fix stated there to 
> {{AbstractCassandraStorage}}.
> Instead of converting {{null}} values to {{ByteBuffer.wrap(new byte[0])}}, 
> {{AbstractCassandraStorage}} returns {{(ByteBuffer)null}}.
> This issue can be reproduced with the attached files: {{test_null.cql}}, 
> {{test_null_data}}, {{null_test.pig}}
> A fix can be found in the attached patch.
> {code}
> java.io.IOException: java.lang.NullPointerException
>   at 
> org.apache.cassandra.hadoop.cql3.CqlRecordWriter$RangeClient.run(CqlRecordWriter.java:248)
> Caused by: java.lang.NullPointerException
>   at 
> org.apache.thrift.protocol.TBinaryProtocol.writeBinary(TBinaryProtocol.java:194)
>   at 
> org.apache.cassandra.thrift.Cassandra$execute_prepared_cql3_query_args.write(Cassandra.java:41253)
>   at org.apache.thrift.TServiceClient.sendBase(TServiceClient.java:63)
>   at 
> org.apache.cassandra.thrift.Cassandra$Client.send_execute_prepared_cql3_query(Cassandra.java:1683)
>   at 
> org.apache.cassandra.thrift.Cassandra$Client.execute_prepared_cql3_query(Cassandra.java:1673)
>   at 
> org.apache.cassandra.hadoop.cql3.CqlRecordWriter$RangeClient.run(CqlRecordWriter.java:232)
> {code}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (CASSANDRA-6180) NPE in CqlRecordWriter: Related to AbstractCassandraStorage handling null values

2013-10-11 Thread Henning Kropp (JIRA)
Henning Kropp created CASSANDRA-6180:


 Summary: NPE in CqlRecordWriter: Related to 
AbstractCassandraStorage handling null values
 Key: CASSANDRA-6180
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6180
 Project: Cassandra
  Issue Type: Bug
  Components: Hadoop
 Environment: Pig, CqlStorage
Reporter: Henning Kropp


I encountered an issue with {{CqlStorage}} and its handling of null values. 
The {{CqlRecordWriter}} throws an NPE when a value is null. I found a related 
ticket, CASSANDRA-5885, and applied the fix stated there to 
{{AbstractCassandraStorage}}.
Instead of converting {{null}} values to {{ByteBuffer.wrap(new byte[0])}}, 
{{AbstractCassandraStorage}} returns {{(ByteBuffer)null}}.

This issue can be reproduced with the attached files: {{test_null.cql}}, 
{{test_null_data}}, {{null_test.pig}}

A fix can be found in the attached patch.

{code}
java.io.IOException: java.lang.NullPointerException
at 
org.apache.cassandra.hadoop.cql3.CqlRecordWriter$RangeClient.run(CqlRecordWriter.java:248)
Caused by: java.lang.NullPointerException
at 
org.apache.thrift.protocol.TBinaryProtocol.writeBinary(TBinaryProtocol.java:194)
at 
org.apache.cassandra.thrift.Cassandra$execute_prepared_cql3_query_args.write(Cassandra.java:41253)
at org.apache.thrift.TServiceClient.sendBase(TServiceClient.java:63)
at 
org.apache.cassandra.thrift.Cassandra$Client.send_execute_prepared_cql3_query(Cassandra.java:1683)
at 
org.apache.cassandra.thrift.Cassandra$Client.execute_prepared_cql3_query(Cassandra.java:1673)
at 
org.apache.cassandra.hadoop.cql3.CqlRecordWriter$RangeClient.run(CqlRecordWriter.java:232)
{code}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (CASSANDRA-5894) CQL-aware SSTableWriter

2013-10-11 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-5894:


Attachment: 5894-v2.txt

I had meant to test this autonomously just after having attached the initial 
patch, but got distracted.

Anyway, attaching a v2 that fixes it to work outside of a unit test. I'll note 
that one of the problems there (not the one that was triggering the first 
exception, but a problem nonetheless) is that post-CASSANDRA-5515, 
SSTableWriter.closeAndOpenReader() requires reading the system tables, which 
breaks AbstractSSTableSimpleWriter (even the existing implementations, that 
is).  The attached v2 fixes this (by adding an SSTableWriter.close() that 
doesn't try to open the reader and using that in AbstractSSTableSimpleWriter), 
but if we're not confident about committing this patch to 2.0.2 then we'd 
probably still need to extract that part of the patch.
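
For context, the sort of usage this ticket is aiming at might look roughly 
like the following (a sketch only; the final API shape may differ):

{code}
import org.apache.cassandra.io.sstable.CQLSSTableWriter;

CQLSSTableWriter writer = CQLSSTableWriter.builder()
        .inDirectory("/tmp/ks/tbl")
        .forTable("CREATE TABLE ks.tbl (k int PRIMARY KEY, v text)")
        .using("INSERT INTO ks.tbl (k, v) VALUES (?, ?)")
        .build();
// Typed, CQL-level rows instead of raw comparators and data cells:
writer.addRow(0, "one");
writer.addRow(1, "two");
writer.close();
{code}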


> CQL-aware SSTableWriter
> ---
>
> Key: CASSANDRA-5894
> URL: https://issues.apache.org/jira/browse/CASSANDRA-5894
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Tools
>Reporter: Jonathan Ellis
>Assignee: Sylvain Lebresne
>Priority: Minor
> Fix For: 2.0.2
>
> Attachments: 5894.txt, 5894-v2.txt
>
>
> SSTableSimple[Un]SortedWriter requires defining raw comparators and inserting 
> raw data cells.  We should create a CQL-aware alternative.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6178) Consider allowing timestamp at the protocol level ... and deprecating server side timestamps

2013-10-11 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13792421#comment-13792421
 ] 

Sylvain Lebresne commented on CASSANDRA-6178:
-

bq. sounds like a lot like "I want read-my-writes consistency" which is also 
only sane if restricted to a single session (connection)

I'm not totally sure I understand what you mean here, but I'm sure I disagree 
with that statement if session == server connection. Because if you restrict 
it to one server connection, that would mean we don't guarantee 
"read-my-writes consistency" in the face of a server failure, while we do, and 
that's the important part.

Besides, neither Hector nor Astyanax (to cite only java drivers) maps a client 
thread/session to a unique server connection in general, so if "read-my-writes 
consistency" were only sane for one server connection, none of those clients 
would ever guarantee it, and they do.

So back to the issue at hand: I agree that "I want my operations to be 
sequential wrt the order the client issued them" is only sane if restricted to 
one client thread/session, but I'm saying that with server-side timestamps we 
do not guarantee that today, since no serious client driver I know of maps a 
client thread to a unique server connection at all times (for very good 
reasons).

To be very concrete, we've already had 3 reports on the java driver (and some 
reports on the python one as well) of people running as simple a test as:
{noformat}
session.execute(new SimpleStatement("INSERT INTO test (k, v) VALUES (0, 
1)").setConsistencyLevel(ConsistencyLevel.ALL));
session.execute(new SimpleStatement("INSERT INTO test (k, v) VALUES (0, 
2)").setConsistencyLevel(ConsistencyLevel.ALL));
{noformat}
and being surprised that at the end the value was sometimes 2, but sometimes 
1. While this behavior can be explained by the fact that timestamps are only 
assigned server side and that both queries might not reach the same 
coordinator, I have a very hard time considering this an OK "default" 
behavior, and I'm pretty sure any new user would consider it a break of the 
consistency guarantees. And while I'd agree that inserting a value and 
overriding it right away is not too useful in real life, it's still something 
easy to run into when you're testing C* to try to understand the consistency 
guarantees it provides.
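
For comparison, the same test behaves deterministically today if the client 
supplies explicit CQL timestamps, which is essentially what driver-generated, 
protocol-level timestamps would automate:

{noformat}
// With explicit, monotonically increasing client-side timestamps
// (in microseconds), the second insert wins regardless of which
// coordinator each statement reaches:
session.execute(new SimpleStatement(
    "INSERT INTO test (k, v) VALUES (0, 1) USING TIMESTAMP 1381500000000000"));
session.execute(new SimpleStatement(
    "INSERT INTO test (k, v) VALUES (0, 2) USING TIMESTAMP 1381500000000001"));
{noformat}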


> Consider allowing timestamp at the protocol level ... and deprecating server 
> side timestamps
> 
>
> Key: CASSANDRA-6178
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6178
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Sylvain Lebresne
>Assignee: Sylvain Lebresne
>
> Generating timestamps server side by default for CQL has been done for 
> convenience, so that end-users don't have to provide one with every query.  
> However, doing it server side has the downside that updates made sequentially 
> by one single client (thread) are not guaranteed to have sequentially 
> increasing timestamps. Unless a client thread is always pinned to one 
> specific server connection that is, but no good client driver out there 
> (including the thrift drivers) does that, because it's contradictory to 
> abstracting fault tolerance from the driver user (and goes against most sane 
> load balancing strategies).
> Very concretely, this means that if you write a very trivial test program 
> that sequentially inserts a value and then erases it (or overwrites it), 
> then, if you let CQL pick timestamps server side, the deletion might not 
> erase the just-inserted value (because the delete might reach a different 
> coordinator than the insert and thus get a lower timestamp). From the user's 
> point of view, this is a very confusing behavior, and understandably so: if 
> timestamps are optional, you'd hope that they at least respect the 
> sequentiality of operations from a single client thread.
> Of course we do support client-side assigned timestamps, so it's not like the 
> test above is not fixable. And you could argue that it's not a bug per se.  
> Still, it's a very confusing "default" behavior for something very simple, 
> which suggests it's not the best default.
> You could also argue that inserting a value and deleting/overwriting it right 
> away (in the same thread) is not something real programs often do. And 
> indeed, it's likely that in practice server-side timestamps work fine for 
> most real applications. Still, it's too easy to get counter-intuitive 
> behavior with server-side timestamps, and I think we should consider moving 
> away from them.
> So what I'd suggest is that we push back the job of providing timestamps 
> client side. But to make it easy for the driver to generate them (rather than 
> the end user), we should allow providing said timestamp at the protocol level.