[jira] [Comment Edited] (CASSANDRA-10276) With DTCS, do STCS in windows if more than max_threshold sstables

2015-10-27 Thread Jeff Jirsa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14977863#comment-14977863
 ] 

Jeff Jirsa edited comment on CASSANDRA-10276 at 10/28/15 6:51 AM:
--

I'm definitely not disagreeing with you, but as always, the devil's in the 
details. If there were 3 40MB files, 3 120MB files, 3 500MB files, 3 2GB files, 
3 8GB files, and 3 24GB files all in one window, it might be nice to do one big 
rollup compaction. That's a pretty extreme example, but let's look at the edge 
cases, because someone's bound to hit it: one single 40MB flush away from having 
a single 100GB file in normal STCS operation, but in an old window that next 
40MB file may never show up, so we'll end up keeping 17 extra sstables around 
for no reason.  
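
For illustration, a minimal self-contained sketch of the size-tiered bucketing idea being 
discussed here. This is not Cassandra's SizeTieredCompactionStrategy code; it just groups 
plain sstable sizes with STCS-like defaults (bucket bounds of 0.5x/1.5x of the bucket 
average, min_threshold 4) to show why the 18 sstables in the example above produce no 
STCS candidate and would sit in an old window indefinitely:

{code}
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Standalone illustration only; not Cassandra's SizeTieredCompactionStrategy.
public class WindowBucketing
{
    // Group sizes into buckets where each member stays within [bucketLow, bucketHigh]
    // of the running bucket average, the same basic rule STCS uses.
    static List<List<Long>> buckets(List<Long> sizes, double bucketLow, double bucketHigh)
    {
        List<Long> sorted = new ArrayList<>(sizes);
        Collections.sort(sorted);

        List<List<Long>> result = new ArrayList<>();
        List<Long> current = new ArrayList<>();
        double avg = 0;
        for (long size : sorted)
        {
            if (current.isEmpty() || (size >= bucketLow * avg && size <= bucketHigh * avg))
            {
                current.add(size);
                avg = current.stream().mapToLong(Long::longValue).average().getAsDouble();
            }
            else
            {
                result.add(current);
                current = new ArrayList<>(Collections.singletonList(size));
                avg = size;
            }
        }
        if (!current.isEmpty())
            result.add(current);
        return result;
    }

    public static void main(String[] args)
    {
        long MB = 1L << 20, GB = 1L << 30;
        // 3 files each of 40MB, 120MB, 500MB, 2GB, 8GB and 24GB: the example from the comment.
        List<Long> window = new ArrayList<>();
        for (long s : new long[]{ 40 * MB, 120 * MB, 500 * MB, 2 * GB, 8 * GB, 24 * GB })
            for (int i = 0; i < 3; i++)
                window.add(s);

        int minThreshold = 4;
        for (List<Long> bucket : buckets(window, 0.5, 1.5))
            System.out.println(bucket.size() + " sstables of ~" + (bucket.get(0) / MB) + "MB"
                               + (bucket.size() >= minThreshold ? " -> STCS candidate" : " -> left alone"));
    }
}
{code}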


was (Author: jjirsa):
I'm definitely not disagreeing with you, but like always, the devil's in the 
details. If there were 3 40MB files, 3 120M files, 3 500M files, 3 2G files, 3 
8G files, and 3 24G files all in one window, it might be nice to do one big 
rollup compaction (and that's a pretty extreme example, but let's look at the 
edge cases, because someone's bound to hit it - one single 40M flush away from 
having a single 100G file in normal STCS operation).  

> With DTCS, do STCS in windows if more than max_threshold sstables
> -
>
> Key: CASSANDRA-10276
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10276
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Core
>Reporter: Marcus Eriksson
>Assignee: Marcus Eriksson
> Fix For: 3.x, 2.1.x, 2.2.x
>
>
> To avoid constant recompaction of files in big ( > max threshold) DTCS 
> windows, we should do STCS of those files.
> Patch here: https://github.com/krummas/cassandra/commits/marcuse/dtcs_stcs



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10276) With DTCS, do STCS in windows if more than max_threshold sstables

2015-10-27 Thread Jeff Jirsa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14977863#comment-14977863
 ] 

Jeff Jirsa commented on CASSANDRA-10276:


I'm definitely not disagreeing with you, but like always, the devil's in the 
details. If there were 3 40MB files, 3 120M files, 3 500M files, 3 2G files, 3 
8G files, and 3 24G files all in one window, it might be nice to do one big 
rollup compaction (and that's a pretty extreme example, but let's look at the 
edge cases, because someone's bound to hit it - one single 40M flush away from 
having a single 100G file in normal STCS operation).  

> With DTCS, do STCS in windows if more than max_threshold sstables
> -
>
> Key: CASSANDRA-10276
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10276
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Core
>Reporter: Marcus Eriksson
>Assignee: Marcus Eriksson
> Fix For: 3.x, 2.1.x, 2.2.x
>
>
> To avoid constant recompaction of files in big ( > max threshold) DTCS 
> windows, we should do STCS of those files.
> Patch here: https://github.com/krummas/cassandra/commits/marcuse/dtcs_stcs



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-10607) Using reserved keyword for Type field crashes cqlsh

2015-10-27 Thread Mike Prince (JIRA)
Mike Prince created CASSANDRA-10607:
---

 Summary: Using reserved keyword for Type field crashes cqlsh
 Key: CASSANDRA-10607
 URL: https://issues.apache.org/jira/browse/CASSANDRA-10607
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
 Environment: Mac OS X El Capitan, Java 1.8.0_25-b17, fresh install of 
apache-cassandra-2.2.1-bin.tar.gz
Reporter: Mike Prince
Priority: Minor
 Fix For: 2.2.x


1) From a fresh cassandra node start, start cqlsh and execute:

CREATE KEYSPACE foospace
WITH REPLICATION = { 'class' : 'SimpleStrategy', 'replication_factor' : 1 };

USE foospace;

CREATE TYPE Foo(
"from" text
);

CREATE TABLE Bar(
id text PRIMARY KEY,
foo frozen<foo>
);

2) select * from bar;

Traceback (most recent call last):
  File "bin/cqlsh.py", line 1166, in perform_simple_statement
rows = future.result(self.session.default_timeout)
  File 
"/Users/mike/mobido/servers/apache-cassandra-2.2.1/bin/../lib/cassandra-driver-internal-only-2.6.0c2.post.zip/cassandra-driver-2.6.0c2.post/cassandra/cluster.py",
 line 3296, in result
raise self._final_exception
ValueError: Type names and field names cannot be a keyword: 'from'

3) Exit cqlsh and try to run cqlsh again, this error occurs:

Connection error: ('Unable to connect to any servers', {'127.0.0.1': 
ValueError("Don't know how to parse type string 
u'org.apache.cassandra.db.marshal.UserType(foospace,666f6f,66726f6d:org.apache.cassandra.db.marshal.UTF8Type)':
 Type names and field names cannot be a keyword: 'from'",)})







--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10580) On dropped mutations, more details should be logged.

2015-10-27 Thread Anubhav Kale (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anubhav Kale updated CASSANDRA-10580:
-
Description: 
In our production cluster, we are seeing a large number of dropped mutations. 
At a minimum, we should print how long the thread took to get scheduled before 
dropping the mutation (we should also print the Message / Mutation, so it is 
easier to figure out which column family was affected). This will help find the 
right tuning parameter for write_timeout_in_ms. 

The change is small and is in StorageProxy.java and MessagingTask.java. I will 
submit a patch shortly.
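
As a rough illustration of the kind of logging proposed here, a self-contained sketch (not 
the actual StorageProxy / MessagingService patch; the class and field names below are made 
up for the example):

{code}
import java.util.Arrays;
import java.util.List;

// Illustrative only: log how long a dropped mutation waited and which tables it targeted.
public class DroppedMutationLogging
{
    static class Mutation
    {
        final String keyspace;
        final List<String> tables; // column families touched by this mutation

        Mutation(String keyspace, List<String> tables)
        {
            this.keyspace = keyspace;
            this.tables = tables;
        }
    }

    static class TimedMutationRunnable implements Runnable
    {
        private final long constructionTime = System.currentTimeMillis();
        private final long writeTimeoutMs;
        private final Mutation mutation;

        TimedMutationRunnable(Mutation mutation, long writeTimeoutMs)
        {
            this.mutation = mutation;
            this.writeTimeoutMs = writeTimeoutMs;
        }

        public void run()
        {
            long queuedMs = System.currentTimeMillis() - constructionTime;
            if (queuedMs > writeTimeoutMs)
            {
                // The extra detail requested in this ticket: the scheduling delay plus the
                // affected tables, so it is clear which column family is behind the drops.
                System.out.printf("Dropping mutation on %s %s: waited %d ms in queue (write_timeout_in_ms=%d)%n",
                                  mutation.keyspace, mutation.tables, queuedMs, writeTimeoutMs);
                return;
            }
            // ... otherwise apply the mutation as usual ...
        }
    }

    public static void main(String[] args) throws InterruptedException
    {
        Runnable task = new TimedMutationRunnable(new Mutation("ks", Arrays.asList("events")), 10);
        Thread.sleep(50); // simulate the task sitting in a saturated executor queue
        task.run();       // logs the drop because it waited longer than the 10 ms timeout used here
    }
}
{code}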



  was:
In our production cluster, we are seeing a large number of dropped mutations. 
At a minimum, we should print the time the thread took to get scheduled thereby 
dropping the mutation (We should also print the Message / Mutation so it helps 
in figuring out which column family was affected). This will help find the 
right tuning parameter for write_timeout_in_ms. 

The change will need to be done in StorageProxy.java and MessagingTask.java. It 
is straightforward, and I will submit a patch shortly.




> On dropped mutations, more details should be logged.
> 
>
> Key: CASSANDRA-10580
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10580
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: Production
>Reporter: Anubhav Kale
>Priority: Minor
> Fix For: 2.1.x
>
>
> In our production cluster, we are seeing a large number of dropped mutations. 
> At a minimum, we should print the time the thread took to get scheduled 
> thereby dropping the mutation (We should also print the Message / Mutation so 
> it helps in figuring out which column family was affected). This will help 
> find the right tuning parameter for write_timeout_in_ms. 
> The change is small and is in StorageProxy.java and MessagingTask.java. I 
> will submit a patch shortly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10580) On dropped mutations, more details should be dropped.

2015-10-27 Thread Anubhav Kale (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anubhav Kale updated CASSANDRA-10580:
-
Description: 
In our production cluster, we are seeing a large number of dropped mutations. 
At a minimum, we should print the time the thread took to get scheduled thereby 
dropping the mutation (We should also print the Message / Mutation so it helps 
in figuring out which column family was affected). This will help find the 
right tuning parameter for write_timeout_in_ms. 

The change will need to be done in StorageProxy.java and MessagingTask.java. It 
is straightforward, and I will submit a patch shortly.



  was:
In our production cluster, we are seeing a large number of dropped mutations. 
At a minimum, we should print the time the thread took to get scheduled thereby 
dropping the mutation. This will help find the right tuning parameter for 
write_timeout_in_ms. 

The change will need to be done in StorageProxy.java and MessagingTask.java. It 
is easy, and I will submit a patch shortly.




> On dropped mutations, more details should be dropped.
> -
>
> Key: CASSANDRA-10580
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10580
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: Production
>Reporter: Anubhav Kale
>Priority: Minor
> Fix For: 2.1.x
>
>
> In our production cluster, we are seeing a large number of dropped mutations. 
> At a minimum, we should print the time the thread took to get scheduled 
> thereby dropping the mutation (We should also print the Message / Mutation so 
> it helps in figuring out which column family was affected). This will help 
> find the right tuning parameter for write_timeout_in_ms. 
> The change will need to be done in StorageProxy.java and MessagingTask.java. 
> It is straightforward, and I will submit a patch shortly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10580) On dropped mutations, more details should be logged.

2015-10-27 Thread Anubhav Kale (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anubhav Kale updated CASSANDRA-10580:
-
Summary: On dropped mutations, more details should be logged.  (was: On 
dropped mutations, more details should be dropped.)

> On dropped mutations, more details should be logged.
> 
>
> Key: CASSANDRA-10580
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10580
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: Production
>Reporter: Anubhav Kale
>Priority: Minor
> Fix For: 2.1.x
>
>
> In our production cluster, we are seeing a large number of dropped mutations. 
> At a minimum, we should print the time the thread took to get scheduled 
> thereby dropping the mutation (We should also print the Message / Mutation so 
> it helps in figuring out which column family was affected). This will help 
> find the right tuning parameter for write_timeout_in_ms. 
> The change will need to be done in StorageProxy.java and MessagingTask.java. 
> It is straightforward, and I will submit a patch shortly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10580) On dropped mutations, more details should be dropped.

2015-10-27 Thread Anubhav Kale (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anubhav Kale updated CASSANDRA-10580:
-
Issue Type: Bug  (was: New Feature)

> On dropped mutations, more details should be dropped.
> -
>
> Key: CASSANDRA-10580
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10580
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: Production
>Reporter: Anubhav Kale
>Priority: Minor
> Fix For: 2.1.x
>
>
> In our production cluster, we are seeing a large number of dropped mutations. 
> At a minimum, we should print the time the thread took to get scheduled 
> thereby dropping the mutation. This will help find the right tuning parameter 
> for write_timeout_in_ms. 
> The change will need to be done in StorageProxy.java and MessagingTask.java. 
> It is easy, and I will submit a patch shortly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10580) When mutations are dropped, the column family should be printed / have a counter per column family

2015-10-27 Thread Anubhav Kale (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anubhav Kale updated CASSANDRA-10580:
-
Description: 
In our production cluster, we are seeing a large number of dropped mutations. 
At a minimum, we should print the time the thread took to get scheduled thereby 
dropping the mutation. This will help find the right tuning parameter for 
write_timeout_in_ms. 

The change will need to be done in StorageProxy.java and MessagingTask.java. It 
is easy, and I will submit a patch shortly.



  was:
In our production cluster, we are seeing a large number of dropped mutations. 
It would be really helpful to see which column families are really affected by 
this (either through logs or through a dedicated counter for every column 
family).

I have made a hack in StorageProxy (below) to help us with this. I am happy to 
turn this into a better solution (print the affected CF as a logger.debug and 
then grep manually) if experts agree this additional detail would be helpful in 
general. Any other suggestions are welcome.

private static abstract class LocalMutationRunnable implements Runnable
{
    private final long constructionTime = System.currentTimeMillis();

    private IMutation mutation;

    public final void run()
    {
        if (System.currentTimeMillis() > constructionTime + 2000L)
        {
            long timeTaken = System.currentTimeMillis() - constructionTime;
            logger.warn("Anubhav LocalMutationRunnable thread ran after " + timeTaken);

            try
            {
                for (ColumnFamily family : this.mutation.getColumnFamilies())
                {
                    if (family.toString().toLowerCase().contains("udsuserdailysnapshot"))
                    {
                        MessagingService.instance().incrementDroppedMessages(MessagingService.Verb.USERDAILY);
                    }
                    else if (family.toString().toLowerCase().contains("udsuserhourlysnapshot"))
                    {
                        MessagingService.instance().incrementDroppedMessages(MessagingService.Verb.USERHOURLY);
                    }
                    else if (family.toString().toLowerCase().contains("udstenantdailysnapshot"))
                    {
                        MessagingService.instance().incrementDroppedMessages(MessagingService.Verb.TENANTDAILY);
                    }
                    else if (family.toString().toLowerCase().contains("udstenanthourlysnapshot"))
                    {
                        MessagingService.instance().incrementDroppedMessages(MessagingService.Verb.TENANTHOURLY);
                    }
                    else if (family.toString().toLowerCase().contains("userdatasetraw"))
                    {
                        MessagingService.instance().incrementDroppedMessages(MessagingService.Verb.USERDSRAW);
                    }
                    else if (family.toString().toLowerCase().contains("tenants"))
                    {
                        MessagingService.instance().incrementDroppedMessages(MessagingService.Verb.TENANTS);
                    }
                    else if (family.toString().toLowerCase().contains("users"))
                    {
                        MessagingService.instance().incrementDroppedMessages(MessagingService.Verb.USERS);
                    }
                    else if (family.toString().toLowerCase().contains("tenantactivity"))
                    {
                        MessagingService.instance().incrementDroppedMessages(MessagingService.Verb.TENANTACTIVITY);
                    }
                    else if (family.getKeySpaceName().toLowerCase().contains("system"))
                    {
                        MessagingService.instance().incrementDroppedMessages(MessagingService.Verb.SYSTEMKS);
                    }
                    else
                    {
                        logger.warn("Anubhav LocalMutationRunnable updating mutations for " + family.toString().toLowerCase());
                        MessagingService.instance().incrementDroppedMessages(MessagingService.Verb.OTHERTBL);
                    }
                }
            }
            catch (Exception e)
            {
                logger.error("Anubhav LocalMutationRunnab

[jira] [Updated] (CASSANDRA-10580) On dropped mutations, more details should be dropped.

2015-10-27 Thread Anubhav Kale (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anubhav Kale updated CASSANDRA-10580:
-
Summary: On dropped mutations, more details should be dropped.  (was: When 
mutations are dropped, the column family should be printed / have a counter per 
column family)

> On dropped mutations, more details should be dropped.
> -
>
> Key: CASSANDRA-10580
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10580
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Core
> Environment: Production
>Reporter: Anubhav Kale
>Priority: Minor
> Fix For: 2.1.x
>
>
> In our production cluster, we are seeing a large number of dropped mutations. 
> At a minimum, we should print the time the thread took to get scheduled 
> thereby dropping the mutation. This will help find the right tuning parameter 
> for write_timeout_in_ms. 
> The change will need to be done in StorageProxy.java and MessagingTask.java. 
> It is easy, and I will submit a patch shortly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10513) Update cqlsh for new driver execution API

2015-10-27 Thread Adam Holmberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14977666#comment-14977666
 ] 

Adam Holmberg commented on CASSANDRA-10513:
---

bq. I wonder if we should bother upgrading the cqlsh internal driver on 2.1, 
since it will be EOL soon.
I'm in favor of avoiding 2.1 updates, and discussed the same with some other C* 
stakeholders earlier today.

New failures: I just scanned a couple, but it looks like we may need some dtest 
updates.

{{'TableMetadata' object has no attribute 'keyspace'}}: this attribute went 
away. Test needs to be updated (now use keyspace_name).

{code}
+  'CREATE INDEX test_val_idx ON test.test (val);',
   'CREATE INDEX test_col_idx ON test.test (col);',
-  'CREATE INDEX test_val_idx ON test.test (val);',
{code}
Test needs to be more robust -- index order is not relevant here.

> Update cqlsh for new driver execution API
> -
>
> Key: CASSANDRA-10513
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10513
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Tools
>Reporter: Adam Holmberg
>Assignee: Paulo Motta
>Priority: Minor
>  Labels: cqlsh
> Fix For: 2.2.x, 3.0.x
>
> Attachments: 10513-2.1.txt, 10513-2.2.txt, 10513.txt
>
>
> The 3.0 Python driver will have a few tweaks to the execution API. We'll need 
> to update cqlsh in a couple of minor ways.
> [Results are always returned as an iterable 
> ResultSet|https://datastax-oss.atlassian.net/browse/PYTHON-368]
> [Trace data is always attached to the 
> ResultSet|https://datastax-oss.atlassian.net/browse/PYTHON-318] (instead of 
> being attached to a statement)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10276) With DTCS, do STCS in windows if more than max_threshold sstables

2015-10-27 Thread Jeremiah Jordan (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14977533#comment-14977533
 ] 

Jeremiah Jordan commented on CASSANDRA-10276:
-

I think saving on write amplification is a good idea. You don't want to be 
compacting the two 5 GB files with the two 5 MB files just because there are 4 
files in the window.
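
The arithmetic behind that, as a throwaway snippet (not Cassandra code):

{code}
// Merging the two 5 MB files together with the two 5 GB files rewrites ~10 GB to fold in
// only ~10 MB of new data; an STCS-style pick of the two similarly sized small files
// rewrites ~10 MB instead.
public class WriteAmplification
{
    public static void main(String[] args)
    {
        long MB = 1L << 20, GB = 1L << 30;
        long[] window = { 5 * GB, 5 * GB, 5 * MB, 5 * MB }; // the 4 files in the example

        long compactAll = 0;
        for (long size : window)
            compactAll += size;
        long compactSmallPair = 5 * MB + 5 * MB;

        System.out.println("Compact all 4 files:        rewrite " + compactAll / MB + " MB");
        System.out.println("Compact only the 5 MB pair: rewrite " + compactSmallPair / MB + " MB");
    }
}
{code}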

> With DTCS, do STCS in windows if more than max_threshold sstables
> -
>
> Key: CASSANDRA-10276
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10276
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Core
>Reporter: Marcus Eriksson
>Assignee: Marcus Eriksson
> Fix For: 3.x, 2.1.x, 2.2.x
>
>
> To avoid constant recompaction of files in big ( > max threshold) DTCS 
> windows, we should do STCS of those files.
> Patch here: https://github.com/krummas/cassandra/commits/marcuse/dtcs_stcs



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10276) With DTCS, do STCS in windows if more than max_threshold sstables

2015-10-27 Thread Jeff Jirsa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14977509#comment-14977509
 ] 

Jeff Jirsa commented on CASSANDRA-10276:


I haven't convinced myself of the 'best' way to handle the case where you have 
more than min_threshold files but no STCS compaction candidates. In TWCS, I 
compact them in older windows, justifying that decision with the rationale that 
it should be fairly rare, though I think it's also reasonable to say that a few 
sstables per window is fine and it's better to save the write amplification. 
Given that I can make arguments on either side, I'm fine with your choice. 

[~krummas] - latest patch looks reasonable to me. +1



> With DTCS, do STCS in windows if more than max_threshold sstables
> -
>
> Key: CASSANDRA-10276
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10276
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Core
>Reporter: Marcus Eriksson
>Assignee: Marcus Eriksson
> Fix For: 3.x, 2.1.x, 2.2.x
>
>
> To avoid constant recompaction of files in big ( > max threshold) DTCS 
> windows, we should do STCS of those files.
> Patch here: https://github.com/krummas/cassandra/commits/marcuse/dtcs_stcs



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10513) Update cqlsh for new driver execution API

2015-10-27 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14977456#comment-14977456
 ] 

Paulo Motta commented on CASSANDRA-10513:
-

It seems there are some new test failures with {{cqlsh_tests.cqlsh_copy_tests}} 
on 
[3.0|http://cassci.datastax.com/view/Dev/view/paulomotta/job/pauloricardomg-3.0-10513-dtest/lastCompletedBuild/testReport/cqlsh_tests.cqlsh_copy_tests/]
 and 
[2.2|http://cassci.datastax.com/view/Dev/view/paulomotta/job/pauloricardomg-2.2-10513-dtest/lastCompletedBuild/testReport/cqlsh_tests.cqlsh_copy_tests/]
 (they do not appear in the original 
[2.2|http://cassci.datastax.com/view/cassandra-2.2/job/cassandra-2.2_dtest/lastCompletedBuild/testReport/cqlsh_tests.cqlsh_copy_tests/]
 and 
[3.0|http://cassci.datastax.com/view/cassandra-3.0/job/cassandra-3.0_dtest/lastCompletedBuild/testReport/cqlsh_tests.cqlsh_copy_tests/]
 branches).

On 
[2.1|http://cassci.datastax.com/view/Dev/view/paulomotta/job/pauloricardomg-2.1-10513-dtest/lastCompletedBuild/testReport/cqlsh_tests.cqlsh_tests/TestCqlsh/test_describe/]
 and 
[2.2|http://cassci.datastax.com/view/Dev/view/paulomotta/job/pauloricardomg-2.2-10513-dtest/lastCompletedBuild/testReport/cqlsh_tests.cqlsh_tests/TestCqlsh/test_describe/]
 there seems to be a new failure on 
{{cqlsh_tests.cqlsh_tests.TestCqlsh.test_describe}} probably due to the driver 
upgrade from 2.7.2 to 3.0.0.

On 2.1 there is also a new failure on 
[cqlsh_tests.cqlsh_tests.TestCqlsh.test_refresh_schema_on_timeout_error|http://cassci.datastax.com/view/Dev/view/paulomotta/job/pauloricardomg-2.1-10513-dtest/lastCompletedBuild/testReport/cqlsh_tests.cqlsh_tests/TestCqlsh/test_refresh_schema_on_timeout_error/].

Would you mind having a look at these? I wonder if we should bother upgrading 
the cqlsh internal driver on 2.1, since it will be EOL soon. I think we should 
probably stick to 2.2 and 3.0; what do you think, [~aholmber]?

> Update cqlsh for new driver execution API
> -
>
> Key: CASSANDRA-10513
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10513
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Tools
>Reporter: Adam Holmberg
>Assignee: Paulo Motta
>Priority: Minor
>  Labels: cqlsh
> Fix For: 2.2.x, 3.0.x
>
> Attachments: 10513-2.1.txt, 10513-2.2.txt, 10513.txt
>
>
> The 3.0 Python driver will have a few tweaks to the execution API. We'll need 
> to update cqlsh in a couple of minor ways.
> [Results are always returned as an iterable 
> ResultSet|https://datastax-oss.atlassian.net/browse/PYTHON-368]
> [Trace data is always attached to the 
> ResultSet|https://datastax-oss.atlassian.net/browse/PYTHON-318] (instead of 
> being attached to a statement)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10313) cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest.test_all_datatypes_read fails locally

2015-10-27 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14977410#comment-14977410
 ] 

Paulo Motta commented on CASSANDRA-10313:
-

I had this same issue and fixed it by removing the following directory: 
{{/usr/local/lib/python2.7/dist-packages/cqlshlib}}. I don't know how it ended 
up there; I probably installed it manually a long time ago. So it was probably 
picking up an older version of cqlshlib from before CASSANDRA-1.

> cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest.test_all_datatypes_read fails 
> locally
> 
>
> Key: CASSANDRA-10313
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10313
> Project: Cassandra
>  Issue Type: Test
> Environment: Linux
>Reporter: Stefania
>Assignee: Stefania
>  Labels: cqlsh
>
> I get this failure on my box with TZ at GMT+08:
> {code}
> ==
> FAIL: test_all_datatypes_read (cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest)
> --
> Traceback (most recent call last):
>   File 
> "/home/stefi/git/cstar/cassandra-dtest/cqlsh_tests/cqlsh_copy_tests.py", line 
> 674, in test_all_datatypes_read
> self.assertCsvResultEqual(self.tempfile.name, results)
>   File 
> "/home/stefi/git/cstar/cassandra-dtest/cqlsh_tests/cqlsh_copy_tests.py", line 
> 137, in assertCsvResultEqual
> raise e
> AssertionError: Element counts were not equal:
> First has 1, Second has 0:  ['ascii', '1099511627776', '0xbeef', 'True', 
> '3.140124344978758017532527446746826171875', '2.444', '1.1', 
> '127.0.0.1', '25', 
> '\xe3\x83\xbd(\xc2\xb4\xe3\x83\xbc\xef\xbd\x80)\xe3\x83\x8e', '2005-07-14 
> 12:30:00', '30757c2c-584a-11e5-b2d0-9cebe804ecbe', 
> '2471e7de-41e4-478f-a402-e99ed779be76', 'asdf', '36893488147419103232']
> First has 0, Second has 1:  ['ascii', '1099511627776', '0xbeef', 'True', 
> '3.140124344978758017532527446746826171875', '2.444', '1.1', 
> '127.0.0.1', '25', 
> '\xe3\x83\xbd(\xc2\xb4\xe3\x83\xbc\xef\xbd\x80)\xe3\x83\x8e', '2005-07-14 
> 04:30:00', '30757c2c-584a-11e5-b2d0-9cebe804ecbe', 
> '2471e7de-41e4-478f-a402-e99ed779be76', 'asdf', '36893488147419103232']
>  >> begin captured logging << 
> dtest: DEBUG: cluster ccm directory: /tmp/dtest-8VAvBl
> dtest: DEBUG: Importing from csv file: /tmp/tmpGwW8yB
> dtest: WARNING: Mismatch at index: 10
> dtest: WARNING: Value in csv: 2005-07-14 12:30:00
> dtest: WARNING: Value in result: 2005-07-14 04:30:00
> - >> end captured logging << -
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10592) IllegalArgumentException in DataOutputBuffer.reallocate

2015-10-27 Thread Sebastian Estevez (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10592?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sebastian Estevez updated CASSANDRA-10592:
--
Description: 
CORRECTION:
It turns out the exception occurs when running a read using a Thrift JDBC 
driver. Once you have loaded the data with the stress profile below, run 
SELECT * FROM "autogeneratedtest"."transaction_by_retailer" using this tool: 
http://www.aquafold.com/aquadatastudio_downloads.html

The following exception appeared in my logs while running a cassandra-stress 
workload on master. 

{code}
WARN  [SharedPool-Worker-1] 2015-10-22 12:58:20,792 
AbstractTracingAwareExecutorService.java:169 - Uncaught exception on thread 
Thread[SharedPool-Worker-1,5,main]: {}
java.lang.RuntimeException: java.lang.IllegalArgumentException
at 
org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2366)
 ~[main/:na]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
~[na:1.8.0_60]
at 
org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164)
 ~[main/:na]
at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
[main/:na]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_60]
Caused by: java.lang.IllegalArgumentException: null
at java.nio.ByteBuffer.allocate(ByteBuffer.java:334) ~[na:1.8.0_60]
at 
org.apache.cassandra.io.util.DataOutputBuffer.reallocate(DataOutputBuffer.java:63)
 ~[main/:na]
at 
org.apache.cassandra.io.util.DataOutputBuffer.doFlush(DataOutputBuffer.java:57) 
~[main/:na]
at 
org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.write(BufferedDataOutputStreamPlus.java:132)
 ~[main/:na]
at 
org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.write(BufferedDataOutputStreamPlus.java:151)
 ~[main/:na]
at 
org.apache.cassandra.utils.ByteBufferUtil.writeWithVIntLength(ByteBufferUtil.java:296)
 ~[main/:na]
at 
org.apache.cassandra.db.marshal.AbstractType.writeValue(AbstractType.java:374) 
~[main/:na]
at 
org.apache.cassandra.db.rows.BufferCell$Serializer.serialize(BufferCell.java:263)
 ~[main/:na]
at 
org.apache.cassandra.db.rows.UnfilteredSerializer.serialize(UnfilteredSerializer.java:183)
 ~[main/:na]
at 
org.apache.cassandra.db.rows.UnfilteredSerializer.serialize(UnfilteredSerializer.java:108)
 ~[main/:na]
at 
org.apache.cassandra.db.rows.UnfilteredSerializer.serialize(UnfilteredSerializer.java:96)
 ~[main/:na]
at 
org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:132)
 ~[main/:na]
at 
org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:87)
 ~[main/:na]
at 
org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:77)
 ~[main/:na]
at 
org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$Serializer.serialize(UnfilteredPartitionIterators.java:381)
 ~[main/:na]
at 
org.apache.cassandra.db.ReadResponse$LocalDataResponse.build(ReadResponse.java:136)
 ~[main/:na]
at 
org.apache.cassandra.db.ReadResponse$LocalDataResponse.(ReadResponse.java:128)
 ~[main/:na]
at 
org.apache.cassandra.db.ReadResponse$LocalDataResponse.(ReadResponse.java:123)
 ~[main/:na]
at 
org.apache.cassandra.db.ReadResponse.createDataResponse(ReadResponse.java:65) 
~[main/:na]
at 
org.apache.cassandra.db.ReadCommand.createResponse(ReadCommand.java:289) 
~[main/:na]
at 
org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1697)
 ~[main/:na]
at 
org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2362)
 ~[main/:na]
... 4 common frames omitted
{code}

I was running this command:

{code}
tools/bin/cassandra-stress user profile=~/Desktop/startup/stress/stress.yaml n=10 ops\(insert=1\) -rate threads=30
{code}

Here's the stress.yaml UPDATED!

{code}
### DML ### THIS IS UNDER CONSTRUCTION!!!

# Keyspace Name
keyspace: autogeneratedtest

# The CQL for creating a keyspace (optional if it already exists)
keyspace_definition: |
  CREATE KEYSPACE autogeneratedtest WITH replication = {'class': 
'SimpleStrategy', 'replication_factor': 1};
# Table name
table: test
# The CQL for creating a table you wish to stress (optional if it already 
exists)
table_definition:
  CREATE TABLE test (
  a int,
  b int,
  c int,
  d int,
  e int,
  f timestamp,
  g text,
  h bigint,
  i text,
  j text,
  k bigint,
  l text,
  m text,
  n float,
  o int,
  p float,
  q float,
  r text,
  s float,
  PRIMARY KEY ((a, c, d, b, e), m, f, g)
  );
### Column Distribution Specifications ###

columnspec:
  - name: a
size: uniform(4..4)
population: u

[jira] [Updated] (CASSANDRA-10592) IllegalArgumentException in DataOutputBuffer.reallocate

2015-10-27 Thread Sebastian Estevez (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10592?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sebastian Estevez updated CASSANDRA-10592:
--
Description: 
CORRECTION:
It turns out the exception occurs when running a read using a Thrift JDBC 
driver. Once you have loaded the data with the stress profile below, run 
SELECT * FROM "autogeneratedtest"."transaction_by_retailer" using this tool: 
http://www.aquafold.com/aquadatastudio_downloads.html

The exception:

{code}
WARN  [SharedPool-Worker-1] 2015-10-22 12:58:20,792 
AbstractTracingAwareExecutorService.java:169 - Uncaught exception on thread 
Thread[SharedPool-Worker-1,5,main]: {}
java.lang.RuntimeException: java.lang.IllegalArgumentException
at 
org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2366)
 ~[main/:na]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
~[na:1.8.0_60]
at 
org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164)
 ~[main/:na]
at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
[main/:na]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_60]
Caused by: java.lang.IllegalArgumentException: null
at java.nio.ByteBuffer.allocate(ByteBuffer.java:334) ~[na:1.8.0_60]
at 
org.apache.cassandra.io.util.DataOutputBuffer.reallocate(DataOutputBuffer.java:63)
 ~[main/:na]
at 
org.apache.cassandra.io.util.DataOutputBuffer.doFlush(DataOutputBuffer.java:57) 
~[main/:na]
at 
org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.write(BufferedDataOutputStreamPlus.java:132)
 ~[main/:na]
at 
org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.write(BufferedDataOutputStreamPlus.java:151)
 ~[main/:na]
at 
org.apache.cassandra.utils.ByteBufferUtil.writeWithVIntLength(ByteBufferUtil.java:296)
 ~[main/:na]
at 
org.apache.cassandra.db.marshal.AbstractType.writeValue(AbstractType.java:374) 
~[main/:na]
at 
org.apache.cassandra.db.rows.BufferCell$Serializer.serialize(BufferCell.java:263)
 ~[main/:na]
at 
org.apache.cassandra.db.rows.UnfilteredSerializer.serialize(UnfilteredSerializer.java:183)
 ~[main/:na]
at 
org.apache.cassandra.db.rows.UnfilteredSerializer.serialize(UnfilteredSerializer.java:108)
 ~[main/:na]
at 
org.apache.cassandra.db.rows.UnfilteredSerializer.serialize(UnfilteredSerializer.java:96)
 ~[main/:na]
at 
org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:132)
 ~[main/:na]
at 
org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:87)
 ~[main/:na]
at 
org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:77)
 ~[main/:na]
at 
org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$Serializer.serialize(UnfilteredPartitionIterators.java:381)
 ~[main/:na]
at 
org.apache.cassandra.db.ReadResponse$LocalDataResponse.build(ReadResponse.java:136)
 ~[main/:na]
at 
org.apache.cassandra.db.ReadResponse$LocalDataResponse.(ReadResponse.java:128)
 ~[main/:na]
at 
org.apache.cassandra.db.ReadResponse$LocalDataResponse.(ReadResponse.java:123)
 ~[main/:na]
at 
org.apache.cassandra.db.ReadResponse.createDataResponse(ReadResponse.java:65) 
~[main/:na]
at 
org.apache.cassandra.db.ReadCommand.createResponse(ReadCommand.java:289) 
~[main/:na]
at 
org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1697)
 ~[main/:na]
at 
org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2362)
 ~[main/:na]
... 4 common frames omitted
{code}

I was running this command:

{code}
tools/bin/cassandra-stress user profile=~/Desktop/startup/stress/stress.yaml n=10 ops\(insert=1\) -rate threads=30
{code}

Here's the stress.yaml UPDATED!

{code}
### DML ### THIS IS UNDER CONSTRUCTION!!!

# Keyspace Name
keyspace: autogeneratedtest

# The CQL for creating a keyspace (optional if it already exists)
keyspace_definition: |
  CREATE KEYSPACE autogeneratedtest WITH replication = {'class': 
'SimpleStrategy', 'replication_factor': 1};
# Table name
table: test
# The CQL for creating a table you wish to stress (optional if it already 
exists)
table_definition:
  CREATE TABLE test (
  a int,
  b int,
  c int,
  d int,
  e int,
  f timestamp,
  g text,
  h bigint,
  i text,
  j text,
  k bigint,
  l text,
  m text,
  n float,
  o int,
  p float,
  q float,
  r text,
  s float,
  PRIMARY KEY ((a, c, d, b, e), m, f, g)
  );
### Column Distribution Specifications ###

columnspec:
  - name: a
size: uniform(4..4)
population: uniform(1..500)

  - name: b
size: uniform(4..4)
population: uniform(2..3000)

[jira] [Updated] (CASSANDRA-10606) AbstractBTreePartition.rowCount() return the wrong number of rows for compact tables

2015-10-27 Thread Benjamin Lerer (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10606?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Lerer updated CASSANDRA-10606:
---
Attachment: 10606-3.0.txt

The patch takes into account the static row in the case where there is no other 
row.
CI:
* the unit test results are 
[here|http://cassci.datastax.com/view/Dev/view/blerer/job/blerer-10606-3.0-testall/]
* the dtest results are 
[here|http://cassci.datastax.com/view/Dev/view/blerer/job/blerer-10606-3.0-dtest/]
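
For reference, a standalone sketch of the counting behaviour described above (illustrative 
only, with made-up parameter names; this is not the attached AbstractBTreePartition patch):

{code}
// Illustrative only: the static row should count as a row when no regular rows exist.
public class RowCountSketch
{
    static int rowCount(int regularRows, boolean hasNonEmptyStaticRow)
    {
        // Per the comment above, the static row only contributes when there is no other row.
        return regularRows == 0 && hasNonEmptyStaticRow ? 1 : regularRows;
    }

    public static void main(String[] args)
    {
        System.out.println(rowCount(0, true)); // 1 -- previously this case came back as 0
        System.out.println(rowCount(3, true)); // 3 -- unchanged when regular rows exist
    }
}
{code}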


> AbstractBTreePartition.rowCount() return the wrong number of rows for compact 
> tables
> 
>
> Key: CASSANDRA-10606
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10606
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Benjamin Lerer
>Assignee: Benjamin Lerer
> Fix For: 3.0.0
>
> Attachments: 10606-3.0.txt
>
>
> For compact tables {{AbstractBTreePartition.rowCount()}} return the wrong 
> number of columns as it does not take into account static rows.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10605) MessagingService: COUNTER_MUTATION verb is associated with MUTATION stage

2015-10-27 Thread Anubhav Kale (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anubhav Kale updated CASSANDRA-10605:
-
Attachment: trunk-10605.patch

> MessagingService: COUNTER_MUTATION verb is associated with MUTATION stage
> -
>
> Key: CASSANDRA-10605
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10605
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Anubhav Kale
>Priority: Minor
> Fix For: 2.1.10
>
> Attachments: trunk-10605.patch
>
>
> In MessagingService.java, I see that the verb COUNTER_MUTATION is associated 
> with COUNTER_MUTATION stage first, and later with MUTATION stage. We should 
> remove appropriate entry.
> I marked this as Major because I think the stage assignment is wrong.
> What should this be assigned to ?
> public static final EnumMap verbStages = new 
> EnumMap(MessagingService.Verb.class)
> {{
> put(Verb.MUTATION, Stage.MUTATION);
> put(Verb.COUNTER_MUTATION, Stage.COUNTER_MUTATION);*
> put(Verb.READ_REPAIR, Stage.MUTATION);
> put(Verb.HINT, Stage.MUTATION);
> put(Verb.TRUNCATE, Stage.MUTATION);
> put(Verb.PAXOS_PREPARE, Stage.MUTATION);
> put(Verb.PAXOS_PROPOSE, Stage.MUTATION);
> put(Verb.PAXOS_COMMIT, Stage.MUTATION);
> put(Verb.BATCH_STORE, Stage.MUTATION);
> put(Verb.BATCH_REMOVE, Stage.MUTATION);
> put(Verb.READ, Stage.READ);
> put(Verb.RANGE_SLICE, Stage.READ);
> put(Verb.INDEX_SCAN, Stage.READ);
> put(Verb.PAGED_RANGE, Stage.READ);
> put(Verb.REQUEST_RESPONSE, Stage.REQUEST_RESPONSE);
> put(Verb.INTERNAL_RESPONSE, Stage.INTERNAL_RESPONSE);
> put(Verb.STREAM_REPLY, Stage.MISC); // actually handled by 
> FileStreamTask and streamExecutors
> put(Verb.STREAM_REQUEST, Stage.MISC);
> put(Verb.REPLICATION_FINISHED, Stage.MISC);
> put(Verb.SNAPSHOT, Stage.MISC);
> put(Verb.TREE_REQUEST, Stage.ANTI_ENTROPY);
> put(Verb.TREE_RESPONSE, Stage.ANTI_ENTROPY);
> put(Verb.STREAMING_REPAIR_REQUEST, Stage.ANTI_ENTROPY);
> put(Verb.STREAMING_REPAIR_RESPONSE, Stage.ANTI_ENTROPY);
> put(Verb.REPAIR_MESSAGE, Stage.ANTI_ENTROPY);
> put(Verb.GOSSIP_DIGEST_ACK, Stage.GOSSIP);
> put(Verb.GOSSIP_DIGEST_ACK2, Stage.GOSSIP);
> put(Verb.GOSSIP_DIGEST_SYN, Stage.GOSSIP);
> put(Verb.GOSSIP_SHUTDOWN, Stage.GOSSIP);
> put(Verb.DEFINITIONS_UPDATE, Stage.MIGRATION);
> put(Verb.SCHEMA_CHECK, Stage.MIGRATION);
> put(Verb.MIGRATION_REQUEST, Stage.MIGRATION);
> put(Verb.INDEX_SCAN, Stage.READ);
> put(Verb.REPLICATION_FINISHED, Stage.MISC);
> *put(Verb.COUNTER_MUTATION, Stage.MUTATION);**
> put(Verb.SNAPSHOT, Stage.MISC);
> put(Verb.ECHO, Stage.GOSSIP);
> put(Verb.UNUSED_1, Stage.INTERNAL_RESPONSE);
> put(Verb.UNUSED_2, Stage.INTERNAL_RESPONSE);
> put(Verb.UNUSED_3, Stage.INTERNAL_RESPONSE);
> }};



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10605) MessagingService: COUNTER_MUTATION verb is associated with MUTATION stage

2015-10-27 Thread Anubhav Kale (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anubhav Kale updated CASSANDRA-10605:
-
Attachment: (was: trunk-10605.txt)

> MessagingService: COUNTER_MUTATION verb is associated with MUTATION stage
> -
>
> Key: CASSANDRA-10605
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10605
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Anubhav Kale
>Priority: Minor
> Fix For: 2.1.10
>
> Attachments: trunk-10605.patch
>
>
> In MessagingService.java, I see that the verb COUNTER_MUTATION is associated 
> with COUNTER_MUTATION stage first, and later with MUTATION stage. We should 
> remove appropriate entry.
> I marked this as Major because I think the stage assignment is wrong.
> What should this be assigned to ?
> public static final EnumMap verbStages = new 
> EnumMap(MessagingService.Verb.class)
> {{
> put(Verb.MUTATION, Stage.MUTATION);
> put(Verb.COUNTER_MUTATION, Stage.COUNTER_MUTATION);*
> put(Verb.READ_REPAIR, Stage.MUTATION);
> put(Verb.HINT, Stage.MUTATION);
> put(Verb.TRUNCATE, Stage.MUTATION);
> put(Verb.PAXOS_PREPARE, Stage.MUTATION);
> put(Verb.PAXOS_PROPOSE, Stage.MUTATION);
> put(Verb.PAXOS_COMMIT, Stage.MUTATION);
> put(Verb.BATCH_STORE, Stage.MUTATION);
> put(Verb.BATCH_REMOVE, Stage.MUTATION);
> put(Verb.READ, Stage.READ);
> put(Verb.RANGE_SLICE, Stage.READ);
> put(Verb.INDEX_SCAN, Stage.READ);
> put(Verb.PAGED_RANGE, Stage.READ);
> put(Verb.REQUEST_RESPONSE, Stage.REQUEST_RESPONSE);
> put(Verb.INTERNAL_RESPONSE, Stage.INTERNAL_RESPONSE);
> put(Verb.STREAM_REPLY, Stage.MISC); // actually handled by 
> FileStreamTask and streamExecutors
> put(Verb.STREAM_REQUEST, Stage.MISC);
> put(Verb.REPLICATION_FINISHED, Stage.MISC);
> put(Verb.SNAPSHOT, Stage.MISC);
> put(Verb.TREE_REQUEST, Stage.ANTI_ENTROPY);
> put(Verb.TREE_RESPONSE, Stage.ANTI_ENTROPY);
> put(Verb.STREAMING_REPAIR_REQUEST, Stage.ANTI_ENTROPY);
> put(Verb.STREAMING_REPAIR_RESPONSE, Stage.ANTI_ENTROPY);
> put(Verb.REPAIR_MESSAGE, Stage.ANTI_ENTROPY);
> put(Verb.GOSSIP_DIGEST_ACK, Stage.GOSSIP);
> put(Verb.GOSSIP_DIGEST_ACK2, Stage.GOSSIP);
> put(Verb.GOSSIP_DIGEST_SYN, Stage.GOSSIP);
> put(Verb.GOSSIP_SHUTDOWN, Stage.GOSSIP);
> put(Verb.DEFINITIONS_UPDATE, Stage.MIGRATION);
> put(Verb.SCHEMA_CHECK, Stage.MIGRATION);
> put(Verb.MIGRATION_REQUEST, Stage.MIGRATION);
> put(Verb.INDEX_SCAN, Stage.READ);
> put(Verb.REPLICATION_FINISHED, Stage.MISC);
> *put(Verb.COUNTER_MUTATION, Stage.MUTATION);**
> put(Verb.SNAPSHOT, Stage.MISC);
> put(Verb.ECHO, Stage.GOSSIP);
> put(Verb.UNUSED_1, Stage.INTERNAL_RESPONSE);
> put(Verb.UNUSED_2, Stage.INTERNAL_RESPONSE);
> put(Verb.UNUSED_3, Stage.INTERNAL_RESPONSE);
> }};



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-10606) AbstractBTreePartition.rowCount() return the wrong number of rows for compact tables

2015-10-27 Thread Benjamin Lerer (JIRA)
Benjamin Lerer created CASSANDRA-10606:
--

 Summary: AbstractBTreePartition.rowCount() return the wrong number 
of rows for compact tables
 Key: CASSANDRA-10606
 URL: https://issues.apache.org/jira/browse/CASSANDRA-10606
 Project: Cassandra
  Issue Type: Bug
Reporter: Benjamin Lerer
Assignee: Benjamin Lerer
 Fix For: 3.0.0


For compact tables, {{AbstractBTreePartition.rowCount()}} returns the wrong 
number of rows, as it does not take static rows into account.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10511) Index summary downsampling prevents mmap access of large files after restart

2015-10-27 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14977217#comment-14977217
 ] 

Ariel Weisberg commented on CASSANDRA-10511:


Is this true for 3.0 after [~Stefania]'s work? I remember reviewing that, and 
looking at the code, we don't use the readable bounds anymore.

We use them at runtime for early opening just to denote the end of the file, 
but that is it.

For 2.1 and 2.2, yeah, it looks like short of backporting Stefania's work we 
should populate the boundaries.
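
To illustrate why the missing boundaries matter for files over 2GB, a standalone sketch of 
the arithmetic only; it does not use the SegmentedFile.Builder API, and Cassandra derives 
its boundaries from positions in the file (e.g. index entries) rather than fixed steps:

{code}
import java.util.ArrayList;
import java.util.List;

// A single MappedByteBuffer cannot cover more than Integer.MAX_VALUE bytes, so a reader
// needs segment boundaries recorded up front to mmap a large file in pieces; without
// them it falls back to buffered I/O, which is the regression described in this ticket.
public class MmapBoundaries
{
    static List<Long> boundaries(long fileLength, long maxSegmentSize)
    {
        List<Long> bounds = new ArrayList<>();
        for (long pos = maxSegmentSize; pos < fileLength; pos += maxSegmentSize)
            bounds.add(pos);    // each boundary starts a new mmap segment
        bounds.add(fileLength); // the final segment ends at EOF
        return bounds;
    }

    public static void main(String[] args)
    {
        long fiveGb = 5L * 1024 * 1024 * 1024;
        // With no boundaries the whole 5GB file cannot be mapped as one segment;
        // with them it maps as three segments of at most ~2GB each.
        System.out.println(boundaries(fiveGb, Integer.MAX_VALUE));
    }
}
{code}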



> Index summary downsampling prevents mmap access of large files after restart
> 
>
> Key: CASSANDRA-10511
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10511
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Benedict
> Fix For: 2.1.x, 2.2.x, 3.1
>
>
> {{SSTableReader.cloneWithNewSummarySampleLevel}} constructs a 
> {{SegmentedFile.Builder}} but never populates it with any boundaries. For 
> files larger than 2Gb, this will result in their being accessed via buffered 
> io after a restart.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (CASSANDRA-10511) Index summary downsampling prevents mmap access of large files after restart

2015-10-27 Thread Ariel Weisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10511?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ariel Weisberg reassigned CASSANDRA-10511:
--

Assignee: Ariel Weisberg

> Index summary downsampling prevents mmap access of large files after restart
> 
>
> Key: CASSANDRA-10511
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10511
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Benedict
>Assignee: Ariel Weisberg
> Fix For: 2.1.x, 2.2.x, 3.1
>
>
> {{SSTableReader.cloneWithNewSummarySampleLevel}} constructs a 
> {{SegmentedFile.Builder}} but never populates it with any boundaries. For 
> files larger than 2Gb, this will result in their being accessed via buffered 
> io after a restart.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10605) MessagingService: COUNTER_MUTATION verb is associated with MUTATION stage

2015-10-27 Thread Anubhav Kale (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14977199#comment-14977199
 ] 

Anubhav Kale commented on CASSANDRA-10605:
--

Also reduced severity to "Minor" assuming my patch is what is needed. 

> MessagingService: COUNTER_MUTATION verb is associated with MUTATION stage
> -
>
> Key: CASSANDRA-10605
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10605
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Anubhav Kale
>Priority: Minor
> Fix For: 2.1.10
>
> Attachments: trunk-10605.txt
>
>
> In MessagingService.java, I see that the verb COUNTER_MUTATION is associated 
> with COUNTER_MUTATION stage first, and later with MUTATION stage. We should 
> remove appropriate entry.
> I marked this as Major because I think the stage assignment is wrong.
> What should this be assigned to ?
> public static final EnumMap verbStages = new 
> EnumMap(MessagingService.Verb.class)
> {{
> put(Verb.MUTATION, Stage.MUTATION);
> put(Verb.COUNTER_MUTATION, Stage.COUNTER_MUTATION);*
> put(Verb.READ_REPAIR, Stage.MUTATION);
> put(Verb.HINT, Stage.MUTATION);
> put(Verb.TRUNCATE, Stage.MUTATION);
> put(Verb.PAXOS_PREPARE, Stage.MUTATION);
> put(Verb.PAXOS_PROPOSE, Stage.MUTATION);
> put(Verb.PAXOS_COMMIT, Stage.MUTATION);
> put(Verb.BATCH_STORE, Stage.MUTATION);
> put(Verb.BATCH_REMOVE, Stage.MUTATION);
> put(Verb.READ, Stage.READ);
> put(Verb.RANGE_SLICE, Stage.READ);
> put(Verb.INDEX_SCAN, Stage.READ);
> put(Verb.PAGED_RANGE, Stage.READ);
> put(Verb.REQUEST_RESPONSE, Stage.REQUEST_RESPONSE);
> put(Verb.INTERNAL_RESPONSE, Stage.INTERNAL_RESPONSE);
> put(Verb.STREAM_REPLY, Stage.MISC); // actually handled by 
> FileStreamTask and streamExecutors
> put(Verb.STREAM_REQUEST, Stage.MISC);
> put(Verb.REPLICATION_FINISHED, Stage.MISC);
> put(Verb.SNAPSHOT, Stage.MISC);
> put(Verb.TREE_REQUEST, Stage.ANTI_ENTROPY);
> put(Verb.TREE_RESPONSE, Stage.ANTI_ENTROPY);
> put(Verb.STREAMING_REPAIR_REQUEST, Stage.ANTI_ENTROPY);
> put(Verb.STREAMING_REPAIR_RESPONSE, Stage.ANTI_ENTROPY);
> put(Verb.REPAIR_MESSAGE, Stage.ANTI_ENTROPY);
> put(Verb.GOSSIP_DIGEST_ACK, Stage.GOSSIP);
> put(Verb.GOSSIP_DIGEST_ACK2, Stage.GOSSIP);
> put(Verb.GOSSIP_DIGEST_SYN, Stage.GOSSIP);
> put(Verb.GOSSIP_SHUTDOWN, Stage.GOSSIP);
> put(Verb.DEFINITIONS_UPDATE, Stage.MIGRATION);
> put(Verb.SCHEMA_CHECK, Stage.MIGRATION);
> put(Verb.MIGRATION_REQUEST, Stage.MIGRATION);
> put(Verb.INDEX_SCAN, Stage.READ);
> put(Verb.REPLICATION_FINISHED, Stage.MISC);
> *put(Verb.COUNTER_MUTATION, Stage.MUTATION);**
> put(Verb.SNAPSHOT, Stage.MISC);
> put(Verb.ECHO, Stage.GOSSIP);
> put(Verb.UNUSED_1, Stage.INTERNAL_RESPONSE);
> put(Verb.UNUSED_2, Stage.INTERNAL_RESPONSE);
> put(Verb.UNUSED_3, Stage.INTERNAL_RESPONSE);
> }};



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10605) MessagingService: COUNTER_MUTATION verb is associated with MUTATION stage

2015-10-27 Thread Anubhav Kale (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anubhav Kale updated CASSANDRA-10605:
-
Priority: Minor  (was: Major)

> MessagingService: COUNTER_MUTATION verb is associated with MUTATION stage
> -
>
> Key: CASSANDRA-10605
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10605
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Anubhav Kale
>Priority: Minor
> Fix For: 2.1.10
>
> Attachments: trunk-10605.txt
>
>
> In MessagingService.java, I see that the verb COUNTER_MUTATION is associated 
> with COUNTER_MUTATION stage first, and later with MUTATION stage. We should 
> remove appropriate entry.
> I marked this as Major because I think the stage assignment is wrong.
> What should this be assigned to ?
> public static final EnumMap verbStages = new 
> EnumMap(MessagingService.Verb.class)
> {{
> put(Verb.MUTATION, Stage.MUTATION);
> put(Verb.COUNTER_MUTATION, Stage.COUNTER_MUTATION);*
> put(Verb.READ_REPAIR, Stage.MUTATION);
> put(Verb.HINT, Stage.MUTATION);
> put(Verb.TRUNCATE, Stage.MUTATION);
> put(Verb.PAXOS_PREPARE, Stage.MUTATION);
> put(Verb.PAXOS_PROPOSE, Stage.MUTATION);
> put(Verb.PAXOS_COMMIT, Stage.MUTATION);
> put(Verb.BATCH_STORE, Stage.MUTATION);
> put(Verb.BATCH_REMOVE, Stage.MUTATION);
> put(Verb.READ, Stage.READ);
> put(Verb.RANGE_SLICE, Stage.READ);
> put(Verb.INDEX_SCAN, Stage.READ);
> put(Verb.PAGED_RANGE, Stage.READ);
> put(Verb.REQUEST_RESPONSE, Stage.REQUEST_RESPONSE);
> put(Verb.INTERNAL_RESPONSE, Stage.INTERNAL_RESPONSE);
> put(Verb.STREAM_REPLY, Stage.MISC); // actually handled by 
> FileStreamTask and streamExecutors
> put(Verb.STREAM_REQUEST, Stage.MISC);
> put(Verb.REPLICATION_FINISHED, Stage.MISC);
> put(Verb.SNAPSHOT, Stage.MISC);
> put(Verb.TREE_REQUEST, Stage.ANTI_ENTROPY);
> put(Verb.TREE_RESPONSE, Stage.ANTI_ENTROPY);
> put(Verb.STREAMING_REPAIR_REQUEST, Stage.ANTI_ENTROPY);
> put(Verb.STREAMING_REPAIR_RESPONSE, Stage.ANTI_ENTROPY);
> put(Verb.REPAIR_MESSAGE, Stage.ANTI_ENTROPY);
> put(Verb.GOSSIP_DIGEST_ACK, Stage.GOSSIP);
> put(Verb.GOSSIP_DIGEST_ACK2, Stage.GOSSIP);
> put(Verb.GOSSIP_DIGEST_SYN, Stage.GOSSIP);
> put(Verb.GOSSIP_SHUTDOWN, Stage.GOSSIP);
> put(Verb.DEFINITIONS_UPDATE, Stage.MIGRATION);
> put(Verb.SCHEMA_CHECK, Stage.MIGRATION);
> put(Verb.MIGRATION_REQUEST, Stage.MIGRATION);
> put(Verb.INDEX_SCAN, Stage.READ);
> put(Verb.REPLICATION_FINISHED, Stage.MISC);
> *put(Verb.COUNTER_MUTATION, Stage.MUTATION);**
> put(Verb.SNAPSHOT, Stage.MISC);
> put(Verb.ECHO, Stage.GOSSIP);
> put(Verb.UNUSED_1, Stage.INTERNAL_RESPONSE);
> put(Verb.UNUSED_2, Stage.INTERNAL_RESPONSE);
> put(Verb.UNUSED_3, Stage.INTERNAL_RESPONSE);
> }};



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10605) MessagingService: COUNTER_MUTATION verb is associated with MUTATION stage

2015-10-27 Thread Anubhav Kale (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anubhav Kale updated CASSANDRA-10605:
-
Attachment: trunk-10605.txt

> MessagingService: COUNTER_MUTATION verb is associated with MUTATION stage
> -
>
> Key: CASSANDRA-10605
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10605
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Anubhav Kale
> Fix For: 2.1.10
>
> Attachments: trunk-10605.txt
>
>
> In MessagingService.java, I see that the verb COUNTER_MUTATION is associated 
> with COUNTER_MUTATION stage first, and later with MUTATION stage. We should 
> remove appropriate entry.
> I marked this as Major because I think the stage assignment is wrong.
> What should this be assigned to ?
> public static final EnumMap verbStages = new 
> EnumMap(MessagingService.Verb.class)
> {{
> put(Verb.MUTATION, Stage.MUTATION);
> put(Verb.COUNTER_MUTATION, Stage.COUNTER_MUTATION);*
> put(Verb.READ_REPAIR, Stage.MUTATION);
> put(Verb.HINT, Stage.MUTATION);
> put(Verb.TRUNCATE, Stage.MUTATION);
> put(Verb.PAXOS_PREPARE, Stage.MUTATION);
> put(Verb.PAXOS_PROPOSE, Stage.MUTATION);
> put(Verb.PAXOS_COMMIT, Stage.MUTATION);
> put(Verb.BATCH_STORE, Stage.MUTATION);
> put(Verb.BATCH_REMOVE, Stage.MUTATION);
> put(Verb.READ, Stage.READ);
> put(Verb.RANGE_SLICE, Stage.READ);
> put(Verb.INDEX_SCAN, Stage.READ);
> put(Verb.PAGED_RANGE, Stage.READ);
> put(Verb.REQUEST_RESPONSE, Stage.REQUEST_RESPONSE);
> put(Verb.INTERNAL_RESPONSE, Stage.INTERNAL_RESPONSE);
> put(Verb.STREAM_REPLY, Stage.MISC); // actually handled by 
> FileStreamTask and streamExecutors
> put(Verb.STREAM_REQUEST, Stage.MISC);
> put(Verb.REPLICATION_FINISHED, Stage.MISC);
> put(Verb.SNAPSHOT, Stage.MISC);
> put(Verb.TREE_REQUEST, Stage.ANTI_ENTROPY);
> put(Verb.TREE_RESPONSE, Stage.ANTI_ENTROPY);
> put(Verb.STREAMING_REPAIR_REQUEST, Stage.ANTI_ENTROPY);
> put(Verb.STREAMING_REPAIR_RESPONSE, Stage.ANTI_ENTROPY);
> put(Verb.REPAIR_MESSAGE, Stage.ANTI_ENTROPY);
> put(Verb.GOSSIP_DIGEST_ACK, Stage.GOSSIP);
> put(Verb.GOSSIP_DIGEST_ACK2, Stage.GOSSIP);
> put(Verb.GOSSIP_DIGEST_SYN, Stage.GOSSIP);
> put(Verb.GOSSIP_SHUTDOWN, Stage.GOSSIP);
> put(Verb.DEFINITIONS_UPDATE, Stage.MIGRATION);
> put(Verb.SCHEMA_CHECK, Stage.MIGRATION);
> put(Verb.MIGRATION_REQUEST, Stage.MIGRATION);
> put(Verb.INDEX_SCAN, Stage.READ);
> put(Verb.REPLICATION_FINISHED, Stage.MISC);
> *put(Verb.COUNTER_MUTATION, Stage.MUTATION);**
> put(Verb.SNAPSHOT, Stage.MISC);
> put(Verb.ECHO, Stage.GOSSIP);
> put(Verb.UNUSED_1, Stage.INTERNAL_RESPONSE);
> put(Verb.UNUSED_2, Stage.INTERNAL_RESPONSE);
> put(Verb.UNUSED_3, Stage.INTERNAL_RESPONSE);
> }};



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10605) MessagingService: COUNTER_MUTATION verb is associated with MUTATION stage

2015-10-27 Thread Anubhav Kale (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anubhav Kale updated CASSANDRA-10605:
-
Description: 
In MessagingService.java, I see that the verb COUNTER_MUTATION is associated 
with the COUNTER_MUTATION stage first, and later with the MUTATION stage. We 
should remove the appropriate entry.

I marked this as Major because I think the stage assignment is wrong.

What should this be assigned to?

public static final EnumMap<MessagingService.Verb, Stage> verbStages = new 
EnumMap<MessagingService.Verb, Stage>(MessagingService.Verb.class)
{{
put(Verb.MUTATION, Stage.MUTATION);
put(Verb.COUNTER_MUTATION, Stage.COUNTER_MUTATION);*
put(Verb.READ_REPAIR, Stage.MUTATION);
put(Verb.HINT, Stage.MUTATION);
put(Verb.TRUNCATE, Stage.MUTATION);
put(Verb.PAXOS_PREPARE, Stage.MUTATION);
put(Verb.PAXOS_PROPOSE, Stage.MUTATION);
put(Verb.PAXOS_COMMIT, Stage.MUTATION);
put(Verb.BATCH_STORE, Stage.MUTATION);
put(Verb.BATCH_REMOVE, Stage.MUTATION);

put(Verb.READ, Stage.READ);
put(Verb.RANGE_SLICE, Stage.READ);
put(Verb.INDEX_SCAN, Stage.READ);
put(Verb.PAGED_RANGE, Stage.READ);

put(Verb.REQUEST_RESPONSE, Stage.REQUEST_RESPONSE);
put(Verb.INTERNAL_RESPONSE, Stage.INTERNAL_RESPONSE);

put(Verb.STREAM_REPLY, Stage.MISC); // actually handled by 
FileStreamTask and streamExecutors
put(Verb.STREAM_REQUEST, Stage.MISC);
put(Verb.REPLICATION_FINISHED, Stage.MISC);
put(Verb.SNAPSHOT, Stage.MISC);

put(Verb.TREE_REQUEST, Stage.ANTI_ENTROPY);
put(Verb.TREE_RESPONSE, Stage.ANTI_ENTROPY);
put(Verb.STREAMING_REPAIR_REQUEST, Stage.ANTI_ENTROPY);
put(Verb.STREAMING_REPAIR_RESPONSE, Stage.ANTI_ENTROPY);
put(Verb.REPAIR_MESSAGE, Stage.ANTI_ENTROPY);
put(Verb.GOSSIP_DIGEST_ACK, Stage.GOSSIP);
put(Verb.GOSSIP_DIGEST_ACK2, Stage.GOSSIP);
put(Verb.GOSSIP_DIGEST_SYN, Stage.GOSSIP);
put(Verb.GOSSIP_SHUTDOWN, Stage.GOSSIP);

put(Verb.DEFINITIONS_UPDATE, Stage.MIGRATION);
put(Verb.SCHEMA_CHECK, Stage.MIGRATION);
put(Verb.MIGRATION_REQUEST, Stage.MIGRATION);
put(Verb.INDEX_SCAN, Stage.READ);
put(Verb.REPLICATION_FINISHED, Stage.MISC);
*put(Verb.COUNTER_MUTATION, Stage.MUTATION);**
put(Verb.SNAPSHOT, Stage.MISC);
put(Verb.ECHO, Stage.GOSSIP);

put(Verb.UNUSED_1, Stage.INTERNAL_RESPONSE);
put(Verb.UNUSED_2, Stage.INTERNAL_RESPONSE);
put(Verb.UNUSED_3, Stage.INTERNAL_RESPONSE);
}};

  was:
In MessagingService.java, I see that the verb COUNTER_MUTATION is associated 
with COUNTER_MUTATION stage first, and later with MUTATION stage. We should 
remove appropriate entry.

What should this be assigned to ?

public static final EnumMap<MessagingService.Verb, Stage> verbStages = new 
EnumMap<MessagingService.Verb, Stage>(MessagingService.Verb.class)
{{
put(Verb.MUTATION, Stage.MUTATION);
put(Verb.COUNTER_MUTATION, Stage.COUNTER_MUTATION);*
put(Verb.READ_REPAIR, Stage.MUTATION);
put(Verb.HINT, Stage.MUTATION);
put(Verb.TRUNCATE, Stage.MUTATION);
put(Verb.PAXOS_PREPARE, Stage.MUTATION);
put(Verb.PAXOS_PROPOSE, Stage.MUTATION);
put(Verb.PAXOS_COMMIT, Stage.MUTATION);
put(Verb.BATCH_STORE, Stage.MUTATION);
put(Verb.BATCH_REMOVE, Stage.MUTATION);

put(Verb.READ, Stage.READ);
put(Verb.RANGE_SLICE, Stage.READ);
put(Verb.INDEX_SCAN, Stage.READ);
put(Verb.PAGED_RANGE, Stage.READ);

put(Verb.REQUEST_RESPONSE, Stage.REQUEST_RESPONSE);
put(Verb.INTERNAL_RESPONSE, Stage.INTERNAL_RESPONSE);

put(Verb.STREAM_REPLY, Stage.MISC); // actually handled by 
FileStreamTask and streamExecutors
put(Verb.STREAM_REQUEST, Stage.MISC);
put(Verb.REPLICATION_FINISHED, Stage.MISC);
put(Verb.SNAPSHOT, Stage.MISC);

put(Verb.TREE_REQUEST, Stage.ANTI_ENTROPY);
put(Verb.TREE_RESPONSE, Stage.ANTI_ENTROPY);
put(Verb.STREAMING_REPAIR_REQUEST, Stage.ANTI_ENTROPY);
put(Verb.STREAMING_REPAIR_RESPONSE, Stage.ANTI_ENTROPY);
put(Verb.REPAIR_MESSAGE, Stage.ANTI_ENTROPY);
put(Verb.GOSSIP_DIGEST_ACK, Stage.GOSSIP);
put(Verb.GOSSIP_DIGEST_ACK2, Stage.GOSSIP);
put(Verb.GOSSIP_DIGEST_SYN, Stage.GOSSIP);
put(Verb.GOSSIP_SHUTDOWN, Stage.GOSSIP);

put(Verb.DEFINITIONS_UPDATE, Stage.MIGRATION);
put(Verb.SCHEMA_CHECK, Stage.MIGRATION);
put(Verb.MIGRATION_REQUEST, Stage.MIGRATION);
put(Verb.INDEX_SCAN, Stage.READ);
put(Verb.REPLICATION_FINISHED, Stage.MISC);
*put(Verb.COUNTER_MUTATION, Stage.MUTATION);**
put(Verb.SNAPSHOT, Stage.MISC);
put(Verb.ECHO, Stage.GOSSIP);

put(Verb.UNUSED_1, Stage.INTERNAL_RESPONSE);
put(Verb.UNUSED_2, Stage.INTERNAL_RESPONSE);
  

[jira] [Created] (CASSANDRA-10605) MessagingService: COUNTER_MUTATION verb is associated with MUTATION stage

2015-10-27 Thread Anubhav Kale (JIRA)
Anubhav Kale created CASSANDRA-10605:


 Summary: MessagingService: COUNTER_MUTATION verb is associated 
with MUTATION stage
 Key: CASSANDRA-10605
 URL: https://issues.apache.org/jira/browse/CASSANDRA-10605
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Anubhav Kale
 Fix For: 2.1.10


In MessagingService.java, I see that the verb COUNTER_MUTATION is associated 
with the COUNTER_MUTATION stage first, and later with the MUTATION stage. We should 
remove the appropriate entry.

Which stage should COUNTER_MUTATION be assigned to?

public static final EnumMap<MessagingService.Verb, Stage> verbStages = new 
EnumMap<MessagingService.Verb, Stage>(MessagingService.Verb.class)
{{
put(Verb.MUTATION, Stage.MUTATION);
put(Verb.COUNTER_MUTATION, Stage.COUNTER_MUTATION);*
put(Verb.READ_REPAIR, Stage.MUTATION);
put(Verb.HINT, Stage.MUTATION);
put(Verb.TRUNCATE, Stage.MUTATION);
put(Verb.PAXOS_PREPARE, Stage.MUTATION);
put(Verb.PAXOS_PROPOSE, Stage.MUTATION);
put(Verb.PAXOS_COMMIT, Stage.MUTATION);
put(Verb.BATCH_STORE, Stage.MUTATION);
put(Verb.BATCH_REMOVE, Stage.MUTATION);

put(Verb.READ, Stage.READ);
put(Verb.RANGE_SLICE, Stage.READ);
put(Verb.INDEX_SCAN, Stage.READ);
put(Verb.PAGED_RANGE, Stage.READ);

put(Verb.REQUEST_RESPONSE, Stage.REQUEST_RESPONSE);
put(Verb.INTERNAL_RESPONSE, Stage.INTERNAL_RESPONSE);

put(Verb.STREAM_REPLY, Stage.MISC); // actually handled by 
FileStreamTask and streamExecutors
put(Verb.STREAM_REQUEST, Stage.MISC);
put(Verb.REPLICATION_FINISHED, Stage.MISC);
put(Verb.SNAPSHOT, Stage.MISC);

put(Verb.TREE_REQUEST, Stage.ANTI_ENTROPY);
put(Verb.TREE_RESPONSE, Stage.ANTI_ENTROPY);
put(Verb.STREAMING_REPAIR_REQUEST, Stage.ANTI_ENTROPY);
put(Verb.STREAMING_REPAIR_RESPONSE, Stage.ANTI_ENTROPY);
put(Verb.REPAIR_MESSAGE, Stage.ANTI_ENTROPY);
put(Verb.GOSSIP_DIGEST_ACK, Stage.GOSSIP);
put(Verb.GOSSIP_DIGEST_ACK2, Stage.GOSSIP);
put(Verb.GOSSIP_DIGEST_SYN, Stage.GOSSIP);
put(Verb.GOSSIP_SHUTDOWN, Stage.GOSSIP);

put(Verb.DEFINITIONS_UPDATE, Stage.MIGRATION);
put(Verb.SCHEMA_CHECK, Stage.MIGRATION);
put(Verb.MIGRATION_REQUEST, Stage.MIGRATION);
put(Verb.INDEX_SCAN, Stage.READ);
put(Verb.REPLICATION_FINISHED, Stage.MISC);
*put(Verb.COUNTER_MUTATION, Stage.MUTATION);**
put(Verb.SNAPSHOT, Stage.MISC);
put(Verb.ECHO, Stage.GOSSIP);

put(Verb.UNUSED_1, Stage.INTERNAL_RESPONSE);
put(Verb.UNUSED_2, Stage.INTERNAL_RESPONSE);
put(Verb.UNUSED_3, Stage.INTERNAL_RESPONSE);
}};
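
A minimal illustration (not from the ticket itself): in a double-brace-initialized 
{{EnumMap}}, a later {{put()}} for the same key silently overwrites the earlier 
one, so the second COUNTER_MUTATION entry is the mapping that actually takes 
effect. The {{Verb}}/{{Stage}} enums below are small stand-ins, not the real 
MessagingService types.

{code:java}
import java.util.EnumMap;

public class DuplicatePutDemo
{
    // Minimal stand-ins for MessagingService.Verb and Stage.
    enum Verb { COUNTER_MUTATION, MUTATION }
    enum Stage { COUNTER_MUTATION, MUTATION }

    public static void main(String[] args)
    {
        EnumMap<Verb, Stage> verbStages = new EnumMap<Verb, Stage>(Verb.class)
        {{
            put(Verb.COUNTER_MUTATION, Stage.COUNTER_MUTATION); // first mapping
            put(Verb.COUNTER_MUTATION, Stage.MUTATION);         // silently overwrites the first
        }};
        // Prints MUTATION: only the later entry survives in the map.
        System.out.println(verbStages.get(Verb.COUNTER_MUTATION));
    }
}
{code}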



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-9947) nodetool verify is broken

2015-10-27 Thread Jeff Jirsa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9947?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14977114#comment-14977114
 ] 

Jeff Jirsa edited comment on CASSANDRA-9947 at 10/27/15 8:38 PM:
-

The intent behind marking it as unrepaired was to allow other nodes to repair 
this data inbound, though it does make sense that by doing so, we'll also 
potentially pollute that corrupt data out to other replicas. Issuing a scrub 
may not be the right fix, either - a single bit flip in an uncompressed table 
will scrub just fine, and nothing will happen except you'll write a new 
checksum and lose knowledge of the fact that the bit-flip happened. 

Maybe the limitation here is that we have effectively 2 states - repaired and 
unrepaired - when we need a third - corrupt - so we can force this local node 
to repair its range, without using that sstable as a source for outgoing repair 
streams? 

Maybe the right thing to do is just trigger disk failure policy and let the 
operator decide what to do with it? Then they can offline scrub and throw away 
data from compressed tables, or delete uncompressed and let it be repaired. 
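
To make the uncompressed case concrete, here is a small standalone sketch 
(illustrative only, using {{java.util.zip.CRC32}} rather than Cassandra's actual 
checksum code): once a fresh checksum is written over already-flipped bytes, the 
corruption verifies as clean from then on.

{code:java}
import java.util.zip.CRC32;

public class ChecksumRewriteDemo
{
    static long crc(byte[] data)
    {
        CRC32 crc = new CRC32();
        crc.update(data);
        return crc.getValue();
    }

    public static void main(String[] args)
    {
        byte[] block = "some uncompressed sstable bytes".getBytes();
        long original = crc(block);

        block[3] ^= 0x01;                            // simulate a single bit flip on disk
        System.out.println(crc(block) == original);  // false: the flip is still detectable

        long rewritten = crc(block);                 // a "scrub" stores a new checksum
        System.out.println(crc(block) == rewritten); // true: the corruption now verifies clean
    }
}
{code}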


was (Author: jjirsa):
The intent behind marking it as unrepaired was to allow other nodes to repair 
this data inbound, though it does make sense that by doing so, we'll also 
potentially pollute that corrupt data out to other replicas. Issuing a scrub 
may not be the right fix, either - a single bit flip in an uncompressed table 
will scrub just fine, and nothing will happen except you'll write a new 
checksum and lose knowledge of the fact that the bit-flip happened. 

Maybe the limitation here is that we have effectively 2 states - repaired and 
unrepaired - when we need a third - corrupt - so we can force this local node 
to repair its range, without using that sstable as a source for outgoing repair 
streams? 

> nodetool verify is broken
> -
>
> Key: CASSANDRA-9947
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9947
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Jonathan Ellis
>Priority: Critical
> Fix For: 2.2.x
>
>
> Raised these issues on CASSANDRA-5791, but didn't revert/re-open, so they 
> were ignored:
> We mark sstables that fail verification as unrepaired, but that's not going 
> to do what you think.  What it means is that the local node will use that 
> sstable in the next repair, but other nodes will not. So all we'll end up 
> doing is streaming whatever data we can read from it, to the other replicas.  
> If we could magically mark whatever sstables correspond on the remote nodes, 
> to the data in the local sstable, that would work, but we can't.
> IMO what we should do is:
> *scrub, because it's quite likely we'll fail reading from the sstable 
> otherwise and
> *full repair across the data range covered by the sstable
> Additionally,
> * I'm not sure that keeping "extended verify" code around is worth it. Since 
> the point is to work around not having a checksum, we could just scrub 
> instead. This is slightly more heavyweight but it would be a one-time cost 
> (scrub would build a new checksum) and we wouldn't have to worry about 
> keeping two versions of almost-the-same-code in sync.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9947) nodetool verify is broken

2015-10-27 Thread Jeff Jirsa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9947?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14977114#comment-14977114
 ] 

Jeff Jirsa commented on CASSANDRA-9947:
---

The intent behind marking it as unrepaired was to allow other nodes to repair 
this data inbound, though it does make sense that by doing so, we'll also 
potentially pollute that corrupt data out to other replicas. Issuing a scrub 
may not be the right fix, either - a single bit flip in an uncompressed table 
will scrub just fine, and nothing will happen except you'll write a new 
checksum and lose knowledge of the fact that the bit-flip happened. 

Maybe the limitation here is that we have effectively 2 states - repaired and 
unrepaired - when we need a third - corrupt - so we can force this local node 
to repair its range, without using that sstable as a source for outgoing repair 
streams? 

> nodetool verify is broken
> -
>
> Key: CASSANDRA-9947
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9947
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Jonathan Ellis
>Priority: Critical
> Fix For: 2.2.x
>
>
> Raised these issues on CASSANDRA-5791, but didn't revert/re-open, so they 
> were ignored:
> We mark sstables that fail verification as unrepaired, but that's not going 
> to do what you think.  What it means is that the local node will use that 
> sstable in the next repair, but other nodes will not. So all we'll end up 
> doing is streaming whatever data we can read from it, to the other replicas.  
> If we could magically mark whatever sstables correspond on the remote nodes, 
> to the data in the local sstable, that would work, but we can't.
> IMO what we should do is:
> *scrub, because it's quite likely we'll fail reading from the sstable 
> otherwise and
> *full repair across the data range covered by the sstable
> Additionally,
> * I'm not sure that keeping "extended verify" code around is worth it. Since 
> the point is to work around not having a checksum, we could just scrub 
> instead. This is slightly more heavyweight but it would be a one-time cost 
> (scrub would build a new checksum) and we wouldn't have to worry about 
> keeping two versions of almost-the-same-code in sync.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-9222) AssertionError after decommission

2015-10-27 Thread Jeremiah Jordan (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9222?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremiah Jordan resolved CASSANDRA-9222.

Resolution: Duplicate

> AssertionError after decommission
> -
>
> Key: CASSANDRA-9222
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9222
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Brandon Williams
>Priority: Minor
>
> Saw this on trunk while working on CASSANDRA-8072, but it may affect earlier 
> revisions as well:
> {noformat}
> INFO  17:48:57 MessagingService has terminated the accept() thread
> INFO  17:48:58 DECOMMISSIONED
> ERROR 17:52:25 Exception in thread Thread[OptionalTasks:1,5,main]
> java.lang.AssertionError: -1011553757645129692 not found in 
> -9212067178699207814, -9200531256183869940, -9166030381776079682, 
> -9162013024688602642, -9151724494713671168, -9095828490921521759, 
> -9035494031488373110, -8993765846966048219, -8912013107131353260, 
> -8909000788978186800, -8879514397454962673, -8868628980500567099, 
> -8850730903031889070, -8810378752213886595, -8779200870214886308, 
> -8758215747589442842, -8751091270073031687, -8727034084505556969, 
> -8665197275159395069, -8656563059526305598, -8468078121019364990, 
> -8465001791134178844, -8442193507205463429, -8422069069190372219, 
> -8342133517826612505, -8341643847610190520, -8340770353573450569, 
> -8337671516798157281, -8299063757464280571, -8294397037816683529, 
> -8190643358275415766, -8125907580996325958, -8080821167493102683, 
> -8058428707430264364, -8033777866368709204, -8018079744052327023, 
> -8005568943124488030, -7911488756902729132, -7831006227012170930, 
> -7824529182957931950, -7807286997402075771, -7795080548612350344, 
> -7778629955912441437, -7771701686959718810, -7759250335393772671, 
> -7745731940317799541, -7703194536911509010, -7694764467260740698, 
> -7691909270364954632, -7687121918922986909, -7682707339911246942, 
> -7517133373189921954, -7482800574078120526, -7475897243891441451, 
> -7334307376946940271, -7326649207653179327, -7258677281263041990, 
> -7221843646683358238, -7193299656451825680, -7105256682000196035, 
> -7035269781687029457, -7024278722443497027, -7019197046707993025, 
> -7015131617238216508, -7003811999522811317, -6980314778696530567, 
> -6966235125715836473, -691530498397662, -6912703644363131398, 
> -6881456879008059927, -6861265076865721267, -6850740895102395611, 
> -6808435504617684311, -6785202117470372844, -6782573711981746574, 
> -6763604807975420855, -6738443719321921481, -6718513123799422576, 
> -6711670508127917469, -6709012720615571304, -6645945635050443947, 
> -6629420613692951932, -6542209628003661283, -6535684002637060628, 
> -6507671461487774245, -6423206762015678338, -6409843839148310789, 
> -6404011469157179029, -6381904465334594648, -6311911206861962333, 
> -6296991709696294898, -6264931794517958640, -6261574198670386500, 
> -6261382604358802512, -6252257370391459113, -6241897861580419597, 
> -6227245701986117982, -6199525755295090433, -6180934919369759659, 
> -6144605078172691818, -6126223305042342065, -6118447361839427651, 
> -6074679422903704861, -6053157348245110185, -6029489996808528900, 
> -5984211855143878285, -5976157876053718897, -5960786495011670628, 
> -5958735514226770035, -5899767639655442330, -5822684184303415148, 
> -5781417439294763637, -5751460432371890910, -5740166642636309327, 
> -5695626417612186310, -5640765045723408247, -5617181156049689169, 
> -5609533985177356591, -5601369236916580549, -5597950494887081576, 
> -5563417985168606424, -5544827346340456629, -5532661047516804641, 
> -5522839053491352218, -5515748028172318343, -5503681859719385351, 
> -5454037971834611841, -5391841126413524561, -5391486446881271229, 
> -5345799278441821500, -5334673760925625816, -5223383618739305156, 
> -5221923994481449381, -5201263557535069480, -5146266397250565218, 
> -5129908985877585855, -5105202808286786842, -5087879514740126453, 
> -5015647678958926683, -4956601765875516828, -4870012706573251068, 
> -4843165740363419346, -4785540557423875550, -4769272272470020667, 
> -4743838345902355963, -4652149714081482841, -4651813505681686208, 
> -4633498525751156636, -4617489888285113964, -4575171285024168183, 
> -4426852178336308913, -4426400792698710435, -4389286320937036309, 
> -4324528033603203034, -4310368852323145495, -4302216608677327172, 
> -4229528661709148440, -4207740831738287983, -4203528661247313570, 
> -3948641241721335982, -3946554569612854645, -3931865850800685387, 
> -3925635355333550077, -3834502440481769685, -3827908348147378297, 
> -3805680095754927988, -3804947918584815385, -3800995210938487618, 
> -3783564223836955070, -3775028120786497996, -3711629770355538643, 
> -3710182799291812403, -3643158926306968005, -3625334149683154824, 
> -3601333132746233576, -3525454189106118076, -3488846023499783

[jira] [Updated] (CASSANDRA-9222) AssertionError after decommission

2015-10-27 Thread Jeremiah Jordan (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9222?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremiah Jordan updated CASSANDRA-9222:
---
Fix Version/s: (was: 3.1)

> AssertionError after decommission
> -
>
> Key: CASSANDRA-9222
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9222
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Brandon Williams
>Priority: Minor
>
> Saw this on trunk while working on CASSANDRA-8072, but it may affect earlier 
> revisions as well:
> {noformat}
> INFO  17:48:57 MessagingService has terminated the accept() thread
> INFO  17:48:58 DECOMMISSIONED
> ERROR 17:52:25 Exception in thread Thread[OptionalTasks:1,5,main]
> java.lang.AssertionError: -1011553757645129692 not found in 
> -9212067178699207814, -9200531256183869940, -9166030381776079682, 
> -9162013024688602642, -9151724494713671168, -9095828490921521759, 
> -9035494031488373110, -8993765846966048219, -8912013107131353260, 
> -8909000788978186800, -8879514397454962673, -8868628980500567099, 
> -8850730903031889070, -8810378752213886595, -8779200870214886308, 
> -8758215747589442842, -8751091270073031687, -8727034084505556969, 
> -8665197275159395069, -8656563059526305598, -8468078121019364990, 
> -8465001791134178844, -8442193507205463429, -8422069069190372219, 
> -8342133517826612505, -8341643847610190520, -8340770353573450569, 
> -8337671516798157281, -8299063757464280571, -8294397037816683529, 
> -8190643358275415766, -8125907580996325958, -8080821167493102683, 
> -8058428707430264364, -8033777866368709204, -8018079744052327023, 
> -8005568943124488030, -7911488756902729132, -7831006227012170930, 
> -7824529182957931950, -7807286997402075771, -7795080548612350344, 
> -7778629955912441437, -7771701686959718810, -7759250335393772671, 
> -7745731940317799541, -7703194536911509010, -7694764467260740698, 
> -7691909270364954632, -7687121918922986909, -7682707339911246942, 
> -7517133373189921954, -7482800574078120526, -7475897243891441451, 
> -7334307376946940271, -7326649207653179327, -7258677281263041990, 
> -7221843646683358238, -7193299656451825680, -7105256682000196035, 
> -7035269781687029457, -7024278722443497027, -7019197046707993025, 
> -7015131617238216508, -7003811999522811317, -6980314778696530567, 
> -6966235125715836473, -691530498397662, -6912703644363131398, 
> -6881456879008059927, -6861265076865721267, -6850740895102395611, 
> -6808435504617684311, -6785202117470372844, -6782573711981746574, 
> -6763604807975420855, -6738443719321921481, -6718513123799422576, 
> -6711670508127917469, -6709012720615571304, -6645945635050443947, 
> -6629420613692951932, -6542209628003661283, -6535684002637060628, 
> -6507671461487774245, -6423206762015678338, -6409843839148310789, 
> -6404011469157179029, -6381904465334594648, -6311911206861962333, 
> -6296991709696294898, -6264931794517958640, -6261574198670386500, 
> -6261382604358802512, -6252257370391459113, -6241897861580419597, 
> -6227245701986117982, -6199525755295090433, -6180934919369759659, 
> -6144605078172691818, -6126223305042342065, -6118447361839427651, 
> -6074679422903704861, -6053157348245110185, -6029489996808528900, 
> -5984211855143878285, -5976157876053718897, -5960786495011670628, 
> -5958735514226770035, -5899767639655442330, -5822684184303415148, 
> -5781417439294763637, -5751460432371890910, -5740166642636309327, 
> -5695626417612186310, -5640765045723408247, -5617181156049689169, 
> -5609533985177356591, -5601369236916580549, -5597950494887081576, 
> -5563417985168606424, -5544827346340456629, -5532661047516804641, 
> -5522839053491352218, -5515748028172318343, -5503681859719385351, 
> -5454037971834611841, -5391841126413524561, -5391486446881271229, 
> -5345799278441821500, -5334673760925625816, -5223383618739305156, 
> -5221923994481449381, -5201263557535069480, -5146266397250565218, 
> -5129908985877585855, -5105202808286786842, -5087879514740126453, 
> -5015647678958926683, -4956601765875516828, -4870012706573251068, 
> -4843165740363419346, -4785540557423875550, -4769272272470020667, 
> -4743838345902355963, -4652149714081482841, -4651813505681686208, 
> -4633498525751156636, -4617489888285113964, -4575171285024168183, 
> -4426852178336308913, -4426400792698710435, -4389286320937036309, 
> -4324528033603203034, -4310368852323145495, -4302216608677327172, 
> -4229528661709148440, -4207740831738287983, -4203528661247313570, 
> -3948641241721335982, -3946554569612854645, -3931865850800685387, 
> -3925635355333550077, -3834502440481769685, -3827908348147378297, 
> -3805680095754927988, -3804947918584815385, -3800995210938487618, 
> -3783564223836955070, -3775028120786497996, -3711629770355538643, 
> -3710182799291812403, -3643158926306968005, -3625334149683154824, 
> -3601333132746233576, -3525454189106118076, -3488846023

[jira] [Commented] (CASSANDRA-10474) Streaming should tolerate secondary index build failure

2015-10-27 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14977061#comment-14977061
 ] 

Ariel Weisberg commented on CASSANDRA-10474:


[Looks like this works on 
trunk?|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/streaming/StreamReceiveTask.java#L179]
 I see exception handling for cfs.indexManager.buildAllIndexesBlocking(readers).

Is that enough, or is there additional cleanup necessary if that throws? I know 
we added a lot of stuff in 3.0 to handle failure during streaming.

> Streaming should tolerate secondary index build failure
> ---
>
> Key: CASSANDRA-10474
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10474
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Yuki Morishita
>  Labels: streaming
> Fix For: 2.1.x, 2.2.x, 3.1
>
>
> When streaming failed to build secondary index at the end of streaming (like 
> in CASSANDRA-10449), streaming session can hang as it throws exception 
> without catching it.
> Streaming should tolerate secondary index build failure, and instead of 
> failing (hanging) streaming session, it should WARN user and go on.
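
A minimal sketch of the tolerant behaviour described above (illustrative only; 
{{buildIndexesOrWarn}} and its logger are placeholders, not the actual 
StreamReceiveTask code):

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class TolerantIndexBuild
{
    private static final Logger logger = LoggerFactory.getLogger(TolerantIndexBuild.class);

    // Run an index build, but downgrade any failure to a warning so the
    // enclosing stream session can still complete.
    public static void buildIndexesOrWarn(Runnable indexBuild)
    {
        try
        {
            indexBuild.run();
        }
        catch (RuntimeException e)
        {
            logger.warn("Secondary index build failed after streaming; " +
                        "continuing so the stream session can complete", e);
        }
    }
}
{code}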



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-10515) Commit logs back up with move to 2.1.10

2015-10-27 Thread Jeff Griffith (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14977052#comment-14977052
 ] 

Jeff Griffith edited comment on CASSANDRA-10515 at 10/27/15 7:59 PM:
-

[~krummas] [~tjake] Something interesting on this second form of commit log 
growth, where all nodes had uncontrolled commit log growth, unlike the first 
example (many files in L0) where it was isolated nodes. For this latter case, I 
think I'm able to relate this to a separate problem with an index out of bounds 
exception. Working with [~benedict], it seems like we have that one solved. I'm 
hopeful that patch will solve this growing commit log problem as well. It seems 
like all roads lead to Rome, where Rome is commit log growth :-)

Here is the other JIRA identifying an integer overflow in 
AbstractNativeCell.java:
https://issues.apache.org/jira/browse/CASSANDRA-10579

Still uncertain how to proceed with the first form, which seems to be starvation 
as you have described.



was (Author: jeffery.griffith):
[~krummas] [~tjake] something interesting on this second form of commit log 
growth where all nodes had uncontrolled commit log growth unless the first 
example (many files in L0) where it was isolated nodes. for this latter case, I 
think i'm able to relate this to a separate problem with an index out of bounds 
exception. working with [~benedict] it seems like we have that one solved. i'm 
hopeful that patch will solve this growing commit log problem as well. it seems 
like all roads lead to rome where rome is commit log growth :-)

here is the other JIRA identifying an integer overflow in 
AbstractNativeCell.java
https://issues.apache.org/jira/browse/CASSANDRA-10579

Still uncertain how to proceed with the first form that seems to be starvation 
as you have described.


> Commit logs back up with move to 2.1.10
> ---
>
> Key: CASSANDRA-10515
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10515
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: redhat 6.5, cassandra 2.1.10
>Reporter: Jeff Griffith
>Assignee: Branimir Lambov
>Priority: Critical
>  Labels: commitlog, triage
> Attachments: C5commitLogIncrease.jpg, CommitLogProblem.jpg, 
> CommitLogSize.jpg, MultinodeCommitLogGrowth-node1.tar.gz, RUN3tpstats.jpg, 
> cassandra.yaml, cfstats-clean.txt, stacktrace.txt, system.log.clean
>
>
> After upgrading from cassandra 2.0.x to 2.1.10, we began seeing problems 
> where some nodes break the 12G commit log max we configured and go as high as 
> 65G or more before it restarts. Once it reaches the state of more than 12G 
> commit log files, "nodetool compactionstats" hangs. Eventually C* restarts 
> without errors (not sure yet whether it is crashing but I'm checking into it) 
> and the cleanup occurs and the commit logs shrink back down again. Here is 
> the nodetool compactionstats immediately after restart.
> {code}
> jgriffith@prod1xc1.c2.bf1:~$ ndc
> pending tasks: 2185
>compaction type   keyspace  table completed
>   totalunit   progress
> Compaction   SyncCore  *cf1*   61251208033   
> 170643574558   bytes 35.89%
> Compaction   SyncCore  *cf2*   19262483904
> 19266079916   bytes 99.98%
> Compaction   SyncCore  *cf3*6592197093
>  6592316682   bytes100.00%
> Compaction   SyncCore  *cf4*3411039555
>  3411039557   bytes100.00%
> Compaction   SyncCore  *cf5*2879241009
>  2879487621   bytes 99.99%
> Compaction   SyncCore  *cf6*   21252493623
> 21252635196   bytes100.00%
> Compaction   SyncCore  *cf7*   81009853587
> 81009854438   bytes100.00%
> Compaction   SyncCore  *cf8*3005734580
>  3005768582   bytes100.00%
> Active compaction remaining time :n/a
> {code}
> I was also doing periodic "nodetool tpstats" which were working but not being 
> logged in system.log on the StatusLogger thread until after the compaction 
> started working again.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10515) Commit logs back up with move to 2.1.10

2015-10-27 Thread Jeff Griffith (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14977052#comment-14977052
 ] 

Jeff Griffith commented on CASSANDRA-10515:
---

[~krummas] [~tjake] Something interesting on this second form of commit log 
growth, where all nodes had uncontrolled commit log growth, unlike the first 
example (many files in L0) where it was isolated nodes. For this latter case, I 
think I'm able to relate this to a separate problem with an index out of bounds 
exception. Working with [~benedict], it seems like we have that one solved. I'm 
hopeful that patch will solve this growing commit log problem as well. It seems 
like all roads lead to Rome, where Rome is commit log growth :-)

Here is the other JIRA identifying an integer overflow in 
AbstractNativeCell.java:
https://issues.apache.org/jira/browse/CASSANDRA-10579

Still uncertain how to proceed with the first form, which seems to be starvation 
as you have described.


> Commit logs back up with move to 2.1.10
> ---
>
> Key: CASSANDRA-10515
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10515
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: redhat 6.5, cassandra 2.1.10
>Reporter: Jeff Griffith
>Assignee: Branimir Lambov
>Priority: Critical
>  Labels: commitlog, triage
> Attachments: C5commitLogIncrease.jpg, CommitLogProblem.jpg, 
> CommitLogSize.jpg, MultinodeCommitLogGrowth-node1.tar.gz, RUN3tpstats.jpg, 
> cassandra.yaml, cfstats-clean.txt, stacktrace.txt, system.log.clean
>
>
> After upgrading from cassandra 2.0.x to 2.1.10, we began seeing problems 
> where some nodes break the 12G commit log max we configured and go as high as 
> 65G or more before it restarts. Once it reaches the state of more than 12G 
> commit log files, "nodetool compactionstats" hangs. Eventually C* restarts 
> without errors (not sure yet whether it is crashing but I'm checking into it) 
> and the cleanup occurs and the commit logs shrink back down again. Here is 
> the nodetool compactionstats immediately after restart.
> {code}
> jgriffith@prod1xc1.c2.bf1:~$ ndc
> pending tasks: 2185
>compaction type   keyspace  table completed
>   totalunit   progress
> Compaction   SyncCore  *cf1*   61251208033   
> 170643574558   bytes 35.89%
> Compaction   SyncCore  *cf2*   19262483904
> 19266079916   bytes 99.98%
> Compaction   SyncCore  *cf3*6592197093
>  6592316682   bytes100.00%
> Compaction   SyncCore  *cf4*3411039555
>  3411039557   bytes100.00%
> Compaction   SyncCore  *cf5*2879241009
>  2879487621   bytes 99.99%
> Compaction   SyncCore  *cf6*   21252493623
> 21252635196   bytes100.00%
> Compaction   SyncCore  *cf7*   81009853587
> 81009854438   bytes100.00%
> Compaction   SyncCore  *cf8*3005734580
>  3005768582   bytes100.00%
> Active compaction remaining time :n/a
> {code}
> I was also doing periodic "nodetool tpstats" which were working but not being 
> logged in system.log on the StatusLogger thread until after the compaction 
> started working again.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10140) Enable GC logging by default

2015-10-27 Thread Chris Lohfink (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14977051#comment-14977051
 ] 

Chris Lohfink commented on CASSANDRA-10140:
---

+1

> Enable GC logging by default
> 
>
> Key: CASSANDRA-10140
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10140
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Config
>Reporter: Chris Lohfink
>Assignee: Chris Lohfink
>Priority: Minor
> Fix For: 2.2.x, 3.0.x
>
> Attachments: 10140-debian-packaging-2.2.txt, 
> 10140-debian-packaging-3.0.txt, CASSANDRA-10140-2-2.txt, 
> CASSANDRA-10140-v2.txt, CASSANDRA-10140-v3.txt, CASSANDRA-10140.txt, 
> cassandra-2.2-10140-v2.txt, cassandra-2.2-10140-v3.txt, 
> casssandra-2.2-10140-v4.txt
>
>
> Overhead for the gc logging is very small (with cycling logs in 7+) and it 
> provides a ton of useful information. This will open up more for C* 
> diagnostic tools to provide feedback as well without requiring restarts.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10140) Enable GC logging by default

2015-10-27 Thread Ariel Weisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ariel Weisberg updated CASSANDRA-10140:
---
Attachment: casssandra-2.2-10140-v4.txt
CASSANDRA-10140-v3.txt

Combined patches for each of 2.2 and 3.0.

> Enable GC logging by default
> 
>
> Key: CASSANDRA-10140
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10140
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Config
>Reporter: Chris Lohfink
>Assignee: Chris Lohfink
>Priority: Minor
> Fix For: 2.2.x, 3.0.x
>
> Attachments: 10140-debian-packaging-2.2.txt, 
> 10140-debian-packaging-3.0.txt, CASSANDRA-10140-2-2.txt, 
> CASSANDRA-10140-v2.txt, CASSANDRA-10140-v3.txt, CASSANDRA-10140.txt, 
> cassandra-2.2-10140-v2.txt, cassandra-2.2-10140-v3.txt, 
> casssandra-2.2-10140-v4.txt
>
>
> Overhead for the gc logging is very small (with cycling logs in 7+) and it 
> provides a ton of useful information. This will open up more for C* 
> diagnostic tools to provide feedback as well without requiring restarts.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10140) Enable GC logging by default

2015-10-27 Thread Michael Shuler (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Shuler updated CASSANDRA-10140:
---
Attachment: 10140-debian-packaging-3.0.txt
10140-debian-packaging-2.2.txt

10140-debian-packaging-2.2.txt and 10140-debian-packaging-3.0.txt patches 
attached.

> Enable GC logging by default
> 
>
> Key: CASSANDRA-10140
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10140
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Config
>Reporter: Chris Lohfink
>Assignee: Chris Lohfink
>Priority: Minor
> Fix For: 2.2.x, 3.0.x
>
> Attachments: 10140-debian-packaging-2.2.txt, 
> 10140-debian-packaging-3.0.txt, CASSANDRA-10140-2-2.txt, 
> CASSANDRA-10140-v2.txt, CASSANDRA-10140.txt, cassandra-2.2-10140-v2.txt, 
> cassandra-2.2-10140-v3.txt
>
>
> Overhead for the gc logging is very small (with cycling logs in 7+) and it 
> provides a ton of useful information. This will open up more for C* 
> diagnostic tools to provide feedback as well without requiring restarts.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8965) Cassandra retains a file handle to the directory its writing to for each writer instance

2015-10-27 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14976986#comment-14976986
 ] 

Ariel Weisberg commented on CASSANDRA-8965:
---

Is it BigTableWriter that has the handle open? I'm having trouble tracking down 
where this file handle for the directory is opened.

> Cassandra retains a file handle to the directory its writing to for each 
> writer instance
> 
>
> Key: CASSANDRA-8965
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8965
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Benedict
>Priority: Trivial
> Fix For: 3.1
>
>
> We could either share this amongst the CF object, or have a shared 
> ref-counted cache that opens a reference and shares it amongst all writer 
> instances, closing it once they all close.
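
A rough sketch of the ref-counted option described above (illustrative only; the 
open/close callbacks stand in for however the directory handle is actually 
acquired and released):

{code:java}
import java.util.HashMap;
import java.util.Map;

// Shared, ref-counted handle cache: the first writer for a directory opens the
// handle, later writers reuse it, and it is closed when the last writer releases it.
public class RefCountedHandleCache<H>
{
    public interface Opener<H> { H open(String path); }
    public interface Closer<H> { void close(H handle); }

    private static final class Entry<H>
    {
        final H handle;
        int refs;
        Entry(H handle) { this.handle = handle; }
    }

    private final Map<String, Entry<H>> entries = new HashMap<>();
    private final Opener<H> opener;
    private final Closer<H> closer;

    public RefCountedHandleCache(Opener<H> opener, Closer<H> closer)
    {
        this.opener = opener;
        this.closer = closer;
    }

    public synchronized H acquire(String path)
    {
        Entry<H> e = entries.get(path);
        if (e == null)
        {
            e = new Entry<>(opener.open(path));
            entries.put(path, e);
        }
        e.refs++;
        return e.handle;
    }

    public synchronized void release(String path)
    {
        Entry<H> e = entries.get(path);
        if (e == null)
            return;
        if (--e.refs == 0)
        {
            entries.remove(path);   // last writer released: close the shared handle
            closer.close(e.handle);
        }
    }
}
{code}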



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10140) Enable GC logging by default

2015-10-27 Thread Michael Shuler (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14976976#comment-14976976
 ] 

Michael Shuler commented on CASSANDRA-10140:


Sorry, let me throw this in the existing dpatch patch and we can work on moving 
to quilt later.

> Enable GC logging by default
> 
>
> Key: CASSANDRA-10140
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10140
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Config
>Reporter: Chris Lohfink
>Assignee: Chris Lohfink
>Priority: Minor
> Fix For: 2.2.x, 3.0.x
>
> Attachments: CASSANDRA-10140-2-2.txt, CASSANDRA-10140-v2.txt, 
> CASSANDRA-10140.txt, cassandra-2.2-10140-v2.txt, cassandra-2.2-10140-v3.txt
>
>
> Overhead for the gc logging is very small (with cycling logs in 7+) and it 
> provides a ton of useful information. This will open up more for C* 
> diagnostic tools to provide feedback as well without requiring restarts.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10092) Generalize PerRowSecondaryIndex validation

2015-10-27 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14976971#comment-14976971
 ] 

Andrés de la Peña commented on CASSANDRA-10092:
---

Thanks for your review. I have uploaded a new version of the patch for 2.2. I 
have fixed the validation in {{CassandraServer.createMutationList}} and added 
tests for the three {{CassandraServer}} methods. All the tests check validation 
of both the key and the columns. I hope you find it OK, and sorry for the 
inconvenience; I'm not very familiar with the Thrift API.

> Generalize PerRowSecondaryIndex validation
> --
>
> Key: CASSANDRA-10092
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10092
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Andrés de la Peña
>Assignee: Andrés de la Peña
>Priority: Minor
>  Labels: 2i, secondary_index, validation
> Fix For: 2.1.x, 2.2.x
>
> Attachments: CASSANDRA-10092_v2.patch, CASSANDRA-10092_v3.patch, 
> improve_2i_validation.patch
>
>
> Index validation is currently done in a per-cell basis. However, per-row 
> secondary index developers can be interested in validating all the written 
> columns at once, because some implementations need to check the validity of a 
> row write by comparing some column values against others. For example, a per 
> row 2i implementation indexing time ranges (composed by a start date column 
> and an end date column) should check that the start date is before the stop 
> date.
> I'm attaching a patch adding a new method to {{PerRowSecondaryIndex}}:
> {code:java}
> public void validate(ByteBuffer key, ColumnFamily cf) throws 
> InvalidRequestException {}
> {code}
> and a new method to {{SecondaryIndexManager}}:
> {code:java}
> public void validateRowLevelIndexes(ByteBuffer key, ColumnFamily cf) throws 
> InvalidRequestException
>   {
>   for (SecondaryIndex index : rowLevelIndexMap.values())
>   {
>   ((PerRowSecondaryIndex) index).validate(key, cf);
>   }
>   }
> {code}
> This method is invoked in CQL {{UpdateStatement#validateIndexedColumns}}. 
> This way, {{PerRowSecondaryIndex}} could perform complex write validation.
> I have tried to do the patch in the least invasive way possible. Maybe the 
> current method {{SecondaryIndex#validate(ByteBuffer, Cell)}} should be moved 
> to {{PerColumnSecondaryIndex}}, and the {{InvalidRequestException}} that 
> informs about the particular 64k limitation should be thrown by 
> {{AbstractSimplePerColumnSecondaryIndex}}. However, given the incoming  
> [CASSANDRA-9459|https://issues.apache.org/jira/browse/CASSANDRA-9459], I 
> think that the proposed patch is more than enough to provide rich validation 
> features to 2i implementations based on 2.1.x and 2.2.x.
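
As an illustration of the kind of row-level check this enables (a hypothetical 
example with made-up column names, not part of the patch), the time-range case 
from the description could look like this; the row is modelled as a plain Map 
rather than the {{ColumnFamily}} passed to {{validate(key, cf)}}:

{code:java}
import java.util.HashMap;
import java.util.Map;

public class TimeRangeValidationDemo
{
    // Reject a write whose "start" value is not before its "end" value.
    static void validateRow(Map<String, Long> row)
    {
        Long start = row.get("start"); // hypothetical column names
        Long end = row.get("end");
        if (start != null && end != null && start >= end)
            throw new IllegalArgumentException("start (" + start + ") must be before end (" + end + ")");
    }

    public static void main(String[] args)
    {
        Map<String, Long> row = new HashMap<>();
        row.put("start", 200L);
        row.put("end", 100L);
        try
        {
            validateRow(row);
        }
        catch (IllegalArgumentException e)
        {
            System.out.println("rejected: " + e.getMessage()); // the inverted range is refused
        }
    }
}
{code}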



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-10604) Secondary index metadata is not reloaded when table is altered

2015-10-27 Thread Tyler Hobbs (JIRA)
Tyler Hobbs created CASSANDRA-10604:
---

 Summary: Secondary index metadata is not reloaded when table is 
altered
 Key: CASSANDRA-10604
 URL: https://issues.apache.org/jira/browse/CASSANDRA-10604
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Tyler Hobbs
Assignee: Sam Tunnicliffe
 Fix For: 3.x


The javadocs for {{Index.getMetadataReloadTask()}} state the following:

{quote}
Returns a task to reload the internal metadata of an index.
Called when the base table metadata is modified or when the configuration of 
the Index is updated.
{quote}

However, altering a table does not result in the reload task being executed.  I 
think the root of the problem is that in 
{{SecondaryIndexManager.reloadIndex()}}, we only execute the reload task when 
the old {{IndexMetadata}} does not equal the current {{IndexMetadata}}.  
Altering the table does not change the index metadata, so this check never 
passes and the reload task is never run.

This especially affects per-row secondary indexes, where the index may need to 
handle columns being added or dropped.
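
A minimal, self-contained illustration of the gating pattern described above 
(stand-in types, not the real {{SecondaryIndexManager}} code):

{code:java}
import java.util.Objects;

public class ReloadGateDemo
{
    // Stand-in for IndexMetadata: equality is based only on the index definition.
    static final class IndexMetadata
    {
        final String name;
        IndexMetadata(String name) { this.name = name; }
        @Override public boolean equals(Object o)
        {
            return o instanceof IndexMetadata && ((IndexMetadata) o).name.equals(name);
        }
        @Override public int hashCode() { return Objects.hash(name); }
    }

    // If reload is keyed only on metadata inequality, an event that leaves the
    // metadata unchanged (like altering the base table) can never trigger it.
    static boolean shouldReload(IndexMetadata before, IndexMetadata after)
    {
        return !before.equals(after);
    }

    public static void main(String[] args)
    {
        IndexMetadata before = new IndexMetadata("idx");
        IndexMetadata afterAlterTable = new IndexMetadata("idx");  // unchanged by ALTER TABLE
        System.out.println(shouldReload(before, afterAlterTable)); // false: reload never runs
    }
}
{code}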



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10092) Generalize PerRowSecondaryIndex validation

2015-10-27 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10092?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrés de la Peña updated CASSANDRA-10092:
--
Attachment: CASSANDRA-10092_v3.patch

> Generalize PerRowSecondaryIndex validation
> --
>
> Key: CASSANDRA-10092
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10092
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Andrés de la Peña
>Assignee: Andrés de la Peña
>Priority: Minor
>  Labels: 2i, secondary_index, validation
> Fix For: 2.1.x, 2.2.x
>
> Attachments: CASSANDRA-10092_v2.patch, CASSANDRA-10092_v3.patch, 
> improve_2i_validation.patch
>
>
> Index validation is currently done in a per-cell basis. However, per-row 
> secondary index developers can be interested in validating all the written 
> columns at once, because some implementations need to check the validity of a 
> row write by comparing some column values against others. For example, a per 
> row 2i implementation indexing time ranges (composed by a start date column 
> and an end date column) should check that the start date is before the stop 
> date.
> I'm attaching a patch adding a new method to {{PerRowSecondaryIndex}}:
> {code:java}
> public void validate(ByteBuffer key, ColumnFamily cf) throws 
> InvalidRequestException {}
> {code}
> and a new method to {{SecondaryIndexManager}}:
> {code:java}
> public void validateRowLevelIndexes(ByteBuffer key, ColumnFamily cf) throws 
> InvalidRequestException
>   {
>   for (SecondaryIndex index : rowLevelIndexMap.values())
>   {
>   ((PerRowSecondaryIndex) index).validate(key, cf);
>   }
>   }
> {code}
> This method is invoked in CQL {{UpdateStatement#validateIndexedColumns}}. 
> This way, {{PerRowSecondaryIndex}} could perform complex write validation.
> I have tried to do the patch in the least invasive way possible. Maybe the 
> current method {{SecondaryIndex#validate(ByteBuffer, Cell)}} should be moved 
> to {{PerColumnSecondaryIndex}}, and the {{InvalidRequestException}} that 
> informs about the particular 64k limitation should be thrown by 
> {{AbstractSimplePerColumnSecondaryIndex}}. However, given the incoming  
> [CASSANDRA-9459|https://issues.apache.org/jira/browse/CASSANDRA-9459], I 
> think that the proposed patch is more than enough to provide rich validation 
> features to 2i implementations based on 2.1.x and 2.2.x.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10140) Enable GC logging by default

2015-10-27 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14976897#comment-14976897
 ] 

Ariel Weisberg commented on CASSANDRA-10140:


My understanding is that now we are waiting for [~mshuler] to do something to 
get the paths for this working in the debian package. I could just add a patch 
to the list of patches we apply when building the .deb, but I also don't know 
what that tool is. There was some talk of being unhappy with the current system 
of patching because it's not supported anymore.

> Enable GC logging by default
> 
>
> Key: CASSANDRA-10140
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10140
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Config
>Reporter: Chris Lohfink
>Assignee: Chris Lohfink
>Priority: Minor
> Fix For: 2.2.x, 3.0.x
>
> Attachments: CASSANDRA-10140-2-2.txt, CASSANDRA-10140-v2.txt, 
> CASSANDRA-10140.txt, cassandra-2.2-10140-v2.txt, cassandra-2.2-10140-v3.txt
>
>
> Overhead for the gc logging is very small (with cycling logs in 7+) and it 
> provides a ton of useful information. This will open up more for C* 
> diagnostic tools to provide feedback as well without requiring restarts.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-10579) IndexOutOfBoundsException during memtable flushing at startup (with offheap_objects)

2015-10-27 Thread Jeff Griffith (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14976864#comment-14976864
 ] 

Jeff Griffith edited comment on CASSANDRA-10579 at 10/27/15 6:09 PM:
-

perfect. thanks again.


was (Author: jeffery.griffith):
perfect. thanks.

> IndexOutOfBoundsException during memtable flushing at startup (with 
> offheap_objects)
> 
>
> Key: CASSANDRA-10579
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10579
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: 2.1.10 on linux
>Reporter: Jeff Griffith
>Assignee: Benedict
> Fix For: 2.1.x
>
>
> Sometimes we have problems at startup where memtable flushes with an index 
> out of bounds exception as seen below. Cassandra is then dead in the water 
> until we track down the corresponding commit log via the segment ID and 
> remove it:
> {code}
> INFO  [main] 2015-10-23 14:43:36,440 CommitLogReplayer.java:267 - Replaying 
> /home/y/var/cassandra/commitlog/CommitLog-4-1445474832692.log
> INFO  [main] 2015-10-23 14:43:36,440 CommitLogReplayer.java:270 - Replaying 
> /home/y/var/cassandra/commitlog/CommitLog-4-1445474832692.log (CL version 4, 
> messaging version 8)
> INFO  [main] 2015-10-23 14:43:36,594 CommitLogReplayer.java:478 - Finished 
> reading /home/y/var/cassandra/commitlog/CommitLog-4-1445474832692.log
> INFO  [main] 2015-10-23 14:43:36,594 CommitLogReplayer.java:267 - Replaying 
> /home/y/var/cassandra/commitlog/CommitLog-4-1445474832693.log
> INFO  [main] 2015-10-23 14:43:36,595 CommitLogReplayer.java:270 - Replaying 
> /home/y/var/cassandra/commitlog/CommitLog-4-1445474832693.log (CL version 4, 
> messaging version 8)
> INFO  [main] 2015-10-23 14:43:36,699 CommitLogReplayer.java:478 - Finished 
> reading /home/y/var/cassandra/commitlog/CommitLog-4-1445474832693.log
> INFO  [main] 2015-10-23 14:43:36,699 CommitLogReplayer.java:267 - Replaying 
> /home/y/var/cassandra/commitlog/CommitLog-4-1445474832694.log
> INFO  [main] 2015-10-23 14:43:36,699 CommitLogReplayer.java:270 - Replaying 
> /home/y/var/cassandra/commitlog/CommitLog-4-1445474832694.log (CL version 4, 
> messaging version 8)
> WARN  [SharedPool-Worker-5] 2015-10-23 14:43:36,747 
> AbstractTracingAwareExecutorService.java:169 - Uncaught exception on thread 
> Thread[SharedPool-Worker-5,5,main]: {}
> java.lang.ArrayIndexOutOfBoundsException: 6
> at 
> org.apache.cassandra.db.AbstractNativeCell.nametype(AbstractNativeCell.java:204)
>  ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT]
> at 
> org.apache.cassandra.db.AbstractNativeCell.isStatic(AbstractNativeCell.java:199)
>  ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT]
> at 
> org.apache.cassandra.db.composites.AbstractCType.compare(AbstractCType.java:166)
>  ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT]
> at 
> org.apache.cassandra.db.composites.AbstractCellNameType$1.compare(AbstractCellNameType.java:61)
>  ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT]
> at 
> org.apache.cassandra.db.composites.AbstractCellNameType$1.compare(AbstractCellNameType.java:58)
>  ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT]
> at org.apache.cassandra.utils.btree.BTree.find(BTree.java:277) 
> ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT]
> at 
> org.apache.cassandra.utils.btree.NodeBuilder.update(NodeBuilder.java:154) 
> ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT]
> at org.apache.cassandra.utils.btree.Builder.update(Builder.java:74) 
> ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT]
> at org.apache.cassandra.utils.btree.BTree.update(BTree.java:186) 
> ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT]
> at 
> org.apache.cassandra.db.AtomicBTreeColumns.addAllWithSizeDelta(AtomicBTreeColumns.java:225)
>  ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT]
> at org.apache.cassandra.db.Memtable.put(Memtable.java:210) 
> ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT]
> at 
> org.apache.cassandra.db.ColumnFamilyStore.apply(ColumnFamilyStore.java:1225) 
> ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT]
> at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:396) 
> ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT]
> at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:359) 
> ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT]
> at 
> org.apache.cassandra.db.commitlog.CommitLogReplayer$1.runMayThrow(CommitLogReplayer.java:455)
>  ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT]
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
>

[jira] [Commented] (CASSANDRA-10579) IndexOutOfBoundsException during memtable flushing at startup (with offheap_objects)

2015-10-27 Thread Jeff Griffith (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14976864#comment-14976864
 ] 

Jeff Griffith commented on CASSANDRA-10579:
---

perfect. thanks.

> IndexOutOfBoundsException during memtable flushing at startup (with 
> offheap_objects)
> 
>
> Key: CASSANDRA-10579
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10579
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: 2.1.10 on linux
>Reporter: Jeff Griffith
>Assignee: Benedict
> Fix For: 2.1.x
>
>
> Sometimes we have problems at startup where memtable flushes with an index 
> out of bounds exception as seen below. Cassandra is then dead in the water 
> until we track down the corresponding commit log via the segment ID and 
> remove it:
> {code}
> INFO  [main] 2015-10-23 14:43:36,440 CommitLogReplayer.java:267 - Replaying 
> /home/y/var/cassandra/commitlog/CommitLog-4-1445474832692.log
> INFO  [main] 2015-10-23 14:43:36,440 CommitLogReplayer.java:270 - Replaying 
> /home/y/var/cassandra/commitlog/CommitLog-4-1445474832692.log (CL version 4, 
> messaging version 8)
> INFO  [main] 2015-10-23 14:43:36,594 CommitLogReplayer.java:478 - Finished 
> reading /home/y/var/cassandra/commitlog/CommitLog-4-1445474832692.log
> INFO  [main] 2015-10-23 14:43:36,594 CommitLogReplayer.java:267 - Replaying 
> /home/y/var/cassandra/commitlog/CommitLog-4-1445474832693.log
> INFO  [main] 2015-10-23 14:43:36,595 CommitLogReplayer.java:270 - Replaying 
> /home/y/var/cassandra/commitlog/CommitLog-4-1445474832693.log (CL version 4, 
> messaging version 8)
> INFO  [main] 2015-10-23 14:43:36,699 CommitLogReplayer.java:478 - Finished 
> reading /home/y/var/cassandra/commitlog/CommitLog-4-1445474832693.log
> INFO  [main] 2015-10-23 14:43:36,699 CommitLogReplayer.java:267 - Replaying 
> /home/y/var/cassandra/commitlog/CommitLog-4-1445474832694.log
> INFO  [main] 2015-10-23 14:43:36,699 CommitLogReplayer.java:270 - Replaying 
> /home/y/var/cassandra/commitlog/CommitLog-4-1445474832694.log (CL version 4, 
> messaging version 8)
> WARN  [SharedPool-Worker-5] 2015-10-23 14:43:36,747 
> AbstractTracingAwareExecutorService.java:169 - Uncaught exception on thread 
> Thread[SharedPool-Worker-5,5,main]: {}
> java.lang.ArrayIndexOutOfBoundsException: 6
> at 
> org.apache.cassandra.db.AbstractNativeCell.nametype(AbstractNativeCell.java:204)
>  ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT]
> at 
> org.apache.cassandra.db.AbstractNativeCell.isStatic(AbstractNativeCell.java:199)
>  ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT]
> at 
> org.apache.cassandra.db.composites.AbstractCType.compare(AbstractCType.java:166)
>  ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT]
> at 
> org.apache.cassandra.db.composites.AbstractCellNameType$1.compare(AbstractCellNameType.java:61)
>  ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT]
> at 
> org.apache.cassandra.db.composites.AbstractCellNameType$1.compare(AbstractCellNameType.java:58)
>  ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT]
> at org.apache.cassandra.utils.btree.BTree.find(BTree.java:277) 
> ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT]
> at 
> org.apache.cassandra.utils.btree.NodeBuilder.update(NodeBuilder.java:154) 
> ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT]
> at org.apache.cassandra.utils.btree.Builder.update(Builder.java:74) 
> ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT]
> at org.apache.cassandra.utils.btree.BTree.update(BTree.java:186) 
> ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT]
> at 
> org.apache.cassandra.db.AtomicBTreeColumns.addAllWithSizeDelta(AtomicBTreeColumns.java:225)
>  ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT]
> at org.apache.cassandra.db.Memtable.put(Memtable.java:210) 
> ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT]
> at 
> org.apache.cassandra.db.ColumnFamilyStore.apply(ColumnFamilyStore.java:1225) 
> ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT]
> at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:396) 
> ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT]
> at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:359) 
> ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT]
> at 
> org.apache.cassandra.db.commitlog.CommitLogReplayer$1.runMayThrow(CommitLogReplayer.java:455)
>  ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT]
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_31]
> at 
> org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTa

[jira] [Commented] (CASSANDRA-10365) Consider storing types by their CQL names in schema tables instead of fully-qualified internal class names

2015-10-27 Thread Adam Holmberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14976821#comment-14976821
 ] 

Adam Holmberg commented on CASSANDRA-10365:
---

[~iamaleksey] Curious if you have given any thought to storing 
{{system_schema.aggregates.initcond}} as a text CQL literal. This is the only 
thing I have come across in my integration thus far that requires the client to 
parse types in order to reproduce DDL.

> Consider storing types by their CQL names in schema tables instead of 
> fully-qualified internal class names
> --
>
> Key: CASSANDRA-10365
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10365
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Aleksey Yeschenko
>Assignee: Aleksey Yeschenko
>  Labels: client-impacting
> Fix For: 3.0.0
>
>
> Consider saving CQL types names for column, UDF/UDA arguments and return 
> types, and UDT components.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10579) IndexOutOfBoundsException during memtable flushing at startup (with offheap_objects)

2015-10-27 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14976810#comment-14976810
 ] 

Benedict commented on CASSANDRA-10579:
--

I based the patch on 2.1.10, so you should be good to roll out the {{-fix}} 
patch across your cluster, as it has no other changes. If you'd prefer to wait 
until formal review occurs, it will be exactly the same as deploying 2.1.11 
(when it materializes). However the patch will only be made officially 
available in 2.1.11, as we do not re-release.

Taking another look, it seems that without assertions the possibility for 
undefined behaviour (potentially resulting in corruption) is quite high, as 
negative lengths can occur in the data we serialize (at minimum).

I've pushed an update that includes unit tests to cover this issue.
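
As a tiny standalone illustration (not the actual {{AbstractNativeCell}} code) of 
how such a length can silently go negative when assertions are disabled:

{code:java}
public class NegativeLengthDemo
{
    public static void main(String[] args)
    {
        int base = Integer.MAX_VALUE - 10;
        int delta = 100;
        int length = base + delta;      // int overflow: wraps to a large negative value
        assert length >= 0 : length;    // only catches it when -ea is enabled
        System.out.println(length);     // e.g. -2147483559; using this as a length
                                        // corrupts whatever is serialized with it
    }
}
{code}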

> IndexOutOfBoundsException during memtable flushing at startup (with 
> offheap_objects)
> 
>
> Key: CASSANDRA-10579
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10579
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: 2.1.10 on linux
>Reporter: Jeff Griffith
>Assignee: Benedict
> Fix For: 2.1.x
>
>
> Sometimes we have problems at startup where memtable flushes with an index 
> out of bounds exception as seen below. Cassandra is then dead in the water 
> until we track down the corresponding commit log via the segment ID and 
> remove it:
> {code}
> INFO  [main] 2015-10-23 14:43:36,440 CommitLogReplayer.java:267 - Replaying 
> /home/y/var/cassandra/commitlog/CommitLog-4-1445474832692.log
> INFO  [main] 2015-10-23 14:43:36,440 CommitLogReplayer.java:270 - Replaying 
> /home/y/var/cassandra/commitlog/CommitLog-4-1445474832692.log (CL version 4, 
> messaging version 8)
> INFO  [main] 2015-10-23 14:43:36,594 CommitLogReplayer.java:478 - Finished 
> reading /home/y/var/cassandra/commitlog/CommitLog-4-1445474832692.log
> INFO  [main] 2015-10-23 14:43:36,594 CommitLogReplayer.java:267 - Replaying 
> /home/y/var/cassandra/commitlog/CommitLog-4-1445474832693.log
> INFO  [main] 2015-10-23 14:43:36,595 CommitLogReplayer.java:270 - Replaying 
> /home/y/var/cassandra/commitlog/CommitLog-4-1445474832693.log (CL version 4, 
> messaging version 8)
> INFO  [main] 2015-10-23 14:43:36,699 CommitLogReplayer.java:478 - Finished 
> reading /home/y/var/cassandra/commitlog/CommitLog-4-1445474832693.log
> INFO  [main] 2015-10-23 14:43:36,699 CommitLogReplayer.java:267 - Replaying 
> /home/y/var/cassandra/commitlog/CommitLog-4-1445474832694.log
> INFO  [main] 2015-10-23 14:43:36,699 CommitLogReplayer.java:270 - Replaying 
> /home/y/var/cassandra/commitlog/CommitLog-4-1445474832694.log (CL version 4, 
> messaging version 8)
> WARN  [SharedPool-Worker-5] 2015-10-23 14:43:36,747 
> AbstractTracingAwareExecutorService.java:169 - Uncaught exception on thread 
> Thread[SharedPool-Worker-5,5,main]: {}
> java.lang.ArrayIndexOutOfBoundsException: 6
> at 
> org.apache.cassandra.db.AbstractNativeCell.nametype(AbstractNativeCell.java:204)
>  ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT]
> at 
> org.apache.cassandra.db.AbstractNativeCell.isStatic(AbstractNativeCell.java:199)
>  ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT]
> at 
> org.apache.cassandra.db.composites.AbstractCType.compare(AbstractCType.java:166)
>  ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT]
> at 
> org.apache.cassandra.db.composites.AbstractCellNameType$1.compare(AbstractCellNameType.java:61)
>  ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT]
> at 
> org.apache.cassandra.db.composites.AbstractCellNameType$1.compare(AbstractCellNameType.java:58)
>  ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT]
> at org.apache.cassandra.utils.btree.BTree.find(BTree.java:277) 
> ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT]
> at 
> org.apache.cassandra.utils.btree.NodeBuilder.update(NodeBuilder.java:154) 
> ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT]
> at org.apache.cassandra.utils.btree.Builder.update(Builder.java:74) 
> ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT]
> at org.apache.cassandra.utils.btree.BTree.update(BTree.java:186) 
> ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT]
> at 
> org.apache.cassandra.db.AtomicBTreeColumns.addAllWithSizeDelta(AtomicBTreeColumns.java:225)
>  ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT]
> at org.apache.cassandra.db.Memtable.put(Memtable.java:210) 
> ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT]
> at 
> org.apache.cassandra.db.ColumnFamilyStore.apply(ColumnFamilyStore.java:1225) 
> ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT]
> at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:396) 
> ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT]
>   

[jira] [Commented] (CASSANDRA-10452) Fix resummable_bootstrap_test and bootstrap_with_reset_bootstrap_state_test

2015-10-27 Thread Andrew Hust (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14976765#comment-14976765
 ] 

Andrew Hust commented on CASSANDRA-10452:
-

Confirmed the tests are no longer flapping and pass successfully when run 
without asserts enabled.  Closing.

> Fix resummable_bootstrap_test and bootstrap_with_reset_bootstrap_state_test
> ---
>
> Key: CASSANDRA-10452
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10452
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Jim Witschey
>Assignee: Yuki Morishita
> Fix For: 3.0.0
>
>
> {{bootstrap_test.py:TestBootstrap.resumable_bootstrap_test}} has been 
> flapping on CassCI lately:
> http://cassci.datastax.com/view/trunk/job/cassandra-3.0_dtest/lastCompletedBuild/testReport/bootstrap_test/TestBootstrap/resumable_bootstrap_test/history/
> I have not been able to reproduce on OpenStack. I'm assigning [~yukim] for 
> now, but feel free to reassign.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10603) Fix CQL syntax errors in upgrade_through_versions_test

2015-10-27 Thread Jim Witschey (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14976738#comment-14976738
 ] 

Jim Witschey commented on CASSANDRA-10603:
--

I've made this a subtask of CASSANDRA-10166, since it involves upgrades to 3.0.

> Fix CQL syntax errors in upgrade_through_versions_test
> --
>
> Key: CASSANDRA-10603
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10603
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Jim Witschey
>Assignee: Russ Hatch
>
> In the {{cassandra_upgrade_2.1_to_3.0_proto_v3}} upgrade tests on CassCI, 
> some of the tests are failing with the following error:
> {code}
> 
> {code}
> The tests that fail this way [(at least as of this 
> run)|http://cassci.datastax.com/view/Upgrades/job/cassandra_upgrade_2.1_to_3.0_proto_v3/10/testReport/]
>  are the following:
> {code}
> upgrade_through_versions_test.TestUpgrade_from_2_1_latest_tag_to_2_1_HEAD.parallel_upgrade_with_internode_ssl_test
> upgrade_through_versions_test.TestUpgrade_from_2_1_latest_tag_to_2_1_HEAD.parallel_upgrade_test
> upgrade_through_versions_test.TestUpgrade_from_2_1_latest_tag_to_2_1_HEAD.bootstrap_multidc_test
> upgrade_through_versions_test.TestUpgrade_from_2_1_latest_tag_to_2_1_HEAD.bootstrap_test
> upgrade_through_versions_test.TestUpgrade_from_2_1_latest_tag_to_cassandra_3_0_HEAD.bootstrap_multidc_test
> upgrade_through_versions_test.TestUpgrade_from_2_1_latest_tag_to_cassandra_3_0_HEAD.bootstrap_test
> upgrade_through_versions_test.TestUpgrade_from_2_1_latest_tag_to_cassandra_3_0_HEAD.parallel_upgrade_test
> upgrade_through_versions_test.TestUpgrade_from_2_1_latest_tag_to_cassandra_3_0_HEAD.parallel_upgrade_with_internode_ssl_test
> {code}
> There may be other tests in other protocol upgrade jobs that fail this way, 
> but I haven't dug through yet to see.
> Assigning to [~rhatch] since, afaik, you're the most likely person to 
> understand the problem. Feel free to reassign, of course.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10603) Fix CQL syntax errors in upgrade_through_versions_test

2015-10-27 Thread Jim Witschey (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Witschey updated CASSANDRA-10603:
-
Issue Type: Sub-task  (was: Bug)
Parent: CASSANDRA-10166

> Fix CQL syntax errors in upgrade_through_versions_test
> --
>
> Key: CASSANDRA-10603
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10603
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Jim Witschey
>Assignee: Russ Hatch
>
> In the {{cassandra_upgrade_2.1_to_3.0_proto_v3}} upgrade tests on CassCI, 
> some of the tests are failing with the following error:
> {code}
> 
> {code}
> The tests that fail this way [(at least as of this 
> run)|http://cassci.datastax.com/view/Upgrades/job/cassandra_upgrade_2.1_to_3.0_proto_v3/10/testReport/]
>  are the following:
> {code}
> upgrade_through_versions_test.TestUpgrade_from_2_1_latest_tag_to_2_1_HEAD.parallel_upgrade_with_internode_ssl_test
> upgrade_through_versions_test.TestUpgrade_from_2_1_latest_tag_to_2_1_HEAD.parallel_upgrade_test
> upgrade_through_versions_test.TestUpgrade_from_2_1_latest_tag_to_2_1_HEAD.bootstrap_multidc_test
> upgrade_through_versions_test.TestUpgrade_from_2_1_latest_tag_to_2_1_HEAD.bootstrap_test
> upgrade_through_versions_test.TestUpgrade_from_2_1_latest_tag_to_cassandra_3_0_HEAD.bootstrap_multidc_test
> upgrade_through_versions_test.TestUpgrade_from_2_1_latest_tag_to_cassandra_3_0_HEAD.bootstrap_test
> upgrade_through_versions_test.TestUpgrade_from_2_1_latest_tag_to_cassandra_3_0_HEAD.parallel_upgrade_test
> upgrade_through_versions_test.TestUpgrade_from_2_1_latest_tag_to_cassandra_3_0_HEAD.parallel_upgrade_with_internode_ssl_test
> {code}
> There may be other tests in other protocol upgrade jobs that fail this way, 
> but I haven't dug through yet to see.
> Assigning to [~rhatch] since, afaik, you're the most likely person to 
> understand the problem. Feel free to reassign, of course.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10554) Batch that updates two or more table can produce unreadable SSTable (was: Auto Bootstraping a new node fails)

2015-10-27 Thread Alan Boudreault (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14976730#comment-14976730
 ] 

Alan Boudreault commented on CASSANDRA-10554:
-

I confirm that my bootstrap issue is now resolved. Thanks!

> Batch that updates two or more table can produce unreadable SSTable (was: 
> Auto Bootstraping a new node fails)
> -
>
> Key: CASSANDRA-10554
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10554
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Alan Boudreault
>Assignee: Sylvain Lebresne
>Priority: Blocker
> Fix For: 3.0.0
>
> Attachments: 0001-Add-debug.txt, 10554.cql, debug.log, system.log, 
> test.sh
>
>
> I've been trying to add a new node in my 3.0 cluster and it seems to fail. 
> All my nodes are using apache/cassandra-3.0.0 branch. At the beginning, I can 
> see the following error:
> {code}
> INFO  18:45:55 [Stream #9f95fa90-7691-11e5-931f-5b735851f84a ID#0] Prepare 
> completed. Receiving 42 files(1910066622 bytes), sending 0 files(0 bytes)
> WARN  18:45:55 [Stream #9f95fa90-7691-11e5-931f-5b735851f84a] Retrying for 
> following error
> java.lang.RuntimeException: Unknown column added_time during deserialization
> at 
> org.apache.cassandra.db.SerializationHeader$Component.toHeader(SerializationHeader.java:331)
>  ~[main/:na]
> at 
> org.apache.cassandra.streaming.StreamReader.createWriter(StreamReader.java:136)
>  ~[main/:na]
> at 
> org.apache.cassandra.streaming.compress.CompressedStreamReader.read(CompressedStreamReader.java:77)
>  ~[main/:na]
> at 
> org.apache.cassandra.streaming.messages.IncomingFileMessage$1.deserialize(IncomingFileMessage.java:50)
>  [main/:na]
> at 
> org.apache.cassandra.streaming.messages.IncomingFileMessage$1.deserialize(IncomingFileMessage.java:39)
>  [main/:na]
> at 
> org.apache.cassandra.streaming.messages.StreamMessage.deserialize(StreamMessage.java:59)
>  [main/:na]
> at 
> org.apache.cassandra.streaming.ConnectionHandler$IncomingMessageHandler.run(ConnectionHandler.java:261)
>  [main/:na]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_45]
> ERROR 18:45:55 [Stream #9f95fa90-7691-11e5-931f-5b735851f84a] Streaming error 
> occurred
> java.lang.IllegalArgumentException: Unknown type 0
> at 
> org.apache.cassandra.streaming.messages.StreamMessage$Type.get(StreamMessage.java:97)
>  ~[main/:na]
> at 
> org.apache.cassandra.streaming.messages.StreamMessage.deserialize(StreamMessage.java:58)
>  ~[main/:na]
> at 
> org.apache.cassandra.streaming.ConnectionHandler$IncomingMessageHandler.run(ConnectionHandler.java:261)
>  ~[main/:na]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_45]
> INFO  18:45:55 [Stream #9f95fa90-7691-11e5-931f-5b735851f84a] Session with 
> /54.210.187.114 is complete
> INFO  18:45:56 [Stream #9f95fa90-7691-11e5-931f-5b735851f84a ID#0] Prepare 
> completed. Receiving 38 files(2323537628 bytes), sending 0 files(0 bytes)
> WARN  18:45:56 [Stream #9f95fa90-7691-11e5-931f-5b735851f84a] Retrying for 
> following error
> java.lang.RuntimeException: Unknown column added_time during deserialization
> at 
> org.apache.cassandra.db.SerializationHeader$Component.toHeader(SerializationHeader.java:331)
>  ~[main/:na]
> at 
> org.apache.cassandra.streaming.StreamReader.createWriter(StreamReader.java:136)
>  ~[main/:na]
> at 
> org.apache.cassandra.streaming.compress.CompressedStreamReader.read(CompressedStreamReader.java:77)
>  ~[main/:na]
> at 
> org.apache.cassandra.streaming.messages.IncomingFileMessage$1.deserialize(IncomingFileMessage.java:50)
>  [main/:na]
> at 
> org.apache.cassandra.streaming.messages.IncomingFileMessage$1.deserialize(IncomingFileMessage.java:39)
>  [main/:na]
> at 
> org.apache.cassandra.streaming.messages.StreamMessage.deserialize(StreamMessage.java:59)
>  [main/:na]
> at 
> org.apache.cassandra.streaming.ConnectionHandler$IncomingMessageHandler.run(ConnectionHandler.java:261)
>  [main/:na]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_45]
> ERROR 18:45:56 [Stream #9f95fa90-7691-11e5-931f-5b735851f84a] Streaming error 
> occurred
> java.lang.IllegalArgumentException: Unknown type 0
> at 
> org.apache.cassandra.streaming.messages.StreamMessage$Type.get(StreamMessage.java:97)
>  ~[main/:na]
> at 
> org.apache.cassandra.streaming.messages.StreamMessage.deserialize(StreamMessage.java:58)
>  ~[main/:na]
> at 
> org.apache.cassandra.streaming.ConnectionHandler$IncomingMessageHandler.run(ConnectionHandler.java:261)
>  ~[main/:na]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_45]
> I

[jira] [Created] (CASSANDRA-10603) Fix CQL syntax errors in upgrade_through_versions_test

2015-10-27 Thread Jim Witschey (JIRA)
Jim Witschey created CASSANDRA-10603:


 Summary: Fix CQL syntax errors in upgrade_through_versions_test
 Key: CASSANDRA-10603
 URL: https://issues.apache.org/jira/browse/CASSANDRA-10603
 Project: Cassandra
  Issue Type: Bug
Reporter: Jim Witschey
Assignee: Russ Hatch


In the {{cassandra_upgrade_2.1_to_3.0_proto_v3}} upgrade tests on CassCI, some 
of the tests are failing with the following error:

{code}

{code}

The tests that fail this way [(at least as of this 
run)|http://cassci.datastax.com/view/Upgrades/job/cassandra_upgrade_2.1_to_3.0_proto_v3/10/testReport/]
 are the following:

{code}
upgrade_through_versions_test.TestUpgrade_from_2_1_latest_tag_to_2_1_HEAD.parallel_upgrade_with_internode_ssl_test
upgrade_through_versions_test.TestUpgrade_from_2_1_latest_tag_to_2_1_HEAD.parallel_upgrade_test
upgrade_through_versions_test.TestUpgrade_from_2_1_latest_tag_to_2_1_HEAD.bootstrap_multidc_test
upgrade_through_versions_test.TestUpgrade_from_2_1_latest_tag_to_2_1_HEAD.bootstrap_test
upgrade_through_versions_test.TestUpgrade_from_2_1_latest_tag_to_cassandra_3_0_HEAD.bootstrap_multidc_test
upgrade_through_versions_test.TestUpgrade_from_2_1_latest_tag_to_cassandra_3_0_HEAD.bootstrap_test
upgrade_through_versions_test.TestUpgrade_from_2_1_latest_tag_to_cassandra_3_0_HEAD.parallel_upgrade_test
upgrade_through_versions_test.TestUpgrade_from_2_1_latest_tag_to_cassandra_3_0_HEAD.parallel_upgrade_with_internode_ssl_test
{code}

There may be other tests in other protocol upgrade jobs that fail this way, but 
I haven't dug through yet to see.

Assigning to [~rhatch] since, afaik, you're the most likely person to 
understand the problem. Feel free to reassign, of course.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10579) IndexOutOfBoundsException during memtable flushing at startup (with offheap_objects)

2015-10-27 Thread Jeff Griffith (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14976703#comment-14976703
 ] 

Jeff Griffith commented on CASSANDRA-10579:
---

My pleasure, thanks for the patch! We are running on 2.1.10. Is the patch only 
for 2.1.11?

> IndexOutOfBoundsException during memtable flushing at startup (with 
> offheap_objects)
> 
>
> Key: CASSANDRA-10579
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10579
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: 2.1.10 on linux
>Reporter: Jeff Griffith
>Assignee: Benedict
> Fix For: 2.1.x
>
>
> Sometimes we have problems at startup where memtable flushes with an index 
> out of bounds exception as seen below. Cassandra is then dead in the water 
> until we track down the corresponding commit log via the segment ID and 
> remove it:
> {code}
> INFO  [main] 2015-10-23 14:43:36,440 CommitLogReplayer.java:267 - Replaying 
> /home/y/var/cassandra/commitlog/CommitLog-4-1445474832692.log
> INFO  [main] 2015-10-23 14:43:36,440 CommitLogReplayer.java:270 - Replaying 
> /home/y/var/cassandra/commitlog/CommitLog-4-1445474832692.log (CL version 4, 
> messaging version 8)
> INFO  [main] 2015-10-23 14:43:36,594 CommitLogReplayer.java:478 - Finished 
> reading /home/y/var/cassandra/commitlog/CommitLog-4-1445474832692.log
> INFO  [main] 2015-10-23 14:43:36,594 CommitLogReplayer.java:267 - Replaying 
> /home/y/var/cassandra/commitlog/CommitLog-4-1445474832693.log
> INFO  [main] 2015-10-23 14:43:36,595 CommitLogReplayer.java:270 - Replaying 
> /home/y/var/cassandra/commitlog/CommitLog-4-1445474832693.log (CL version 4, 
> messaging version 8)
> INFO  [main] 2015-10-23 14:43:36,699 CommitLogReplayer.java:478 - Finished 
> reading /home/y/var/cassandra/commitlog/CommitLog-4-1445474832693.log
> INFO  [main] 2015-10-23 14:43:36,699 CommitLogReplayer.java:267 - Replaying 
> /home/y/var/cassandra/commitlog/CommitLog-4-1445474832694.log
> INFO  [main] 2015-10-23 14:43:36,699 CommitLogReplayer.java:270 - Replaying 
> /home/y/var/cassandra/commitlog/CommitLog-4-1445474832694.log (CL version 4, 
> messaging version 8)
> WARN  [SharedPool-Worker-5] 2015-10-23 14:43:36,747 
> AbstractTracingAwareExecutorService.java:169 - Uncaught exception on thread 
> Thread[SharedPool-Worker-5,5,main]: {}
> java.lang.ArrayIndexOutOfBoundsException: 6
> at 
> org.apache.cassandra.db.AbstractNativeCell.nametype(AbstractNativeCell.java:204)
>  ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT]
> at 
> org.apache.cassandra.db.AbstractNativeCell.isStatic(AbstractNativeCell.java:199)
>  ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT]
> at 
> org.apache.cassandra.db.composites.AbstractCType.compare(AbstractCType.java:166)
>  ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT]
> at 
> org.apache.cassandra.db.composites.AbstractCellNameType$1.compare(AbstractCellNameType.java:61)
>  ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT]
> at 
> org.apache.cassandra.db.composites.AbstractCellNameType$1.compare(AbstractCellNameType.java:58)
>  ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT]
> at org.apache.cassandra.utils.btree.BTree.find(BTree.java:277) 
> ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT]
> at 
> org.apache.cassandra.utils.btree.NodeBuilder.update(NodeBuilder.java:154) 
> ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT]
> at org.apache.cassandra.utils.btree.Builder.update(Builder.java:74) 
> ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT]
> at org.apache.cassandra.utils.btree.BTree.update(BTree.java:186) 
> ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT]
> at 
> org.apache.cassandra.db.AtomicBTreeColumns.addAllWithSizeDelta(AtomicBTreeColumns.java:225)
>  ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT]
> at org.apache.cassandra.db.Memtable.put(Memtable.java:210) 
> ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT]
> at 
> org.apache.cassandra.db.ColumnFamilyStore.apply(ColumnFamilyStore.java:1225) 
> ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT]
> at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:396) 
> ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT]
> at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:359) 
> ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT]
> at 
> org.apache.cassandra.db.commitlog.CommitLogReplayer$1.runMayThrow(CommitLogReplayer.java:455)
>  ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT]
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_31]
> at 
> o

[jira] [Commented] (CASSANDRA-10579) IndexOutOfBoundsException during memtable flushing at startup (with offheap_objects)

2015-10-27 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14976700#comment-14976700
 ] 

Benedict commented on CASSANDRA-10579:
--

Ok. Please keep an eye out, and report if it reoccurs. Thanks for your help 
tracking down this issue.

> IndexOutOfBoundsException during memtable flushing at startup (with 
> offheap_objects)
> 
>
> Key: CASSANDRA-10579
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10579
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: 2.1.10 on linux
>Reporter: Jeff Griffith
>Assignee: Benedict
> Fix For: 2.1.x
>
>
> Sometimes we have problems at startup where memtable flushes with an index 
> out of bounds exception as seen below. Cassandra is then dead in the water 
> until we track down the corresponding commit log via the segment ID and 
> remove it:
> {code}
> INFO  [main] 2015-10-23 14:43:36,440 CommitLogReplayer.java:267 - Replaying 
> /home/y/var/cassandra/commitlog/CommitLog-4-1445474832692.log
> INFO  [main] 2015-10-23 14:43:36,440 CommitLogReplayer.java:270 - Replaying 
> /home/y/var/cassandra/commitlog/CommitLog-4-1445474832692.log (CL version 4, 
> messaging version 8)
> INFO  [main] 2015-10-23 14:43:36,594 CommitLogReplayer.java:478 - Finished 
> reading /home/y/var/cassandra/commitlog/CommitLog-4-1445474832692.log
> INFO  [main] 2015-10-23 14:43:36,594 CommitLogReplayer.java:267 - Replaying 
> /home/y/var/cassandra/commitlog/CommitLog-4-1445474832693.log
> INFO  [main] 2015-10-23 14:43:36,595 CommitLogReplayer.java:270 - Replaying 
> /home/y/var/cassandra/commitlog/CommitLog-4-1445474832693.log (CL version 4, 
> messaging version 8)
> INFO  [main] 2015-10-23 14:43:36,699 CommitLogReplayer.java:478 - Finished 
> reading /home/y/var/cassandra/commitlog/CommitLog-4-1445474832693.log
> INFO  [main] 2015-10-23 14:43:36,699 CommitLogReplayer.java:267 - Replaying 
> /home/y/var/cassandra/commitlog/CommitLog-4-1445474832694.log
> INFO  [main] 2015-10-23 14:43:36,699 CommitLogReplayer.java:270 - Replaying 
> /home/y/var/cassandra/commitlog/CommitLog-4-1445474832694.log (CL version 4, 
> messaging version 8)
> WARN  [SharedPool-Worker-5] 2015-10-23 14:43:36,747 
> AbstractTracingAwareExecutorService.java:169 - Uncaught exception on thread 
> Thread[SharedPool-Worker-5,5,main]: {}
> java.lang.ArrayIndexOutOfBoundsException: 6
> at 
> org.apache.cassandra.db.AbstractNativeCell.nametype(AbstractNativeCell.java:204)
>  ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT]
> at 
> org.apache.cassandra.db.AbstractNativeCell.isStatic(AbstractNativeCell.java:199)
>  ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT]
> at 
> org.apache.cassandra.db.composites.AbstractCType.compare(AbstractCType.java:166)
>  ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT]
> at 
> org.apache.cassandra.db.composites.AbstractCellNameType$1.compare(AbstractCellNameType.java:61)
>  ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT]
> at 
> org.apache.cassandra.db.composites.AbstractCellNameType$1.compare(AbstractCellNameType.java:58)
>  ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT]
> at org.apache.cassandra.utils.btree.BTree.find(BTree.java:277) 
> ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT]
> at 
> org.apache.cassandra.utils.btree.NodeBuilder.update(NodeBuilder.java:154) 
> ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT]
> at org.apache.cassandra.utils.btree.Builder.update(Builder.java:74) 
> ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT]
> at org.apache.cassandra.utils.btree.BTree.update(BTree.java:186) 
> ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT]
> at 
> org.apache.cassandra.db.AtomicBTreeColumns.addAllWithSizeDelta(AtomicBTreeColumns.java:225)
>  ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT]
> at org.apache.cassandra.db.Memtable.put(Memtable.java:210) 
> ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT]
> at 
> org.apache.cassandra.db.ColumnFamilyStore.apply(ColumnFamilyStore.java:1225) 
> ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT]
> at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:396) 
> ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT]
> at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:359) 
> ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT]
> at 
> org.apache.cassandra.db.commitlog.CommitLogReplayer$1.runMayThrow(CommitLogReplayer.java:455)
>  ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT]
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_31]
> at 
> 

[jira] [Commented] (CASSANDRA-8068) Allow to create authenticator which is aware of the client connection

2015-10-27 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14976684#comment-14976684
 ] 

Aleksey Yeschenko commented on CASSANDRA-8068:
--

bq. QueryState & ClientState both also provide access to a bunch of things that 
an IAuthenticator probably has no business with: the tracing-related methods, 
login, authz functions etc.

This.

> Allow to create authenticator which is aware of the client connection
> -
>
> Key: CASSANDRA-8068
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8068
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Core
>Reporter: Jacek Lewandowski
>Assignee: Sam Tunnicliffe
>Priority: Minor
>  Labels: security
> Fix For: 3.0.0
>
>
> Currently, the authenticator interface doesn't allow making a decision based 
> on the client connection properties (especially the client host name or 
> address). 
> The idea is to add an interface that extends the current SASL-aware 
> authenticator interface with an additional method to set the client 
> connection. ServerConnection could then supply the connection to the 
> authenticator if the authenticator implements that interface. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8068) Allow to create authenticator which is aware of the client connection

2015-10-27 Thread Jeremiah Jordan (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14976683#comment-14976683
 ] 

Jeremiah Jordan commented on CASSANDRA-8068:


True, the full ClientState or QueryState is a lot.  The InetSocketAddress from 
the ClientState.remoteAddress is enough for the use cases I am thinking of.
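
A minimal sketch of the narrower contract being discussed, with purely 
hypothetical names (this is not the actual {{IAuthenticator}} API): expose only 
the client's socket address to the authenticator rather than the whole 
{{ClientState}}/{{QueryState}}.

{code}
import java.net.InetSocketAddress;

public class ClientAddressAuthDemo {
    // Hypothetical extension point: only the remote address is handed over.
    interface ClientAddressAwareAuthenticator {
        void setRemoteAddress(InetSocketAddress clientAddress);
    }

    static class LoopbackOnlyAuthenticator implements ClientAddressAwareAuthenticator {
        private InetSocketAddress remote;

        @Override
        public void setRemoteAddress(InetSocketAddress clientAddress) {
            this.remote = clientAddress;
        }

        boolean permitted() {
            // Example policy: only clients connecting over loopback may authenticate.
            return remote != null
                && remote.getAddress() != null
                && remote.getAddress().isLoopbackAddress();
        }
    }

    public static void main(String[] args) {
        LoopbackOnlyAuthenticator auth = new LoopbackOnlyAuthenticator();
        auth.setRemoteAddress(new InetSocketAddress("127.0.0.1", 9042));
        System.out.println("loopback client permitted: " + auth.permitted());
    }
}
{code}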

> Allow to create authenticator which is aware of the client connection
> -
>
> Key: CASSANDRA-8068
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8068
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Core
>Reporter: Jacek Lewandowski
>Assignee: Sam Tunnicliffe
>Priority: Minor
>  Labels: security
> Fix For: 3.0.0
>
>
> Currently, the authenticator interface doesn't allow making a decision based 
> on the client connection properties (especially the client host name or 
> address). 
> The idea is to add an interface that extends the current SASL-aware 
> authenticator interface with an additional method to set the client 
> connection. ServerConnection could then supply the connection to the 
> authenticator if the authenticator implements that interface. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8068) Allow to create authenticator which is aware of the client connection

2015-10-27 Thread Sam Tunnicliffe (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14976655#comment-14976655
 ] 

Sam Tunnicliffe commented on CASSANDRA-8068:


Would the {{InetAddress}} of the client attempting authentication be sufficient 
for the negotiator? QueryState & ClientState both also provide access to a 
bunch of things that an IAuthenticator probably has no business with: the 
tracing-related methods, login, authz functions etc.

> Allow to create authenticator which is aware of the client connection
> -
>
> Key: CASSANDRA-8068
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8068
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Core
>Reporter: Jacek Lewandowski
>Assignee: Sam Tunnicliffe
>Priority: Minor
>  Labels: security
> Fix For: 3.0.0
>
>
> Currently, the authenticator interface doesn't allow making a decision based 
> on the client connection properties (especially the client host name or 
> address). 
> The idea is to add an interface that extends the current SASL-aware 
> authenticator interface with an additional method to set the client 
> connection. ServerConnection could then supply the connection to the 
> authenticator if the authenticator implements that interface. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10595) Don't initialize un-registered indexes

2015-10-27 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-10595:

Reviewer: Sergio Bossa

> Don't initialize un-registered indexes
> --
>
> Key: CASSANDRA-10595
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10595
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Sam Tunnicliffe
>Assignee: Sam Tunnicliffe
>Priority: Minor
> Fix For: 3.0.0
>
>
> If a secondary index implementation chooses not to register with 
> {{SecondaryIndexManager}} on a particular node, it won't be required to 
> provide either {{Indexer}} or {{Searcher}} instances. In this case, 
> initialization is unnecessary so we should avoid doing it.
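
As a toy sketch of the guard being proposed (hypothetical names only, not the 
actual {{SecondaryIndexManager}} code): skip initialization work entirely for 
indexes that never registered on this node.

{code}
import java.util.HashSet;
import java.util.Set;

public class UnregisteredIndexDemo {
    static void maybeInitialize(Set<String> registered, String indexName, Runnable initTask) {
        if (!registered.contains(indexName)) {
            // Un-registered on this node: no Indexer/Searcher will be needed, so skip.
            System.out.println(indexName + ": not registered, skipping initialization");
            return;
        }
        initTask.run();
    }

    public static void main(String[] args) {
        Set<String> registered = new HashSet<>();
        registered.add("idx_registered");
        maybeInitialize(registered, "idx_registered",
                        () -> System.out.println("idx_registered: initialized"));
        maybeInitialize(registered, "idx_unregistered",
                        () -> System.out.println("should never run"));
    }
}
{code}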



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10579) IndexOutOfBoundsException during memtable flushing at startup (with offheap_objects)

2015-10-27 Thread Jeff Griffith (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14976664#comment-14976664
 ] 

Jeff Griffith commented on CASSANDRA-10579:
---

Yes, we are also seeing sstable corruption, which we scrub. Not 100% certain it 
results from this index-out-of-bounds problem, though.

> IndexOutOfBoundsException during memtable flushing at startup (with 
> offheap_objects)
> 
>
> Key: CASSANDRA-10579
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10579
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: 2.1.10 on linux
>Reporter: Jeff Griffith
>Assignee: Benedict
> Fix For: 2.1.x
>
>
> Sometimes we have problems at startup where memtable flushes with an index 
> out of bounds exception as seen below. Cassandra is then dead in the water 
> until we track down the corresponding commit log via the segment ID and 
> remove it:
> {code}
> INFO  [main] 2015-10-23 14:43:36,440 CommitLogReplayer.java:267 - Replaying 
> /home/y/var/cassandra/commitlog/CommitLog-4-1445474832692.log
> INFO  [main] 2015-10-23 14:43:36,440 CommitLogReplayer.java:270 - Replaying 
> /home/y/var/cassandra/commitlog/CommitLog-4-1445474832692.log (CL version 4, 
> messaging version 8)
> INFO  [main] 2015-10-23 14:43:36,594 CommitLogReplayer.java:478 - Finished 
> reading /home/y/var/cassandra/commitlog/CommitLog-4-1445474832692.log
> INFO  [main] 2015-10-23 14:43:36,594 CommitLogReplayer.java:267 - Replaying 
> /home/y/var/cassandra/commitlog/CommitLog-4-1445474832693.log
> INFO  [main] 2015-10-23 14:43:36,595 CommitLogReplayer.java:270 - Replaying 
> /home/y/var/cassandra/commitlog/CommitLog-4-1445474832693.log (CL version 4, 
> messaging version 8)
> INFO  [main] 2015-10-23 14:43:36,699 CommitLogReplayer.java:478 - Finished 
> reading /home/y/var/cassandra/commitlog/CommitLog-4-1445474832693.log
> INFO  [main] 2015-10-23 14:43:36,699 CommitLogReplayer.java:267 - Replaying 
> /home/y/var/cassandra/commitlog/CommitLog-4-1445474832694.log
> INFO  [main] 2015-10-23 14:43:36,699 CommitLogReplayer.java:270 - Replaying 
> /home/y/var/cassandra/commitlog/CommitLog-4-1445474832694.log (CL version 4, 
> messaging version 8)
> WARN  [SharedPool-Worker-5] 2015-10-23 14:43:36,747 
> AbstractTracingAwareExecutorService.java:169 - Uncaught exception on thread 
> Thread[SharedPool-Worker-5,5,main]: {}
> java.lang.ArrayIndexOutOfBoundsException: 6
> at 
> org.apache.cassandra.db.AbstractNativeCell.nametype(AbstractNativeCell.java:204)
>  ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT]
> at 
> org.apache.cassandra.db.AbstractNativeCell.isStatic(AbstractNativeCell.java:199)
>  ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT]
> at 
> org.apache.cassandra.db.composites.AbstractCType.compare(AbstractCType.java:166)
>  ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT]
> at 
> org.apache.cassandra.db.composites.AbstractCellNameType$1.compare(AbstractCellNameType.java:61)
>  ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT]
> at 
> org.apache.cassandra.db.composites.AbstractCellNameType$1.compare(AbstractCellNameType.java:58)
>  ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT]
> at org.apache.cassandra.utils.btree.BTree.find(BTree.java:277) 
> ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT]
> at 
> org.apache.cassandra.utils.btree.NodeBuilder.update(NodeBuilder.java:154) 
> ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT]
> at org.apache.cassandra.utils.btree.Builder.update(Builder.java:74) 
> ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT]
> at org.apache.cassandra.utils.btree.BTree.update(BTree.java:186) 
> ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT]
> at 
> org.apache.cassandra.db.AtomicBTreeColumns.addAllWithSizeDelta(AtomicBTreeColumns.java:225)
>  ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT]
> at org.apache.cassandra.db.Memtable.put(Memtable.java:210) 
> ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT]
> at 
> org.apache.cassandra.db.ColumnFamilyStore.apply(ColumnFamilyStore.java:1225) 
> ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT]
> at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:396) 
> ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT]
> at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:359) 
> ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT]
> at 
> org.apache.cassandra.db.commitlog.CommitLogReplayer$1.runMayThrow(CommitLogReplayer.java:455)
>  ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT]
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.jav

[jira] [Updated] (CASSANDRA-10600) CqlInputFormat throws IOE if the size estimates are zero

2015-10-27 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10600?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-10600:

Priority: Minor  (was: Major)

> CqlInputFormat throws IOE if the size estimates are zero
> 
>
> Key: CASSANDRA-10600
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10600
> Project: Cassandra
>  Issue Type: Bug
>  Components: Hadoop
>Reporter: Mike Adamson
>Assignee: Mike Adamson
>Priority: Minor
> Fix For: 3.x, 3.0.x
>
> Attachments: 10600.txt
>
>
> {{CqlInputFormat.describeSplits}} handles the case of no entry in the 
> 'system.size_estimates' table but does not handle the situation where there is 
> a zero size estimate in the table. This can happen if an input job is started 
> immediately after data is added but before the {{SizeEstimatesRecorder}} has 
> run.
> {{CqlInputFormat.describeSplits}} should handle a zero size estimate in the 
> same manner as no size estimate: do not sub-split, but return the full split. 
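
A hedged sketch of what that fallback could look like (hypothetical helper, not 
the actual {{CqlInputFormat.describeSplits}} code): a zero estimate is treated 
exactly like a missing one, so the full split is returned instead of 
sub-splitting.

{code}
public class ZeroEstimateSplitDemo {
    static long subSplitCount(long partitionCount, long meanPartitionSize, long targetSplitBytes) {
        if (partitionCount <= 0 || meanPartitionSize <= 0) {
            // No usable estimate yet (e.g. SizeEstimatesRecorder hasn't run): keep the full split.
            return 1;
        }
        long estimatedBytes = partitionCount * meanPartitionSize;
        return Math.max(1, estimatedBytes / Math.max(1, targetSplitBytes));
    }

    public static void main(String[] args) {
        System.out.println(subSplitCount(0, 0, 64L * 1024 * 1024));            // 1: full split
        System.out.println(subSplitCount(1_000_000, 1024, 64L * 1024 * 1024)); // 15 sub-splits
    }
}
{code}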



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10600) CqlInputFormat throws IOE if the size estimates are zero

2015-10-27 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10600?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-10600:

Fix Version/s: (was: 3.0.0)
   3.0.x
   3.x

> CqlInputFormat throws IOE if the size estimates are zero
> 
>
> Key: CASSANDRA-10600
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10600
> Project: Cassandra
>  Issue Type: Bug
>  Components: Hadoop
>Reporter: Mike Adamson
>Assignee: Mike Adamson
>Priority: Minor
> Fix For: 3.x, 3.0.x
>
> Attachments: 10600.txt
>
>
> {{CqlInputFormat.describeSplits}} handles the case of no entry in the 
> 'system.size_estimates' table but does not handle the situation where there is 
> a zero size estimate in the table. This can happen if an input job is started 
> immediately after data is added but before the {{SizeEstimatesRecorder}} has 
> run.
> {{CqlInputFormat.describeSplits}} should handle a zero size estimate in the 
> same manner as no size estimate: do not sub-split, but return the full split. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10592) IllegalArgumentException in DataOutputBuffer.reallocate

2015-10-27 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14976650#comment-14976650
 ] 

Ariel Weisberg commented on CASSANDRA-10592:


|[Code|https://github.com/apache/cassandra/compare/cassandra-2.2...aweisberg:CASSANDRA-10592?expand=1]|[utests|http://cassci.datastax.com/view/Dev/view/aweisberg/job/aweisberg-CASSANDRA-10592-testall/]|[dtests|http://cassci.datastax.com/view/Dev/view/aweisberg/job/aweisberg-CASSANDRA-10592-dtest/]|
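
For context on the stack trace below: one plausible way {{ByteBuffer.allocate}} 
ends up throwing {{IllegalArgumentException}} from a reallocation path is an int 
overflow when a grow-by-doubling computation wraps negative. The snippet is a 
generic Java illustration of that failure mode, not the actual 
{{DataOutputBuffer.reallocate}} code, and the overflow cause here is an 
assumption.

{code}
import java.nio.ByteBuffer;

public class ReallocateOverflowDemo {
    public static void main(String[] args) {
        int capacity = 1 << 30;      // 1 GiB
        int doubled = capacity * 2;  // overflows int and wraps to a negative value
        System.out.println("doubled capacity = " + doubled);
        try {
            ByteBuffer.allocate(doubled); // IllegalArgumentException: negative capacity
        } catch (IllegalArgumentException e) {
            System.out.println("allocate rejected it: " + e);
        }
    }
}
{code}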

> IllegalArgumentException in DataOutputBuffer.reallocate
> ---
>
> Key: CASSANDRA-10592
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10592
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sebastian Estevez
>Assignee: Ariel Weisberg
> Fix For: 2.2.4, 3.0.0
>
>
> The following exception appeared in my logs while running a cassandra-stress 
> workload on master. 
> {code}
> WARN  [SharedPool-Worker-1] 2015-10-22 12:58:20,792 
> AbstractTracingAwareExecutorService.java:169 - Uncaught exception on thread 
> Thread[SharedPool-Worker-1,5,main]: {}
> java.lang.RuntimeException: java.lang.IllegalArgumentException
>   at 
> org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2366)
>  ~[main/:na]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_60]
>   at 
> org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164)
>  ~[main/:na]
>   at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
> [main/:na]
>   at java.lang.Thread.run(Thread.java:745) [na:1.8.0_60]
> Caused by: java.lang.IllegalArgumentException: null
>   at java.nio.ByteBuffer.allocate(ByteBuffer.java:334) ~[na:1.8.0_60]
>   at 
> org.apache.cassandra.io.util.DataOutputBuffer.reallocate(DataOutputBuffer.java:63)
>  ~[main/:na]
>   at 
> org.apache.cassandra.io.util.DataOutputBuffer.doFlush(DataOutputBuffer.java:57)
>  ~[main/:na]
>   at 
> org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.write(BufferedDataOutputStreamPlus.java:132)
>  ~[main/:na]
>   at 
> org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.write(BufferedDataOutputStreamPlus.java:151)
>  ~[main/:na]
>   at 
> org.apache.cassandra.utils.ByteBufferUtil.writeWithVIntLength(ByteBufferUtil.java:296)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.marshal.AbstractType.writeValue(AbstractType.java:374)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.rows.BufferCell$Serializer.serialize(BufferCell.java:263)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.rows.UnfilteredSerializer.serialize(UnfilteredSerializer.java:183)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.rows.UnfilteredSerializer.serialize(UnfilteredSerializer.java:108)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.rows.UnfilteredSerializer.serialize(UnfilteredSerializer.java:96)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:132)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:87)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:77)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$Serializer.serialize(UnfilteredPartitionIterators.java:381)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.ReadResponse$LocalDataResponse.build(ReadResponse.java:136)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.ReadResponse$LocalDataResponse.(ReadResponse.java:128)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.ReadResponse$LocalDataResponse.(ReadResponse.java:123)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.ReadResponse.createDataResponse(ReadResponse.java:65) 
> ~[main/:na]
>   at 
> org.apache.cassandra.db.ReadCommand.createResponse(ReadCommand.java:289) 
> ~[main/:na]
>   at 
> org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1697)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2362)
>  ~[main/:na]
>   ... 4 common frames omitted
> {code}
> I was running this command:
> {code}
> tools/bin/cassandra-stress user 
> profile=~/Desktop/startup/stress/stress.yaml n=10 ops\(insert=1\) -rate 
> threads=30
> {code}
> Here's the stress.yaml UPDATED!
> {code}
> ### DML ### THIS IS UNDER CONSTRUCTION!!!
> # Keyspace Name
> keyspace: autogeneratedtest
> # The CQL for creating a keyspace (optional if it already exists)
> keyspace_definition: |
>   CREATE 

[jira] [Commented] (CASSANDRA-10341) Streaming does not guarantee cache invalidation

2015-10-27 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14976593#comment-14976593
 ] 

Paulo Motta commented on CASSANDRA-10341:
-

It seems some tests did not execute correctly or were quite noisy. Rebased and 
re-submitted the tests for execution.

> Streaming does not guarantee cache invalidation
> ---
>
> Key: CASSANDRA-10341
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10341
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Benedict
>Assignee: Paulo Motta
> Fix For: 2.1.x, 2.2.x, 3.1
>
>
> Looking at the code, we attempt to invalidate the row cache for any rows we 
> receive via streaming; however, we invalidate them immediately, before the new 
> data is available. So, if a row is requested (which is likely if it is "hot") 
> in that interval, it will be re-cached and not invalidated.
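
A toy, single-threaded sketch of that ordering problem (hypothetical cache, not 
Cassandra's row cache): invalidating before the streamed data is visible lets a 
hot read re-populate the cache with the old value, which then survives.

{code}
import java.util.HashMap;
import java.util.Map;

public class InvalidateTooEarlyDemo {
    static final Map<String, String> cache = new HashMap<>();
    static String onDisk = "old-value";

    static String read(String key) {
        return cache.computeIfAbsent(key, k -> onDisk); // hot reads re-cache from "disk"
    }

    public static void main(String[] args) {
        cache.put("row1", "old-value");

        cache.remove("row1");             // 1. invalidate immediately (too early)
        read("row1");                     // 2. a hot read in the interval re-caches "old-value"
        onDisk = "streamed-value";        // 3. the streamed data only now becomes available

        System.out.println(read("row1")); // prints "old-value": the stale entry stuck around
    }
}
{code}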



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-10602) 2 upgrade test failures: static_columns_paging_test and multi_list_set_test

2015-10-27 Thread Sylvain Lebresne (JIRA)
Sylvain Lebresne created CASSANDRA-10602:


 Summary: 2 upgrade test failures: static_columns_paging_test and 
multi_list_set_test
 Key: CASSANDRA-10602
 URL: https://issues.apache.org/jira/browse/CASSANDRA-10602
 Project: Cassandra
  Issue Type: Sub-task
Reporter: Sylvain Lebresne
Assignee: Sylvain Lebresne
 Fix For: 3.0.0


The following two tests throw an NPE:
* 
http://cassci.datastax.com/job/cassandra-3.0_dtest/293/testReport/junit/upgrade_tests.paging_test/TestPagingDataNodes2RF1/static_columns_paging_test/
* 
http://cassci.datastax.com/job/cassandra-3.0_dtest/293/testReport/junit/upgrade_tests.cql_tests/TestCQLNodes3RF3/multi_list_set_test/




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (CASSANDRA-9912) SizeEstimatesRecorder has assertions after decommission sometimes

2015-10-27 Thread Paulo Motta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Motta reassigned CASSANDRA-9912:
--

Assignee: Paulo Motta

> SizeEstimatesRecorder has assertions after decommission sometimes
> -
>
> Key: CASSANDRA-9912
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9912
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Jeremiah Jordan
>Assignee: Paulo Motta
> Fix For: 2.1.12
>
>
> Doing some testing with 2.1.8, adding and decommissioning nodes.  Sometimes 
> after decommissioning, the following starts being thrown by the 
> SizeEstimatesRecorder.
> {noformat}
> java.lang.AssertionError: -9223372036854775808 not found in 
> -9223372036854775798, 10
> at 
> org.apache.cassandra.locator.TokenMetadata.getPredecessor(TokenMetadata.java:683)
>  ~[cassandra-all-2.1.8.621.jar:2.1.8.621]
> at 
> org.apache.cassandra.locator.TokenMetadata.getPrimaryRangesFor(TokenMetadata.java:627)
>  ~[cassandra-all-2.1.8.621.jar:2.1.8.621]
> at 
> org.apache.cassandra.db.SizeEstimatesRecorder.run(SizeEstimatesRecorder.java:68)
>  ~[cassandra-all-2.1.8.621.jar:2.1.8.621]
> at 
> org.apache.cassandra.concurrent.DebuggableScheduledThreadPoolExecutor$UncomplainingRunnable.run(DebuggableScheduledThreadPoolExecutor.java:118)
>  ~[cassandra-all-2.1.8.621.jar:2.1.8.621]
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [na:1.8.0_40]
> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) 
> [na:1.8.0_40]
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>  [na:1.8.0_40]
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>  [na:1.8.0_40]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  [na:1.8.0_40]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_40]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_40]
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10592) IllegalArgumentException in DataOutputBuffer.reallocate

2015-10-27 Thread Sebastian Estevez (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10592?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sebastian Estevez updated CASSANDRA-10592:
--
Description: 
The following exception appeared in my logs while running a cassandra-stress 
workload on master. 

{code}
WARN  [SharedPool-Worker-1] 2015-10-22 12:58:20,792 
AbstractTracingAwareExecutorService.java:169 - Uncaught exception on thread 
Thread[SharedPool-Worker-1,5,main]: {}
java.lang.RuntimeException: java.lang.IllegalArgumentException
at 
org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2366)
 ~[main/:na]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
~[na:1.8.0_60]
at 
org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164)
 ~[main/:na]
at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
[main/:na]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_60]
Caused by: java.lang.IllegalArgumentException: null
at java.nio.ByteBuffer.allocate(ByteBuffer.java:334) ~[na:1.8.0_60]
at 
org.apache.cassandra.io.util.DataOutputBuffer.reallocate(DataOutputBuffer.java:63)
 ~[main/:na]
at 
org.apache.cassandra.io.util.DataOutputBuffer.doFlush(DataOutputBuffer.java:57) 
~[main/:na]
at 
org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.write(BufferedDataOutputStreamPlus.java:132)
 ~[main/:na]
at 
org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.write(BufferedDataOutputStreamPlus.java:151)
 ~[main/:na]
at 
org.apache.cassandra.utils.ByteBufferUtil.writeWithVIntLength(ByteBufferUtil.java:296)
 ~[main/:na]
at 
org.apache.cassandra.db.marshal.AbstractType.writeValue(AbstractType.java:374) 
~[main/:na]
at 
org.apache.cassandra.db.rows.BufferCell$Serializer.serialize(BufferCell.java:263)
 ~[main/:na]
at 
org.apache.cassandra.db.rows.UnfilteredSerializer.serialize(UnfilteredSerializer.java:183)
 ~[main/:na]
at 
org.apache.cassandra.db.rows.UnfilteredSerializer.serialize(UnfilteredSerializer.java:108)
 ~[main/:na]
at 
org.apache.cassandra.db.rows.UnfilteredSerializer.serialize(UnfilteredSerializer.java:96)
 ~[main/:na]
at 
org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:132)
 ~[main/:na]
at 
org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:87)
 ~[main/:na]
at 
org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:77)
 ~[main/:na]
at 
org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$Serializer.serialize(UnfilteredPartitionIterators.java:381)
 ~[main/:na]
at 
org.apache.cassandra.db.ReadResponse$LocalDataResponse.build(ReadResponse.java:136)
 ~[main/:na]
at 
org.apache.cassandra.db.ReadResponse$LocalDataResponse.(ReadResponse.java:128)
 ~[main/:na]
at 
org.apache.cassandra.db.ReadResponse$LocalDataResponse.(ReadResponse.java:123)
 ~[main/:na]
at 
org.apache.cassandra.db.ReadResponse.createDataResponse(ReadResponse.java:65) 
~[main/:na]
at 
org.apache.cassandra.db.ReadCommand.createResponse(ReadCommand.java:289) 
~[main/:na]
at 
org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1697)
 ~[main/:na]
at 
org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2362)
 ~[main/:na]
... 4 common frames omitted
{code}

I was running this command:

{code}
tools/bin/cassandra-stress user 
profile=~/Desktop/startup/stress/stress.yaml n=10 ops\(insert=1\) -rate 
threads=30
{code}

Here's the stress.yaml 

{code}
### DML ### THIS IS UNDER CONSTRUCTION!!!

# Keyspace Name
keyspace: autogeneratedtest

# The CQL for creating a keyspace (optional if it already exists)
keyspace_definition: |
  CREATE KEYSPACE autogeneratedtest WITH replication = {'class': 
'SimpleStrategy', 'replication_factor': 1};
# Table name
table: test

# The CQL for creating a table you wish to stress (optional if it already 
exists)
table_definition:
  CREATE TABLE test (
  a int,
  b int,
  c int,
  d int,
  e int,
  f timestamp,
  g text,
  h bigint,
  i text,
  j text,
  k bigint,
  l text,
  m text,
  n float,
  o int,
  p float,
  q float,
  r text,
  s float,
  PRIMARY KEY ((a, c, d, b, e), m, f, g)
  );
### Column Distribution Specifications ###

columnspec:
  - name: a
size: uniform(4..4)
population: uniform(1..500)

  - name: b
size: uniform(4..4)
population: uniform(2..3000)

  - name: c
size: uniform(4..4)
population: uniform(1..100)

  - name: d
size: uniform(4..4)
population: uniform(1..120)

  - name: e
size: uniform(4..4)
population: uniform(1..100)

[jira] [Commented] (CASSANDRA-10592) IllegalArgumentException in DataOutputBuffer.reallocate

2015-10-27 Thread Sebastian Estevez (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14976569#comment-14976569
 ] 

Sebastian Estevez commented on CASSANDRA-10592:
---

Right, sorry about that; let me correct the file. We don't support maps in 
cassandra-stress, and I must have pasted an old version of the yaml.

> IllegalArgumentException in DataOutputBuffer.reallocate
> ---
>
> Key: CASSANDRA-10592
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10592
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sebastian Estevez
>Assignee: Ariel Weisberg
> Fix For: 2.2.4, 3.0.0
>
>
> The following exception appeared in my logs while running a cassandra-stress 
> workload on master. 
> {code}
> WARN  [SharedPool-Worker-1] 2015-10-22 12:58:20,792 
> AbstractTracingAwareExecutorService.java:169 - Uncaught exception on thread 
> Thread[SharedPool-Worker-1,5,main]: {}
> java.lang.RuntimeException: java.lang.IllegalArgumentException
>   at 
> org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2366)
>  ~[main/:na]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_60]
>   at 
> org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164)
>  ~[main/:na]
>   at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
> [main/:na]
>   at java.lang.Thread.run(Thread.java:745) [na:1.8.0_60]
> Caused by: java.lang.IllegalArgumentException: null
>   at java.nio.ByteBuffer.allocate(ByteBuffer.java:334) ~[na:1.8.0_60]
>   at 
> org.apache.cassandra.io.util.DataOutputBuffer.reallocate(DataOutputBuffer.java:63)
>  ~[main/:na]
>   at 
> org.apache.cassandra.io.util.DataOutputBuffer.doFlush(DataOutputBuffer.java:57)
>  ~[main/:na]
>   at 
> org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.write(BufferedDataOutputStreamPlus.java:132)
>  ~[main/:na]
>   at 
> org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.write(BufferedDataOutputStreamPlus.java:151)
>  ~[main/:na]
>   at 
> org.apache.cassandra.utils.ByteBufferUtil.writeWithVIntLength(ByteBufferUtil.java:296)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.marshal.AbstractType.writeValue(AbstractType.java:374)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.rows.BufferCell$Serializer.serialize(BufferCell.java:263)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.rows.UnfilteredSerializer.serialize(UnfilteredSerializer.java:183)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.rows.UnfilteredSerializer.serialize(UnfilteredSerializer.java:108)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.rows.UnfilteredSerializer.serialize(UnfilteredSerializer.java:96)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:132)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:87)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:77)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$Serializer.serialize(UnfilteredPartitionIterators.java:381)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.ReadResponse$LocalDataResponse.build(ReadResponse.java:136)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.ReadResponse$LocalDataResponse.(ReadResponse.java:128)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.ReadResponse$LocalDataResponse.(ReadResponse.java:123)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.ReadResponse.createDataResponse(ReadResponse.java:65) 
> ~[main/:na]
>   at 
> org.apache.cassandra.db.ReadCommand.createResponse(ReadCommand.java:289) 
> ~[main/:na]
>   at 
> org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1697)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2362)
>  ~[main/:na]
>   ... 4 common frames omitted
> {code}
> I was running this command:
> {code}
> tools/bin/cassandra-stress user 
> profile=~/Desktop/startup/stress/stress.yaml n=10 ops\(insert=1\) -rate 
> threads=30
> {code}
> Here's the stress.yaml 
> {code}
> ### DML ### THIS IS UNDER CONSTRUCTION!!!
> # Keyspace Name
> keyspace: autogeneratedtest
> # The CQL for creating a keyspace (optional if it already exists)
> keyspace_definition: |
>   CREATE KEYSPACE autogeneratedtest WITH replication = {'class': 
> 'SimpleStrategy', 'replication_factor': 1};
> # Table name
> table: test
> # The CQL for creating a 

[jira] [Comment Edited] (CASSANDRA-10592) IllegalArgumentException in DataOutputBuffer.reallocate

2015-10-27 Thread Sebastian Estevez (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14976569#comment-14976569
 ] 

Sebastian Estevez edited comment on CASSANDRA-10592 at 10/27/15 3:33 PM:
-

Right, sorry about that; I've corrected the file now. You'll have to drop the 
table and try again. We don't support maps in cassandra-stress, and I must have 
pasted an old version of the yaml.


was (Author: sebastian.este...@datastax.com):
Right, sorry about that; let me correct the file. We don't support maps in 
cassandra-stress, and I must have pasted an old version of the yaml.

> IllegalArgumentException in DataOutputBuffer.reallocate
> ---
>
> Key: CASSANDRA-10592
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10592
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sebastian Estevez
>Assignee: Ariel Weisberg
> Fix For: 2.2.4, 3.0.0
>
>
> The following exception appeared in my logs while running a cassandra-stress 
> workload on master. 
> {code}
> WARN  [SharedPool-Worker-1] 2015-10-22 12:58:20,792 
> AbstractTracingAwareExecutorService.java:169 - Uncaught exception on thread 
> Thread[SharedPool-Worker-1,5,main]: {}
> java.lang.RuntimeException: java.lang.IllegalArgumentException
>   at 
> org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2366)
>  ~[main/:na]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_60]
>   at 
> org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164)
>  ~[main/:na]
>   at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
> [main/:na]
>   at java.lang.Thread.run(Thread.java:745) [na:1.8.0_60]
> Caused by: java.lang.IllegalArgumentException: null
>   at java.nio.ByteBuffer.allocate(ByteBuffer.java:334) ~[na:1.8.0_60]
>   at 
> org.apache.cassandra.io.util.DataOutputBuffer.reallocate(DataOutputBuffer.java:63)
>  ~[main/:na]
>   at 
> org.apache.cassandra.io.util.DataOutputBuffer.doFlush(DataOutputBuffer.java:57)
>  ~[main/:na]
>   at 
> org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.write(BufferedDataOutputStreamPlus.java:132)
>  ~[main/:na]
>   at 
> org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.write(BufferedDataOutputStreamPlus.java:151)
>  ~[main/:na]
>   at 
> org.apache.cassandra.utils.ByteBufferUtil.writeWithVIntLength(ByteBufferUtil.java:296)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.marshal.AbstractType.writeValue(AbstractType.java:374)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.rows.BufferCell$Serializer.serialize(BufferCell.java:263)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.rows.UnfilteredSerializer.serialize(UnfilteredSerializer.java:183)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.rows.UnfilteredSerializer.serialize(UnfilteredSerializer.java:108)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.rows.UnfilteredSerializer.serialize(UnfilteredSerializer.java:96)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:132)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:87)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:77)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$Serializer.serialize(UnfilteredPartitionIterators.java:381)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.ReadResponse$LocalDataResponse.build(ReadResponse.java:136)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.ReadResponse$LocalDataResponse.<init>(ReadResponse.java:128)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.ReadResponse$LocalDataResponse.<init>(ReadResponse.java:123)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.ReadResponse.createDataResponse(ReadResponse.java:65) 
> ~[main/:na]
>   at 
> org.apache.cassandra.db.ReadCommand.createResponse(ReadCommand.java:289) 
> ~[main/:na]
>   at 
> org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1697)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2362)
>  ~[main/:na]
>   ... 4 common frames omitted
> {code}
> I was running this command:
> {code}
> tools/bin/cassandra-stress user 
> profile=~/Desktop/startup/stress/stress.yaml n=10 ops\(insert=1\) -rate 
> threads=30
> {code}
> Here's the stress.yaml 
> {code}
> ### DML ### THIS IS UNDER CONSTRUCTION!!!
> # Keyspace Name
> ke

[jira] [Comment Edited] (CASSANDRA-10579) IndexOutOfBoundsException during memtable flushing at startup (with offheap_objects)

2015-10-27 Thread Jeff Griffith (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14976555#comment-14976555
 ] 

Jeff Griffith edited comment on CASSANDRA-10579 at 10/27/15 3:34 PM:
-

So [~benedict] I rebuilt 2.1.10 with a merge of your diagnostic patch plus the 
changes you mention above for integer overflow. I tried this on a node where I 
had re-enabled assertions. I THINK, but am not certain, that the assertions 
suppress seeing the commit log IndexOutOfBounds exception; I will confirm this. 
But the GOOD news is that this version DOES seem to fix the startup problem! I 
will confirm this on the next node that fails where assertions are off. By the 
way, it seems like this may also be leading to sstable corruption (probably not 
surprising, since it's flushing sstables when the IOOB exception happens?).




was (Author: jeffery.griffith):
So [~benedict] I rebuilt 2.1.10 with a merge of your diagnostic patch plus the 
changes you mention above for integer overflow. I tried this on a node where i 
had re-enabled assertions. i THINK but i am not certain that the assertions 
suppress seeing the commit log IndexOutOfBounds exception, i will confirm this. 
but the GOOD news is that this version DOES seem to fix the startup problem! I 
will confirm this on the next node that fails where assertions are off.


> IndexOutOfBoundsException during memtable flushing at startup (with 
> offheap_objects)
> 
>
> Key: CASSANDRA-10579
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10579
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: 2.1.10 on linux
>Reporter: Jeff Griffith
>Assignee: Benedict
> Fix For: 2.1.x
>
>
> Sometimes we have problems at startup where memtable flushes with an index 
> out of bounds exception as seen below. Cassandra is then dead in the water 
> until we track down the corresponding commit log via the segment ID and 
> remove it:
> {code}
> INFO  [main] 2015-10-23 14:43:36,440 CommitLogReplayer.java:267 - Replaying 
> /home/y/var/cassandra/commitlog/CommitLog-4-1445474832692.log
> INFO  [main] 2015-10-23 14:43:36,440 CommitLogReplayer.java:270 - Replaying 
> /home/y/var/cassandra/commitlog/CommitLog-4-1445474832692.log (CL version 4, 
> messaging version 8)
> INFO  [main] 2015-10-23 14:43:36,594 CommitLogReplayer.java:478 - Finished 
> reading /home/y/var/cassandra/commitlog/CommitLog-4-1445474832692.log
> INFO  [main] 2015-10-23 14:43:36,594 CommitLogReplayer.java:267 - Replaying 
> /home/y/var/cassandra/commitlog/CommitLog-4-1445474832693.log
> INFO  [main] 2015-10-23 14:43:36,595 CommitLogReplayer.java:270 - Replaying 
> /home/y/var/cassandra/commitlog/CommitLog-4-1445474832693.log (CL version 4, 
> messaging version 8)
> INFO  [main] 2015-10-23 14:43:36,699 CommitLogReplayer.java:478 - Finished 
> reading /home/y/var/cassandra/commitlog/CommitLog-4-1445474832693.log
> INFO  [main] 2015-10-23 14:43:36,699 CommitLogReplayer.java:267 - Replaying 
> /home/y/var/cassandra/commitlog/CommitLog-4-1445474832694.log
> INFO  [main] 2015-10-23 14:43:36,699 CommitLogReplayer.java:270 - Replaying 
> /home/y/var/cassandra/commitlog/CommitLog-4-1445474832694.log (CL version 4, 
> messaging version 8)
> WARN  [SharedPool-Worker-5] 2015-10-23 14:43:36,747 
> AbstractTracingAwareExecutorService.java:169 - Uncaught exception on thread 
> Thread[SharedPool-Worker-5,5,main]: {}
> java.lang.ArrayIndexOutOfBoundsException: 6
> at 
> org.apache.cassandra.db.AbstractNativeCell.nametype(AbstractNativeCell.java:204)
>  ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT]
> at 
> org.apache.cassandra.db.AbstractNativeCell.isStatic(AbstractNativeCell.java:199)
>  ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT]
> at 
> org.apache.cassandra.db.composites.AbstractCType.compare(AbstractCType.java:166)
>  ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT]
> at 
> org.apache.cassandra.db.composites.AbstractCellNameType$1.compare(AbstractCellNameType.java:61)
>  ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT]
> at 
> org.apache.cassandra.db.composites.AbstractCellNameType$1.compare(AbstractCellNameType.java:58)
>  ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT]
> at org.apache.cassandra.utils.btree.BTree.find(BTree.java:277) 
> ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT]
> at 
> org.apache.cassandra.utils.btree.NodeBuilder.update(NodeBuilder.java:154) 
> ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT]
> at org.apache.cassandra.utils.btree.Builder.update(Builder.java:74) 
> ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT]
> at org.apache.cassandra.utils.btree.BTree.update(BTree.java:186) 
> ~[apache-cassandra-2.1.10.jar:

[jira] [Commented] (CASSANDRA-10476) Fix upgrade paging dtest failures on 2.2->3.0 path

2015-10-27 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14976575#comment-14976575
 ] 

Sylvain Lebresne commented on CASSANDRA-10476:
--

bq.  this one fails in at least 2 ways: sometimes with an NPE

For the NPE, that's almost surely CASSANDRA-10602. That leaves the 
data-validation error to fix here though.

> Fix upgrade paging dtest failures on 2.2->3.0 path
> --
>
> Key: CASSANDRA-10476
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10476
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Jim Witschey
>Assignee: Benjamin Lerer
> Fix For: 3.0.0
>
>
> EDIT: this list of failures is no longer current; see comments for current 
> failures.
> The following upgrade tests for paging features fail or flap on the upgrade 
> path from 2.2 to 3.0:
> - {{upgrade_tests/paging_test.py:TestPagingData.static_columns_paging_test}}
> - 
> {{upgrade_tests/paging_test.py:TestPagingSize.test_undefined_page_size_default}}
> - 
> {{upgrade_tests/paging_test.py:TestPagingSize.test_with_more_results_than_page_size}}
> - 
> {{upgrade_tests/paging_test.py:TestPagingWithDeletions.test_failure_threshold_deletions}}
> - 
> {{upgrade_tests/paging_test.py:TestPagingWithDeletions.test_multiple_cell_deletions}}
> - 
> {{upgrade_tests/paging_test.py:TestPagingWithDeletions.test_single_cell_deletions}}
> - 
> {{upgrade_tests/paging_test.py:TestPagingWithDeletions.test_single_row_deletions}}
> - 
> {{upgrade_tests/paging_test.py:TestPagingDatasetChanges.test_cell_TTL_expiry_during_paging/}}
> I've grouped them all together because I don't know how to tell if they're 
> related; once someone triages them, it may be appropriate to break this out 
> into multiple tickets.
> The failures can be found here:
> http://cassci.datastax.com/view/Upgrades/job/storage_engine_upgrade_dtest-22_tarball-30_HEAD/44/testReport/upgrade_tests.paging_test/TestPagingData/static_columns_paging_test/history/
> http://cassci.datastax.com/view/Upgrades/job/storage_engine_upgrade_dtest-22_tarball-30_HEAD/44/testReport/upgrade_tests.paging_test/TestPagingSize/test_undefined_page_size_default/history/
> http://cassci.datastax.com/view/Upgrades/job/storage_engine_upgrade_dtest-22_tarball-30_HEAD/42/testReport/upgrade_tests.paging_test/TestPagingSize/test_with_more_results_than_page_size/history/
> http://cassci.datastax.com/view/Upgrades/job/storage_engine_upgrade_dtest-22_tarball-30_HEAD/44/testReport/upgrade_tests.paging_test/TestPagingWithDeletions/test_failure_threshold_deletions/history/
> http://cassci.datastax.com/view/Upgrades/job/storage_engine_upgrade_dtest-22_tarball-30_HEAD/44/testReport/upgrade_tests.paging_test/TestPagingWithDeletions/test_multiple_cell_deletions/history/
> http://cassci.datastax.com/view/Upgrades/job/storage_engine_upgrade_dtest-22_tarball-30_HEAD/44/testReport/upgrade_tests.paging_test/TestPagingWithDeletions/test_single_cell_deletions/history/
> http://cassci.datastax.com/view/Upgrades/job/storage_engine_upgrade_dtest-22_tarball-30_HEAD/44/testReport/upgrade_tests.paging_test/TestPagingWithDeletions/test_single_row_deletions/history/
> http://cassci.datastax.com/view/Upgrades/job/storage_engine_upgrade_dtest-22_tarball-30_HEAD/44/testReport/upgrade_tests.paging_test/TestPagingDatasetChanges/test_cell_TTL_expiry_during_paging/
> Once [this dtest PR|https://github.com/riptano/cassandra-dtest/pull/586] is 
> merged, these tests should also run with this upgrade path on normal 3.0 
> jobs. Until then, you can run them with the following command:
> {code}
> SKIP=false CASSANDRA_VERSION=binary:2.2.0 UPGRADE_TO=git:cassandra-3.0 
> nosetests 
> upgrade_tests/paging_test.py:TestPagingData.static_columns_paging_test 
> upgrade_tests/paging_test.py:TestPagingSize.test_undefined_page_size_default 
> upgrade_tests/paging_test.py:TestPagingSize.test_with_more_results_than_page_size
>  
> upgrade_tests/paging_test.py:TestPagingWithDeletions.test_failure_threshold_deletions
>  
> upgrade_tests/paging_test.py:TestPagingWithDeletions.test_multiple_cell_deletions
>  
> upgrade_tests/paging_test.py:TestPagingWithDeletions.test_single_cell_deletions
>  
> upgrade_tests/paging_test.py:TestPagingWithDeletions.test_single_row_deletions
> upgrade_tests/paging_test.py:TestPagingDatasetChanges.test_cell_TTL_expiry_during_paging
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10592) IllegalArgumentException in DataOutputBuffer.reallocate

2015-10-27 Thread Sebastian Estevez (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10592?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sebastian Estevez updated CASSANDRA-10592:
--
Description: 
The following exception appeared in my logs while running a cassandra-stress 
workload on master. 

{code}
WARN  [SharedPool-Worker-1] 2015-10-22 12:58:20,792 
AbstractTracingAwareExecutorService.java:169 - Uncaught exception on thread 
Thread[SharedPool-Worker-1,5,main]: {}
java.lang.RuntimeException: java.lang.IllegalArgumentException
at 
org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2366)
 ~[main/:na]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
~[na:1.8.0_60]
at 
org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164)
 ~[main/:na]
at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
[main/:na]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_60]
Caused by: java.lang.IllegalArgumentException: null
at java.nio.ByteBuffer.allocate(ByteBuffer.java:334) ~[na:1.8.0_60]
at 
org.apache.cassandra.io.util.DataOutputBuffer.reallocate(DataOutputBuffer.java:63)
 ~[main/:na]
at 
org.apache.cassandra.io.util.DataOutputBuffer.doFlush(DataOutputBuffer.java:57) 
~[main/:na]
at 
org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.write(BufferedDataOutputStreamPlus.java:132)
 ~[main/:na]
at 
org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.write(BufferedDataOutputStreamPlus.java:151)
 ~[main/:na]
at 
org.apache.cassandra.utils.ByteBufferUtil.writeWithVIntLength(ByteBufferUtil.java:296)
 ~[main/:na]
at 
org.apache.cassandra.db.marshal.AbstractType.writeValue(AbstractType.java:374) 
~[main/:na]
at 
org.apache.cassandra.db.rows.BufferCell$Serializer.serialize(BufferCell.java:263)
 ~[main/:na]
at 
org.apache.cassandra.db.rows.UnfilteredSerializer.serialize(UnfilteredSerializer.java:183)
 ~[main/:na]
at 
org.apache.cassandra.db.rows.UnfilteredSerializer.serialize(UnfilteredSerializer.java:108)
 ~[main/:na]
at 
org.apache.cassandra.db.rows.UnfilteredSerializer.serialize(UnfilteredSerializer.java:96)
 ~[main/:na]
at 
org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:132)
 ~[main/:na]
at 
org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:87)
 ~[main/:na]
at 
org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:77)
 ~[main/:na]
at 
org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$Serializer.serialize(UnfilteredPartitionIterators.java:381)
 ~[main/:na]
at 
org.apache.cassandra.db.ReadResponse$LocalDataResponse.build(ReadResponse.java:136)
 ~[main/:na]
at 
org.apache.cassandra.db.ReadResponse$LocalDataResponse.<init>(ReadResponse.java:128)
 ~[main/:na]
at 
org.apache.cassandra.db.ReadResponse$LocalDataResponse.<init>(ReadResponse.java:123)
 ~[main/:na]
at 
org.apache.cassandra.db.ReadResponse.createDataResponse(ReadResponse.java:65) 
~[main/:na]
at 
org.apache.cassandra.db.ReadCommand.createResponse(ReadCommand.java:289) 
~[main/:na]
at 
org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1697)
 ~[main/:na]
at 
org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2362)
 ~[main/:na]
... 4 common frames omitted
{code}

I was running this command:

{code}
tools/bin/cassandra-stress user 
profile=~/Desktop/startup/stress/stress.yaml n=10 ops\(insert=1\) -rate 
threads=30
{code}

Here's the stress.yaml UPDATED!

{code}
### DML ### THIS IS UNDER CONSTRUCTION!!!

# Keyspace Name
keyspace: autogeneratedtest

# The CQL for creating a keyspace (optional if it already exists)
keyspace_definition: |
  CREATE KEYSPACE autogeneratedtest WITH replication = {'class': 
'SimpleStrategy', 'replication_factor': 1};
# Table name
table: test
# The CQL for creating a table you wish to stress (optional if it already 
exists)
table_definition:
  CREATE TABLE test (
  a int,
  b int,
  c int,
  d int,
  e int,
  f timestamp,
  g text,
  h bigint,
  i text,
  j text,
  k bigint,
  l text,
  m text,
  n float,
  o int,
  p float,
  q float,
  r text,
  s float,
  PRIMARY KEY ((a, c, d, b, e), m, f, g)
  );
### Column Distribution Specifications ###

columnspec:
  - name: a
size: uniform(4..4)
population: uniform(1..500)

  - name: b
size: uniform(4..4)
population: uniform(2..3000)

  - name: c
size: uniform(4..4)
population: uniform(1..100)

  - name: d
size: uniform(4..4)
population: uniform(1..120)

  - name: e
size: uniform(4..4)
population: uniform(

[jira] [Commented] (CASSANDRA-10592) IllegalArgumentException in DataOutputBuffer.reallocate

2015-10-27 Thread Sebastian Estevez (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14976570#comment-14976570
 ] 

Sebastian Estevez commented on CASSANDRA-10592:
---

By the way, I'm now seeing it on 3.0 RC2 as well.
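
For context, {{ByteBuffer.allocate}} throws {{IllegalArgumentException}} when it 
is handed a negative capacity, and one plausible way a reallocation path ends up 
with a negative capacity is an int size calculation that overflows when doubled. 
The sketch below only illustrates that failure mode under that assumption; it is 
not the actual {{DataOutputBuffer.reallocate}} code.

{code}
import java.nio.ByteBuffer;

// Illustration only: doubling an int capacity past Integer.MAX_VALUE wraps to a
// negative number, which ByteBuffer.allocate rejects with IllegalArgumentException.
public class ReallocateOverflowSketch
{
    public static void main(String[] args)
    {
        int capacity = 1 << 30;        // 1 GiB
        int doubled = capacity * 2;    // overflows to Integer.MIN_VALUE
        System.out.println("doubled capacity = " + doubled);

        try
        {
            ByteBuffer.allocate(doubled);
        }
        catch (IllegalArgumentException e)
        {
            System.out.println("allocate rejected the negative capacity: " + e);
        }
    }
}
{code}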

> IllegalArgumentException in DataOutputBuffer.reallocate
> ---
>
> Key: CASSANDRA-10592
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10592
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sebastian Estevez
>Assignee: Ariel Weisberg
> Fix For: 2.2.4, 3.0.0
>
>
> The following exception appeared in my logs while running a cassandra-stress 
> workload on master. 
> {code}
> WARN  [SharedPool-Worker-1] 2015-10-22 12:58:20,792 
> AbstractTracingAwareExecutorService.java:169 - Uncaught exception on thread 
> Thread[SharedPool-Worker-1,5,main]: {}
> java.lang.RuntimeException: java.lang.IllegalArgumentException
>   at 
> org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2366)
>  ~[main/:na]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_60]
>   at 
> org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164)
>  ~[main/:na]
>   at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
> [main/:na]
>   at java.lang.Thread.run(Thread.java:745) [na:1.8.0_60]
> Caused by: java.lang.IllegalArgumentException: null
>   at java.nio.ByteBuffer.allocate(ByteBuffer.java:334) ~[na:1.8.0_60]
>   at 
> org.apache.cassandra.io.util.DataOutputBuffer.reallocate(DataOutputBuffer.java:63)
>  ~[main/:na]
>   at 
> org.apache.cassandra.io.util.DataOutputBuffer.doFlush(DataOutputBuffer.java:57)
>  ~[main/:na]
>   at 
> org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.write(BufferedDataOutputStreamPlus.java:132)
>  ~[main/:na]
>   at 
> org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.write(BufferedDataOutputStreamPlus.java:151)
>  ~[main/:na]
>   at 
> org.apache.cassandra.utils.ByteBufferUtil.writeWithVIntLength(ByteBufferUtil.java:296)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.marshal.AbstractType.writeValue(AbstractType.java:374)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.rows.BufferCell$Serializer.serialize(BufferCell.java:263)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.rows.UnfilteredSerializer.serialize(UnfilteredSerializer.java:183)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.rows.UnfilteredSerializer.serialize(UnfilteredSerializer.java:108)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.rows.UnfilteredSerializer.serialize(UnfilteredSerializer.java:96)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:132)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:87)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:77)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$Serializer.serialize(UnfilteredPartitionIterators.java:381)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.ReadResponse$LocalDataResponse.build(ReadResponse.java:136)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.ReadResponse$LocalDataResponse.<init>(ReadResponse.java:128)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.ReadResponse$LocalDataResponse.<init>(ReadResponse.java:123)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.ReadResponse.createDataResponse(ReadResponse.java:65) 
> ~[main/:na]
>   at 
> org.apache.cassandra.db.ReadCommand.createResponse(ReadCommand.java:289) 
> ~[main/:na]
>   at 
> org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1697)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2362)
>  ~[main/:na]
>   ... 4 common frames omitted
> {code}
> I was running this command:
> {code}
> tools/bin/cassandra-stress user 
> profile=~/Desktop/startup/stress/stress.yaml n=10 ops\(insert=1\) -rate 
> threads=30
> {code}
> Here's the stress.yaml 
> {code}
> ### DML ### THIS IS UNDER CONSTRUCTION!!!
> # Keyspace Name
> keyspace: autogeneratedtest
> # The CQL for creating a keyspace (optional if it already exists)
> keyspace_definition: |
>   CREATE KEYSPACE autogeneratedtest WITH replication = {'class': 
> 'SimpleStrategy', 'replication_factor': 1};
> # Table name
> table: test
> # The CQL for creating a table you wish to stress (optional if it already 
> exists)
> table_definition:
>   CREATE TAB

[jira] [Commented] (CASSANDRA-10476) Fix upgrade paging dtest failures on 2.2->3.0 path

2015-10-27 Thread Jim Witschey (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14976556#comment-14976556
 ] 

Jim Witschey commented on CASSANDRA-10476:
--

Here are the current failures from recent runs:
* failing reliably
** 
[upgrade_tests/paging_test.py:TestPagingDataNodes2RF1.static_columns_paging_test|http://cassci.datastax.com/view/cassandra-3.0/job/cassandra-3.0_dtest/lastCompletedBuild/testReport/upgrade_tests.paging_test/TestPagingDataNodes2RF1/static_columns_paging_test/history/]
*** note that this one fails in at least 2 ways: sometimes with an NPE (see 
[here|http://cassci.datastax.com/view/cassandra-3.0/job/cassandra-3.0_dtest/293/testReport/junit/upgrade_tests.paging_test/TestPagingDataNodes2RF1/static_columns_paging_test/]
 and 
[here|http://cassci.datastax.com/view/cassandra-3.0/job/cassandra-3.0_dtest/293/testReport/junit/upgrade_tests.paging_test/TestPagingDataNodes2RF1/static_columns_paging_test/]
 for different reports on the same error), and sometimes with a normal 
data-validation error (see 
[here|http://cassci.datastax.com/view/cassandra-3.0/job/cassandra-3.0_dtest/lastCompletedBuild/testReport/junit/upgrade_tests.paging_test/TestPagingDataNodes2RF1/static_columns_paging_test/]).
* flapping
** 
[upgrade_tests/paging_test.py:TestPagingWithDeletionsNodes3RF3.test_multiple_partition_deletions|http://cassci.datastax.com/view/cassandra-3.0/job/cassandra-3.0_dtest/lastCompletedBuild/testReport/upgrade_tests.paging_test/TestPagingWithDeletionsNodes3RF3/test_multiple_partition_deletions/history/]
** 
[upgrade_tests/paging_test.py:TestPagingWithDeletionsNodes3RF3.test_single_partition_deletions|http://cassci.datastax.com/view/cassandra-3.0/job/cassandra-3.0_dtest/293/testReport/upgrade_tests.paging_test/TestPagingWithDeletionsNodes3RF3/test_single_partition_deletions/history/]
** 
[upgrade_tests/paging_test.py:TestPagingWithDeletionsNodes3RF3.test_query_isolation|http://cassci.datastax.com/view/cassandra-3.0/job/cassandra-3.0_dtest/293/testReport/upgrade_tests.paging_test/TestPagingQueryIsolationNodes3RF3/test_query_isolation/history/]

> Fix upgrade paging dtest failures on 2.2->3.0 path
> --
>
> Key: CASSANDRA-10476
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10476
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Jim Witschey
>Assignee: Benjamin Lerer
> Fix For: 3.0.0
>
>
> The following upgrade tests for paging features fail or flap on the upgrade 
> path from 2.2 to 3.0:
> - {{upgrade_tests/paging_test.py:TestPagingData.static_columns_paging_test}}
> - 
> {{upgrade_tests/paging_test.py:TestPagingSize.test_undefined_page_size_default}}
> - 
> {{upgrade_tests/paging_test.py:TestPagingSize.test_with_more_results_than_page_size}}
> - 
> {{upgrade_tests/paging_test.py:TestPagingWithDeletions.test_failure_threshold_deletions}}
> - 
> {{upgrade_tests/paging_test.py:TestPagingWithDeletions.test_multiple_cell_deletions}}
> - 
> {{upgrade_tests/paging_test.py:TestPagingWithDeletions.test_single_cell_deletions}}
> - 
> {{upgrade_tests/paging_test.py:TestPagingWithDeletions.test_single_row_deletions}}
> - 
> {{upgrade_tests/paging_test.py:TestPagingDatasetChanges.test_cell_TTL_expiry_during_paging/}}
> I've grouped them all together because I don't know how to tell if they're 
> related; once someone triages them, it may be appropriate to break this out 
> into multiple tickets.
> The failures can be found here:
> http://cassci.datastax.com/view/Upgrades/job/storage_engine_upgrade_dtest-22_tarball-30_HEAD/44/testReport/upgrade_tests.paging_test/TestPagingData/static_columns_paging_test/history/
> http://cassci.datastax.com/view/Upgrades/job/storage_engine_upgrade_dtest-22_tarball-30_HEAD/44/testReport/upgrade_tests.paging_test/TestPagingSize/test_undefined_page_size_default/history/
> http://cassci.datastax.com/view/Upgrades/job/storage_engine_upgrade_dtest-22_tarball-30_HEAD/42/testReport/upgrade_tests.paging_test/TestPagingSize/test_with_more_results_than_page_size/history/
> http://cassci.datastax.com/view/Upgrades/job/storage_engine_upgrade_dtest-22_tarball-30_HEAD/44/testReport/upgrade_tests.paging_test/TestPagingWithDeletions/test_failure_threshold_deletions/history/
> http://cassci.datastax.com/view/Upgrades/job/storage_engine_upgrade_dtest-22_tarball-30_HEAD/44/testReport/upgrade_tests.paging_test/TestPagingWithDeletions/test_multiple_cell_deletions/history/
> http://cassci.datastax.com/view/Upgrades/job/storage_engine_upgrade_dtest-22_tarball-30_HEAD/44/testReport/upgrade_tests.paging_test/TestPagingWithDeletions/test_single_cell_deletions/history/
> http://cassci.datastax.com/view/Upgrades/job/storage_engine_upgrade_dtest-22_tarball-30_HEAD/44/testReport/upgrade_tests.paging_test/TestPagingWithDeletions/test_single_row_deletions/

[jira] [Updated] (CASSANDRA-10602) 2 upgrade test failures: static_columns_paging_test and multi_list_set_test

2015-10-27 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-10602:
-
Reviewer: Aleksey Yeschenko

> 2 upgrade test failures: static_columns_paging_test and multi_list_set_test
> ---
>
> Key: CASSANDRA-10602
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10602
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Sylvain Lebresne
>Assignee: Sylvain Lebresne
> Fix For: 3.0.0
>
>
> The following two tests throw an NPE:
> * 
> http://cassci.datastax.com/job/cassandra-3.0_dtest/293/testReport/junit/upgrade_tests.paging_test/TestPagingDataNodes2RF1/static_columns_paging_test/
> * 
> http://cassci.datastax.com/job/cassandra-3.0_dtest/293/testReport/junit/upgrade_tests.cql_tests/TestCQLNodes3RF3/multi_list_set_test/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10341) Streaming does not guarantee cache invalidation

2015-10-27 Thread Yuki Morishita (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10341?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuki Morishita updated CASSANDRA-10341:
---
Reviewer: Yuki Morishita

> Streaming does not guarantee cache invalidation
> ---
>
> Key: CASSANDRA-10341
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10341
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Benedict
>Assignee: Paulo Motta
> Fix For: 2.1.x, 2.2.x, 3.1
>
>
> Looking at the code, we attempt to invalidate the row cache for any rows we 
> receive via streaming; however, we invalidate them immediately, before the new 
> data is available. So, if a row is requested in that interval (which is likely 
> if it is "hot"), it will be re-cached and not invalidated.
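
A minimal, self-contained sketch of the ordering problem described above (toy 
cache and storage, not Cassandra's real classes): if the cache entry is dropped 
before the streamed data becomes visible, a concurrent hot read can repopulate 
the cache with the stale value, so the invalidation is effectively lost. 
Invalidating after the new data is applied closes that window.

{code}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Toy model of the race: all names here are illustrative, not Cassandra's classes.
public class InvalidateOrderingSketch
{
    static final Map<String, String> rowCache = new ConcurrentHashMap<>();
    static volatile String storedValue = "old";

    // Buggy ordering: invalidate first, then apply the streamed data.
    static void applyStreamedRowBuggy(String key, String newValue)
    {
        rowCache.remove(key);     // cache is empty here...
        read(key);                // ...a hot read in this window re-caches "old"
        storedValue = newValue;   // new data becomes visible too late
    }

    // Safe ordering: make the new data visible first, then invalidate.
    static void applyStreamedRowSafe(String key, String newValue)
    {
        storedValue = newValue;
        rowCache.remove(key);     // any later read caches the new value
    }

    static String read(String key)
    {
        return rowCache.computeIfAbsent(key, k -> storedValue);
    }

    public static void main(String[] args)
    {
        rowCache.put("k", "old");
        applyStreamedRowBuggy("k", "new");
        System.out.println("after buggy apply, cache holds: " + rowCache.get("k")); // "old"

        rowCache.clear();
        storedValue = "old";
        rowCache.put("k", "old");
        applyStreamedRowSafe("k", "new");
        System.out.println("after safe apply, cache holds: " + rowCache.get("k"));  // null until the next read
    }
}
{code}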



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10476) Fix upgrade paging dtest failures on 2.2->3.0 path

2015-10-27 Thread Jim Witschey (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Witschey updated CASSANDRA-10476:
-
Description: 
EDIT: this list of failures is no longer current; see comments for current 
failures.

The following upgrade tests for paging features fail or flap on the upgrade 
path from 2.2 to 3.0:

- {{upgrade_tests/paging_test.py:TestPagingData.static_columns_paging_test}}
- 
{{upgrade_tests/paging_test.py:TestPagingSize.test_undefined_page_size_default}}
- 
{{upgrade_tests/paging_test.py:TestPagingSize.test_with_more_results_than_page_size}}
- 
{{upgrade_tests/paging_test.py:TestPagingWithDeletions.test_failure_threshold_deletions}}
- 
{{upgrade_tests/paging_test.py:TestPagingWithDeletions.test_multiple_cell_deletions}}
- 
{{upgrade_tests/paging_test.py:TestPagingWithDeletions.test_single_cell_deletions}}
- 
{{upgrade_tests/paging_test.py:TestPagingWithDeletions.test_single_row_deletions}}
- 
{{upgrade_tests/paging_test.py:TestPagingDatasetChanges.test_cell_TTL_expiry_during_paging/}}

I've grouped them all together because I don't know how to tell if they're 
related; once someone triages them, it may be appropriate to break this out 
into multiple tickets.

The failures can be found here:

http://cassci.datastax.com/view/Upgrades/job/storage_engine_upgrade_dtest-22_tarball-30_HEAD/44/testReport/upgrade_tests.paging_test/TestPagingData/static_columns_paging_test/history/
http://cassci.datastax.com/view/Upgrades/job/storage_engine_upgrade_dtest-22_tarball-30_HEAD/44/testReport/upgrade_tests.paging_test/TestPagingSize/test_undefined_page_size_default/history/
http://cassci.datastax.com/view/Upgrades/job/storage_engine_upgrade_dtest-22_tarball-30_HEAD/42/testReport/upgrade_tests.paging_test/TestPagingSize/test_with_more_results_than_page_size/history/
http://cassci.datastax.com/view/Upgrades/job/storage_engine_upgrade_dtest-22_tarball-30_HEAD/44/testReport/upgrade_tests.paging_test/TestPagingWithDeletions/test_failure_threshold_deletions/history/
http://cassci.datastax.com/view/Upgrades/job/storage_engine_upgrade_dtest-22_tarball-30_HEAD/44/testReport/upgrade_tests.paging_test/TestPagingWithDeletions/test_multiple_cell_deletions/history/
http://cassci.datastax.com/view/Upgrades/job/storage_engine_upgrade_dtest-22_tarball-30_HEAD/44/testReport/upgrade_tests.paging_test/TestPagingWithDeletions/test_single_cell_deletions/history/
http://cassci.datastax.com/view/Upgrades/job/storage_engine_upgrade_dtest-22_tarball-30_HEAD/44/testReport/upgrade_tests.paging_test/TestPagingWithDeletions/test_single_row_deletions/history/
http://cassci.datastax.com/view/Upgrades/job/storage_engine_upgrade_dtest-22_tarball-30_HEAD/44/testReport/upgrade_tests.paging_test/TestPagingDatasetChanges/test_cell_TTL_expiry_during_paging/

Once [this dtest PR|https://github.com/riptano/cassandra-dtest/pull/586] is 
merged, these tests should also run with this upgrade path on normal 3.0 jobs. 
Until then, you can run them with the following command:

{code}
SKIP=false CASSANDRA_VERSION=binary:2.2.0 UPGRADE_TO=git:cassandra-3.0 
nosetests 
upgrade_tests/paging_test.py:TestPagingData.static_columns_paging_test 
upgrade_tests/paging_test.py:TestPagingSize.test_undefined_page_size_default 
upgrade_tests/paging_test.py:TestPagingSize.test_with_more_results_than_page_size
 
upgrade_tests/paging_test.py:TestPagingWithDeletions.test_failure_threshold_deletions
 
upgrade_tests/paging_test.py:TestPagingWithDeletions.test_multiple_cell_deletions
 
upgrade_tests/paging_test.py:TestPagingWithDeletions.test_single_cell_deletions 
upgrade_tests/paging_test.py:TestPagingWithDeletions.test_single_row_deletions
upgrade_tests/paging_test.py:TestPagingDatasetChanges.test_cell_TTL_expiry_during_paging
{code}

  was:
The following upgrade tests for paging features fail or flap on the upgrade 
path from 2.2 to 3.0:

- {{upgrade_tests/paging_test.py:TestPagingData.static_columns_paging_test}}
- 
{{upgrade_tests/paging_test.py:TestPagingSize.test_undefined_page_size_default}}
- 
{{upgrade_tests/paging_test.py:TestPagingSize.test_with_more_results_than_page_size}}
- 
{{upgrade_tests/paging_test.py:TestPagingWithDeletions.test_failure_threshold_deletions}}
- 
{{upgrade_tests/paging_test.py:TestPagingWithDeletions.test_multiple_cell_deletions}}
- 
{{upgrade_tests/paging_test.py:TestPagingWithDeletions.test_single_cell_deletions}}
- 
{{upgrade_tests/paging_test.py:TestPagingWithDeletions.test_single_row_deletions}}
- 
{{upgrade_tests/paging_test.py:TestPagingDatasetChanges.test_cell_TTL_expiry_during_paging/}}

I've grouped them all together because I don't know how to tell if they're 
related; once someone triages them, it may be appropriate to break this out 
into multiple tickets.

The failures can be found here:

http://cassci.datastax.com/view/Upgrades/job/storage_engine_upgrade_dtest-22_tarball-30_HEAD/44/testReport/upgrade_tests.paging_test/TestPagi

[jira] [Commented] (CASSANDRA-10579) IndexOutOfBoundsException during memtable flushing at startup (with offheap_objects)

2015-10-27 Thread Jeff Griffith (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14976555#comment-14976555
 ] 

Jeff Griffith commented on CASSANDRA-10579:
---

So [~benedict] I rebuilt 2.1.10 with a merge of your diagnostic patch plus the 
changes you mention above for integer overflow. I tried this on a node where I 
had re-enabled assertions. I THINK, but am not certain, that the assertions 
suppress seeing the commit log IndexOutOfBounds exception; I will confirm this. 
But the GOOD news is that this version DOES seem to fix the startup problem! I 
will confirm this on the next node that fails where assertions are off.


> IndexOutOfBoundsException during memtable flushing at startup (with 
> offheap_objects)
> 
>
> Key: CASSANDRA-10579
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10579
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: 2.1.10 on linux
>Reporter: Jeff Griffith
>Assignee: Benedict
> Fix For: 2.1.x
>
>
> Sometimes we have problems at startup where memtable flushes with an index 
> out of bounds exception as seen below. Cassandra is then dead in the water 
> until we track down the corresponding commit log via the segment ID and 
> remove it:
> {code}
> INFO  [main] 2015-10-23 14:43:36,440 CommitLogReplayer.java:267 - Replaying 
> /home/y/var/cassandra/commitlog/CommitLog-4-1445474832692.log
> INFO  [main] 2015-10-23 14:43:36,440 CommitLogReplayer.java:270 - Replaying 
> /home/y/var/cassandra/commitlog/CommitLog-4-1445474832692.log (CL version 4, 
> messaging version 8)
> INFO  [main] 2015-10-23 14:43:36,594 CommitLogReplayer.java:478 - Finished 
> reading /home/y/var/cassandra/commitlog/CommitLog-4-1445474832692.log
> INFO  [main] 2015-10-23 14:43:36,594 CommitLogReplayer.java:267 - Replaying 
> /home/y/var/cassandra/commitlog/CommitLog-4-1445474832693.log
> INFO  [main] 2015-10-23 14:43:36,595 CommitLogReplayer.java:270 - Replaying 
> /home/y/var/cassandra/commitlog/CommitLog-4-1445474832693.log (CL version 4, 
> messaging version 8)
> INFO  [main] 2015-10-23 14:43:36,699 CommitLogReplayer.java:478 - Finished 
> reading /home/y/var/cassandra/commitlog/CommitLog-4-1445474832693.log
> INFO  [main] 2015-10-23 14:43:36,699 CommitLogReplayer.java:267 - Replaying 
> /home/y/var/cassandra/commitlog/CommitLog-4-1445474832694.log
> INFO  [main] 2015-10-23 14:43:36,699 CommitLogReplayer.java:270 - Replaying 
> /home/y/var/cassandra/commitlog/CommitLog-4-1445474832694.log (CL version 4, 
> messaging version 8)
> WARN  [SharedPool-Worker-5] 2015-10-23 14:43:36,747 
> AbstractTracingAwareExecutorService.java:169 - Uncaught exception on thread 
> Thread[SharedPool-Worker-5,5,main]: {}
> java.lang.ArrayIndexOutOfBoundsException: 6
> at 
> org.apache.cassandra.db.AbstractNativeCell.nametype(AbstractNativeCell.java:204)
>  ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT]
> at 
> org.apache.cassandra.db.AbstractNativeCell.isStatic(AbstractNativeCell.java:199)
>  ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT]
> at 
> org.apache.cassandra.db.composites.AbstractCType.compare(AbstractCType.java:166)
>  ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT]
> at 
> org.apache.cassandra.db.composites.AbstractCellNameType$1.compare(AbstractCellNameType.java:61)
>  ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT]
> at 
> org.apache.cassandra.db.composites.AbstractCellNameType$1.compare(AbstractCellNameType.java:58)
>  ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT]
> at org.apache.cassandra.utils.btree.BTree.find(BTree.java:277) 
> ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT]
> at 
> org.apache.cassandra.utils.btree.NodeBuilder.update(NodeBuilder.java:154) 
> ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT]
> at org.apache.cassandra.utils.btree.Builder.update(Builder.java:74) 
> ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT]
> at org.apache.cassandra.utils.btree.BTree.update(BTree.java:186) 
> ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT]
> at 
> org.apache.cassandra.db.AtomicBTreeColumns.addAllWithSizeDelta(AtomicBTreeColumns.java:225)
>  ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT]
> at org.apache.cassandra.db.Memtable.put(Memtable.java:210) 
> ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT]
> at 
> org.apache.cassandra.db.ColumnFamilyStore.apply(ColumnFamilyStore.java:1225) 
> ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT]
> at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:396) 
> ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT]
> at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:359) 
> ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT]
> at 
> org.apache.cassandra.db.co

[jira] [Commented] (CASSANDRA-10276) With DTCS, do STCS in windows if more than max_threshold sstables

2015-10-27 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14976550#comment-14976550
 ] 

Marcus Eriksson commented on CASSANDRA-10276:
-

[~jjirsa] could you have a look at the latest patch that always does STCS?

> With DTCS, do STCS in windows if more than max_threshold sstables
> -
>
> Key: CASSANDRA-10276
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10276
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Core
>Reporter: Marcus Eriksson
>Assignee: Marcus Eriksson
> Fix For: 3.x, 2.1.x, 2.2.x
>
>
> To avoid constant recompaction of files in big ( > max threshold) DTCS 
> windows, we should do STCS of those files.
> Patch here: https://github.com/krummas/cassandra/commits/marcuse/dtcs_stcs
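
For readers following the idea, here is a minimal sketch of size-tiered grouping 
restricted to the sstables of a single window. The 50%/150% bucketing rule is a 
simplified stand-in for illustration, not the actual STCS/DTCS code.

{code}
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

// Toy sketch: inside a single DTCS time window, group sstables of similar size
// (size-tiered) instead of repeatedly compacting the whole window.
public class StcsInWindowSketch
{
    // Bucket sorted sizes so every file stays within 50%-150% of its bucket's average.
    static List<List<Long>> sizeTieredBuckets(List<Long> sizesInWindow)
    {
        List<Long> sorted = new ArrayList<>(sizesInWindow);
        Collections.sort(sorted);

        List<List<Long>> buckets = new ArrayList<>();
        List<Long> current = new ArrayList<>();
        double avg = 0;
        for (long size : sorted)
        {
            if (!current.isEmpty() && (size < avg * 0.5 || size > avg * 1.5))
            {
                buckets.add(current);
                current = new ArrayList<>();
            }
            current.add(size);
            avg = current.stream().mapToLong(Long::longValue).average().orElse(size);
        }
        if (!current.isEmpty())
            buckets.add(current);
        return buckets;
    }

    public static void main(String[] args)
    {
        // Example sizes (in MB) for sstables that all fall in one window.
        List<Long> window = Arrays.asList(40L, 40L, 40L, 120L, 120L, 120L, 500L, 500L, 500L);
        // Prints three buckets: the 40s, the 120s and the 500s.
        System.out.println(sizeTieredBuckets(window));
    }
}
{code}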



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8068) Allow to create authenticator which is aware of the client connection

2015-10-27 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-8068:
-
Fix Version/s: 3.0.0

> Allow to create authenticator which is aware of the client connection
> -
>
> Key: CASSANDRA-8068
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8068
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Core
>Reporter: Jacek Lewandowski
>Assignee: Sam Tunnicliffe
>Priority: Minor
>  Labels: security
> Fix For: 3.0.0
>
>
> Currently, the authenticator interface doesn't allow making a decision based 
> on the client connection properties (especially the client host name or 
> address). 
> The idea is to add an interface which extends the current SASL-aware 
> authenticator interface with an additional method to set the client connection. 
> ServerConnection could then supply the connection to the authenticator if the 
> authenticator implements that interface. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-10601) SecondaryIndexManager incorrectly updates index build status system table

2015-10-27 Thread Sam Tunnicliffe (JIRA)
Sam Tunnicliffe created CASSANDRA-10601:
---

 Summary: SecondaryIndexManager incorrectly updates index build 
status system table 
 Key: CASSANDRA-10601
 URL: https://issues.apache.org/jira/browse/CASSANDRA-10601
 Project: Cassandra
  Issue Type: Bug
Reporter: Sam Tunnicliffe
Assignee: Sam Tunnicliffe
 Fix For: 3.0.0


The {{markIndexBuilt}} and {{markIndexRemoved}} methods on 
{{SecondaryIndexManager}} incorrectly supply the table name to the 
{{SystemKeyspace}} methods that update the underlying system table. They should 
pass the keyspace name instead; as it stands, incorrect rows are written 
to/removed from the system table when a rebuild of the index is performed. 
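
A minimal sketch of the bug pattern described above, with illustrative names 
rather than the real {{SecondaryIndexManager}}/{{SystemKeyspace}} methods: the 
build-status entry is keyed by keyspace name, so passing the table name files it 
under the wrong key.

{code}
import java.util.HashMap;
import java.util.Map;

// Illustration only; not the actual Cassandra code.
public class IndexBuildStatusSketch
{
    // Pretend "system table": build status keyed by (keyspace name, index name).
    static final Map<String, Boolean> buildStatus = new HashMap<>();

    static void markIndexBuilt(String keyspaceName, String indexName)
    {
        buildStatus.put(keyspaceName + "." + indexName, true);
    }

    public static void main(String[] args)
    {
        String keyspace = "ks";    // hypothetical names for illustration
        String table = "tbl";
        String index = "tbl_idx";

        // Buggy call: the table name is passed where the keyspace name is expected,
        // so the status row lands under the wrong key.
        markIndexBuilt(table, index);
        System.out.println(buildStatus.containsKey(keyspace + "." + index)); // false

        // Intended call: key the row by keyspace name.
        markIndexBuilt(keyspace, index);
        System.out.println(buildStatus.containsKey(keyspace + "." + index)); // true
    }
}
{code}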



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (CASSANDRA-8068) Allow to create authenticator which is aware of the client connection

2015-10-27 Thread Jeremiah Jordan (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremiah Jordan reopened CASSANDRA-8068:

  Assignee: Sam Tunnicliffe  (was: Jacek Lewandowski)

[~beobal] there are multiple use cases where this would be beneficial, and I 
think it would be good to get a change like this in before we ship 3.0. Being 
able to restrict login by IP is a very common thing to do in authentication. 
Besides that, being able to track where login attempts are coming from is 
essential for many types of users.

It probably makes the most sense to update the IAuthenticator::newSaslNegotiator 
call to take a QueryState/ClientState parameter, so an authenticator can track 
the client connection that way.
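
To make the shape of that suggestion concrete, here is a rough sketch of a 
connection-aware negotiator factory. The interface and parameter names are 
assumptions for illustration, not the actual {{IAuthenticator}} definition.

{code}
import java.net.InetAddress;

// Illustration of the proposed shape only; not Cassandra's real interfaces.
public final class ConnectionAwareAuthSketch
{
    interface SaslNegotiator
    {
        byte[] evaluateResponse(byte[] clientResponse);
        boolean isComplete();
    }

    interface Authenticator
    {
        // Today's shape: the negotiator knows nothing about the client.
        SaslNegotiator newSaslNegotiator();

        // Proposed shape: pass the client address (or a ClientState-like object)
        // so implementations can restrict or audit logins by origin.
        default SaslNegotiator newSaslNegotiator(InetAddress clientAddress)
        {
            System.out.println("negotiating for client " + clientAddress);
            return newSaslNegotiator();
        }
    }

    public static void main(String[] args)
    {
        Authenticator auth = () -> new SaslNegotiator()
        {
            public byte[] evaluateResponse(byte[] clientResponse) { return new byte[0]; }
            public boolean isComplete() { return true; }
        };
        auth.newSaslNegotiator(InetAddress.getLoopbackAddress());
    }
}
{code}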

> Allow to create authenticator which is aware of the client connection
> -
>
> Key: CASSANDRA-8068
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8068
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Core
>Reporter: Jacek Lewandowski
>Assignee: Sam Tunnicliffe
>Priority: Minor
>  Labels: security
>
> Currently, the authenticator interface doesn't allow making a decision based 
> on the client connection properties (especially the client host name or 
> address). 
> The idea is to add an interface which extends the current SASL-aware 
> authenticator interface with an additional method to set the client connection. 
> ServerConnection could then supply the connection to the authenticator if the 
> authenticator implements that interface. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10476) Fix upgrade paging dtest failures on 2.2->3.0 path

2015-10-27 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14976502#comment-14976502
 ] 

Sylvain Lebresne commented on CASSANDRA-10476:
--

Looking quickly, it seems some of those tests have been passing reliably for a 
while. [~mambocab], could you look at it and update this with a list of the ones 
that still need consideration?

> Fix upgrade paging dtest failures on 2.2->3.0 path
> --
>
> Key: CASSANDRA-10476
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10476
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Jim Witschey
>Assignee: Benjamin Lerer
> Fix For: 3.0.0
>
>
> The following upgrade tests for paging features fail or flap on the upgrade 
> path from 2.2 to 3.0:
> - {{upgrade_tests/paging_test.py:TestPagingData.static_columns_paging_test}}
> - 
> {{upgrade_tests/paging_test.py:TestPagingSize.test_undefined_page_size_default}}
> - 
> {{upgrade_tests/paging_test.py:TestPagingSize.test_with_more_results_than_page_size}}
> - 
> {{upgrade_tests/paging_test.py:TestPagingWithDeletions.test_failure_threshold_deletions}}
> - 
> {{upgrade_tests/paging_test.py:TestPagingWithDeletions.test_multiple_cell_deletions}}
> - 
> {{upgrade_tests/paging_test.py:TestPagingWithDeletions.test_single_cell_deletions}}
> - 
> {{upgrade_tests/paging_test.py:TestPagingWithDeletions.test_single_row_deletions}}
> - 
> {{upgrade_tests/paging_test.py:TestPagingDatasetChanges.test_cell_TTL_expiry_during_paging/}}
> I've grouped them all together because I don't know how to tell if they're 
> related; once someone triages them, it may be appropriate to break this out 
> into multiple tickets.
> The failures can be found here:
> http://cassci.datastax.com/view/Upgrades/job/storage_engine_upgrade_dtest-22_tarball-30_HEAD/44/testReport/upgrade_tests.paging_test/TestPagingData/static_columns_paging_test/history/
> http://cassci.datastax.com/view/Upgrades/job/storage_engine_upgrade_dtest-22_tarball-30_HEAD/44/testReport/upgrade_tests.paging_test/TestPagingSize/test_undefined_page_size_default/history/
> http://cassci.datastax.com/view/Upgrades/job/storage_engine_upgrade_dtest-22_tarball-30_HEAD/42/testReport/upgrade_tests.paging_test/TestPagingSize/test_with_more_results_than_page_size/history/
> http://cassci.datastax.com/view/Upgrades/job/storage_engine_upgrade_dtest-22_tarball-30_HEAD/44/testReport/upgrade_tests.paging_test/TestPagingWithDeletions/test_failure_threshold_deletions/history/
> http://cassci.datastax.com/view/Upgrades/job/storage_engine_upgrade_dtest-22_tarball-30_HEAD/44/testReport/upgrade_tests.paging_test/TestPagingWithDeletions/test_multiple_cell_deletions/history/
> http://cassci.datastax.com/view/Upgrades/job/storage_engine_upgrade_dtest-22_tarball-30_HEAD/44/testReport/upgrade_tests.paging_test/TestPagingWithDeletions/test_single_cell_deletions/history/
> http://cassci.datastax.com/view/Upgrades/job/storage_engine_upgrade_dtest-22_tarball-30_HEAD/44/testReport/upgrade_tests.paging_test/TestPagingWithDeletions/test_single_row_deletions/history/
> http://cassci.datastax.com/view/Upgrades/job/storage_engine_upgrade_dtest-22_tarball-30_HEAD/44/testReport/upgrade_tests.paging_test/TestPagingDatasetChanges/test_cell_TTL_expiry_during_paging/
> Once [this dtest PR|https://github.com/riptano/cassandra-dtest/pull/586] is 
> merged, these tests should also run with this upgrade path on normal 3.0 
> jobs. Until then, you can run them with the following command:
> {code}
> SKIP=false CASSANDRA_VERSION=binary:2.2.0 UPGRADE_TO=git:cassandra-3.0 
> nosetests 
> upgrade_tests/paging_test.py:TestPagingData.static_columns_paging_test 
> upgrade_tests/paging_test.py:TestPagingSize.test_undefined_page_size_default 
> upgrade_tests/paging_test.py:TestPagingSize.test_with_more_results_than_page_size
>  
> upgrade_tests/paging_test.py:TestPagingWithDeletions.test_failure_threshold_deletions
>  
> upgrade_tests/paging_test.py:TestPagingWithDeletions.test_multiple_cell_deletions
>  
> upgrade_tests/paging_test.py:TestPagingWithDeletions.test_single_cell_deletions
>  
> upgrade_tests/paging_test.py:TestPagingWithDeletions.test_single_row_deletions
> upgrade_tests/paging_test.py:TestPagingDatasetChanges.test_cell_TTL_expiry_during_paging
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10452) Fix resummable_bootstrap_test and bootstrap_with_reset_bootstrap_state_test

2015-10-27 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14976490#comment-14976490
 ] 

Sylvain Lebresne commented on CASSANDRA-10452:
--

The CCM PR was committed some time ago. [~nutbunnies], can you validate that 
this does fix the tests, and close this ticket if it does?

> Fix resummable_bootstrap_test and bootstrap_with_reset_bootstrap_state_test
> ---
>
> Key: CASSANDRA-10452
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10452
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Jim Witschey
>Assignee: Yuki Morishita
> Fix For: 3.0.0
>
>
> {{bootstrap_test.py:TestBootstrap.resumable_bootstrap_test}} has been 
> flapping on CassCI lately:
> http://cassci.datastax.com/view/trunk/job/cassandra-3.0_dtest/lastCompletedBuild/testReport/bootstrap_test/TestBootstrap/resumable_bootstrap_test/history/
> I have not been able to reproduce on OpenStack. I'm assigning [~yukim] for 
> now, but feel free to reassign.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (CASSANDRA-10590) LegacySSTableTest fails after CASSANDRA-10360

2015-10-27 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne reassigned CASSANDRA-10590:


Assignee: Sylvain Lebresne

> LegacySSTableTest fails after CASSANDRA-10360
> -
>
> Key: CASSANDRA-10590
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10590
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Robert Stupp
>Assignee: Sylvain Lebresne
> Fix For: 3.0.0
>
>
> LegacySSTableTest fails reading pre-3.0 sstables (versions {{jb}}, {{ka}}, 
> {{la}}) with clustering keys and counters.
> [First failing 3.0 testall 
> build|http://cassci.datastax.com/job/cassandra-3.0_testall/205/]
> /cc [~slebresne]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9169) Cover all keyspace, table, and index options in DESCRIBE tests

2015-10-27 Thread Carl Yeksigian (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carl Yeksigian updated CASSANDRA-9169:
--
Assignee: (was: Carl Yeksigian)

> Cover all keyspace, table, and index options in DESCRIBE tests
> --
>
> Key: CASSANDRA-9169
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9169
> Project: Cassandra
>  Issue Type: Test
>  Components: Drivers (now out of tree), Tests
>Reporter: Tyler Hobbs
>  Labels: retrospective_generated
> Fix For: 2.1.x
>
>
> To prevent bugs like CASSANDRA-8288, we should improve the tests for the code 
> used to generate DESCRIBE output.  These tests exist in the python driver at 
> https://github.com/datastax/python-driver/blob/master/tests/integration/standard/test_metadata.py.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9188) cqlsh does not display properly the modified UDTs

2015-10-27 Thread Carl Yeksigian (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carl Yeksigian updated CASSANDRA-9188:
--
Assignee: (was: Carl Yeksigian)

> cqlsh does not display properly the modified UDTs
> -
>
> Key: CASSANDRA-9188
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9188
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Benjamin Lerer
>Priority: Minor
>  Labels: cqlsh
> Fix For: 2.1.x
>
>
> The problem can be reproduced as follow:
> {code}
> cqlsh:test2> create type myType (a int);
> cqlsh:test2> create table myTable (a int primary key, b frozen<myType>);
> cqlsh:test2> insert into myTable (a, b) values (1, {a: 1});
> cqlsh:test2> select * from myTable;
>  a | b
> ---+
>  1 | {a: 1}
> (1 rows)
> cqlsh:test2> alter type myType add b int;
> cqlsh:test2> insert into myTable (a, b) values (2, {a: 2, b :2});
> cqlsh:test2> select * from myTable;
>  a | b
> ---+
>  1 | {a: 1}
>  2 | {a: 2}
> (2 rows)
> {code}
> If {{cqlsh}} is then restarted it will display the data properly.
> {code}
> cqlsh:test2> select * from mytable;
>  a | b
> ---+-
>  1 | {a: 1, b: null}
>  2 |{a: 2, b: 2}
> (2 rows)
> {code} 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9285) LEAK DETECTED in sstwriter

2015-10-27 Thread Carl Yeksigian (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carl Yeksigian updated CASSANDRA-9285:
--
Assignee: (was: Carl Yeksigian)

> LEAK DETECTED in sstwriter
> --
>
> Key: CASSANDRA-9285
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9285
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Pierre N.
> Fix For: 2.1.x
>
>
> reproduce bug : 
> {code}
> public static void main(String[] args) throws Exception {
> System.setProperty("cassandra.debugrefcount","true");
> 
> String ks = "ks1";
> String table = "t1";
> 
> String schema = "CREATE TABLE " + ks + "." + table + "(a1 INT, 
> PRIMARY KEY (a1));";
> String insert = "INSERT INTO "+ ks + "." + table + "(a1) VALUES(?);";
> 
> File dir = new File("/var/tmp/" + ks + "/" + table);
> dir.mkdirs();
> 
> CQLSSTableWriter writer = 
> CQLSSTableWriter.builder().forTable(schema).using(insert).inDirectory(dir).build();
> 
> writer.addRow(1);
> writer.close();
> writer = null;
> 
> Thread.sleep(1000);System.gc();
> Thread.sleep(1000);System.gc();
> }
> {code}
> {quote}
> [2015-05-01 16:09:59,139] [Reference-Reaper:1] ERROR 
> org.apache.cassandra.utils.concurrent.Ref - LEAK DETECTED: a reference 
> (org.apache.cassandra.utils.concurrent.Ref$State@79fa9da9) to class 
> org.apache.cassandra.io.util.SafeMemory$MemoryTidy@2053866990:Memory@[7f87f8043b20..7f87f8043b48)
>  was not released before the reference was garbage collected
> [2015-05-01 16:09:59,143] [Reference-Reaper:1] ERROR 
> org.apache.cassandra.utils.concurrent.Ref - Allocate trace 
> org.apache.cassandra.utils.concurrent.Ref$State@79fa9da9:
> Thread[Thread-2,5,main]
>   at java.lang.Thread.getStackTrace(Thread.java:1552)
>   at java.lang.Thread.getStackTrace is omitted here; frames below show the allocation:
>   at org.apache.cassandra.utils.concurrent.Ref$Debug.<init>(Ref.java:200)
>   at org.apache.cassandra.utils.concurrent.Ref$State.<init>(Ref.java:133)
>   at org.apache.cassandra.utils.concurrent.Ref.<init>(Ref.java:60)
>   at org.apache.cassandra.io.util.SafeMemory.<init>(SafeMemory.java:32)
>   at 
> org.apache.cassandra.io.util.SafeMemoryWriter.<init>(SafeMemoryWriter.java:33)
>   at 
> org.apache.cassandra.io.sstable.IndexSummaryBuilder.<init>(IndexSummaryBuilder.java:111)
>   at 
> org.apache.cassandra.io.sstable.SSTableWriter$IndexWriter.<init>(SSTableWriter.java:576)
>   at 
> org.apache.cassandra.io.sstable.SSTableWriter.<init>(SSTableWriter.java:140)
>   at 
> org.apache.cassandra.io.sstable.AbstractSSTableSimpleWriter.getWriter(AbstractSSTableSimpleWriter.java:58)
>   at 
> org.apache.cassandra.io.sstable.SSTableSimpleUnsortedWriter$DiskWriter.run(SSTableSimpleUnsortedWriter.java:227)
> [2015-05-01 16:09:59,144] [Reference-Reaper:1] ERROR 
> org.apache.cassandra.utils.concurrent.Ref - LEAK DETECTED: a reference 
> (org.apache.cassandra.utils.concurrent.Ref$State@664382e3) to class 
> org.apache.cassandra.io.util.SafeMemory$MemoryTidy@899100784:Memory@[7f87f8043990..7f87f8043994)
>  was not released before the reference was garbage collected
> [2015-05-01 16:09:59,144] [Reference-Reaper:1] ERROR 
> org.apache.cassandra.utils.concurrent.Ref - Allocate trace 
> org.apache.cassandra.utils.concurrent.Ref$State@664382e3:
> Thread[Thread-2,5,main]
>   at java.lang.Thread.getStackTrace(Thread.java:1552)
>   at org.apache.cassandra.utils.concurrent.Ref$Debug.<init>(Ref.java:200)
>   at org.apache.cassandra.utils.concurrent.Ref$State.<init>(Ref.java:133)
>   at org.apache.cassandra.utils.concurrent.Ref.<init>(Ref.java:60)
>   at org.apache.cassandra.io.util.SafeMemory.<init>(SafeMemory.java:32)
>   at 
> org.apache.cassandra.io.util.SafeMemoryWriter.<init>(SafeMemoryWriter.java:33)
>   at 
> org.apache.cassandra.io.sstable.IndexSummaryBuilder.<init>(IndexSummaryBuilder.java:110)
>   at 
> org.apache.cassandra.io.sstable.SSTableWriter$IndexWriter.<init>(SSTableWriter.java:576)
>   at 
> org.apache.cassandra.io.sstable.SSTableWriter.<init>(SSTableWriter.java:140)
>   at 
> org.apache.cassandra.io.sstable.AbstractSSTableSimpleWriter.getWriter(AbstractSSTableSimpleWriter.java:58)
>   at 
> org.apache.cassandra.io.sstable.SSTableSimpleUnsortedWriter$DiskWriter.run(SSTableSimpleUnsortedWriter.java:227)
> [2015-05-01 16:09:59,144] [Reference-Reaper:1] ERROR 
> org.apache.cassandra.utils.concurrent.Ref - LEAK DETECTED: a reference 
> (org.apache.cassandra.utils.concurrent.Ref$State@3cca0ac2) to class 
> org.apache.cassandra.io.util.SafeMemory$MemoryTidy@499043670:Memory@[7f87f8039940..7f87f8039c60)
>  was not released before the reference was garbage collected
> [2015-05-01 16:09:59,144] [Reference-Reaper:1] ERROR 
> org.apache.cassandra.utils.concurrent.Ref - Allocate trace 
> org.apache.cassandra.utils.concurrent.Ref$State@3
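For reference, a minimal try-with-resources variant of the snippet quoted above 
(a sketch only: the keyspace, table and schema strings here are illustrative 
placeholders, not the reporter's exact values; {{CQLSSTableWriter}} implements 
{{Closeable}}, so {{close()}} runs even if {{addRow}} throws):

{code}
import java.io.File;
import org.apache.cassandra.io.sstable.CQLSSTableWriter;

public class WriterSketch
{
    public static void main(String[] args) throws Exception
    {
        // Placeholder names; the reporter's actual keyspace/table differ
        String ks = "ks";
        String table = "t";
        String schema = "CREATE TABLE " + ks + "." + table + " (a1 int PRIMARY KEY)";
        String insert = "INSERT INTO " + ks + "." + table + " (a1) VALUES (?);";

        File dir = new File("/var/tmp/" + ks + "/" + table);
        dir.mkdirs();

        // try-with-resources guarantees close(), flushing and releasing the writer's resources
        try (CQLSSTableWriter writer = CQLSSTableWriter.builder()
                                                       .forTable(schema)
                                                       .using(insert)
                                                       .inDirectory(dir)
                                                       .build())
        {
            writer.addRow(1);
        }
    }
}
{code}

This does not by itself explain the LEAK DETECTED messages above, which were 
reported even though the original snippet closes the writer explicitly.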

[jira] [Commented] (CASSANDRA-10571) ClusteringIndexNamesFilter::shouldInclude is not implemented, SinglePartitionNamesCommand not discarding the sstables it could

2015-10-27 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10571?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14976473#comment-14976473
 ] 

Sylvain Lebresne commented on CASSANDRA-10571:
--

I'll note that the patch from CASSANDRA-10572 moves counter reads to the 
"collect all data" path, which does use {{shouldInclude}}, so this is no longer 
a regression for counters. For that reason, I'd personally have a slight 
preference for pushing this to 3.2 (there are enough scary last-minute changes 
going on before 3.0 GA as it is). In any case, a fix for this is 
[here|https://github.com/pcmanus/cassandra/commits/10571] (the patch is rebased 
on top of the one from CASSANDRA-10572).
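
To make the pruning idea concrete, here is a rough sketch of the kind of bounds 
check involved (generic types, simplified to a single min/max comparison; this is 
not the patch linked above, which works on Cassandra's actual clustering and 
sstable-metadata classes):

{code}
import java.util.Comparator;
import java.util.List;

final class ClusteringPruneSketch
{
    /**
     * Sketch of the idea behind ClusteringIndexNamesFilter::shouldInclude: an sstable
     * can be skipped when none of the requested clustering values can fall inside the
     * [minClustering, maxClustering] range recorded in the sstable's metadata.
     */
    static <T> boolean shouldInclude(List<T> requestedClusterings,
                                     Comparator<T> comparator,
                                     T sstableMinClustering,
                                     T sstableMaxClustering)
    {
        for (T clustering : requestedClusterings)
        {
            boolean aboveMin = comparator.compare(clustering, sstableMinClustering) >= 0;
            boolean belowMax = comparator.compare(clustering, sstableMaxClustering) <= 0;
            if (aboveMin && belowMax)
                return true;  // at least one requested row may live in this sstable
        }
        return false;         // no requested clustering overlaps this sstable, so skip it
    }
}
{code}

An sstable whose min/max clustering range contains none of the requested 
clustering values cannot contribute rows to a names query, so it can be dropped 
from the read path.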

> ClusteringIndexNamesFilter::shouldInclude is not implemented, 
> SinglePartitionNamesCommand not discarding the sstables it could
> --
>
> Key: CASSANDRA-10571
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10571
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Aleksey Yeschenko
>Assignee: Sylvain Lebresne
> Fix For: 3.0.0
>
>
> Now that we use {{SinglePartitionNamesCommand}} in more places - where we 
> previously used what is now {{SinglePartitionSliceCommand}} - not being able 
> to skip sstables with non-overlapping clusterings is actually a performance 
> regression.
> {{SinglePartitionNamesCommand::queryMemtableAndDiskInternal}} should prune 
> sstables based on {{ClusteringIndexNamesFilter::shouldInclude}} output, and 
> the latter must be replaced with an actual implementation instead of a 
> {{TODO}}.
> This is also potentially a big regression in performance for counter writes 
> (say, with DTCS), since before 3.0, the read-before-write code would use 
> {{collectAllData}}, and that *was* pruning sstables with non-overlapping 
> clusterings.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[1/5] cassandra git commit: cqlsh: Fix NULL option in COPY cmds after CASS-10415

2015-10-27 Thread tylerhobbs
Repository: cassandra
Updated Branches:
  refs/heads/trunk d3617a347 -> 7cedeca9a


cqlsh: Fix NULL option in COPY cmds after CASS-10415

Patch by Stefania Alborghetti; reviewed by Tyler Hobbs for
CASSANDRA-10577


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/78810f25
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/78810f25
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/78810f25

Branch: refs/heads/trunk
Commit: 78810f25506b3b612f22fb20ddc7500e45cb1eec
Parents: 87f43ac
Author: Stefania Alborghetti 
Authored: Tue Oct 27 09:10:10 2015 -0500
Committer: Tyler Hobbs 
Committed: Tue Oct 27 09:10:10 2015 -0500

--
 bin/cqlsh | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/78810f25/bin/cqlsh
--
diff --git a/bin/cqlsh b/bin/cqlsh
index 21f8ffe..ca45be3 100755
--- a/bin/cqlsh
+++ b/bin/cqlsh
@@ -327,10 +327,11 @@ cqlsh_extra_syntax_rules = r'''
  ( "WITH"  ( "AND"  )* )?
 ;
 
- ::= [optnames]= "=" [optvals]=
+ ::= [optnames]=(|) "=" 
[optvals]=
;
 
  ::= 
+  | 
   | 
   ;
 



[3/5] cassandra git commit: Merge branch 'cassandra-2.1' into cassandra-2.2

2015-10-27 Thread tylerhobbs
Merge branch 'cassandra-2.1' into cassandra-2.2


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/40cef770
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/40cef770
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/40cef770

Branch: refs/heads/trunk
Commit: 40cef770d08a899860358f31571cba26f5d8d3ee
Parents: b06c637 78810f2
Author: Tyler Hobbs 
Authored: Tue Oct 27 09:11:57 2015 -0500
Committer: Tyler Hobbs 
Committed: Tue Oct 27 09:11:57 2015 -0500

--
 bin/cqlsh.py | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
--




[4/5] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2015-10-27 Thread tylerhobbs
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/d856d3d7
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/d856d3d7
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/d856d3d7

Branch: refs/heads/trunk
Commit: d856d3d7bb4ab2a7534158a23e6a2132ef9bb246
Parents: f901a74 40cef77
Author: Tyler Hobbs 
Authored: Tue Oct 27 09:12:22 2015 -0500
Committer: Tyler Hobbs 
Committed: Tue Oct 27 09:12:22 2015 -0500

--
 bin/cqlsh.py | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/d856d3d7/bin/cqlsh.py
--



[5/5] cassandra git commit: Merge branch 'cassandra-3.0' into trunk

2015-10-27 Thread tylerhobbs
Merge branch 'cassandra-3.0' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7cedeca9
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7cedeca9
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7cedeca9

Branch: refs/heads/trunk
Commit: 7cedeca9ac30173dbbc4178b191df6960d8f8190
Parents: d3617a3 d856d3d
Author: Tyler Hobbs 
Authored: Tue Oct 27 09:12:42 2015 -0500
Committer: Tyler Hobbs 
Committed: Tue Oct 27 09:12:42 2015 -0500

--
 bin/cqlsh.py | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
--




[3/4] cassandra git commit: Merge branch 'cassandra-2.1' into cassandra-2.2

2015-10-27 Thread tylerhobbs
Merge branch 'cassandra-2.1' into cassandra-2.2


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/40cef770
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/40cef770
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/40cef770

Branch: refs/heads/cassandra-3.0
Commit: 40cef770d08a899860358f31571cba26f5d8d3ee
Parents: b06c637 78810f2
Author: Tyler Hobbs 
Authored: Tue Oct 27 09:11:57 2015 -0500
Committer: Tyler Hobbs 
Committed: Tue Oct 27 09:11:57 2015 -0500

--
 bin/cqlsh.py | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
--




[4/4] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2015-10-27 Thread tylerhobbs
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/d856d3d7
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/d856d3d7
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/d856d3d7

Branch: refs/heads/cassandra-3.0
Commit: d856d3d7bb4ab2a7534158a23e6a2132ef9bb246
Parents: f901a74 40cef77
Author: Tyler Hobbs 
Authored: Tue Oct 27 09:12:22 2015 -0500
Committer: Tyler Hobbs 
Committed: Tue Oct 27 09:12:22 2015 -0500

--
 bin/cqlsh.py | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/d856d3d7/bin/cqlsh.py
--



[2/3] cassandra git commit: Merge branch 'cassandra-2.1' into cassandra-2.2

2015-10-27 Thread tylerhobbs
http://git-wip-us.apache.org/repos/asf/cassandra/blob/40cef770/bin/cqlsh.py
--
diff --cc bin/cqlsh.py
index 7f2d39b,000..09da020
mode 100644,00..100644
--- a/bin/cqlsh.py
+++ b/bin/cqlsh.py
@@@ -1,2743 -1,0 +1,2744 @@@
 +#!/bin/sh
 +# -*- mode: Python -*-
 +
 +# Licensed to the Apache Software Foundation (ASF) under one
 +# or more contributor license agreements.  See the NOTICE file
 +# distributed with this work for additional information
 +# regarding copyright ownership.  The ASF licenses this file
 +# to you under the Apache License, Version 2.0 (the
 +# "License"); you may not use this file except in compliance
 +# with the License.  You may obtain a copy of the License at
 +#
 +# http://www.apache.org/licenses/LICENSE-2.0
 +#
 +# Unless required by applicable law or agreed to in writing, software
 +# distributed under the License is distributed on an "AS IS" BASIS,
 +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 +# See the License for the specific language governing permissions and
 +# limitations under the License.
 +
 +""":"
 +# bash code here; finds a suitable python interpreter and execs this file.
 +# prefer unqualified "python" if suitable:
 +python -c 'import sys; sys.exit(not (0x020500b0 < sys.hexversion < 
0x0300))' 2>/dev/null \
 +&& exec python "$0" "$@"
 +for pyver in 2.6 2.7 2.5; do
 +which python$pyver > /dev/null 2>&1 && exec python$pyver "$0" "$@"
 +done
 +echo "No appropriate python interpreter found." >&2
 +exit 1
 +":"""
 +
 +from __future__ import with_statement
 +
 +import cmd
 +import codecs
 +import ConfigParser
 +import csv
 +import getpass
 +import locale
 +import multiprocessing
 +import optparse
 +import os
 +import platform
 +import sys
 +import time
 +import traceback
 +import warnings
 +from contextlib import contextmanager
 +from functools import partial
 +from glob import glob
 +from StringIO import StringIO
 +from uuid import UUID
 +
 +if sys.version_info[0] != 2 or sys.version_info[1] != 7:
 +sys.exit("\nCQL Shell supports only Python 2.7\n")
 +
 +description = "CQL Shell for Apache Cassandra"
 +version = "5.0.1"
 +
 +readline = None
 +try:
 +# check if tty first, cause readline doesn't check, and only cares
 +# about $TERM. we don't want the funky escape code stuff to be
 +# output if not a tty.
 +if sys.stdin.isatty():
 +import readline
 +except ImportError:
 +pass
 +
 +CQL_LIB_PREFIX = 'cassandra-driver-internal-only-'
 +
 +CASSANDRA_PATH = os.path.join(os.path.dirname(os.path.realpath(__file__)), 
'..')
 +
 +# use bundled libs for python-cql and thrift, if available. if there
 +# is a ../lib dir, use bundled libs there preferentially.
 +ZIPLIB_DIRS = [os.path.join(CASSANDRA_PATH, 'lib')]
 +myplatform = platform.system()
 +if myplatform == 'Linux':
 +ZIPLIB_DIRS.append('/usr/share/cassandra/lib')
 +
 +if os.environ.get('CQLSH_NO_BUNDLED', ''):
 +ZIPLIB_DIRS = ()
 +
 +
 +def find_zip(libprefix):
 +for ziplibdir in ZIPLIB_DIRS:
 +zips = glob(os.path.join(ziplibdir, libprefix + '*.zip'))
 +if zips:
 +return max(zips)   # probably the highest version, if multiple
 +
 +cql_zip = find_zip(CQL_LIB_PREFIX)
 +if cql_zip:
 +ver = os.path.splitext(os.path.basename(cql_zip))[0][len(CQL_LIB_PREFIX):]
 +sys.path.insert(0, os.path.join(cql_zip, 'cassandra-driver-' + ver))
 +
 +third_parties = ('futures-', 'six-')
 +
 +for lib in third_parties:
 +lib_zip = find_zip(lib)
 +if lib_zip:
 +sys.path.insert(0, lib_zip)
 +
 +warnings.filterwarnings("ignore", r".*blist.*")
 +try:
 +import cassandra
 +except ImportError, e:
 +sys.exit("\nPython Cassandra driver not installed, or not on 
PYTHONPATH.\n"
 + 'You might try "pip install cassandra-driver".\n\n'
 + 'Python: %s\n'
 + 'Module load path: %r\n\n'
 + 'Error: %s\n' % (sys.executable, sys.path, e))
 +
 +from cassandra.auth import PlainTextAuthProvider
 +from cassandra.cluster import Cluster, PagedResult
 +from cassandra.metadata import (ColumnMetadata, KeyspaceMetadata,
 +TableMetadata, protect_name, protect_names,
 +protect_value)
 +from cassandra.policies import WhiteListRoundRobinPolicy
 +from cassandra.protocol import QueryMessage, ResultMessage
 +from cassandra.query import SimpleStatement, ordered_dict_factory
 +
 +# cqlsh should run correctly when run out of a Cassandra source tree,
 +# out of an unpacked Cassandra tarball, and after a proper package install.
 +cqlshlibdir = os.path.join(CASSANDRA_PATH, 'pylib')
 +if os.path.isdir(cqlshlibdir):
 +sys.path.insert(0, cqlshlibdir)
 +
 +from cqlshlib import cql3handling, cqlhandling, pylexotron, sslhandling
 +from cqlshlib.displaying import (ANSI_RESET, BLUE, COLUMN_NAME_COLORS, CYAN,
 + RED, FormattedValue, colorm

[1/3] cassandra git commit: cqlsh: Fix NULL option in COPY cmds after CASS-10415

2015-10-27 Thread tylerhobbs
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.2 b06c637f1 -> 40cef770d


cqlsh: Fix NULL option in COPY cmds after CASS-10415

Patch by Stefania Alborghetti; reviewed by Tyler Hobbs for
CASSANDRA-10577


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/78810f25
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/78810f25
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/78810f25

Branch: refs/heads/cassandra-2.2
Commit: 78810f25506b3b612f22fb20ddc7500e45cb1eec
Parents: 87f43ac
Author: Stefania Alborghetti 
Authored: Tue Oct 27 09:10:10 2015 -0500
Committer: Tyler Hobbs 
Committed: Tue Oct 27 09:10:10 2015 -0500

--
 bin/cqlsh | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/78810f25/bin/cqlsh
--
diff --git a/bin/cqlsh b/bin/cqlsh
index 21f8ffe..ca45be3 100755
--- a/bin/cqlsh
+++ b/bin/cqlsh
@@ -327,10 +327,11 @@ cqlsh_extra_syntax_rules = r'''
  ( "WITH"  ( "AND"  )* )?
 ;
 
- ::= [optnames]= "=" [optvals]=
+ ::= [optnames]=(|) "=" 
[optvals]=
;
 
  ::= 
+  | 
   | 
   ;
 



[2/5] cassandra git commit: Merge branch 'cassandra-2.1' into cassandra-2.2

2015-10-27 Thread tylerhobbs
http://git-wip-us.apache.org/repos/asf/cassandra/blob/40cef770/bin/cqlsh.py
--
diff --cc bin/cqlsh.py
index 7f2d39b,000..09da020
mode 100644,00..100644
--- a/bin/cqlsh.py
+++ b/bin/cqlsh.py
@@@ -1,2743 -1,0 +1,2744 @@@
 +#!/bin/sh
 +# -*- mode: Python -*-
 +
 +# Licensed to the Apache Software Foundation (ASF) under one
 +# or more contributor license agreements.  See the NOTICE file
 +# distributed with this work for additional information
 +# regarding copyright ownership.  The ASF licenses this file
 +# to you under the Apache License, Version 2.0 (the
 +# "License"); you may not use this file except in compliance
 +# with the License.  You may obtain a copy of the License at
 +#
 +# http://www.apache.org/licenses/LICENSE-2.0
 +#
 +# Unless required by applicable law or agreed to in writing, software
 +# distributed under the License is distributed on an "AS IS" BASIS,
 +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 +# See the License for the specific language governing permissions and
 +# limitations under the License.
 +
 +""":"
 +# bash code here; finds a suitable python interpreter and execs this file.
 +# prefer unqualified "python" if suitable:
 +python -c 'import sys; sys.exit(not (0x020500b0 < sys.hexversion < 
0x0300))' 2>/dev/null \
 +&& exec python "$0" "$@"
 +for pyver in 2.6 2.7 2.5; do
 +which python$pyver > /dev/null 2>&1 && exec python$pyver "$0" "$@"
 +done
 +echo "No appropriate python interpreter found." >&2
 +exit 1
 +":"""
 +
 +from __future__ import with_statement
 +
 +import cmd
 +import codecs
 +import ConfigParser
 +import csv
 +import getpass
 +import locale
 +import multiprocessing
 +import optparse
 +import os
 +import platform
 +import sys
 +import time
 +import traceback
 +import warnings
 +from contextlib import contextmanager
 +from functools import partial
 +from glob import glob
 +from StringIO import StringIO
 +from uuid import UUID
 +
 +if sys.version_info[0] != 2 or sys.version_info[1] != 7:
 +sys.exit("\nCQL Shell supports only Python 2.7\n")
 +
 +description = "CQL Shell for Apache Cassandra"
 +version = "5.0.1"
 +
 +readline = None
 +try:
 +# check if tty first, cause readline doesn't check, and only cares
 +# about $TERM. we don't want the funky escape code stuff to be
 +# output if not a tty.
 +if sys.stdin.isatty():
 +import readline
 +except ImportError:
 +pass
 +
 +CQL_LIB_PREFIX = 'cassandra-driver-internal-only-'
 +
 +CASSANDRA_PATH = os.path.join(os.path.dirname(os.path.realpath(__file__)), 
'..')
 +
 +# use bundled libs for python-cql and thrift, if available. if there
 +# is a ../lib dir, use bundled libs there preferentially.
 +ZIPLIB_DIRS = [os.path.join(CASSANDRA_PATH, 'lib')]
 +myplatform = platform.system()
 +if myplatform == 'Linux':
 +ZIPLIB_DIRS.append('/usr/share/cassandra/lib')
 +
 +if os.environ.get('CQLSH_NO_BUNDLED', ''):
 +ZIPLIB_DIRS = ()
 +
 +
 +def find_zip(libprefix):
 +for ziplibdir in ZIPLIB_DIRS:
 +zips = glob(os.path.join(ziplibdir, libprefix + '*.zip'))
 +if zips:
 +return max(zips)   # probably the highest version, if multiple
 +
 +cql_zip = find_zip(CQL_LIB_PREFIX)
 +if cql_zip:
 +ver = os.path.splitext(os.path.basename(cql_zip))[0][len(CQL_LIB_PREFIX):]
 +sys.path.insert(0, os.path.join(cql_zip, 'cassandra-driver-' + ver))
 +
 +third_parties = ('futures-', 'six-')
 +
 +for lib in third_parties:
 +lib_zip = find_zip(lib)
 +if lib_zip:
 +sys.path.insert(0, lib_zip)
 +
 +warnings.filterwarnings("ignore", r".*blist.*")
 +try:
 +import cassandra
 +except ImportError, e:
 +sys.exit("\nPython Cassandra driver not installed, or not on 
PYTHONPATH.\n"
 + 'You might try "pip install cassandra-driver".\n\n'
 + 'Python: %s\n'
 + 'Module load path: %r\n\n'
 + 'Error: %s\n' % (sys.executable, sys.path, e))
 +
 +from cassandra.auth import PlainTextAuthProvider
 +from cassandra.cluster import Cluster, PagedResult
 +from cassandra.metadata import (ColumnMetadata, KeyspaceMetadata,
 +TableMetadata, protect_name, protect_names,
 +protect_value)
 +from cassandra.policies import WhiteListRoundRobinPolicy
 +from cassandra.protocol import QueryMessage, ResultMessage
 +from cassandra.query import SimpleStatement, ordered_dict_factory
 +
 +# cqlsh should run correctly when run out of a Cassandra source tree,
 +# out of an unpacked Cassandra tarball, and after a proper package install.
 +cqlshlibdir = os.path.join(CASSANDRA_PATH, 'pylib')
 +if os.path.isdir(cqlshlibdir):
 +sys.path.insert(0, cqlshlibdir)
 +
 +from cqlshlib import cql3handling, cqlhandling, pylexotron, sslhandling
 +from cqlshlib.displaying import (ANSI_RESET, BLUE, COLUMN_NAME_COLORS, CYAN,
 + RED, FormattedValue, colorm

[2/4] cassandra git commit: Merge branch 'cassandra-2.1' into cassandra-2.2

2015-10-27 Thread tylerhobbs
http://git-wip-us.apache.org/repos/asf/cassandra/blob/40cef770/bin/cqlsh.py
--
diff --cc bin/cqlsh.py
index 7f2d39b,000..09da020
mode 100644,00..100644
--- a/bin/cqlsh.py
+++ b/bin/cqlsh.py
@@@ -1,2743 -1,0 +1,2744 @@@
 +#!/bin/sh
 +# -*- mode: Python -*-
 +
 +# Licensed to the Apache Software Foundation (ASF) under one
 +# or more contributor license agreements.  See the NOTICE file
 +# distributed with this work for additional information
 +# regarding copyright ownership.  The ASF licenses this file
 +# to you under the Apache License, Version 2.0 (the
 +# "License"); you may not use this file except in compliance
 +# with the License.  You may obtain a copy of the License at
 +#
 +# http://www.apache.org/licenses/LICENSE-2.0
 +#
 +# Unless required by applicable law or agreed to in writing, software
 +# distributed under the License is distributed on an "AS IS" BASIS,
 +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 +# See the License for the specific language governing permissions and
 +# limitations under the License.
 +
 +""":"
 +# bash code here; finds a suitable python interpreter and execs this file.
 +# prefer unqualified "python" if suitable:
 +python -c 'import sys; sys.exit(not (0x020500b0 < sys.hexversion < 
0x0300))' 2>/dev/null \
 +&& exec python "$0" "$@"
 +for pyver in 2.6 2.7 2.5; do
 +which python$pyver > /dev/null 2>&1 && exec python$pyver "$0" "$@"
 +done
 +echo "No appropriate python interpreter found." >&2
 +exit 1
 +":"""
 +
 +from __future__ import with_statement
 +
 +import cmd
 +import codecs
 +import ConfigParser
 +import csv
 +import getpass
 +import locale
 +import multiprocessing
 +import optparse
 +import os
 +import platform
 +import sys
 +import time
 +import traceback
 +import warnings
 +from contextlib import contextmanager
 +from functools import partial
 +from glob import glob
 +from StringIO import StringIO
 +from uuid import UUID
 +
 +if sys.version_info[0] != 2 or sys.version_info[1] != 7:
 +sys.exit("\nCQL Shell supports only Python 2.7\n")
 +
 +description = "CQL Shell for Apache Cassandra"
 +version = "5.0.1"
 +
 +readline = None
 +try:
 +# check if tty first, cause readline doesn't check, and only cares
 +# about $TERM. we don't want the funky escape code stuff to be
 +# output if not a tty.
 +if sys.stdin.isatty():
 +import readline
 +except ImportError:
 +pass
 +
 +CQL_LIB_PREFIX = 'cassandra-driver-internal-only-'
 +
 +CASSANDRA_PATH = os.path.join(os.path.dirname(os.path.realpath(__file__)), 
'..')
 +
 +# use bundled libs for python-cql and thrift, if available. if there
 +# is a ../lib dir, use bundled libs there preferentially.
 +ZIPLIB_DIRS = [os.path.join(CASSANDRA_PATH, 'lib')]
 +myplatform = platform.system()
 +if myplatform == 'Linux':
 +ZIPLIB_DIRS.append('/usr/share/cassandra/lib')
 +
 +if os.environ.get('CQLSH_NO_BUNDLED', ''):
 +ZIPLIB_DIRS = ()
 +
 +
 +def find_zip(libprefix):
 +for ziplibdir in ZIPLIB_DIRS:
 +zips = glob(os.path.join(ziplibdir, libprefix + '*.zip'))
 +if zips:
 +return max(zips)   # probably the highest version, if multiple
 +
 +cql_zip = find_zip(CQL_LIB_PREFIX)
 +if cql_zip:
 +ver = os.path.splitext(os.path.basename(cql_zip))[0][len(CQL_LIB_PREFIX):]
 +sys.path.insert(0, os.path.join(cql_zip, 'cassandra-driver-' + ver))
 +
 +third_parties = ('futures-', 'six-')
 +
 +for lib in third_parties:
 +lib_zip = find_zip(lib)
 +if lib_zip:
 +sys.path.insert(0, lib_zip)
 +
 +warnings.filterwarnings("ignore", r".*blist.*")
 +try:
 +import cassandra
 +except ImportError, e:
 +sys.exit("\nPython Cassandra driver not installed, or not on 
PYTHONPATH.\n"
 + 'You might try "pip install cassandra-driver".\n\n'
 + 'Python: %s\n'
 + 'Module load path: %r\n\n'
 + 'Error: %s\n' % (sys.executable, sys.path, e))
 +
 +from cassandra.auth import PlainTextAuthProvider
 +from cassandra.cluster import Cluster, PagedResult
 +from cassandra.metadata import (ColumnMetadata, KeyspaceMetadata,
 +TableMetadata, protect_name, protect_names,
 +protect_value)
 +from cassandra.policies import WhiteListRoundRobinPolicy
 +from cassandra.protocol import QueryMessage, ResultMessage
 +from cassandra.query import SimpleStatement, ordered_dict_factory
 +
 +# cqlsh should run correctly when run out of a Cassandra source tree,
 +# out of an unpacked Cassandra tarball, and after a proper package install.
 +cqlshlibdir = os.path.join(CASSANDRA_PATH, 'pylib')
 +if os.path.isdir(cqlshlibdir):
 +sys.path.insert(0, cqlshlibdir)
 +
 +from cqlshlib import cql3handling, cqlhandling, pylexotron, sslhandling
 +from cqlshlib.displaying import (ANSI_RESET, BLUE, COLUMN_NAME_COLORS, CYAN,
 + RED, FormattedValue, colorm

[1/4] cassandra git commit: cqlsh: Fix NULL option in COPY cmds after CASS-10415

2015-10-27 Thread tylerhobbs
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.0 f901a74c8 -> d856d3d7b


cqlsh: Fix NULL option in COPY cmds after CASS-10415

Patch by Stefania Alborghetti; reviewed by Tyler Hobbs for
CASSANDRA-10577


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/78810f25
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/78810f25
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/78810f25

Branch: refs/heads/cassandra-3.0
Commit: 78810f25506b3b612f22fb20ddc7500e45cb1eec
Parents: 87f43ac
Author: Stefania Alborghetti 
Authored: Tue Oct 27 09:10:10 2015 -0500
Committer: Tyler Hobbs 
Committed: Tue Oct 27 09:10:10 2015 -0500

--
 bin/cqlsh | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/78810f25/bin/cqlsh
--
diff --git a/bin/cqlsh b/bin/cqlsh
index 21f8ffe..ca45be3 100755
--- a/bin/cqlsh
+++ b/bin/cqlsh
@@ -327,10 +327,11 @@ cqlsh_extra_syntax_rules = r'''
  ( "WITH"  ( "AND"  )* )?
 ;
 
- ::= [optnames]= "=" [optvals]=
+ ::= [optnames]=(|) "=" 
[optvals]=
;
 
  ::= 
+  | 
   | 
   ;
 



[3/3] cassandra git commit: Merge branch 'cassandra-2.1' into cassandra-2.2

2015-10-27 Thread tylerhobbs
Merge branch 'cassandra-2.1' into cassandra-2.2


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/40cef770
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/40cef770
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/40cef770

Branch: refs/heads/cassandra-2.2
Commit: 40cef770d08a899860358f31571cba26f5d8d3ee
Parents: b06c637 78810f2
Author: Tyler Hobbs 
Authored: Tue Oct 27 09:11:57 2015 -0500
Committer: Tyler Hobbs 
Committed: Tue Oct 27 09:11:57 2015 -0500

--
 bin/cqlsh.py | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
--



