[jira] [Commented] (CASSANDRA-12420) Duplicated Key in IN clause with a small fetch size will run forever

2016-08-18 Thread ZhaoYang (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15427703#comment-15427703
 ] 

ZhaoYang commented on CASSANDRA-12420:
--

The patch adds the current pager index to the PagingState. I think the fix won't 
break compatibility in 2.1.x.
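
As an illustration of the idea, a minimal sketch (assuming a simplified layout; 
in 2.1 the PagingState carries the partition key, cell name and remaining row 
count, and the pager index is the field the patch adds):

{code}
// Sketch only, not the actual PagingState: alongside the existing fields we
// record which sub-pager (IN-clause key) was active when the page ended, so a
// resumed query continues at that index instead of re-locating the key by
// value, which loops forever when the same key appears twice in the IN list.
class PagingStateSketch
{
    final byte[] partitionKey; // last partition key returned
    final byte[] cellName;     // last cell returned within that partition
    final int remaining;       // rows still to be fetched overall
    final int pagerIndex;      // new field: index of the current sub-pager

    PagingStateSketch(byte[] partitionKey, byte[] cellName, int remaining, int pagerIndex)
    {
        this.partitionKey = partitionKey;
        this.cellName = cellName;
        this.remaining = remaining;
        this.pagerIndex = pagerIndex;
    }
}
{code}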

> Duplicated Key in IN clause with a small fetch size will run forever
> 
>
> Key: CASSANDRA-12420
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12420
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
> Environment: cassandra 2.1.14, driver 2.1.7.1
>Reporter: ZhaoYang
>Assignee: ZhaoYang
> Fix For: 2.1.x
>
> Attachments: CASSANDRA-12420.patch
>
>
> This can be easily reproduced when the fetch size is smaller than the correct 
> number of rows.
> The table has 2 partition key columns, 1 clustering column, and 1 regular column.
> >Select select = QueryBuilder.select().from("ks", "cf");
> >select.where().and(QueryBuilder.eq("a", 1));
> >select.where().and(QueryBuilder.in("b", Arrays.asList(1, 1, 1)));
> >select.setFetchSize(5);
> For now we deduplicate the keys on the client side, but it's better to fix 
> this inside Cassandra.
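
A self-contained version of the snippet above might look like the following 
sketch (the contact point, session setup and paging loop are assumptions added 
for completeness; only the four lines above come from the report):

{code}
import java.util.Arrays;

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.querybuilder.QueryBuilder;
import com.datastax.driver.core.querybuilder.Select;

public class Repro12420
{
    public static void main(String[] args)
    {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect();

        Select select = QueryBuilder.select().from("ks", "cf");
        select.where().and(QueryBuilder.eq("a", 1));
        select.where().and(QueryBuilder.in("b", Arrays.asList(1, 1, 1)));
        select.setFetchSize(5); // smaller than the number of matching rows

        // With the duplicated key in the IN list, this iteration pages forever.
        for (Row row : session.execute(select))
            System.out.println(row);

        cluster.close();
    }
}
{code}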



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Issue Comment Deleted] (CASSANDRA-11031) MultiTenant : support “ALLOW FILTERING" for Partition Key

2016-08-18 Thread ZhaoYang (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhaoYang updated CASSANDRA-11031:
-
Comment: was deleted

(was: Hi, [~blerer] in case you forgot..)

> MultiTenant : support “ALLOW FILTERING" for Partition Key
> -
>
> Key: CASSANDRA-11031
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11031
> Project: Cassandra
>  Issue Type: New Feature
>  Components: CQL
>Reporter: ZhaoYang
>Assignee: ZhaoYang
>Priority: Minor
> Fix For: 3.x
>
>
> Currently, ALLOW FILTERING only works for secondary index columns or 
> clustering columns, and it's slow, because Cassandra will read all data from 
> the SSTables on disk into memory to filter.
> But we can support ALLOW FILTERING on the partition key: as far as I know, 
> partition keys are kept in memory, so we can easily filter them and then read 
> only the required data from the SSTables.
> This will be similar to "SELECT * FROM table", which scans through the entire cluster.
> CREATE TABLE multi_tenant_table (
>   tenant_id text,
>   pk2 text,
>   c1 text,
>   c2 text,
>   v1 text,
>   v2 text,
>   PRIMARY KEY ((tenant_id, pk2), c1, c2)
> );
> SELECT * FROM multi_tenant_table WHERE tenant_id = 'datastax' ALLOW FILTERING;
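
To illustrate the intent of the proposal (a toy sketch with made-up names, not 
Cassandra code): filter the in-memory partition keys first, then read only the 
matching partitions from disk.

{code}
import java.util.List;
import java.util.stream.Collectors;

public class PartitionKeyFilterSketch
{
    // Toy stand-in for a partition key; the field names mirror the example table.
    static class PartitionKey
    {
        final String tenantId;
        final String pk2;

        PartitionKey(String tenantId, String pk2)
        {
            this.tenantId = tenantId;
            this.pk2 = pk2;
        }
    }

    // Toy stand-in for the in-memory partition key index.
    static List<PartitionKey> inMemoryIndex()
    {
        return List.of(new PartitionKey("datastax", "x"),
                       new PartitionKey("acme", "y"),
                       new PartitionKey("datastax", "z"));
    }

    public static void main(String[] args)
    {
        // Step 1: filter the partition keys in memory (cheap).
        List<PartitionKey> matching = inMemoryIndex().stream()
                .filter(k -> k.tenantId.equals("datastax"))
                .collect(Collectors.toList());

        // Step 2: read only the matching partitions from the SSTables (the
        // expensive part, now limited to partitions that passed the filter).
        matching.forEach(k -> System.out.println("read partition (" + k.tenantId + ", " + k.pk2 + ")"));
    }
}
{code}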



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11031) MultiTenant : support “ALLOW FILTERING" for Partition Key

2016-08-18 Thread ZhaoYang (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15427699#comment-15427699
 ] 

ZhaoYang commented on CASSANDRA-11031:
--

Hi, [~blerer] in case you forgot..

> MultiTenant : support “ALLOW FILTERING" for Partition Key
> -
>
> Key: CASSANDRA-11031
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11031
> Project: Cassandra
>  Issue Type: New Feature
>  Components: CQL
>Reporter: ZhaoYang
>Assignee: ZhaoYang
>Priority: Minor
> Fix For: 3.x
>
>
> Currently, ALLOW FILTERING only works for secondary index columns or 
> clustering columns, and it's slow, because Cassandra will read all data from 
> the SSTables on disk into memory to filter.
> But we can support ALLOW FILTERING on the partition key: as far as I know, 
> partition keys are kept in memory, so we can easily filter them and then read 
> only the required data from the SSTables.
> This will be similar to "SELECT * FROM table", which scans through the entire cluster.
> CREATE TABLE multi_tenant_table (
>   tenant_id text,
>   pk2 text,
>   c1 text,
>   c2 text,
>   v1 text,
>   v2 text,
>   PRIMARY KEY ((tenant_id, pk2), c1, c2)
> );
> SELECT * FROM multi_tenant_table WHERE tenant_id = 'datastax' ALLOW FILTERING;



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11031) MultiTenant : support “ALLOW FILTERING" for Partition Key

2016-08-18 Thread ZhaoYang (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15427700#comment-15427700
 ] 

ZhaoYang commented on CASSANDRA-11031:
--

Hi, [~blerer] in case you forgot..

> MultiTenant : support “ALLOW FILTERING" for Partition Key
> -
>
> Key: CASSANDRA-11031
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11031
> Project: Cassandra
>  Issue Type: New Feature
>  Components: CQL
>Reporter: ZhaoYang
>Assignee: ZhaoYang
>Priority: Minor
> Fix For: 3.x
>
>
> Currently, ALLOW FILTERING only works for secondary index columns or 
> clustering columns, and it's slow, because Cassandra will read all data from 
> the SSTables on disk into memory to filter.
> But we can support ALLOW FILTERING on the partition key: as far as I know, 
> partition keys are kept in memory, so we can easily filter them and then read 
> only the required data from the SSTables.
> This will be similar to "SELECT * FROM table", which scans through the entire cluster.
> CREATE TABLE multi_tenant_table (
>   tenant_id text,
>   pk2 text,
>   c1 text,
>   c2 text,
>   v1 text,
>   v2 text,
>   PRIMARY KEY ((tenant_id, pk2), c1, c2)
> );
> SELECT * FROM multi_tenant_table WHERE tenant_id = 'datastax' ALLOW FILTERING;



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11031) MultiTenant : support “ALLOW FILTERING" for Partition Key

2016-08-18 Thread ZhaoYang (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15427701#comment-15427701
 ] 

ZhaoYang commented on CASSANDRA-11031:
--

thanks. LGTM

> MultiTenant : support “ALLOW FILTERING" for Partition Key
> -
>
> Key: CASSANDRA-11031
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11031
> Project: Cassandra
>  Issue Type: New Feature
>  Components: CQL
>Reporter: ZhaoYang
>Assignee: ZhaoYang
>Priority: Minor
> Fix For: 3.x
>
>
> Currently, ALLOW FILTERING only works for secondary index columns or 
> clustering columns, and it's slow, because Cassandra will read all data from 
> the SSTables on disk into memory to filter.
> But we can support ALLOW FILTERING on the partition key: as far as I know, 
> partition keys are kept in memory, so we can easily filter them and then read 
> only the required data from the SSTables.
> This will be similar to "SELECT * FROM table", which scans through the entire cluster.
> CREATE TABLE multi_tenant_table (
>   tenant_id text,
>   pk2 text,
>   c1 text,
>   c2 text,
>   v1 text,
>   v2 text,
>   PRIMARY KEY ((tenant_id, pk2), c1, c2)
> );
> SELECT * FROM multi_tenant_table WHERE tenant_id = 'datastax' ALLOW FILTERING;



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-12500) Counter cache hit counter not incrementing

2016-08-18 Thread Jeff Jirsa (JIRA)
Jeff Jirsa created CASSANDRA-12500:
--

 Summary: Counter cache hit counter not incrementing 
 Key: CASSANDRA-12500
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12500
 Project: Cassandra
  Issue Type: Bug
Reporter: Jeff Jirsa
Priority: Minor


Trivial repro on 3.7 with scripts below. Haven't dug through 
{{CounterCacheTest}} to find out if the cache is getting skipped or if it's 
just not updating the hit counter properly: 

{code}
#!/bin/sh

ccm remove test

ccm create test -v 3.7 -n 1
sed -i'' -e 's/row_cache_size_in_mb: 0/row_cache_size_in_mb: 100/g' .ccm/test/node1/conf/cassandra.yaml
ccm start
sleep 5


ccm node1 cqlsh < ~/keyspace.cql
ccm node1 cqlsh < ~/table-counter.cql
ccm node1 cqlsh < ~/table-counter-clustering.cql

echo "Schema created, reads and writes starting"
ccm node1 nodetool info | grep Cache

echo "UPDATE test.test SET v=v+1 WHERE id=1; " | ccm node1 cqlsh
echo "UPDATE test.test2 SET v=v+1 WHERE id=1 and c=1; " | ccm node1 cqlsh
echo "UPDATE test.test2 SET v=v+1 WHERE id=1 and c=2; " | ccm node1 cqlsh

echo "SELECT * FROM test.test WHERE id=1; " | ccm node1 cqlsh
ccm node1 nodetool info | grep Cache
echo "SELECT * FROM test.test WHERE id=1; " | ccm node1 cqlsh
ccm node1 nodetool info | grep Cache

echo "SELECT * FROM test.test2 WHERE id=1; " | ccm node1 cqlsh
ccm node1 nodetool info | grep Cache
echo "SELECT * FROM test.test2 WHERE id=1; " | ccm node1 cqlsh
ccm node1 nodetool info | grep Cache
echo "SELECT * FROM test.test2 WHERE id=1 and c=1; " | ccm node1 cqlsh
ccm node1 nodetool info | grep Cache
echo "SELECT * FROM test.test2 WHERE id=1 and c=1; " | ccm node1 cqlsh
ccm node1 nodetool info | grep Cache
{code}

Keyspace / tables:

{code}
CREATE KEYSPACE test WITH replication = {'class': 'SimpleStrategy', 
'replication_factor': '1'}  AND durable_writes = true;
{code}

{code}
CREATE TABLE test.test (
id int PRIMARY KEY,
v counter
) WITH caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'};
{code}

{code}
CREATE TABLE test.test2 (
id int,
c int,
v counter,
PRIMARY KEY(id, c)
) WITH caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'};
{code}


Output:

{code}
Schema created, reads and writes starting
Key Cache  : entries 17, size 1.29 KiB, capacity 24 MiB, 61 hits, 
84 requests, 0.726 recent hit rate, 14400 save period in seconds
Row Cache  : entries 0, size 0 bytes, capacity 100 MiB, 0 hits, 0 
requests, NaN recent hit rate, 0 save period in seconds
Counter Cache  : entries 0, size 0 bytes, capacity 12 MiB, 0 hits, 0 
requests, NaN recent hit rate, 7200 save period in seconds
Chunk Cache: entries 14, size 896 KiB, capacity 91 MiB, 38 misses, 
227 requests, 0.833 recent hit rate, 80.234 microseconds miss latency

 id | v
+---
  1 | 1

(1 rows)
Key Cache  : entries 17, size 1.29 KiB, capacity 24 MiB, 70 hits, 
93 requests, 0.753 recent hit rate, 14400 save period in seconds
Row Cache  : entries 0, size 0 bytes, capacity 100 MiB, 0 hits, 0 
requests, NaN recent hit rate, 0 save period in seconds
Counter Cache  : entries 3, size 328 bytes, capacity 12 MiB, 0 hits, 3 
requests, 0.000 recent hit rate, 7200 save period in seconds
Chunk Cache: entries 14, size 896 KiB, capacity 91 MiB, 38 misses, 
288 requests, 0.868 recent hit rate, 80.234 microseconds miss latency

 id | v
+---
  1 | 1

(1 rows)
Key Cache  : entries 17, size 1.29 KiB, capacity 24 MiB, 72 hits, 
95 requests, 0.758 recent hit rate, 14400 save period in seconds
Row Cache  : entries 0, size 0 bytes, capacity 100 MiB, 0 hits, 0 
requests, NaN recent hit rate, 0 save period in seconds
Counter Cache  : entries 3, size 328 bytes, capacity 12 MiB, 0 hits, 3 
requests, 0.000 recent hit rate, 7200 save period in seconds
Chunk Cache: entries 14, size 896 KiB, capacity 91 MiB, 38 misses, 
303 requests, 0.875 recent hit rate, 80.234 microseconds miss latency

 id | c | v
+---+---
  1 | 1 | 1
  1 | 2 | 1

(2 rows)
Key Cache  : entries 17, size 1.29 KiB, capacity 24 MiB, 74 hits, 
97 requests, 0.763 recent hit rate, 14400 save period in seconds
Row Cache  : entries 0, size 0 bytes, capacity 100 MiB, 0 hits, 0 
requests, NaN recent hit rate, 0 save period in seconds
Counter Cache  : entries 3, size 328 bytes, capacity 12 MiB, 0 hits, 3 
requests, 0.000 recent hit rate, 7200 save period in seconds
Chunk Cache: entries 14, size 896 KiB, capacity 91 MiB, 38 misses, 
318 requests, 0.881 recent hit rate, 80.234 microseconds miss latency

 id | c | v
+---+---
  1 | 1 | 1
  1 | 2 | 1

(2 rows)
Key Cache  : entries 17, size 1.29 KiB, capacity 24 MiB, 76 hits, 
99 requests, 0.768 recent hit rate, 14400 save period in seconds
Row Cache  : entries 0, size 0 bytes, capacity 100 MiB

[jira] [Updated] (CASSANDRA-12279) nodetool repair hangs on non-existant table

2016-08-18 Thread Masataka Yamaguchi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masataka Yamaguchi updated CASSANDRA-12279:
---
Status: Patch Available  (was: Open)

> nodetool repair hangs on non-existant table
> ---
>
> Key: CASSANDRA-12279
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12279
> Project: Cassandra
>  Issue Type: Bug
> Environment: Linux Ubuntu, Openjdk
>Reporter: Benjamin Roth
>Priority: Minor
>  Labels: lhf
> Attachments: CASSANDRA-12279-trunk.patch, new_result_example.txt, 
> org_result_example.txt
>
>
> If nodetool repair is called with a table that does not exist, it hangs 
> indefinitely without any error message or logs.
> E.g.
> nodetool repair foo bar
> Keyspace foo exists but table bar does not



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12279) nodetool repair hangs on non-existant table

2016-08-18 Thread Masataka Yamaguchi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masataka Yamaguchi updated CASSANDRA-12279:
---
Attachment: org_result_example.txt
new_result_example.txt
CASSANDRA-12279-trunk.patch

This happens because the method “parseOptionalTables” in 
src/java/org/apache/cassandra/tools/NodeTool.java does not confirm that the 
given keyspace contains the given table.
The method “parseOptionalTables” is also used by other nodetool commands such as 
src/java/org/apache/cassandra/tools/nodetool/Cleanup.java, 
src/java/org/apache/cassandra/tools/nodetool/Compact.java, and so on; a Java 
error occurs when those commands are given a non-existent table name, although 
they do not hang. 

I have reimplemented parseOptionalTables so that it confirms that the given 
keyspace contains the given table, and I will send the patch.
Please review it.
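
The check I added is along these lines (a simplified sketch, not the actual 
patch; the schema lookup is represented by a plain map here):

{code}
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class ParseTablesSketch
{
    // Reject table names that are not part of the given keyspace instead of
    // passing them through silently.
    static List<String> parseOptionalTables(String keyspace, List<String> tables,
                                            Map<String, Set<String>> schema)
    {
        Set<String> known = schema.getOrDefault(keyspace, Collections.emptySet());
        for (String table : tables)
            if (!known.contains(table))
                throw new IllegalArgumentException(
                    String.format("Table %s does not exist in keyspace %s", table, keyspace));
        return tables;
    }

    public static void main(String[] args)
    {
        Map<String, Set<String>> schema = Map.of("foo", Set.of("baz"));
        // Throws: keyspace foo exists but table bar does not.
        parseOptionalTables("foo", List.of("bar"), schema);
    }
}
{code}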


> nodetool repair hangs on non-existant table
> ---
>
> Key: CASSANDRA-12279
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12279
> Project: Cassandra
>  Issue Type: Bug
> Environment: Linux Ubuntu, Openjdk
>Reporter: Benjamin Roth
>Priority: Minor
>  Labels: lhf
> Attachments: CASSANDRA-12279-trunk.patch, new_result_example.txt, 
> org_result_example.txt
>
>
> If nodetool repair is called with a table that does not exist, it hangs 
> indefinitely without any error message or logs.
> E.g.
> nodetool repair foo bar
> Keyspace foo exists but table bar does not



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12499) Row cache does not cache partitions on tables without clustering keys

2016-08-18 Thread Jeff Jirsa (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Jirsa updated CASSANDRA-12499:
---
Assignee: (was: Jeff Jirsa)

> Row cache does not cache partitions on tables without clustering keys
> -
>
> Key: CASSANDRA-12499
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12499
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Jeff Jirsa
>
> {code}
> MLSEA-JJIRSA01:~ jjirsa$ ccm start
> MLSEA-JJIRSA01:~ jjirsa$ echo "DESCRIBE TABLE test.test; " | ccm node1 cqlsh
> CREATE TABLE test.test (
> id int PRIMARY KEY,
> v text
> ) WITH bloom_filter_fp_chance = 0.01
> AND caching = {'keys': 'ALL', 'rows_per_partition': '100'}
> AND comment = ''
> AND compaction = {'class': 
> 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
> 'max_threshold': '32', 'min_threshold': '4'}
> AND compression = {'chunk_length_in_kb': '64', 'class': 
> 'org.apache.cassandra.io.compress.LZ4Compressor'}
> AND crc_check_chance = 1.0
> AND dclocal_read_repair_chance = 0.1
> AND default_time_to_live = 0
> AND gc_grace_seconds = 864000
> AND max_index_interval = 2048
> AND memtable_flush_period_in_ms = 0
> AND min_index_interval = 128
> AND read_repair_chance = 0.0
> AND speculative_retry = '99PERCENTILE';
> MLSEA-JJIRSA01:~ jjirsa$ ccm node1 nodetool info | grep Row
> Row Cache  : entries 0, size 0 bytes, capacity 100 MiB, 0 hits, 0 
> requests, NaN recent hit rate, 0 save period in seconds
> MLSEA-JJIRSA01:~ jjirsa$ echo "INSERT INTO test.test(id,v) VALUES(1, 'a'); " 
> | ccm node1 cqlsh
> MLSEA-JJIRSA01:~ jjirsa$ echo "SELECT * FROM test.test WHERE id=1; " | ccm 
> node1 cqlsh
>  id | v
> +---
>   1 | a
> (1 rows)
> MLSEA-JJIRSA01:~ jjirsa$ ccm node1 nodetool info | grep Row
> Row Cache  : entries 0, size 0 bytes, capacity 100 MiB, 0 hits, 0 
> requests, NaN recent hit rate, 0 save period in seconds
> MLSEA-JJIRSA01:~ jjirsa$ echo "SELECT * FROM test.test WHERE id=1; " | ccm 
> node1 cqlsh
>  id | v
> +---
>   1 | a
> (1 rows)
> MLSEA-JJIRSA01:~ jjirsa$ ccm node1 nodetool info | grep Row
> Row Cache  : entries 0, size 0 bytes, capacity 100 MiB, 0 hits, 0 
> requests, NaN recent hit rate, 0 save period in seconds
> MLSEA-JJIRSA01:~ jjirsa$
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12499) Row cache does not cache partitions on tables without clustering keys

2016-08-18 Thread Jeff Jirsa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15427673#comment-15427673
 ] 

Jeff Jirsa commented on CASSANDRA-12499:


Working as intended in:
- 2.1.12
- 2.2.7

Broken in:
- 3.0.8
- 3.7
- Trunk



> Row cache does not cache partitions on tables without clustering keys
> -
>
> Key: CASSANDRA-12499
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12499
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Jeff Jirsa
>Assignee: Jeff Jirsa
>
> {code}
> MLSEA-JJIRSA01:~ jjirsa$ ccm start
> MLSEA-JJIRSA01:~ jjirsa$ echo "DESCRIBE TABLE test.test; " | ccm node1 cqlsh
> CREATE TABLE test.test (
> id int PRIMARY KEY,
> v text
> ) WITH bloom_filter_fp_chance = 0.01
> AND caching = {'keys': 'ALL', 'rows_per_partition': '100'}
> AND comment = ''
> AND compaction = {'class': 
> 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
> 'max_threshold': '32', 'min_threshold': '4'}
> AND compression = {'chunk_length_in_kb': '64', 'class': 
> 'org.apache.cassandra.io.compress.LZ4Compressor'}
> AND crc_check_chance = 1.0
> AND dclocal_read_repair_chance = 0.1
> AND default_time_to_live = 0
> AND gc_grace_seconds = 864000
> AND max_index_interval = 2048
> AND memtable_flush_period_in_ms = 0
> AND min_index_interval = 128
> AND read_repair_chance = 0.0
> AND speculative_retry = '99PERCENTILE';
> MLSEA-JJIRSA01:~ jjirsa$ ccm node1 nodetool info | grep Row
> Row Cache  : entries 0, size 0 bytes, capacity 100 MiB, 0 hits, 0 
> requests, NaN recent hit rate, 0 save period in seconds
> MLSEA-JJIRSA01:~ jjirsa$ echo "INSERT INTO test.test(id,v) VALUES(1, 'a'); " 
> | ccm node1 cqlsh
> MLSEA-JJIRSA01:~ jjirsa$ echo "SELECT * FROM test.test WHERE id=1; " | ccm 
> node1 cqlsh
>  id | v
> +---
>   1 | a
> (1 rows)
> MLSEA-JJIRSA01:~ jjirsa$ ccm node1 nodetool info | grep Row
> Row Cache  : entries 0, size 0 bytes, capacity 100 MiB, 0 hits, 0 
> requests, NaN recent hit rate, 0 save period in seconds
> MLSEA-JJIRSA01:~ jjirsa$ echo "SELECT * FROM test.test WHERE id=1; " | ccm 
> node1 cqlsh
>  id | v
> +---
>   1 | a
> (1 rows)
> MLSEA-JJIRSA01:~ jjirsa$ ccm node1 nodetool info | grep Row
> Row Cache  : entries 0, size 0 bytes, capacity 100 MiB, 0 hits, 0 
> requests, NaN recent hit rate, 0 save period in seconds
> MLSEA-JJIRSA01:~ jjirsa$
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-12499) Row cache does not cache partitions on tables without clustering keys

2016-08-18 Thread Jeff Jirsa (JIRA)
Jeff Jirsa created CASSANDRA-12499:
--

 Summary: Row cache does not cache partitions on tables without 
clustering keys
 Key: CASSANDRA-12499
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12499
 Project: Cassandra
  Issue Type: Bug
Reporter: Jeff Jirsa
Assignee: Jeff Jirsa


{code}
MLSEA-JJIRSA01:~ jjirsa$ ccm start
MLSEA-JJIRSA01:~ jjirsa$ echo "DESCRIBE TABLE test.test; " | ccm node1 cqlsh

CREATE TABLE test.test (
id int PRIMARY KEY,
v text
) WITH bloom_filter_fp_chance = 0.01
AND caching = {'keys': 'ALL', 'rows_per_partition': '100'}
AND comment = ''
AND compaction = {'class': 
'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
'max_threshold': '32', 'min_threshold': '4'}
AND compression = {'chunk_length_in_kb': '64', 'class': 
'org.apache.cassandra.io.compress.LZ4Compressor'}
AND crc_check_chance = 1.0
AND dclocal_read_repair_chance = 0.1
AND default_time_to_live = 0
AND gc_grace_seconds = 864000
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 0
AND min_index_interval = 128
AND read_repair_chance = 0.0
AND speculative_retry = '99PERCENTILE';

MLSEA-JJIRSA01:~ jjirsa$ ccm node1 nodetool info | grep Row
Row Cache  : entries 0, size 0 bytes, capacity 100 MiB, 0 hits, 0 
requests, NaN recent hit rate, 0 save period in seconds
MLSEA-JJIRSA01:~ jjirsa$ echo "INSERT INTO test.test(id,v) VALUES(1, 'a'); " | 
ccm node1 cqlsh
MLSEA-JJIRSA01:~ jjirsa$ echo "SELECT * FROM test.test WHERE id=1; " | ccm 
node1 cqlsh

 id | v
+---
  1 | a

(1 rows)
MLSEA-JJIRSA01:~ jjirsa$ ccm node1 nodetool info | grep Row
Row Cache  : entries 0, size 0 bytes, capacity 100 MiB, 0 hits, 0 
requests, NaN recent hit rate, 0 save period in seconds
MLSEA-JJIRSA01:~ jjirsa$ echo "SELECT * FROM test.test WHERE id=1; " | ccm 
node1 cqlsh

 id | v
+---
  1 | a

(1 rows)
MLSEA-JJIRSA01:~ jjirsa$ ccm node1 nodetool info | grep Row
Row Cache  : entries 0, size 0 bytes, capacity 100 MiB, 0 hits, 0 
requests, NaN recent hit rate, 0 save period in seconds
MLSEA-JJIRSA01:~ jjirsa$
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12498) Shorten the sstable log message as it unnecessarily contains the full path of a SSTable

2016-08-18 Thread Jeff Jirsa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15427610#comment-15427610
 ] 

Jeff Jirsa commented on CASSANDRA-12498:


Sort of -0 on this for two reasons: 

{code}
For JBOD configuration where you have multiple data directories, keeping the 
one level before ksName/cfName-UUID should be adequate
{code}

In the JBOD/multiple data directory case, you may have lots of prefixes with 
similar names, e.g. {{/mnt1/cassandra/data/ks/tbl}}, {{/mnt2/cassandra/data/ks/tbl}}, 
and so on. It's impossible to predict how users will name and organize their JBOD 
system, so you'd be trying to guess, and you'd inevitably cause pain for someone.

Also, the repetitive log entries may be annoying, but they allow operators to do 
some intelligent scripting via log parsing that would become complicated if the 
prefixes disappeared.
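
For example, the kind of scripting I mean (a hypothetical sketch that reads 
compaction log lines on stdin and counts sstables per table directory, which 
only works while the full prefix is present in the line):

{code}
import java.util.HashMap;
import java.util.Map;
import java.util.Scanner;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class SSTablesPerDirectory
{
    // Matches full sstable data file paths as printed in compaction log lines;
    // group(1) is the directory prefix, which is exactly what would disappear.
    static final Pattern SSTABLE =
        Pattern.compile("(/[^\\s:,\\]]+)/[^/\\s:,\\]]+-Data\\.db");

    public static void main(String[] args)
    {
        Map<String, Integer> perDir = new HashMap<>();
        try (Scanner in = new Scanner(System.in))
        {
            while (in.hasNextLine())
            {
                Matcher m = SSTABLE.matcher(in.nextLine());
                while (m.find())
                    perDir.merge(m.group(1), 1, Integer::sum);
            }
        }
        perDir.forEach((dir, n) -> System.out.println(dir + " -> " + n));
    }
}
{code}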

> Shorten the sstable log message as it unnecessarily contains the full path of 
> a SSTable
> ---
>
> Key: CASSANDRA-12498
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12498
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Observability
>Reporter: Wei Deng
>Priority: Minor
>
> There are a lot of places in debug.log where we print out the name of an 
> SSTable. This is useful for looking at the full path of an SSTable file when 
> you're investigating an individual SSTable. However, during compaction we often 
> see 32 SSTables getting compacted at the same time, and the corresponding log 
> line becomes very repetitive and hard to read, as most of it repeats the 
> same first part of the file system path again and again, like the 
> following:
> {noformat}
> DEBUG [CompactionExecutor:94] 2016-08-18 06:33:17,185  
> CompactionTask.java:146 - Compacting (a5ca2f10-650d-11e6-95ef-a561ab3c45e8) 
> [/var/lib/cassandra/data/keyspace1/standard1-139cc441650d11e6a038bfe806276de2/mb-200-big-Data.db:level=1,
>  
> /var/lib/cassandra/data/keyspace1/standard1-139cc441650d11e6a038bfe806276de2/mb-201-big-Data.db:level=1,
>  
> /var/lib/cassandra/data/keyspace1/standard1-139cc441650d11e6a038bfe806276de2/mb-16-big-Data.db:level=0,
>  
> /var/lib/cassandra/data/keyspace1/standard1-139cc441650d11e6a038bfe806276de2/mb-204-big-Data.db:level=1,
>  
> /var/lib/cassandra/data/keyspace1/standard1-139cc441650d11e6a038bfe806276de2/mb-205-big-Data.db:level=1,
>  
> /var/lib/cassandra/data/keyspace1/standard1-139cc441650d11e6a038bfe806276de2/mb-203-big-Data.db:level=1,
>  
> /var/lib/cassandra/data/keyspace1/standard1-139cc441650d11e6a038bfe806276de2/mb-202-big-Data.db:level=1,
>  
> /var/lib/cassandra/data/keyspace1/standard1-139cc441650d11e6a038bfe806276de2/mb-207-big-Data.db:level=1,
>  
> /var/lib/cassandra/data/keyspace1/standard1-139cc441650d11e6a038bfe806276de2/mb-206-big-Data.db:level=1,
>  
> /var/lib/cassandra/data/keyspace1/standard1-139cc441650d11e6a038bfe806276de2/mb-3-big-Data.db:level=0,
>  
> /var/lib/cassandra/data/keyspace1/standard1-139cc441650d11e6a038bfe806276de2/mb-208-big-Data.db:level=1,
>  
> /var/lib/cassandra/data/keyspace1/standard1-139cc441650d11e6a038bfe806276de2/mb-209-big-Data.db:level=1,
>  ]
> {noformat}
> We should remove any text one level before ksName/cfName-UUID/, as it's very 
> easy to get it from cassandra.yaml. For JBOD configurations where you have 
> multiple data directories, keeping the one level before ksName/cfName-UUID 
> should be adequate.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-6151) CqlPagingRecorderReader Used when Partition Key Is Explicitly Stated

2016-08-18 Thread nicerobot (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15427547#comment-15427547
 ] 

nicerobot commented on CASSANDRA-6151:
--

Hypothetically, I can create lots of single-node results, join them, and 
hopefully the partitioning is retained for further processing of the data.

> CqlPagingRecorderReader Used when Partition Key Is Explicitly Stated
> 
>
> Key: CASSANDRA-6151
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6151
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Russell Spitzer
>Assignee: Alex Liu
>Priority: Minor
> Attachments: 6151-1.2-branch.txt, 6151-v2-1.2-branch.txt, 
> 6151-v3-1.2-branch.txt, 6151-v4-1.2.10-branch.txt
>
>
> From 
> http://stackoverflow.com/questions/19189649/composite-key-in-cassandra-with-pig/19211546#19211546
> The user was attempting to load a single partition using a where clause in a 
> pig load statement. 
> CQL Table
> {code}
> CREATE table data (
>   occurday  text,
>   seqnumber int,
>   occurtimems bigint,
>   unique bigint,
>   fields map,
>   primary key ((occurday, seqnumber), occurtimems, unique)
> )
> {code}
> Pig Load statement Query
> {code}
> data = LOAD 
> 'cql://ks/data?where_clause=seqnumber%3D10%20AND%20occurday%3D%272013-10-01%27'
>  USING CqlStorage();
> {code}
> This results in an exception when processed by the CqlPagingRecordReader, 
> which attempts to page this query even though it contains at most one 
> partition key. This leads to an invalid CQL statement. 
> CqlPagingRecordReader Query
> {code}
> SELECT * FROM "data" WHERE token("occurday","seqnumber") > ? AND
> token("occurday","seqnumber") <= ? AND occurday='A Great Day' 
> AND seqnumber=1 LIMIT 1000 ALLOW FILTERING
> {code}
> Exception
> {code}
>  InvalidRequestException(why:occurday cannot be restricted by more than one 
> relation if it includes an Equal)
> {code}
> I'm not sure it is worth the special case, but a modification to not use the 
> paging record reader when the entire partition key is specified would solve 
> this issue. 
> h3. Solution
> If it has EQUAL clauses for all the partition keys, we use the query 
> {code}
>   SELECT * FROM "data" 
>   WHERE occurday='A Great Day' 
>AND seqnumber=1 LIMIT 1000 ALLOW FILTERING
> {code}
> instead of 
> {code}
>   SELECT * FROM "data" 
>   WHERE token("occurday","seqnumber") > ? 
>AND token("occurday","seqnumber") <= ? 
>AND occurday='A Great Day' 
>AND seqnumber=1 LIMIT 1000 ALLOW FILTERING
> {code}
> The baseline implementation is to retrieve all data of all rows around the 
> ring. This new feature retrieves all data of a single wide row; it's one 
> level lower than the baseline. It helps the use case where the user is only 
> interested in a specific wide row, so the user doesn't spend a whole job 
> retrieving all the rows around the ring.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12092) dtest failure in consistency_test.TestAccuracy.test_simple_strategy_counters

2016-08-18 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15427538#comment-15427538
 ] 

Stefania commented on CASSANDRA-12092:
--

I've reproduced a 
[failure|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-dtest-multiplex/20/testReport/junit/node_0_iter_169.consistency_test/TestAccuracy/test_simple_strategy_counters/]
 with the extra log message, confirming that the test is definitely reading 
from the host it contacts, so there could be a race in Cassandra.

I'm attempting another run, with more logs 
[here|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-dtest-multiplex/21/].

> dtest failure in consistency_test.TestAccuracy.test_simple_strategy_counters
> 
>
> Key: CASSANDRA-12092
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12092
> Project: Cassandra
>  Issue Type: Test
>Reporter: Sean McCarthy
>Assignee: Stefania
>  Labels: dtest
> Attachments: node1.log, node2.log, node3.log
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-2.1_dtest/484/testReport/consistency_test/TestAccuracy/test_simple_strategy_counters
> Failed on CassCI build cassandra-2.1_dtest #484
> {code}
> Standard Error
> Traceback (most recent call last):
>   File "/home/automaton/cassandra-dtest/consistency_test.py", line 514, in run
> valid_fcn(v)
>   File "/home/automaton/cassandra-dtest/consistency_test.py", line 497, in 
> validate_counters
> check_all_sessions(s, n, c)
>   File "/home/automaton/cassandra-dtest/consistency_test.py", line 490, in 
> check_all_sessions
> "value of %s at key %d, instead got these values: %s" % (write_nodes, 
> val, n, results)
> AssertionError: Failed to read value from sufficient number of nodes, 
> required 2 nodes to have a counter value of 1 at key 200, instead got these 
> values: [0, 0, 1]
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12474) Define executeLocally() at the ReadQuery Level

2016-08-18 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15427491#comment-15427491
 ] 

Stefania commented on CASSANDRA-12474:
--

Thanks for the review! 

The dtest build was aborted, I've relaunched it.

> Define executeLocally() at the ReadQuery Level
> --
>
> Key: CASSANDRA-12474
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12474
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local Write-Read Paths
>Reporter: Stefania
>Assignee: Stefania
> Fix For: 3.x
>
>
> We have {{execute}} and {{executeInternal}} at the {{ReadQuery}} level, but 
> {{executeLocally}} is missing, which makes the abstraction incomplete.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11594) Too many open files on directories

2016-08-18 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15427488#comment-15427488
 ] 

Stefania commented on CASSANDRA-11594:
--

I've assigned this ticket to myself. I plan to create a distributed test with 
the same topology and schema that runs repair and monitors the file 
descriptors; cc [~cassandra-te] in case they have resources to assist with this.
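
The file descriptor monitoring can be as simple as sampling the process's fd 
count from /proc while repair runs, e.g. (a sketch, Linux-only, with the pid 
passed as an argument):

{code}
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class FdMonitor
{
    public static void main(String[] args) throws Exception
    {
        long pid = Long.parseLong(args[0]);
        Path fdDir = Paths.get("/proc/" + pid + "/fd");
        while (true)
        {
            // Each entry in /proc/<pid>/fd is one open file descriptor.
            int count = 0;
            try (DirectoryStream<Path> fds = Files.newDirectoryStream(fdDir))
            {
                for (Path ignored : fds)
                    count++;
            }
            System.out.println(System.currentTimeMillis() + " " + count);
            Thread.sleep(10_000);
        }
    }
}
{code}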

If we cannot reproduce it that way, we could analyze a heap dump taken while 
the problem occurs. We would search for any file input stream instances and 
look at the GC roots. Let us know if you would be able to share a heap dump 
[~n0rad] as that might speed things up.


> Too many open files on directories
> --
>
> Key: CASSANDRA-11594
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11594
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: n0rad
>Assignee: Stefania
>Priority: Critical
> Attachments: openfiles.zip, screenshot.png
>
>
> I have a 6-node cluster in prod, in 3 racks.
> Each node has:
> - 4 GB of commitlogs in 343 files
> - 275 GB of data in 504 files
> On Saturday, 1 node in each rack crashed with too many open files (it seems 
> to be the same node in each rack).
> {code}
> lsof -n -p $PID gives me 66899 out of 65826 max
> {code}
> It contains 64527 open directories (2371 unique).
> Part of the list:
> {code}
> java19076 root 2140r  DIR   8,17  143360 4386718705 
> /opt/stage2/pod-cassandra-aci-cassandra/rootfs/data/keyspaces/email_logs_query/emails-2d4abd00e9ea11e591199d740e07bd95
> java19076 root 2141r  DIR   8,17  143360 4386718705 
> /opt/stage2/pod-cassandra-aci-cassandra/rootfs/data/keyspaces/email_logs_query/emails-2d4abd00e9ea11e591199d740e07bd95
> java19076 root 2142r  DIR   8,17  143360 4386718705 
> /opt/stage2/pod-cassandra-aci-cassandra/rootfs/data/keyspaces/email_logs_query/emails-2d4abd00e9ea11e591199d740e07bd95
> java19076 root 2143r  DIR   8,17  143360 4386718705 
> /opt/stage2/pod-cassandra-aci-cassandra/rootfs/data/keyspaces/email_logs_query/emails-2d4abd00e9ea11e591199d740e07bd95
> java19076 root 2144r  DIR   8,17  143360 4386718705 
> /opt/stage2/pod-cassandra-aci-cassandra/rootfs/data/keyspaces/email_logs_query/emails-2d4abd00e9ea11e591199d740e07bd95
> java19076 root 2145r  DIR   8,17  143360 4386718705 
> /opt/stage2/pod-cassandra-aci-cassandra/rootfs/data/keyspaces/email_logs_query/emails-2d4abd00e9ea11e591199d740e07bd95
> java19076 root 2146r  DIR   8,17  143360 4386718705 
> /opt/stage2/pod-cassandra-aci-cassandra/rootfs/data/keyspaces/email_logs_query/emails-2d4abd00e9ea11e591199d740e07bd95
> java19076 root 2147r  DIR   8,17  143360 4386718705 
> /opt/stage2/pod-cassandra-aci-cassandra/rootfs/data/keyspaces/email_logs_query/emails-2d4abd00e9ea11e591199d740e07bd95
> java19076 root 2148r  DIR   8,17  143360 4386718705 
> /opt/stage2/pod-cassandra-aci-cassandra/rootfs/data/keyspaces/email_logs_query/emails-2d4abd00e9ea11e591199d740e07bd95
> java19076 root 2149r  DIR   8,17  143360 4386718705 
> /opt/stage2/pod-cassandra-aci-cassandra/rootfs/data/keyspaces/email_logs_query/emails-2d4abd00e9ea11e591199d740e07bd95
> java19076 root 2150r  DIR   8,17  143360 4386718705 
> /opt/stage2/pod-cassandra-aci-cassandra/rootfs/data/keyspaces/email_logs_query/emails-2d4abd00e9ea11e591199d740e07bd95
> java19076 root 2151r  DIR   8,17  143360 4386718705 
> /opt/stage2/pod-cassandra-aci-cassandra/rootfs/data/keyspaces/email_logs_query/emails-2d4abd00e9ea11e591199d740e07bd95
> java19076 root 2152r  DIR   8,17  143360 4386718705 
> /opt/stage2/pod-cassandra-aci-cassandra/rootfs/data/keyspaces/email_logs_query/emails-2d4abd00e9ea11e591199d740e07bd95
> java19076 root 2153r  DIR   8,17  143360 4386718705 
> /opt/stage2/pod-cassandra-aci-cassandra/rootfs/data/keyspaces/email_logs_query/emails-2d4abd00e9ea11e591199d740e07bd95
> java19076 root 2154r  DIR   8,17  143360 4386718705 
> /opt/stage2/pod-cassandra-aci-cassandra/rootfs/data/keyspaces/email_logs_query/emails-2d4abd00e9ea11e591199d740e07bd95
> java19076 root 2155r  DIR   8,17  143360 4386718705 
> /opt/stage2/pod-cassandra-aci-cassandra/rootfs/data/keyspaces/email_logs_query/emails-2d4abd00e9ea11e591199d740e07bd95
> {code}
> The 3 other nodes crashed 4 hours later



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (CASSANDRA-11594) Too many open files on directories

2016-08-18 Thread Stefania (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefania reassigned CASSANDRA-11594:


Assignee: Stefania

> Too many open files on directories
> --
>
> Key: CASSANDRA-11594
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11594
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: n0rad
>Assignee: Stefania
>Priority: Critical
> Attachments: openfiles.zip, screenshot.png
>
>
> I have a 6-node cluster in prod, in 3 racks.
> Each node has:
> - 4 GB of commitlogs in 343 files
> - 275 GB of data in 504 files
> On Saturday, 1 node in each rack crashed with too many open files (it seems 
> to be the same node in each rack).
> {code}
> lsof -n -p $PID gives me 66899 out of 65826 max
> {code}
> It contains 64527 open directories (2371 unique).
> Part of the list:
> {code}
> java19076 root 2140r  DIR   8,17  143360 4386718705 
> /opt/stage2/pod-cassandra-aci-cassandra/rootfs/data/keyspaces/email_logs_query/emails-2d4abd00e9ea11e591199d740e07bd95
> java19076 root 2141r  DIR   8,17  143360 4386718705 
> /opt/stage2/pod-cassandra-aci-cassandra/rootfs/data/keyspaces/email_logs_query/emails-2d4abd00e9ea11e591199d740e07bd95
> java19076 root 2142r  DIR   8,17  143360 4386718705 
> /opt/stage2/pod-cassandra-aci-cassandra/rootfs/data/keyspaces/email_logs_query/emails-2d4abd00e9ea11e591199d740e07bd95
> java19076 root 2143r  DIR   8,17  143360 4386718705 
> /opt/stage2/pod-cassandra-aci-cassandra/rootfs/data/keyspaces/email_logs_query/emails-2d4abd00e9ea11e591199d740e07bd95
> java19076 root 2144r  DIR   8,17  143360 4386718705 
> /opt/stage2/pod-cassandra-aci-cassandra/rootfs/data/keyspaces/email_logs_query/emails-2d4abd00e9ea11e591199d740e07bd95
> java19076 root 2145r  DIR   8,17  143360 4386718705 
> /opt/stage2/pod-cassandra-aci-cassandra/rootfs/data/keyspaces/email_logs_query/emails-2d4abd00e9ea11e591199d740e07bd95
> java19076 root 2146r  DIR   8,17  143360 4386718705 
> /opt/stage2/pod-cassandra-aci-cassandra/rootfs/data/keyspaces/email_logs_query/emails-2d4abd00e9ea11e591199d740e07bd95
> java19076 root 2147r  DIR   8,17  143360 4386718705 
> /opt/stage2/pod-cassandra-aci-cassandra/rootfs/data/keyspaces/email_logs_query/emails-2d4abd00e9ea11e591199d740e07bd95
> java19076 root 2148r  DIR   8,17  143360 4386718705 
> /opt/stage2/pod-cassandra-aci-cassandra/rootfs/data/keyspaces/email_logs_query/emails-2d4abd00e9ea11e591199d740e07bd95
> java19076 root 2149r  DIR   8,17  143360 4386718705 
> /opt/stage2/pod-cassandra-aci-cassandra/rootfs/data/keyspaces/email_logs_query/emails-2d4abd00e9ea11e591199d740e07bd95
> java19076 root 2150r  DIR   8,17  143360 4386718705 
> /opt/stage2/pod-cassandra-aci-cassandra/rootfs/data/keyspaces/email_logs_query/emails-2d4abd00e9ea11e591199d740e07bd95
> java19076 root 2151r  DIR   8,17  143360 4386718705 
> /opt/stage2/pod-cassandra-aci-cassandra/rootfs/data/keyspaces/email_logs_query/emails-2d4abd00e9ea11e591199d740e07bd95
> java19076 root 2152r  DIR   8,17  143360 4386718705 
> /opt/stage2/pod-cassandra-aci-cassandra/rootfs/data/keyspaces/email_logs_query/emails-2d4abd00e9ea11e591199d740e07bd95
> java19076 root 2153r  DIR   8,17  143360 4386718705 
> /opt/stage2/pod-cassandra-aci-cassandra/rootfs/data/keyspaces/email_logs_query/emails-2d4abd00e9ea11e591199d740e07bd95
> java19076 root 2154r  DIR   8,17  143360 4386718705 
> /opt/stage2/pod-cassandra-aci-cassandra/rootfs/data/keyspaces/email_logs_query/emails-2d4abd00e9ea11e591199d740e07bd95
> java19076 root 2155r  DIR   8,17  143360 4386718705 
> /opt/stage2/pod-cassandra-aci-cassandra/rootfs/data/keyspaces/email_logs_query/emails-2d4abd00e9ea11e591199d740e07bd95
> {code}
> The 3 other nodes crashed 4 hours later



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12494) dtest failure in topology_test.TestTopology.crash_during_decommission_test

2016-08-18 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15427474#comment-15427474
 ] 

Stefania commented on CASSANDRA-12494:
--

[~pauloricardomg], I think this is related to CASSANDRA-11611; can you confirm 
that we just need to replace "Streaming error occurred" with "Stream failed", or 
ignore both?

> dtest failure in topology_test.TestTopology.crash_during_decommission_test
> --
>
> Key: CASSANDRA-12494
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12494
> Project: Cassandra
>  Issue Type: Test
>Reporter: Sean McCarthy
>Assignee: Philip Thompson
>  Labels: dtest
> Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, 
> node2_debug.log, node2_gc.log, node3.log, node3_debug.log, node3_gc.log
>
>
> example failure:
> http://cassci.datastax.com/job/trunk_offheap_dtest/376/testReport/topology_test/TestTopology/crash_during_decommission_test
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 358, in run
> self.tearDown()
>   File "/home/automaton/cassandra-dtest/dtest.py", line 673, in tearDown
> raise AssertionError('Unexpected error in log, see stdout')
> "Unexpected error in log, see stdout
> {code}
> {code}
> Standard Output
> Unexpected error in node1 log, error: 
> ERROR [RMI TCP Connection(2)-127.0.0.1] 2016-08-18 02:15:31,444 
> StorageService.java:3719 - Error while decommissioning node 
> org.apache.cassandra.streaming.StreamException: Stream failed
>   at 
> org.apache.cassandra.streaming.StreamResultFuture.maybeComplete(StreamResultFuture.java:215)
>  ~[main/:na]
>   at 
> org.apache.cassandra.streaming.StreamResultFuture.handleSessionComplete(StreamResultFuture.java:191)
>  ~[main/:na]
>   at 
> org.apache.cassandra.streaming.StreamSession.closeSession(StreamSession.java:448)
>  ~[main/:na]
>   at 
> org.apache.cassandra.streaming.StreamSession.onError(StreamSession.java:551) 
> ~[main/:na]
>   at 
> org.apache.cassandra.streaming.StreamSession.start(StreamSession.java:249) 
> ~[main/:na]
>   at 
> org.apache.cassandra.streaming.StreamCoordinator$StreamSessionConnector.run(StreamCoordinator.java:263)
>  ~[main/:na]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  ~[na:1.8.0_45]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  ~[na:1.8.0_45]
>   at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_45]
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (CASSANDRA-11534) cqlsh fails to format collections when using aliases

2016-08-18 Thread Stefania (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefania reassigned CASSANDRA-11534:


Assignee: Stefania

> cqlsh fails to format collections when using aliases
> 
>
> Key: CASSANDRA-11534
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11534
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Robert Stupp
>Assignee: Stefania
>Priority: Minor
>
> Given a simple table, selecting the columns without an alias works fine. 
> However, if the map is selected using an alias, cqlsh fails to format the 
> value.
> {code}
> create keyspace foo WITH replication = {'class': 'SimpleStrategy', 
> 'replication_factor': 1};
> CREATE TABLE foo.foo (id int primary key, m map<int, text>);
> insert into foo.foo (id, m) VALUES ( 1, {1: 'one', 2: 'two', 3:'three'});
> insert into foo.foo (id, m) VALUES ( 2, {1: '1one', 2: '2two', 3:'3three'});
> cqlsh> select id, m from foo.foo;
>  id | m
> +-
>   1 |{1: 'one', 2: 'two', 3: 'three'}
>   2 | {1: '1one', 2: '2two', 3: '3three'}
> (2 rows)
> cqlsh> select id, m as "weofjkewopf" from foo.foo;
>  id | weofjkewopf
> +---
>   1 |OrderedMapSerializedKey([(1, u'one'), (2, u'two'), (3, u'three')])
>   2 | OrderedMapSerializedKey([(1, u'1one'), (2, u'2two'), (3, u'3three')])
> (2 rows)
> Failed to format value OrderedMapSerializedKey([(1, u'one'), (2, u'two'), (3, 
> u'three')]) : 'NoneType' object has no attribute 'sub_types'
> Failed to format value OrderedMapSerializedKey([(1, u'1one'), (2, u'2two'), 
> (3, u'3three')]) : 'NoneType' object has no attribute 'sub_types'
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9875) Rebuild from targeted replica

2016-08-18 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15427414#comment-15427414
 ] 

Paulo Motta commented on CASSANDRA-9875:


Sorry for the delay here. Implementation looks mostly good, but I find the UI a 
bit counter-intuitive, since if you want to rebuild multiple ranges from a 
single or small set of sources (a typical use case IMO), you'd need to repeat a 
source multiple times (one for each specified range). Unless you have a use 
case in mind for this, I think a more intuitive interface would be a host 
whitelist (similar to repair {{--hosts}}), and this would also make 
implementation a bit simpler, since you could simply add a source filter to 
RangeStreamer (avoiding the need for {{addRangesWithSources}}). This would 
still allow you to specify a direct range-source mapping by running multiple 
rebuild commands. WDYT?
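
Something like the following sketch is what I have in mind (hypothetical, 
modeled on RangeStreamer's existing source filters, with the whitelist coming 
from the proposed option):

{code}
import java.net.InetAddress;
import java.util.Set;

// A streaming source is considered only if it is in the user-supplied set
// (an empty set meaning no restriction).
public class WhitelistSourceFilter
{
    private final Set<InetAddress> whitelist;

    public WhitelistSourceFilter(Set<InetAddress> whitelist)
    {
        this.whitelist = whitelist;
    }

    public boolean shouldInclude(InetAddress endpoint)
    {
        return whitelist.isEmpty() || whitelist.contains(endpoint);
    }

    public static void main(String[] args) throws Exception
    {
        WhitelistSourceFilter filter = new WhitelistSourceFilter(
            Set.of(InetAddress.getByName("127.0.0.1")));
        System.out.println(filter.shouldInclude(InetAddress.getByName("127.0.0.1"))); // true
        System.out.println(filter.shouldInclude(InetAddress.getByName("127.0.0.2"))); // false
    }
}
{code}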

Also, could you add a simple dtest?

> Rebuild from targeted replica
> -
>
> Key: CASSANDRA-9875
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9875
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: sankalp kohli
>Assignee: Geoffrey Yu
>Priority: Minor
>  Labels: lhf
> Fix For: 3.x
>
> Attachments: 9875-trunk.txt
>
>
> Nodetool rebuild command will rebuild all the token ranges handled by the 
> endpoint. Sometimes we want to rebuild only a certain token range. We should 
> add this ability to rebuild command. We should also add the ability to stream 
> from a given replica.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (CASSANDRA-12476) SyntaxException when COPY FROM Counter Table with Null value

2016-08-18 Thread Stefania (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefania reassigned CASSANDRA-12476:


Assignee: Stefania

> SyntaxException when COPY FROM Counter Table with Null value
> 
>
> Key: CASSANDRA-12476
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12476
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Ashraful Islam
>Assignee: Stefania
>Priority: Minor
>  Labels: cqlsh
> Fix For: 3.0.x, 3.x
>
>
> I have a simple counter table 
> {noformat}
> CREATE TABLE test (
> a int PRIMARY KEY,
> b counter,
> c counter
> ) ;
> {noformat}
> I updated the b column value with 
> {noformat}
> UPDATE test SET b = b + 1 WHERE a = 1;
> {noformat}
> Then I exported the data with 
> {noformat}
> COPY test TO 'test.csv';
> {noformat}
> And imported it with 
> {noformat}
> COPY test FROM 'test.csv';
> {noformat}
> I get this error:
> {noformat}
> Failed to import 1 rows: SyntaxException - line 1:34 no viable alternative at 
> input 'WHERE' (...=b+1,c=c+ [WHERE]...) -  will retry later, attempt 1 of 5
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (CASSANDRA-12481) dtest failure in cqlshlib.test.test_cqlsh_output.TestCqlshOutput.test_describe_keyspace_output

2016-08-18 Thread Stefania (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefania reassigned CASSANDRA-12481:


Assignee: Stefania  (was: DS Test Eng)

> dtest failure in 
> cqlshlib.test.test_cqlsh_output.TestCqlshOutput.test_describe_keyspace_output
> --
>
> Key: CASSANDRA-12481
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12481
> Project: Cassandra
>  Issue Type: Test
>Reporter: Craig Kodman
>Assignee: Stefania
>  Labels: dtest
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.0_cqlsh_tests/29/testReport/cqlshlib.test.test_cqlsh_output/TestCqlshOutput/test_describe_keyspace_output
> {code}
> Error Message
> errors={'127.0.0.1': 'Client request timeout. See 
> Session.execute[_async](timeout)'}, last_host=127.0.0.1
> {code}
> http://cassci.datastax.com/job/cassandra-3.0_cqlsh_tests/lastCompletedBuild/cython=no,label=ctool-lab/testReport/cqlshlib.test.test_cqlsh_output/TestCqlshOutput/test_describe_keyspace_output/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (CASSANDRA-12479) dtest failure in cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest.test_bulk_round_trip_non_prepared_statements

2016-08-18 Thread Stefania (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12479?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefania reassigned CASSANDRA-12479:


Assignee: Stefania  (was: DS Test Eng)

> dtest failure in 
> cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest.test_bulk_round_trip_non_prepared_statements
> 
>
> Key: CASSANDRA-12479
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12479
> Project: Cassandra
>  Issue Type: Test
>Reporter: Craig Kodman
>Assignee: Stefania
>  Labels: dtest
> Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, 
> node2_debug.log, node2_gc.log, node3.log, node3_debug.log, node3_gc.log
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-2.2_offheap_dtest/447/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_bulk_round_trip_non_prepared_statements
> {code}
> Error Message
> 10 != 96848
>  >> begin captured logging << 
> dtest: DEBUG: cluster ccm directory: /tmp/dtest-BryYNs
> dtest: DEBUG: Done setting configuration options:
> {   'initial_token': None,
> 'memtable_allocation_type': 'offheap_objects',
> 'num_tokens': '32',
> 'phi_convict_threshold': 5,
> 'range_request_timeout_in_ms': 1,
> 'read_request_timeout_in_ms': 1,
> 'request_timeout_in_ms': 1,
> 'truncate_request_timeout_in_ms': 1,
> 'write_request_timeout_in_ms': 1}
> dtest: DEBUG: Running stress without any user profile
> dtest: DEBUG: Generated 10 records
> dtest: DEBUG: Exporting to csv file: /tmp/tmpREOhBZ
> dtest: DEBUG: CONSISTENCY ALL; COPY keyspace1.standard1 TO '/tmp/tmpREOhBZ' 
> WITH PAGETIMEOUT = 10 AND PAGESIZE = 1000
> dtest: DEBUG: COPY TO took 0:00:04.598829 to export 10 records
> dtest: DEBUG: Truncating keyspace1.standard1...
> dtest: DEBUG: Importing from csv file: /tmp/tmpREOhBZ
> dtest: DEBUG: COPY keyspace1.standard1 FROM '/tmp/tmpREOhBZ' WITH 
> PREPAREDSTATEMENTS = False
> dtest: DEBUG: COPY FROM took 0:00:10.348123 to import 10 records
> dtest: DEBUG: Exporting to csv file: /tmp/tmpeXLPtz
> dtest: DEBUG: CONSISTENCY ALL; COPY keyspace1.standard1 TO '/tmp/tmpeXLPtz' 
> WITH PAGETIMEOUT = 10 AND PAGESIZE = 1000
> dtest: DEBUG: COPY TO took 0:00:11.681829 to export 10 records
> - >> end captured logging << -
> {code}
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/cqlsh_tests/cqlsh_copy_tests.py", 
> line 2482, in test_bulk_round_trip_non_prepared_statements
> copy_from_options={'PREPAREDSTATEMENTS': False})
>   File "/home/automaton/cassandra-dtest/cqlsh_tests/cqlsh_copy_tests.py", 
> line 2461, in _test_bulk_round_trip
> sum(1 for _ in open(tempfile2.name)))
>   File "/usr/lib/python2.7/unittest/case.py", line 513, in assertEqual
> assertion_func(first, second, msg=msg)
>   File "/usr/lib/python2.7/unittest/case.py", line 506, in _baseAssertEqual
> raise self.failureException(msg)
> "10 != 96848\n >> begin captured logging << 
> \ndtest: DEBUG: cluster ccm directory: 
> /tmp/dtest-BryYNs\ndtest: DEBUG: Done setting configuration options:\n{   
> 'initial_token': None,\n'memtable_allocation_type': 'offheap_objects',\n  
>   'num_tokens': '32',\n'phi_convict_threshold': 5,\n
> 'range_request_timeout_in_ms': 1,\n'read_request_timeout_in_ms': 
> 1,\n'request_timeout_in_ms': 1,\n
> 'truncate_request_timeout_in_ms': 1,\n'write_request_timeout_in_ms': 
> 1}\ndtest: DEBUG: Running stress without any user profile\ndtest: DEBUG: 
> Generated 10 records\ndtest: DEBUG: Exporting to csv file: 
> /tmp/tmpREOhBZ\ndtest: DEBUG: CONSISTENCY ALL; COPY keyspace1.standard1 TO 
> '/tmp/tmpREOhBZ' WITH PAGETIMEOUT = 10 AND PAGESIZE = 1000\ndtest: DEBUG: 
> COPY TO took 0:00:04.598829 to export 10 records\ndtest: DEBUG: 
> Truncating keyspace1.standard1...\ndtest: DEBUG: Importing from csv file: 
> /tmp/tmpREOhBZ\ndtest: DEBUG: COPY keyspace1.standard1 FROM '/tmp/tmpREOhBZ' 
> WITH PREPAREDSTATEMENTS = False\ndtest: DEBUG: COPY FROM took 0:00:10.348123 
> to import 10 records\ndtest: DEBUG: Exporting to csv file: 
> /tmp/tmpeXLPtz\ndtest: DEBUG: CONSISTENCY ALL; COPY keyspace1.standard1 TO 
> '/tmp/tmpeXLPtz' WITH PAGETIMEOUT = 10 AND PAGESIZE = 1000\ndtest: DEBUG: 
> COPY TO took 0:00:11.681829 to export 10 records\n- 
> >> end captured logging << -"
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9054) Let DatabaseDescriptor not implicitly startup services

2016-08-18 Thread Robert Stupp (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15427382#comment-15427382
 ] 

Robert Stupp commented on CASSANDRA-9054:
-

[~jjordan] applied the changes. Wanna take a look?
||trunk|[branch|https://github.com/apache/cassandra/compare/trunk...snazy:9054-followup-trunk]|[testall|http://cassci.datastax.com/view/Dev/view/snazy/job/snazy-9054-followup-trunk-testall/lastSuccessfulBuild/]|[dtest|http://cassci.datastax.com/view/Dev/view/snazy/job/snazy-9054-followup-trunk-dtest/lastSuccessfulBuild/]


> Let DatabaseDescriptor not implicitly startup services
> --
>
> Key: CASSANDRA-9054
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9054
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Jeremiah Jordan
>Assignee: Robert Stupp
> Fix For: 3.10
>
>
> Right now, to get at config stuff you go through DatabaseDescriptor. But when 
> you instantiate DatabaseDescriptor it actually opens system tables and such, 
> which triggers commit log replays and other things if the right flags aren't 
> set ahead of time. This makes getting at config stuff from tools annoying, 
> as you have to be very careful about instantiation order.
> It would be nice if we could break DatabaseDescriptor up into multiple 
> classes, so that getting at config stuff from tools wasn't such a pain.
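
For illustration, the shape of the split could be something like this sketch 
(names are hypothetical, not a committed design): pure config access with no 
side effects, plus an explicit initialization step that only the daemon calls.

{code}
public class ConfigSplitSketch
{
    // A plain config holder that tools can read freely, with no side effects.
    static class Config
    {
        final String clusterName;
        final String[] dataDirectories;

        Config(String clusterName, String[] dataDirectories)
        {
            this.clusterName = clusterName;
            this.dataDirectories = dataDirectories;
        }
    }

    // Only the daemon calls this; it is where system tables would be opened,
    // commit logs replayed, and so on.
    static void startServices(Config config)
    {
        System.out.println("starting services for " + config.clusterName);
    }

    public static void main(String[] args)
    {
        Config config = new Config("Test Cluster", new String[]{ "/var/lib/cassandra/data" });
        // A tool can stop here: it has the config, and no services were started.
        System.out.println(config.dataDirectories[0]);
    }
}
{code}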



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8523) Writes should be sent to a replacement node while it is streaming in data

2016-08-18 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15427337#comment-15427337
 ] 

Paulo Motta commented on CASSANDRA-8523:


No, someone from the cassandra-dtest team is reviewing it, but since this will 
break existing tests it can only be committed after that is merged.

> Writes should be sent to a replacement node while it is streaming in data
> -
>
> Key: CASSANDRA-8523
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8523
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Richard Wagner
>Assignee: Paulo Motta
> Fix For: 2.1.x
>
>
> In our operations, we make heavy use of replace_address (or 
> replace_address_first_boot) in order to replace broken nodes. We now realize 
> that writes are not sent to the replacement nodes while they are in hibernate 
> state and streaming in data. This runs counter to what our expectations were, 
> especially since we know that writes ARE sent to nodes when they are 
> bootstrapped into the ring.
> It seems like Cassandra should arrange to send writes to a node that is in 
> the process of replacing another node, just like it does for nodes that are 
> bootstrapping. I hesitate to phrase this as "we should send writes to a node 
> in hibernate" because the concept of hibernate may be useful in other 
> contexts, as per CASSANDRA-8336. Maybe a new state is needed here?
> Among other things, the fact that we don't get writes during this period 
> makes subsequent repairs more expensive, proportional to the number of writes 
> that we miss (and depending on the amount of data that needs to be streamed 
> during replacement and the time it may take to rebuild secondary indexes, we 
> could miss many many hours worth of writes). It also leaves us more exposed 
> to consistency violations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8523) Writes should be sent to a replacement node while it is streaming in data

2016-08-18 Thread Richard Low (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15427335#comment-15427335
 ] 

Richard Low commented on CASSANDRA-8523:


Are you waiting for me to review the dtest PR?

> Writes should be sent to a replacement node while it is streaming in data
> -
>
> Key: CASSANDRA-8523
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8523
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Richard Wagner
>Assignee: Paulo Motta
> Fix For: 2.1.x
>
>
> In our operations, we make heavy use of replace_address (or 
> replace_address_first_boot) in order to replace broken nodes. We now realize 
> that writes are not sent to the replacement nodes while they are in hibernate 
> state and streaming in data. This runs counter to what our expectations were, 
> especially since we know that writes ARE sent to nodes when they are 
> bootstrapped into the ring.
> It seems like cassandra should arrange to send writes to a node that is in 
> the process of replacing another node, just like it does for nodes that are 
> bootstrapping. I hesitate to phrase this as "we should send writes to a node 
> in hibernate" because the concept of hibernate may be useful in other 
> contexts, as per CASSANDRA-8336. Maybe a new state is needed here?
> Among other things, the fact that we don't get writes during this period 
> makes subsequent repairs more expensive, proportional to the number of writes 
> that we miss (and depending on the amount of data that needs to be streamed 
> during replacement and the time it may take to rebuild secondary indexes, we 
> could miss many many hours worth of writes). It also leaves us more exposed 
> to consistency violations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


cassandra git commit: Fix rebuild of SASI indexes with existing index files

2016-08-18 Thread xedin
Repository: cassandra
Updated Branches:
  refs/heads/trunk 9797511c5 -> fa1131679


Fix rebuild of SASI indexes with existing index files

Patch by Alex Petrov; reviewed by Pavel Yaskevich for CASSANDRA-12374


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/fa113167
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/fa113167
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/fa113167

Branch: refs/heads/trunk
Commit: fa113167956a6163156a0f475171d1c41f9ed7c2
Parents: 9797511
Author: Alex Petrov 
Authored: Fri Aug 5 18:05:38 2016 +0200
Committer: Pavel Yaskevich 
Committed: Thu Aug 18 15:05:11 2016 -0700

--
 CHANGES.txt |  1 +
 .../apache/cassandra/index/sasi/SASIIndex.java  |  1 +
 .../cassandra/index/sasi/SASIIndexBuilder.java  |  3 +-
 .../cassandra/index/sasi/conf/ColumnIndex.java  |  5 ++
 .../cassandra/index/sasi/conf/DataTracker.java  | 19 
 .../cassandra/index/sasi/SASIIndexTest.java | 49 
 6 files changed, 77 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/fa113167/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 07c18c5..0e1e118 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.10
+ * Fix rebuild of SASI indexes with existing index files (CASSANDRA-12374)
  * Let DatabaseDescriptor not implicitly startup services (CASSANDRA-9054)
  * Fix clustering indexes in presence of static columns in SASI 
(CASSANDRA-12378)
  * Fix queries on columns with reversed type on SASI indexes (CASSANDRA-12223)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/fa113167/src/java/org/apache/cassandra/index/sasi/SASIIndex.java
--
diff --git a/src/java/org/apache/cassandra/index/sasi/SASIIndex.java 
b/src/java/org/apache/cassandra/index/sasi/SASIIndex.java
index 0b9d900..4375964 100644
--- a/src/java/org/apache/cassandra/index/sasi/SASIIndex.java
+++ b/src/java/org/apache/cassandra/index/sasi/SASIIndex.java
@@ -73,6 +73,7 @@ public class SASIIndex implements Index, INotificationConsumer
.filter((i) -> i instanceof SASIIndex)
.forEach((i) -> {
SASIIndex sasi = (SASIIndex) i;
+   sasi.index.dropData(sstablesToRebuild);
sstablesToRebuild.stream()
 .filter((sstable) -> 
!sasi.index.hasSSTable(sstable))
 .forEach((sstable) -> {

http://git-wip-us.apache.org/repos/asf/cassandra/blob/fa113167/src/java/org/apache/cassandra/index/sasi/SASIIndexBuilder.java
--
diff --git a/src/java/org/apache/cassandra/index/sasi/SASIIndexBuilder.java 
b/src/java/org/apache/cassandra/index/sasi/SASIIndexBuilder.java
index 1173d40..d50875a 100644
--- a/src/java/org/apache/cassandra/index/sasi/SASIIndexBuilder.java
+++ b/src/java/org/apache/cassandra/index/sasi/SASIIndexBuilder.java
@@ -99,7 +99,8 @@ class SASIIndexBuilder extends SecondaryIndexBuilder
 try (SSTableIdentityIterator partition = 
SSTableIdentityIterator.create(sstable, dataFile, key))
 {
 // if the row has statics attached, it has to 
be indexed separately
-
indexWriter.nextUnfilteredCluster(partition.staticRow());
+if (cfs.metadata.hasStaticColumns())
+
indexWriter.nextUnfilteredCluster(partition.staticRow());
 
 while (partition.hasNext())
 
indexWriter.nextUnfilteredCluster(partition.next());

http://git-wip-us.apache.org/repos/asf/cassandra/blob/fa113167/src/java/org/apache/cassandra/index/sasi/conf/ColumnIndex.java
--
diff --git a/src/java/org/apache/cassandra/index/sasi/conf/ColumnIndex.java 
b/src/java/org/apache/cassandra/index/sasi/conf/ColumnIndex.java
index 440d475..0958113 100644
--- a/src/java/org/apache/cassandra/index/sasi/conf/ColumnIndex.java
+++ b/src/java/org/apache/cassandra/index/sasi/conf/ColumnIndex.java
@@ -194,6 +194,11 @@ public class ColumnIndex
 return tracker.hasSSTable(sstable);
 }
 
+public void dropData(Collection sstablesToRebuild)
+{
+tracker.dropData(sstablesToRebuild);
+}
+
 public void dropData(long truncateUntil)
 {
 switchMemtable();

http://git-wip-us.apache.org/repos/asf/cassandra/blob/fa113167/src/java/org/apache/cassandra/index/sas

[jira] [Updated] (CASSANDRA-12374) Can't rebuild SASI index

2016-08-18 Thread Pavel Yaskevich (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Yaskevich updated CASSANDRA-12374:

   Resolution: Fixed
Fix Version/s: 3.10
   Status: Resolved  (was: Patch Available)

Committed.

> Can't rebuild SASI index
> 
>
> Key: CASSANDRA-12374
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12374
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Alex Petrov
>Assignee: Alex Petrov
> Fix For: 3.10
>
>
> There's been no real requirement for that so far. 
> As [~beobal] has pointed out, it's not a big issue, since that only could be 
> needed when index files are lost, data corruption on disk (hardware issue) 
> has occurred or there was a bug that'd require an index rebuild.
> During {{rebuild_index}} task, indexes are only "marked" as removed with 
> {{SecondaryIndexManager::markIndexRemoved}} and then {{buildIndexesBlocking}} 
> is called. However, since SASI keeps track of SSTables for the index, it's 
> going to filter them out with {{.filter((sstable) -> 
> !sasi.index.hasSSTable(sstable))}} in {{SASIIndexBuildingSupport}}.
> If I understand the logic correctly, we have to "invalidate" (drop data) 
> right before we re-index them. This is also a blocker for [CASSANDRA-11990] 
> since without it we can't have an upgrade path.
> I have a patch ready in branch, but since it's a bug, it's better to have it 
> released earlier and for all branches affected.
> cc [~xedin]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9935) Repair fails with RuntimeException

2016-08-18 Thread Wei Deng (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei Deng updated CASSANDRA-9935:

Labels: lcs  (was: )

> Repair fails with RuntimeException
> --
>
> Key: CASSANDRA-9935
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9935
> Project: Cassandra
>  Issue Type: Bug
> Environment: C* 2.1.8, Debian Wheezy
>Reporter: mlowicki
>Assignee: Paulo Motta
>  Labels: lcs
> Fix For: 2.1.15, 2.2.7, 3.0.6, 3.6
>
> Attachments: 9935.patch, db1.sync.lati.osa.cassandra.log, 
> db5.sync.lati.osa.cassandra.log, system.log.10.210.3.117, 
> system.log.10.210.3.221, system.log.10.210.3.230
>
>
> We had problems with slow repair in 2.1.7 (CASSANDRA-9702) but after upgrade 
> to 2.1.8 it started to work faster but now it fails with:
> {code}
> ...
> [2015-07-29 20:44:03,956] Repair session 23a811b0-3632-11e5-a93e-4963524a8bde 
> for range (-5474076923322749342,-5468600594078911162] finished
> [2015-07-29 20:44:03,957] Repair session 336f8740-3632-11e5-a93e-4963524a8bde 
> for range (-8631877858109464676,-8624040066373718932] finished
> [2015-07-29 20:44:03,957] Repair session 4ccd8430-3632-11e5-a93e-4963524a8bde 
> for range (-5372806541854279315,-5369354119480076785] finished
> [2015-07-29 20:44:03,957] Repair session 59f129f0-3632-11e5-a93e-4963524a8bde 
> for range (8166489034383821955,8168408930184216281] finished
> [2015-07-29 20:44:03,957] Repair session 6ae7a9a0-3632-11e5-a93e-4963524a8bde 
> for range (6084602890817326921,6088328703025510057] finished
> [2015-07-29 20:44:03,957] Repair session 8938e4a0-3632-11e5-a93e-4963524a8bde 
> for range (-781874602493000830,-781745173070807746] finished
> [2015-07-29 20:44:03,957] Repair command #4 finished
> error: nodetool failed, check server logs
> -- StackTrace --
> java.lang.RuntimeException: nodetool failed, check server logs
> at 
> org.apache.cassandra.tools.NodeTool$NodeToolCmd.run(NodeTool.java:290)
> at org.apache.cassandra.tools.NodeTool.main(NodeTool.java:202)
> {code}
> After running:
> {code}
> nodetool repair --partitioner-range --parallel --in-local-dc sync
> {code}
> Last records in logs regarding repair are:
> {code}
> INFO  [Thread-173887] 2015-07-29 20:44:03,956 StorageService.java:2952 - 
> Repair session 09ff9e40-3632-11e5-a93e-4963524a8bde for range 
> (-7695808664784761779,-7693529816291585568] finished
> INFO  [Thread-173887] 2015-07-29 20:44:03,956 StorageService.java:2952 - 
> Repair session 17d8d860-3632-11e5-a93e-4963524a8bde for range 
> (806371695398849,8065203836608925992] finished
> INFO  [Thread-173887] 2015-07-29 20:44:03,956 StorageService.java:2952 - 
> Repair session 23a811b0-3632-11e5-a93e-4963524a8bde for range 
> (-5474076923322749342,-5468600594078911162] finished
> INFO  [Thread-173887] 2015-07-29 20:44:03,956 StorageService.java:2952 - 
> Repair session 336f8740-3632-11e5-a93e-4963524a8bde for range 
> (-8631877858109464676,-8624040066373718932] finished
> INFO  [Thread-173887] 2015-07-29 20:44:03,957 StorageService.java:2952 - 
> Repair session 4ccd8430-3632-11e5-a93e-4963524a8bde for range 
> (-5372806541854279315,-5369354119480076785] finished
> INFO  [Thread-173887] 2015-07-29 20:44:03,957 StorageService.java:2952 - 
> Repair session 59f129f0-3632-11e5-a93e-4963524a8bde for range 
> (8166489034383821955,8168408930184216281] finished
> INFO  [Thread-173887] 2015-07-29 20:44:03,957 StorageService.java:2952 - 
> Repair session 6ae7a9a0-3632-11e5-a93e-4963524a8bde for range 
> (6084602890817326921,6088328703025510057] finished
> INFO  [Thread-173887] 2015-07-29 20:44:03,957 StorageService.java:2952 - 
> Repair session 8938e4a0-3632-11e5-a93e-4963524a8bde for range 
> (-781874602493000830,-781745173070807746] finished
> {code}
> but a bit above I see (at least two times in attached log):
> {code}
> ERROR [Thread-173887] 2015-07-29 20:44:03,853 StorageService.java:2959 - 
> Repair session 1b07ea50-3608-11e5-a93e-4963524a8bde for range 
> (5765414319217852786,5781018794516851576] failed with error 
> org.apache.cassandra.exceptions.RepairException: [repair 
> #1b07ea50-3608-11e5-a93e-4963524a8bde on sync/entity_by_id2, 
> (5765414319217852786,5781018794516851576]] Validation failed in /10.195.15.162
> java.util.concurrent.ExecutionException: java.lang.RuntimeException: 
> org.apache.cassandra.exceptions.RepairException: [repair 
> #1b07ea50-3608-11e5-a93e-4963524a8bde on sync/entity_by_id2, 
> (5765414319217852786,5781018794516851576]] Validation failed in /10.195.15.162
> at java.util.concurrent.FutureTask.report(FutureTask.java:122) 
> [na:1.7.0_80]
> at java.util.concurrent.FutureTask.get(FutureTask.java:188) 
> [na:1.7.0_80]
> at 
> org.apache.cassandra.service.StorageService$4.runMayThrow(StorageService.java:2950)
>  ~[apache

[jira] [Commented] (CASSANDRA-8523) Writes should be sent to a replacement node while it is streaming in data

2016-08-18 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15427310#comment-15427310
 ] 

Paulo Motta commented on CASSANDRA-8523:


Thanks! This will be ready to commit after 
[PR|https://github.com/riptano/cassandra-dtest/pull/1155] is reviewed and 
merged. I rebased with the updated dtests and submitted a new CI run:

||2.2||3.0||trunk||
|[branch|https://github.com/apache/cassandra/compare/cassandra-2.2...pauloricardomg:2.2-8523]|[branch|https://github.com/apache/cassandra/compare/cassandra-3.0...pauloricardomg:3.0-8523]|[branch|https://github.com/apache/cassandra/compare/trunk...pauloricardomg:trunk-8523]|
|[testall|http://cassci.datastax.com/view/Dev/view/paulomotta/job/pauloricardomg-2.2-8523-testall/lastCompletedBuild/testReport/]|[testall|http://cassci.datastax.com/view/Dev/view/paulomotta/job/pauloricardomg-3.0-8523-testall/lastCompletedBuild/testReport/]|[testall|http://cassci.datastax.com/view/Dev/view/paulomotta/job/pauloricardomg-trunk-8523-testall/lastCompletedBuild/testReport/]|
|[dtest|http://cassci.datastax.com/view/Dev/view/paulomotta/job/pauloricardomg-2.2-8523-dtest/lastCompletedBuild/testReport/]|[dtest|http://cassci.datastax.com/view/Dev/view/paulomotta/job/pauloricardomg-3.0-8523-dtest/lastCompletedBuild/testReport/]|[dtest|http://cassci.datastax.com/view/Dev/view/paulomotta/job/pauloricardomg-trunk-8523-dtest/lastCompletedBuild/testReport/]|

> Writes should be sent to a replacement node while it is streaming in data
> -
>
> Key: CASSANDRA-8523
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8523
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Richard Wagner
>Assignee: Paulo Motta
> Fix For: 2.1.x
>
>
> In our operations, we make heavy use of replace_address (or 
> replace_address_first_boot) in order to replace broken nodes. We now realize 
> that writes are not sent to the replacement nodes while they are in hibernate 
> state and streaming in data. This runs counter to what our expectations were, 
> especially since we know that writes ARE sent to nodes when they are 
> bootstrapped into the ring.
> It seems like cassandra should arrange to send writes to a node that is in 
> the process of replacing another node, just like it does for nodes that are 
> bootstrapping. I hesitate to phrase this as "we should send writes to a node 
> in hibernate" because the concept of hibernate may be useful in other 
> contexts, as per CASSANDRA-8336. Maybe a new state is needed here?
> Among other things, the fact that we don't get writes during this period 
> makes subsequent repairs more expensive, proportional to the number of writes 
> that we miss (and depending on the amount of data that needs to be streamed 
> during replacement and the time it may take to rebuild secondary indexes, we 
> could miss many many hours worth of writes). It also leaves us more exposed 
> to consistency violations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12484) Unknown exception caught while attempting to update MaterializedView! findkita.kitas java.lang.AssertionErro

2016-08-18 Thread cordlessWool (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15427309#comment-15427309
 ] 

cordlessWool commented on CASSANDRA-12484:
--

I did not really fix it, but I got it running. My first container is now 
completely crashed and will not start anymore, but I had the same problem with 
other containers that are still accessible. I ran apt-get update and apt-get 
upgrade and got cassandra working again. 

It does not matter whether anything is actually upgraded; just running the 
command makes cassandra callable again. I have to do it at each restart, but 
that is better than a completely crashed database. 

Hope this is helpful for finding the bug.

> Unknown exception caught while attempting to update MaterializedView! 
> findkita.kitas java.lang.AssertionErro
> 
>
> Key: CASSANDRA-12484
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12484
> Project: Cassandra
>  Issue Type: Bug
> Environment: Docker Container with Cassandra version 3.7 running on 
> local pc
>Reporter: cordlessWool
>Priority: Critical
>
> After a restart my cassandra node does not start anymore. It ends with the 
> following error message:
> ERROR 18:39:37 Unknown exception caught while attempting to update 
> MaterializedView! findkita.kitas
> java.lang.AssertionError: We shouldn't have got there is the base row had no 
> associated entry
> Cassandra has heavy cpu usage and uses 2.1 gb of memory although 1 gb more is 
> available. I ran nodetool cleanup and repair, but it did not help.
> I have 5 materialized views on this table, but the number of rows in the 
> table is under 2000, which is not much.
> Cassandra runs in a docker container. The container is accessible, but I 
> cannot call cqlsh and my website could not connect either.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12374) Can't rebuild SASI index

2016-08-18 Thread Pavel Yaskevich (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15427267#comment-15427267
 ] 

Pavel Yaskevich commented on CASSANDRA-12374:
-

Test looks good now. I'm going to wait for CI to finish and then commit 
everything. Thanks, [~ifesdjeen]!

> Can't rebuild SASI index
> 
>
> Key: CASSANDRA-12374
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12374
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Alex Petrov
>Assignee: Alex Petrov
>
> There's been no real requirement for that so far. 
> As [~beobal] has pointed out, it's not a big issue, since that only could be 
> needed when index files are lost, data corruption on disk (hardware issue) 
> has occurred or there was a bug that'd require an index rebuild.
> During {{rebuild_index}} task, indexes are only "marked" as removed with 
> {{SecondaryIndexManager::markIndexRemoved}} and then {{buildIndexesBlocking}} 
> is called. However, since SASI keeps track of SSTables for the index, it's 
> going to filter them out with {{.filter((sstable) -> 
> !sasi.index.hasSSTable(sstable))}} in {{SASIIndexBuildingSupport}}.
> If I understand the logic correctly, we have to "invalidate" (drop data) 
> right before we re-index them. This is also a blocker for [CASSANDRA-11990] 
> since without it we can't have an upgrade path.
> I have a patch ready in branch, but since it's a bug, it's better to have it 
> released earlier and for all branches affected.
> cc [~xedin]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12374) Can't rebuild SASI index

2016-08-18 Thread Pavel Yaskevich (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15427265#comment-15427265
 ] 

Pavel Yaskevich commented on CASSANDRA-12374:
-

bq. currently unqueryable indexes are skipped.

Yes, that was intentional: there was (and probably still isn't) a good way to 
propagate exceptions in a meaningful way, so we opted to write to the log and 
return 0 results instead.
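
In sketch form, that behaviour looks roughly like this (an editorial 
illustration with stand-in names, not the actual SASI code):

{code}
// Editorial sketch of log-and-return-0-results; Expression, Row, doSearch()
// and logger are stand-ins for the real SASI types.
public Iterator<Row> search(Expression expression)
{
    try
    {
        return doSearch(expression);
    }
    catch (Throwable t)
    {
        logger.error("Unable to query SASI index, returning 0 results", t);
        return java.util.Collections.emptyIterator();
    }
}
{code}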

> Can't rebuild SASI index
> 
>
> Key: CASSANDRA-12374
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12374
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Alex Petrov
>Assignee: Alex Petrov
>
> There's been no real requirement for that so far. 
> As [~beobal] has pointed out, it's not a big issue, since that only could be 
> needed when index files are lost, data corruption on disk (hardware issue) 
> has occurred or there was a bug that'd require an index rebuild.
> During {{rebuild_index}} task, indexes are only "marked" as removed with 
> {{SecondaryIndexManager::markIndexRemoved}} and then {{buildIndexesBlocking}} 
> is called. However, since SASI keeps track of SSTables for the index, it's 
> going to filter them out with {{.filter((sstable) -> 
> !sasi.index.hasSSTable(sstable))}} in {{SASIIndexBuildingSupport}}.
> If I understand the logic correctly, we have to "invalidate" (drop data) 
> right before we re-index them. This is also a blocker for [CASSANDRA-11990] 
> since without it we can't have an upgrade path.
> I have a patch ready in branch, but since it's a bug, it's better to have it 
> released earlier and for all branches affected.
> cc [~xedin]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12374) Can't rebuild SASI index

2016-08-18 Thread Alex Petrov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15427175#comment-15427175
 ] 

Alex Petrov commented on CASSANDRA-12374:
-

You're right, sleep would not be reliable there. I've changed the test to just 
write garbage to the file and make sure that it gets overwritten and that the 
index is queryable after rebuild. So in fact the test without sleep turned out 
to be much more useful.

I've triggered CI and will check the results tomorrow morning, but local runs 
were okay.

One question: currently unqueryable indexes are skipped. For example, if 
there's just one sstable and its index is corrupted, it's going to get skipped 
and the query will yield no results rather than an exception. That might be 
desired though, just checking.
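
In outline, the reworked test does something like the following (an editorial 
sketch; indexComponentFile, rebuildIndex and the query are assumed names and 
values, not the committed test):

{code}
// Editorial sketch of the reworked test; helper names are assumptions.
File indexFile = indexComponentFile(sstable);             // on-disk SASI component
Files.write(indexFile.toPath(), "garbage".getBytes());    // corrupt the index file
long corruptedLength = indexFile.length();

rebuildIndex(KS_NAME, CF_NAME, INDEX_NAME);               // same path as rebuild_index

assertTrue(indexFile.length() != corruptedLength);        // file was overwritten
assertFalse(execute("SELECT * FROM %s WHERE val = 'a'").isEmpty()); // queryable again
{code}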

> Can't rebuild SASI index
> 
>
> Key: CASSANDRA-12374
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12374
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Alex Petrov
>Assignee: Alex Petrov
>
> There's been no real requirement for that so far. 
> As [~beobal] has pointed out, it's not a big issue, since that only could be 
> needed when index files are lost, data corruption on disk (hardware issue) 
> has occurred or there was a bug that'd require an index rebuild.
> During {{rebuild_index}} task, indexes are only "marked" as removed with 
> {{SecondaryIndexManager::markIndexRemoved}} and then {{buildIndexesBlocking}} 
> is called. However, since SASI keeps track of SSTables for the index, it's 
> going to filter them out with {{.filter((sstable) -> 
> !sasi.index.hasSSTable(sstable))}} in {{SASIIndexBuildingSupport}}.
> If I understand the logic correctly, we have to "invalidate" (drop data) 
> right before we re-index them. This is also a blocker for [CASSANDRA-11990] 
> since without it we can't have an upgrade path.
> I have a patch ready in branch, but since it's a bug, it's better to have it 
> released earlier and for all branches affected.
> cc [~xedin]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9054) Let DatabaseDescriptor not implicitly startup services

2016-08-18 Thread Jeremiah Jordan (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15427156#comment-15427156
 ] 

Jeremiah Jordan commented on CASSANDRA-9054:


Was just looking through this.
{{AuthConfig.applyAuthz()}} is called from both 
{{DatabaseDescriptor.daemonInitialization()}} and 
{{CassandraDaemon.applyConfig()}}, which means it gets called twice, as 
{{applyConfig}} also calls {{daemonInitialization}}.

Also, a nit on naming: {{applyAuthz}} should probably just be called {{apply}} 
or {{applyAuth}}. We usually use "Authz" to mean specifically 
Authori*z*ation, and that method sets up both Authorization and Authentication.
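
For illustration, one way to make the double call harmless is an idempotence 
guard (an editorial sketch with assumed names, not the actual follow-up patch):

{code}
// Editorial sketch: make repeated applyAuth() calls a no-op.
public final class AuthConfig
{
    private static volatile boolean applied = false;

    public static synchronized void applyAuth()
    {
        if (applied)
            return;   // second call (e.g. via applyConfig) does nothing
        applied = true;
        // ... wire up IAuthenticator / IAuthorizer / IRoleManager from Config ...
    }
}
{code}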

> Let DatabaseDescriptor not implicitly startup services
> --
>
> Key: CASSANDRA-9054
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9054
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Jeremiah Jordan
>Assignee: Robert Stupp
> Fix For: 3.10
>
>
> Right now to get at Config stuff you go through DatabaseDescriptor.  But when 
> you instantiate DatabaseDescriptor it actually opens system tables and such, 
> which triggers commit log replays, and other things if the right flags aren't 
> set ahead of time.  This makes getting at config stuff from tools annoying, 
> as you have to be very careful about instantiation orders.
> It would be nice if we could break DatabaseDescriptor up into multiple 
> classes, so that getting at config stuff from tools wasn't such a pain.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-12498) Shorten the sstable log message as it unnecessarily contains the full path of a SSTable

2016-08-18 Thread Wei Deng (JIRA)
Wei Deng created CASSANDRA-12498:


 Summary: Shorten the sstable log message as it unnecessarily 
contains the full path of a SSTable
 Key: CASSANDRA-12498
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12498
 Project: Cassandra
  Issue Type: Improvement
  Components: Observability
Reporter: Wei Deng
Priority: Minor


There are a lot of places in debug.log where we print out the name of an 
SSTable. Seeing the full path of an SSTable file is useful when you're 
investigating an individual SSTable. However, during compaction we often see 32 
SSTables getting compacted at the same time, and the corresponding log line 
becomes very repetitive and hard to read, as most entries repeat the same 
first part of the file system path again and again, like the following:

{noformat}
DEBUG [CompactionExecutor:94] 2016-08-18 06:33:17,185  CompactionTask.java:146 
- Compacting (a5ca2f10-650d-11e6-95ef-a561ab3c45e8) 
[/var/lib/cassandra/data/keyspace1/standard1-139cc441650d11e6a038bfe806276de2/mb-200-big-Data.db:level=1,
 
/var/lib/cassandra/data/keyspace1/standard1-139cc441650d11e6a038bfe806276de2/mb-201-big-Data.db:level=1,
 
/var/lib/cassandra/data/keyspace1/standard1-139cc441650d11e6a038bfe806276de2/mb-16-big-Data.db:level=0,
 
/var/lib/cassandra/data/keyspace1/standard1-139cc441650d11e6a038bfe806276de2/mb-204-big-Data.db:level=1,
 
/var/lib/cassandra/data/keyspace1/standard1-139cc441650d11e6a038bfe806276de2/mb-205-big-Data.db:level=1,
 
/var/lib/cassandra/data/keyspace1/standard1-139cc441650d11e6a038bfe806276de2/mb-203-big-Data.db:level=1,
 
/var/lib/cassandra/data/keyspace1/standard1-139cc441650d11e6a038bfe806276de2/mb-202-big-Data.db:level=1,
 
/var/lib/cassandra/data/keyspace1/standard1-139cc441650d11e6a038bfe806276de2/mb-207-big-Data.db:level=1,
 
/var/lib/cassandra/data/keyspace1/standard1-139cc441650d11e6a038bfe806276de2/mb-206-big-Data.db:level=1,
 
/var/lib/cassandra/data/keyspace1/standard1-139cc441650d11e6a038bfe806276de2/mb-3-big-Data.db:level=0,
 
/var/lib/cassandra/data/keyspace1/standard1-139cc441650d11e6a038bfe806276de2/mb-208-big-Data.db:level=1,
 
/var/lib/cassandra/data/keyspace1/standard1-139cc441650d11e6a038bfe806276de2/mb-209-big-Data.db:level=1,
 ]
{noformat}

We should remove any text more than one level before ksName/cfName-UUID/, as 
it's very easy to get the data directory locations from cassandra.yaml. For a 
JBOD configuration with multiple data directories, keeping the one level 
before ksName/cfName-UUID should be adequate.
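
A minimal sketch of the kind of trimming this suggests (an editorial 
illustration, not an attached patch):

{code}
// Editorial sketch: keep only the trailing components of an SSTable path,
// e.g. <dataDir>/<ksName>/<cfName-UUID>/<file> for keepComponents = 4.
static String shortenSSTablePath(String fullPath, int keepComponents)
{
    String[] parts = fullPath.split("/");
    if (parts.length <= keepComponents)
        return fullPath;
    StringBuilder sb = new StringBuilder("...");
    for (int i = parts.length - keepComponents; i < parts.length; i++)
        sb.append('/').append(parts[i]);
    return sb.toString();
}
{code}

With keepComponents = 4, the first path above would be logged as 
.../data/keyspace1/standard1-139cc441650d11e6a038bfe806276de2/mb-200-big-Data.db.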



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-12497) COPY ... TO STDOUT regression in 2.2.7

2016-08-18 Thread Max Bowsher (JIRA)
Max Bowsher created CASSANDRA-12497:
---

 Summary: COPY ... TO STDOUT regression in 2.2.7
 Key: CASSANDRA-12497
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12497
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Reporter: Max Bowsher


Cassandra 2.2.7 introduces a regression over 2.2.6 breaking COPY ... TO STDOUT.

In pylib/cqlshlib/copyutil.py, in CopyTask.__init__, self.printmsg is 
conditionally defined as EITHER a module-level function accepting arguments 
(msg, eol=, encoding=), OR a lambda accepting only (_, eol=).

Consequently, when the lambda is in use (which is the case for COPY ... TO 
STDOUT without --debug), any attempt to call CopyTask.printmsg with an encoding 
parameter causes an exception.

This occurs in ExportTask.run, thus rendering all COPY ... TO STDOUT without 
--debug broken.

The fix is to update the lambda's arguments to include encoding, or better, 
replace it with a module-level function defined next to printmsg, so that 
people realize the two argument lists must be kept in sync.

The regression was introduced in this commit:

commit 5de9de1f5832f2a0e92783e2f4412874423e6e15
Author: Tyler Hobbs 
Date:   Thu May 5 11:33:35 2016 -0500

cqlsh: Handle non-ascii chars in error messages

Patch by Tyler Hobbs; reviewed by Paulo Motta for CASSANDRA-11626






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12476) SyntaxException when COPY FROM Counter Table with Null value

2016-08-18 Thread Ashraful Islam (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashraful Islam updated CASSANDRA-12476:
---
Status: Open  (was: Ready to Commit)

> SyntaxException when COPY FROM Counter Table with Null value
> 
>
> Key: CASSANDRA-12476
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12476
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Ashraful Islam
>Priority: Minor
>  Labels: cqlsh
> Fix For: 3.0.x, 3.x
>
>
> I have a simple counter table 
> {noformat}
> CREATE TABLE test (
> a int PRIMARY KEY,
> b counter,
> c counter
> ) ;
> {noformat}
> I have updated the b column value with 
> {noformat}
> UPDATE test SET b = b + 1 WHERE a = 1;
> {noformat}
> Now I export the data with 
> {noformat}
> COPY test TO 'test.csv';
> {noformat}
> And import it with 
> {noformat}
> COPY test FROM 'test.csv';
> {noformat}
> I get this error:
> {noformat}
> Failed to import 1 rows: SyntaxException - line 1:34 no viable alternative at 
> input 'WHERE' (...=b+1,c=c+ [WHERE]...) -  will retry later, attempt 1 of 5
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12476) SyntaxException when COPY FROM Counter Table with Null value

2016-08-18 Thread Ashraful Islam (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashraful Islam updated CASSANDRA-12476:
---
Status: Ready to Commit  (was: Patch Available)

> SyntaxException when COPY FROM Counter Table with Null value
> 
>
> Key: CASSANDRA-12476
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12476
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Ashraful Islam
>Priority: Minor
>  Labels: cqlsh
> Fix For: 3.0.x, 3.x
>
>
> I have a simple counter table 
> {noformat}
> CREATE TABLE test (
> a int PRIMARY KEY,
> b counter,
> c counter
> ) ;
> {noformat}
> I have updated the b column value with 
> {noformat}
> UPDATE test SET b = b + 1 WHERE a = 1;
> {noformat}
> Now I export the data with 
> {noformat}
> COPY test TO 'test.csv';
> {noformat}
> And import it with 
> {noformat}
> COPY test FROM 'test.csv';
> {noformat}
> I get this error:
> {noformat}
> Failed to import 1 rows: SyntaxException - line 1:34 no viable alternative at 
> input 'WHERE' (...=b+1,c=c+ [WHERE]...) -  will retry later, attempt 1 of 5
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12476) SyntaxException when COPY FROM Counter Table with Null value

2016-08-18 Thread Ashraful Islam (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashraful Islam updated CASSANDRA-12476:
---
Fix Version/s: 3.x
   3.0.x
   Status: Patch Available  (was: Open)

> SyntaxException when COPY FROM Counter Table with Null value
> 
>
> Key: CASSANDRA-12476
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12476
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Ashraful Islam
>Priority: Minor
>  Labels: cqlsh
> Fix For: 3.0.x, 3.x
>
>
> I have a simple counter table 
> {noformat}
> CREATE TABLE test (
> a int PRIMARY KEY,
> b counter,
> c counter
> ) ;
> {noformat}
> I have updated the b column value with 
> {noformat}
> UPDATE test SET b = b + 1 WHERE a = 1;
> {noformat}
> Now I export the data with 
> {noformat}
> COPY test TO 'test.csv';
> {noformat}
> And import it with 
> {noformat}
> COPY test FROM 'test.csv';
> {noformat}
> I get this error:
> {noformat}
> Failed to import 1 rows: SyntaxException - line 1:34 no viable alternative at 
> input 'WHERE' (...=b+1,c=c+ [WHERE]...) -  will retry later, attempt 1 of 5
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12420) Duplicated Key in IN clause with a small fetch size will run forever

2016-08-18 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-12420:

Fix Version/s: 2.1.x
  Component/s: CQL

> Duplicated Key in IN clause with a small fetch size will run forever
> 
>
> Key: CASSANDRA-12420
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12420
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
> Environment: cassandra 2.1.14, driver 2.1.7.1
>Reporter: ZhaoYang
>Assignee: ZhaoYang
> Fix For: 2.1.x
>
> Attachments: CASSANDRA-12420.patch
>
>
> This can be easily reproduced when the fetch size is smaller than the total 
> number of rows.
> The table has 2 partition key columns, 1 clustering key, and 1 regular column.
> >Select select = QueryBuilder.select().from("ks", "cf");
> >select.where().and(QueryBuilder.eq("a", 1));
> >select.where().and(QueryBuilder.in("b", Arrays.asList(1, 1, 1)));
> >select.setFetchSize(5);
> For now we apply a distinct filter on the client side to eliminate the 
> duplicated keys, but it's better to fix this inside Cassandra.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12420) Duplicated Key in IN clause with a small fetch size will run forever

2016-08-18 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15427108#comment-15427108
 ] 

Tyler Hobbs commented on CASSANDRA-12420:
-

I've confirmed this is reproducible in 2.1 with the following:

{noformat}
cqlsh> create keyspace ks1 WITH replication = {'class': 'SimpleStrategy', 
'replication_factor': '1' };
cqlsh> use ks1;
cqlsh:ks1> create table foo (a int, b int, c int, d int, PRIMARY KEY ((a, b), 
c));
cqlsh:ks1> insert into foo (a, b, c, d) VALUES (1, 1, 0, 0);
cqlsh:ks1> insert into foo (a, b, c, d) VALUES (1, 1, 1, 1);
cqlsh:ks1> insert into foo (a, b, c, d) VALUES (1, 1, 2, 2);
cqlsh:ks1> insert into foo (a, b, c, d) VALUES (1, 1, 3, 3);
cqlsh:ks1> insert into foo (a, b, c, d) VALUES (1, 1, 4, 4);
cqlsh:ks1> insert into foo (a, b, c, d) VALUES (1, 1, 5, 5);
cqlsh:ks1> insert into foo (a, b, c, d) VALUES (1, 1, 6, 6);
cqlsh:ks1> insert into foo (a, b, c, d) VALUES (1, 1, 7, 7);
cqlsh:ks1> insert into foo (a, b, c, d) VALUES (1, 1, 8, 8);
cqlsh:ks1> insert into foo (a, b, c, d) VALUES (1, 1, 9, 9);
cqlsh:ks1> PAGING 5;
cqlsh:ks1> SELECT * FROM foo WHERE a = 1 AND b IN (1, 1, 1);

 a | b | c | d
---+---+---+---
 1 | 1 | 0 | 0
 1 | 1 | 1 | 1
 1 | 1 | 2 | 2
 1 | 1 | 3 | 3
 1 | 1 | 4 | 4
 
---MORE---·
 a | b | c | d
---+---+---+---
 1 | 1 | 5 | 5
 1 | 1 | 6 | 6
 1 | 1 | 7 | 7
 1 | 1 | 8 | 8
 1 | 1 | 9 | 9
 
---MORE---
 a | b | c | d
---+---+---+---
 1 | 1 | 0 | 0
 1 | 1 | 1 | 1
 1 | 1 | 2 | 2
 1 | 1 | 3 | 3
 1 | 1 | 4 | 4
 
---MORE---
 a | b | c | d
---+---+---+---
 1 | 1 | 5 | 5
 1 | 1 | 6 | 6
 1 | 1 | 7 | 7
 1 | 1 | 8 | 8
 1 | 1 | 9 | 9

... (repeats endlessly)
{noformat}

This does not reproduce in 2.2.

This is somewhat different from CASSANDRA-8276, which was just complaining 
about duplicate result rows when duplicate {{IN}} values are used.  The real 
problem here is that the paged results never end.
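
Until a server-side fix lands, the client-side workaround mentioned in the 
description amounts to deduplicating the {{IN}} values before building the 
statement (a sketch against the driver API quoted above):

{code}
// Client-side workaround sketch (java.util.* and
// com.datastax.driver.core.querybuilder.* imports assumed):
// drop duplicate IN values, preserving order.
List<Integer> inValues = Arrays.asList(1, 1, 1);
List<Integer> distinct = new ArrayList<>(new LinkedHashSet<>(inValues));

Select select = QueryBuilder.select().from("ks1", "foo");
select.where().and(QueryBuilder.eq("a", 1));
select.where().and(QueryBuilder.in("b", distinct));   // IN (1) instead of IN (1, 1, 1)
select.setFetchSize(5);
{code}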

> Duplicated Key in IN clause with a small fetch size will run forever
> 
>
> Key: CASSANDRA-12420
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12420
> Project: Cassandra
>  Issue Type: Bug
> Environment: cassandra 2.1.14, driver 2.1.7.1
>Reporter: ZhaoYang
>Assignee: ZhaoYang
> Attachments: CASSANDRA-12420.patch
>
>
> This can be easily reproduced when the fetch size is smaller than the total 
> number of rows.
> The table has 2 partition key columns, 1 clustering key, and 1 regular column.
> >Select select = QueryBuilder.select().from("ks", "cf");
> >select.where().and(QueryBuilder.eq("a", 1));
> >select.where().and(QueryBuilder.in("b", Arrays.asList(1, 1, 1)));
> >select.setFetchSize(5);
> For now we apply a distinct filter on the client side to eliminate the 
> duplicated keys, but it's better to fix this inside Cassandra.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12374) Can't rebuild SASI index

2016-08-18 Thread Pavel Yaskevich (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15427092#comment-15427092
 ] 

Pavel Yaskevich commented on CASSANDRA-12374:
-

Is there any way to get rid of the Thread.sleep there? New test consistently 
fails on my machine now...

> Can't rebuild SASI index
> 
>
> Key: CASSANDRA-12374
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12374
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Alex Petrov
>Assignee: Alex Petrov
>
> There's been no real requirement for that so far. 
> As [~beobal] has pointed out, it's not a big issue, since that only could be 
> needed when index files are lost, data corruption on disk (hardware issue) 
> has occurred or there was a bug that'd require an index rebuild.
> During {{rebuild_index}} task, indexes are only "marked" as removed with 
> {{SecondaryIndexManager::markIndexRemoved}} and then {{buildIndexesBlocking}} 
> is called. However, since SASI keeps track of SSTables for the index, it's 
> going to filter them out with {{.filter((sstable) -> 
> !sasi.index.hasSSTable(sstable))}} in {{SASIIndexBuildingSupport}}.
> If I understand the logic correctly, we have to "invalidate" (drop data) 
> right before we re-index them. This is also a blocker for [CASSANDRA-11990] 
> since without it we can't have an upgrade path.
> I have a patch ready in branch, but since it's a bug, it's better to have it 
> released earlier and for all branches affected.
> cc [~xedin]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12385) Disk failure policy should not be invoked on out of space

2016-08-18 Thread sankalp kohli (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sankalp kohli updated CASSANDRA-12385:
--
Status: Patch Available  (was: Awaiting Feedback)

> Disk failure policy should not be invoked on out of space
> -
>
> Key: CASSANDRA-12385
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12385
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: sankalp kohli
>Assignee: sankalp kohli
>Priority: Minor
> Attachments: CASSANDRA-12385_v2.txt
>
>
> If a node fills up temporarily due to compaction, the disk failure policy may 
> be invoked. We use stop, so the node will be disabled. This leaves the node 
> down even though it recovers from this failure by aborting the compaction.
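
In sketch form, the guard being asked for looks something like this (an 
editorial illustration with assumed helper names; the actual change is in the 
attached patch):

{code}
// Editorial sketch; isOutOfSpace() and applyDiskFailurePolicy() are assumed
// names, not the attached patch.
void handleFSError(FSError e)
{
    if (isOutOfSpace(e))
    {
        // Transient: the compaction that filled the disk is aborted anyway,
        // so don't take the node down via disk_failure_policy.
        logger.error("Out of disk space; not invoking disk_failure_policy", e);
        return;
    }
    applyDiskFailurePolicy(e);   // stop / stop_paranoid / die / ignore / best_effort
}

boolean isOutOfSpace(Throwable t)
{
    return t.getMessage() != null && t.getMessage().contains("No space left on device");
}
{code}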



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12474) Define executeLocally() at the ReadQuery Level

2016-08-18 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15427081#comment-15427081
 ] 

Tyler Hobbs commented on CASSANDRA-12474:
-

The patch looks good to me.  The testall results are good, but the dtest run 
appears incomplete based on the number of tests, so I've started another run.  
Assuming that turns out okay as well, +1 on committing this.

> Define executeLocally() at the ReadQuery Level
> --
>
> Key: CASSANDRA-12474
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12474
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local Write-Read Paths
>Reporter: Stefania
>Assignee: Stefania
> Fix For: 3.x
>
>
> We have {{execute}} and {{executeInternal}} at the {{ReadQuery}} level but 
> {{executeLocally}} is missing and this makes the abstraction incomplete.
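
In sketch form, the completed abstraction would look something like this 
(signatures simplified and partly assumed, not copied from the patch):

{code}
// Editorial sketch; signatures are simplified/assumed.
public interface ReadQuery
{
    // distributed execution at a given consistency level
    PartitionIterator execute(ConsistencyLevel consistency, ClientState state);

    // execution within an already-open read transaction
    PartitionIterator executeInternal(ReadExecutionController controller);

    // the missing piece: purely local execution, defined for every ReadQuery
    UnfilteredPartitionIterator executeLocally(ReadExecutionController controller);
}
{code}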



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12385) Disk failure policy should not be invoked on out of space

2016-08-18 Thread sankalp kohli (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12385?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15427078#comment-15427078
 ] 

sankalp kohli commented on CASSANDRA-12385:
---

Uploaded v2.

> Disk failure policy should not be invoked on out of space
> -
>
> Key: CASSANDRA-12385
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12385
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: sankalp kohli
>Assignee: sankalp kohli
>Priority: Minor
> Attachments: CASSANDRA-12385_v2.txt
>
>
> If a node fills up temporarily due to compaction, the disk failure policy may 
> be invoked. We use stop, so the node will be disabled. This leaves the node 
> down even though it recovers from this failure by aborting the compaction.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12385) Disk failure policy should not be invoked on out of space

2016-08-18 Thread sankalp kohli (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sankalp kohli updated CASSANDRA-12385:
--
Attachment: CASSANDRA-12385_v2.txt

> Disk failure policy should not be invoked on out of space
> -
>
> Key: CASSANDRA-12385
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12385
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: sankalp kohli
>Assignee: sankalp kohli
>Priority: Minor
> Attachments: CASSANDRA-12385_v2.txt
>
>
> If a node fills up temporarily due to compaction, the disk failure policy may 
> be invoked. We use stop, so the node will be disabled. This leaves the node 
> down even though it recovers from this failure by aborting the compaction.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12385) Disk failure policy should not be invoked on out of space

2016-08-18 Thread sankalp kohli (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sankalp kohli updated CASSANDRA-12385:
--
Attachment: (was: CASSANDRA-12385_3.0.txt)

> Disk failure policy should not be invoked on out of space
> -
>
> Key: CASSANDRA-12385
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12385
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: sankalp kohli
>Assignee: sankalp kohli
>Priority: Minor
> Attachments: CASSANDRA-12385_v2.txt
>
>
> If a node fills up temporarily due to compaction, the disk failure policy may 
> be invoked. We use stop, so the node will be disabled. This leaves the node 
> down even though it recovers from this failure by aborting the compaction.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12492) dtest failure in upgrade_tests.cql_tests.TestCQLNodes3RF3_Upgrade_current_3_0_x_To_indev_3_x.cql3_non_compound_range_tombstones_test

2016-08-18 Thread Russ Hatch (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12492?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15427006#comment-15427006
 ] 

Russ Hatch commented on CASSANDRA-12492:


A local attempt at upgrading to the same sha (c72a7bb) doesn't fail, so it 
doesn't appear to be a problem with that particular commit.

> dtest failure in 
> upgrade_tests.cql_tests.TestCQLNodes3RF3_Upgrade_current_3_0_x_To_indev_3_x.cql3_non_compound_range_tombstones_test
> 
>
> Key: CASSANDRA-12492
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12492
> Project: Cassandra
>  Issue Type: Test
>Reporter: Sean McCarthy
>Assignee: Russ Hatch
>  Labels: dtest
> Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, 
> node2_debug.log, node2_gc.log, node3.log, node3_debug.log, node3_gc.log
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.9_dtest_upgrade/25/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_Upgrade_current_3_0_x_To_indev_3_x/cql3_non_compound_range_tombstones_test
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/upgrade_tests/cql_tests.py", line 
> 1562, in cql3_non_compound_range_tombstones_test
> ThriftConsistencyLevel.ALL)
>   File "/home/automaton/cassandra-dtest/thrift_bindings/v22/Cassandra.py", 
> line 1175, in batch_mutate
> self.recv_batch_mutate()
>   File "/home/automaton/cassandra-dtest/thrift_bindings/v22/Cassandra.py", 
> line 1201, in recv_batch_mutate
> raise result.te
> "TimedOutException(acknowledged_by=1, paxos_in_progress=None, 
> acknowledged_by_batchlog=None)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12492) dtest failure in upgrade_tests.cql_tests.TestCQLNodes3RF3_Upgrade_current_3_0_x_To_indev_3_x.cql3_non_compound_range_tombstones_test

2016-08-18 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12492?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-12492:

Assignee: Russ Hatch  (was: DS Test Eng)

> dtest failure in 
> upgrade_tests.cql_tests.TestCQLNodes3RF3_Upgrade_current_3_0_x_To_indev_3_x.cql3_non_compound_range_tombstones_test
> 
>
> Key: CASSANDRA-12492
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12492
> Project: Cassandra
>  Issue Type: Test
>Reporter: Sean McCarthy
>Assignee: Russ Hatch
>  Labels: dtest
> Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, 
> node2_debug.log, node2_gc.log, node3.log, node3_debug.log, node3_gc.log
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.9_dtest_upgrade/25/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_Upgrade_current_3_0_x_To_indev_3_x/cql3_non_compound_range_tombstones_test
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/upgrade_tests/cql_tests.py", line 
> 1562, in cql3_non_compound_range_tombstones_test
> ThriftConsistencyLevel.ALL)
>   File "/home/automaton/cassandra-dtest/thrift_bindings/v22/Cassandra.py", 
> line 1175, in batch_mutate
> self.recv_batch_mutate()
>   File "/home/automaton/cassandra-dtest/thrift_bindings/v22/Cassandra.py", 
> line 1201, in recv_batch_mutate
> raise result.te
> "TimedOutException(acknowledged_by=1, paxos_in_progress=None, 
> acknowledged_by_batchlog=None)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12493) dtest failure in auth_test.TestAuth.conditional_create_drop_user_test

2016-08-18 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15427001#comment-15427001
 ] 

Philip Thompson commented on CASSANDRA-12493:
-

http://cassci.datastax.com/view/Parameterized/job/parameterized_dtest_multiplexer/266/

> dtest failure in auth_test.TestAuth.conditional_create_drop_user_test
> -
>
> Key: CASSANDRA-12493
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12493
> Project: Cassandra
>  Issue Type: Test
>Reporter: Sean McCarthy
>Assignee: Philip Thompson
>  Labels: dtest
>
> example failure:
> http://cassci.datastax.com/job/trunk_novnode_dtest/458/testReport/auth_test/TestAuth/conditional_create_drop_user_test
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/auth_test.py", line 348, in 
> conditional_create_drop_user_test
> self.prepare()
>   File "/home/automaton/cassandra-dtest/auth_test.py", line 978, in prepare
> n = self.wait_for_any_log(self.cluster.nodelist(), 'Created default 
> superuser', 25)
>   File "/home/automaton/cassandra-dtest/dtest.py", line 760, in 
> wait_for_any_log
> found = node.grep_log(pattern, filename=filename)
>   File "/home/automaton/ccm/ccmlib/node.py", line 347, in grep_log
> with open(os.path.join(self.get_path(), 'logs', filename)) as f:
> "[Errno 2] No such file or directory: 
> '/tmp/dtest-XmnSYI/test/node1/logs/system.log'
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12493) dtest failure in auth_test.TestAuth.conditional_create_drop_user_test

2016-08-18 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15426996#comment-15426996
 ] 

Philip Thompson commented on CASSANDRA-12493:
-

The cluster is started with wait_for_binary_proto and wait_other_notice, so I 
really don't have a good guess as to why the node didn't start.

> dtest failure in auth_test.TestAuth.conditional_create_drop_user_test
> -
>
> Key: CASSANDRA-12493
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12493
> Project: Cassandra
>  Issue Type: Test
>Reporter: Sean McCarthy
>Assignee: Philip Thompson
>  Labels: dtest
>
> example failure:
> http://cassci.datastax.com/job/trunk_novnode_dtest/458/testReport/auth_test/TestAuth/conditional_create_drop_user_test
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/auth_test.py", line 348, in 
> conditional_create_drop_user_test
> self.prepare()
>   File "/home/automaton/cassandra-dtest/auth_test.py", line 978, in prepare
> n = self.wait_for_any_log(self.cluster.nodelist(), 'Created default 
> superuser', 25)
>   File "/home/automaton/cassandra-dtest/dtest.py", line 760, in 
> wait_for_any_log
> found = node.grep_log(pattern, filename=filename)
>   File "/home/automaton/ccm/ccmlib/node.py", line 347, in grep_log
> with open(os.path.join(self.get_path(), 'logs', filename)) as f:
> "[Errno 2] No such file or directory: 
> '/tmp/dtest-XmnSYI/test/node1/logs/system.log'
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (CASSANDRA-12493) dtest failure in auth_test.TestAuth.conditional_create_drop_user_test

2016-08-18 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson reassigned CASSANDRA-12493:
---

Assignee: Philip Thompson  (was: DS Test Eng)

> dtest failure in auth_test.TestAuth.conditional_create_drop_user_test
> -
>
> Key: CASSANDRA-12493
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12493
> Project: Cassandra
>  Issue Type: Test
>Reporter: Sean McCarthy
>Assignee: Philip Thompson
>  Labels: dtest
>
> example failure:
> http://cassci.datastax.com/job/trunk_novnode_dtest/458/testReport/auth_test/TestAuth/conditional_create_drop_user_test
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/auth_test.py", line 348, in 
> conditional_create_drop_user_test
> self.prepare()
>   File "/home/automaton/cassandra-dtest/auth_test.py", line 978, in prepare
> n = self.wait_for_any_log(self.cluster.nodelist(), 'Created default 
> superuser', 25)
>   File "/home/automaton/cassandra-dtest/dtest.py", line 760, in 
> wait_for_any_log
> found = node.grep_log(pattern, filename=filename)
>   File "/home/automaton/ccm/ccmlib/node.py", line 347, in grep_log
> with open(os.path.join(self.get_path(), 'logs', filename)) as f:
> "[Errno 2] No such file or directory: 
> '/tmp/dtest-XmnSYI/test/node1/logs/system.log'
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12494) dtest failure in topology_test.TestTopology.crash_during_decommission_test

2016-08-18 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15426991#comment-15426991
 ] 

Philip Thompson commented on CASSANDRA-12494:
-

[~stefania], can you confirm this is another type of streaming failure that is 
acceptable to ignore?

> dtest failure in topology_test.TestTopology.crash_during_decommission_test
> --
>
> Key: CASSANDRA-12494
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12494
> Project: Cassandra
>  Issue Type: Test
>Reporter: Sean McCarthy
>Assignee: Philip Thompson
>  Labels: dtest
> Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, 
> node2_debug.log, node2_gc.log, node3.log, node3_debug.log, node3_gc.log
>
>
> example failure:
> http://cassci.datastax.com/job/trunk_offheap_dtest/376/testReport/topology_test/TestTopology/crash_during_decommission_test
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 358, in run
> self.tearDown()
>   File "/home/automaton/cassandra-dtest/dtest.py", line 673, in tearDown
> raise AssertionError('Unexpected error in log, see stdout')
> "Unexpected error in log, see stdout
> {code}
> {code}
> Standard Output
> Unexpected error in node1 log, error: 
> ERROR [RMI TCP Connection(2)-127.0.0.1] 2016-08-18 02:15:31,444 
> StorageService.java:3719 - Error while decommissioning node 
> org.apache.cassandra.streaming.StreamException: Stream failed
>   at 
> org.apache.cassandra.streaming.StreamResultFuture.maybeComplete(StreamResultFuture.java:215)
>  ~[main/:na]
>   at 
> org.apache.cassandra.streaming.StreamResultFuture.handleSessionComplete(StreamResultFuture.java:191)
>  ~[main/:na]
>   at 
> org.apache.cassandra.streaming.StreamSession.closeSession(StreamSession.java:448)
>  ~[main/:na]
>   at 
> org.apache.cassandra.streaming.StreamSession.onError(StreamSession.java:551) 
> ~[main/:na]
>   at 
> org.apache.cassandra.streaming.StreamSession.start(StreamSession.java:249) 
> ~[main/:na]
>   at 
> org.apache.cassandra.streaming.StreamCoordinator$StreamSessionConnector.run(StreamCoordinator.java:263)
>  ~[main/:na]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  ~[na:1.8.0_45]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  ~[na:1.8.0_45]
>   at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_45]
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12475) dtest failure in consistency_test.TestConsistency.short_read_test

2016-08-18 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12475?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-12475:

Assignee: Joel Knighton  (was: DS Test Eng)

Assigning to Joel since he's looking at it. When you're done or give up, just 
pass it back.

> dtest failure in consistency_test.TestConsistency.short_read_test
> -
>
> Key: CASSANDRA-12475
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12475
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Joel Knighton
>Assignee: Joel Knighton
>  Labels: dtest
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.9_dtest/42/testReport/junit/consistency_test/TestConsistency/short_read_test/
> Error:
> {code}
> Error from server: code=2200 [Invalid query] message="No keyspace has been 
> specified. USE a keyspace, or explicitly specify keyspace.tablename"
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (CASSANDRA-12494) dtest failure in topology_test.TestTopology.crash_during_decommission_test

2016-08-18 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson reassigned CASSANDRA-12494:
---

Assignee: Philip Thompson  (was: DS Test Eng)

> dtest failure in topology_test.TestTopology.crash_during_decommission_test
> --
>
> Key: CASSANDRA-12494
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12494
> Project: Cassandra
>  Issue Type: Test
>Reporter: Sean McCarthy
>Assignee: Philip Thompson
>  Labels: dtest
> Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, 
> node2_debug.log, node2_gc.log, node3.log, node3_debug.log, node3_gc.log
>
>
> example failure:
> http://cassci.datastax.com/job/trunk_offheap_dtest/376/testReport/topology_test/TestTopology/crash_during_decommission_test
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 358, in run
> self.tearDown()
>   File "/home/automaton/cassandra-dtest/dtest.py", line 673, in tearDown
> raise AssertionError('Unexpected error in log, see stdout')
> "Unexpected error in log, see stdout
> {code}
> {code}
> Standard Output
> Unexpected error in node1 log, error: 
> ERROR [RMI TCP Connection(2)-127.0.0.1] 2016-08-18 02:15:31,444 
> StorageService.java:3719 - Error while decommissioning node 
> org.apache.cassandra.streaming.StreamException: Stream failed
>   at 
> org.apache.cassandra.streaming.StreamResultFuture.maybeComplete(StreamResultFuture.java:215)
>  ~[main/:na]
>   at 
> org.apache.cassandra.streaming.StreamResultFuture.handleSessionComplete(StreamResultFuture.java:191)
>  ~[main/:na]
>   at 
> org.apache.cassandra.streaming.StreamSession.closeSession(StreamSession.java:448)
>  ~[main/:na]
>   at 
> org.apache.cassandra.streaming.StreamSession.onError(StreamSession.java:551) 
> ~[main/:na]
>   at 
> org.apache.cassandra.streaming.StreamSession.start(StreamSession.java:249) 
> ~[main/:na]
>   at 
> org.apache.cassandra.streaming.StreamCoordinator$StreamSessionConnector.run(StreamCoordinator.java:263)
>  ~[main/:na]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  ~[na:1.8.0_45]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  ~[na:1.8.0_45]
>   at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_45]
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-12496) DirectoriesTest.testStandardDirs() fails due to symlinks

2016-08-18 Thread Tom Petracca (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15426979#comment-15426979
 ] 

Tom Petracca edited comment on CASSANDRA-12496 at 8/18/16 6:50 PM:
---

I hit this on (and confirmed the fix against) 2.2.7, but the problem appears to 
still exist on trunk, and the fix is trivial and cherry-picks cleanly: 
https://github.com/tpetracca/cassandra/commit/14e29df2c165d77f0588f71492290bf7c0890dc1


was (Author: tpetracca):
I hit this on (and confirmed the fix against) 2.2.7, but the problem appears to 
still exist on trunk and the fix is trivial and cherry-picks nicely: 
https://github.com/tpetracca/cassandra/tree/trunk-CASSANDRA-12496

> DirectoriesTest.testStandardDirs() fails due to symlinks
> 
>
> Key: CASSANDRA-12496
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12496
> Project: Cassandra
>  Issue Type: Test
>  Components: Testing
>Reporter: Tom Petracca
>Priority: Trivial
>
> org.apache.cassandra.io.sstable.Descriptor constructor calls 
> directory.getCanonicalFile() on the File directory passed to it.  If this is 
> a symlink, then the assert in DirectoriesTest.testStandardDirs fails, because 
> the File we compare it against contains the symlink, not the canonical path, 
> and File.equals(File other) compares the abstract paths themselves (without 
> resolving symlinks).
> {noformat}
> Testcase: testStandardDirs(org.apache.cassandra.db.DirectoriesTest):FAILED
> expected:
>  but 
> was:
> junit.framework.AssertionFailedError: 
> expected:
>  but 
> was:
> at 
> org.apache.cassandra.db.DirectoriesTest.testStandardDirs(DirectoriesTest.java:149)
> {noformat}
> where:
> {noformat}
> computer:dir user$ ls -al /symlink
> lrwxr-xr-x@ 1 user  group  11 Jun 21 16:34 /symlink -> canonical
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-12469) dtest failure in upgrade_tests.repair_test.TestUpgradeRepair.repair_after_upgrade_test

2016-08-18 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson resolved CASSANDRA-12469.
-
Resolution: Cannot Reproduce

Didn't reproduce after hundreds of runs. Closing.

> dtest failure in 
> upgrade_tests.repair_test.TestUpgradeRepair.repair_after_upgrade_test
> --
>
> Key: CASSANDRA-12469
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12469
> Project: Cassandra
>  Issue Type: Test
>Reporter: Craig Kodman
>Assignee: Philip Thompson
>  Labels: dtest
> Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, 
> node2_debug.log, node2_gc.log, node3.log, node3_debug.log, node3_gc.log
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.9_dtest_upgrade/24/testReport/upgrade_tests.repair_test/TestUpgradeRepair/repair_after_upgrade_test



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12496) DirectoriesTest.testStandardDirs() fails due to symlinks

2016-08-18 Thread Tom Petracca (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15426979#comment-15426979
 ] 

Tom Petracca commented on CASSANDRA-12496:
--

I hit this on (and confirmed the fix against) 2.2.7, but the problem appears to 
still exist on trunk, and the fix is trivial and cherry-picks cleanly: 
https://github.com/tpetracca/cassandra/tree/trunk-CASSANDRA-12496

> DirectoriesTest.testStandardDirs() fails due to symlinks
> 
>
> Key: CASSANDRA-12496
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12496
> Project: Cassandra
>  Issue Type: Test
>  Components: Testing
>Reporter: Tom Petracca
>Priority: Trivial
>
> org.apache.cassandra.io.sstable.Descriptor constructor calls 
> directory.getCanonicalFile() on the File directory passed to it.  If this is 
> a symlink, then the assert in DirectoriesTest.testStandardDirs fails, because 
> the File we compare it against contains the symlink, not the canonical path, 
> and File.equals(File other) compares the abstract paths themselves (without 
> resolving symlinks).
> {noformat}
> Testcase: testStandardDirs(org.apache.cassandra.db.DirectoriesTest):FAILED
> expected:
>  but 
> was:
> junit.framework.AssertionFailedError: 
> expected:
>  but 
> was:
> at 
> org.apache.cassandra.db.DirectoriesTest.testStandardDirs(DirectoriesTest.java:149)
> {noformat}
> where:
> {noformat}
> computer:dir user$ ls -al /symlink
> lrwxr-xr-x@ 1 user  group  11 Jun 21 16:34 /symlink -> canonical
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12480) dtest failure in rebuild_test.TestRebuild.simple_rebuild_test

2016-08-18 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-12480:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

> dtest failure in rebuild_test.TestRebuild.simple_rebuild_test
> -
>
> Key: CASSANDRA-12480
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12480
> Project: Cassandra
>  Issue Type: Test
>Reporter: Sean McCarthy
>Assignee: Philip Thompson
>  Labels: dtest
> Attachments: node1.log, node2.log
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-2.1_offheap_dtest/382/testReport/rebuild_test/TestRebuild/simple_rebuild_test
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/rebuild_test.py", line 75, in 
> simple_rebuild_test
> session.execute("ALTER KEYSPACE system_auth WITH REPLICATION = 
> {'class':'NetworkTopologyStrategy', 'dc1':1, 'dc2':1};")
>   File "cassandra/cluster.py", line 1972, in 
> cassandra.cluster.Session.execute (cassandra/cluster.c:34423)
> return self.execute_async(query, parameters, trace, custom_payload, 
> timeout, execution_profile).result()
>   File "cassandra/cluster.py", line 3665, in 
> cassandra.cluster.ResponseFuture.result (cassandra/cluster.c:70216)
> raise self._final_exception
> 'Error from server: code=2200 [Invalid query] message="Unknown keyspace 
> system_auth"
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-12496) DirectoriesTest.testStandardDirs() fails due to symlinks

2016-08-18 Thread Tom Petracca (JIRA)
Tom Petracca created CASSANDRA-12496:


 Summary: DirectoriesTest.testStandardDirs() fails due to symlinks
 Key: CASSANDRA-12496
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12496
 Project: Cassandra
  Issue Type: Test
  Components: Testing
Reporter: Tom Petracca
Priority: Trivial


org.apache.cassandra.io.sstable.Descriptor constructor calls 
directory.getCanonicalFile() on the File directory passed to it.  If this is a 
symlink, then the assert in DirectoriesTest.testStandardDirs fails, because the 
File we compare it against contains the symlink, not the canonical path, and 
File.equals(File other) compares the abstract paths themselves (without 
resolving symlinks).

{noformat}
Testcase: testStandardDirs(org.apache.cassandra.db.DirectoriesTest):FAILED
expected:
 but 
was:
junit.framework.AssertionFailedError: 
expected:
 but 
was:
at 
org.apache.cassandra.db.DirectoriesTest.testStandardDirs(DirectoriesTest.java:149)
{noformat}

where:
{noformat}
computer:dir user$ ls -al /symlink
lrwxr-xr-x@ 1 user  group  11 Jun 21 16:34 /symlink -> canonical
{noformat}
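
For illustration, a minimal standalone sketch of the comparison described above 
(assuming a /symlink -> /canonical link as in the listing; the class name and 
paths are hypothetical):

{code}
import java.io.File;
import java.io.IOException;

public class SymlinkEqualsSketch
{
    public static void main(String[] args) throws IOException
    {
        // Hypothetical paths: assumes /symlink is a symbolic link to /canonical.
        File viaSymlink = new File("/symlink/data");
        File canonical = viaSymlink.getCanonicalFile(); // e.g. /canonical/data

        // File.equals compares the abstract path strings without resolving
        // symlinks, so this prints false even though both name the same file:
        System.out.println(viaSymlink.equals(canonical));

        // Canonicalizing both sides first makes the comparison agree:
        System.out.println(viaSymlink.getCanonicalFile().equals(canonical));
    }
}
{code}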



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[Cassandra Wiki] Trivial Update of "ContributorsGroup" by DaveBrosius

2016-08-18 Thread Apache Wiki
Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Cassandra Wiki" for 
change notification.

The "ContributorsGroup" page has been changed by DaveBrosius:
https://wiki.apache.org/cassandra/ContributorsGroup?action=diff&rev1=62&rev2=63

   * ChrisBroome
   * ChrisBurroughs
   * daniels
+  * Danielle Blake
   * DanielleBlake
   * EricEvans
   * ErnieHershey


[jira] [Updated] (CASSANDRA-12492) dtest failure in upgrade_tests.cql_tests.TestCQLNodes3RF3_Upgrade_current_3_0_x_To_indev_3_x.cql3_non_compound_range_tombstones_test

2016-08-18 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12492?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-12492:

Reproduced In: 3.9

> dtest failure in 
> upgrade_tests.cql_tests.TestCQLNodes3RF3_Upgrade_current_3_0_x_To_indev_3_x.cql3_non_compound_range_tombstones_test
> 
>
> Key: CASSANDRA-12492
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12492
> Project: Cassandra
>  Issue Type: Test
>Reporter: Sean McCarthy
>Assignee: DS Test Eng
>  Labels: dtest
> Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, 
> node2_debug.log, node2_gc.log, node3.log, node3_debug.log, node3_gc.log
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.9_dtest_upgrade/25/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_Upgrade_current_3_0_x_To_indev_3_x/cql3_non_compound_range_tombstones_test
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/upgrade_tests/cql_tests.py", line 
> 1562, in cql3_non_compound_range_tombstones_test
> ThriftConsistencyLevel.ALL)
>   File "/home/automaton/cassandra-dtest/thrift_bindings/v22/Cassandra.py", 
> line 1175, in batch_mutate
> self.recv_batch_mutate()
>   File "/home/automaton/cassandra-dtest/thrift_bindings/v22/Cassandra.py", 
> line 1201, in recv_batch_mutate
> raise result.te
> "TimedOutException(acknowledged_by=1, paxos_in_progress=None, 
> acknowledged_by_batchlog=None)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-12488) dtest failure in upgrade_tests.cql_tests.TestCQLNodes3RF3_Upgrade_current_3_0_x_To_indev_3_x.cql3_non_compound_range_tombstones_test

2016-08-18 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson resolved CASSANDRA-12488.
-
Resolution: Duplicate

This ticket came first, but it's less fleshed out than its dupe, so I'm 
closing this one.

> dtest failure in 
> upgrade_tests.cql_tests.TestCQLNodes3RF3_Upgrade_current_3_0_x_To_indev_3_x.cql3_non_compound_range_tombstones_test
> 
>
> Key: CASSANDRA-12488
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12488
> Project: Cassandra
>  Issue Type: Test
>  Components: Testing
>Reporter: Joel Knighton
>Assignee: DS Test Eng
>  Labels: dtest
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.9_dtest_upgrade/25/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_Upgrade_current_3_0_x_To_indev_3_x/cql3_non_compound_range_tombstones_test/
> It looks like this is failing with a TimedOutException.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12418) sstabledump JSON fails after row tombstone

2016-08-18 Thread Chris Lohfink (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Lohfink updated CASSANDRA-12418:
--
Assignee: (was: Chris Lohfink)

> sstabledump JSON fails after row tombstone
> --
>
> Key: CASSANDRA-12418
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12418
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Keith Wansbrough
>
> sstabledump fails in JSON generation on an sstable containing a row deletion, 
> using Cassandra 3.10-SNAPSHOT accf7a4724e244d6f1ba921cb11d2554dbb54a76 from 
> 2016-07-26.
> There are two exceptions displayed:
> * Fatal error parsing partition: aye 
> org.codehaus.jackson.JsonGenerationException: Can not start an object, 
> expecting field name
> * org.codehaus.jackson.JsonGenerationException: Current context not an ARRAY 
> but OBJECT
> Steps to reproduce:
> {code}
> cqlsh> create KEYSPACE foo WITH replication = {'class': 'SimpleStrategy', 
> 'replication_factor': 1};
> cqlsh> create TABLE foo.bar (id text, str text, primary key (id));
> cqlsh> insert into foo.bar (id, str) values ('aye', 'alpha');
> cqlsh> insert into foo.bar (id, str) values ('bee', 'beta');
> cqlsh> delete from foo.bar where id = 'bee';
> cqlsh> insert into foo.bar (id, str) values ('bee', 'beth');
> cqlsh> select * from foo.bar;
>  id  | str
> -+---
>  bee |  beth
>  aye | alpha
> (2 rows)
> cqlsh> 
> {code}
> Now find the sstable:
> {code}
> $ cassandra/bin/nodetool flush
> $ cassandra/bin/sstableutil foo bar
> [..]
> Listing files...
> [..]
> /home/kw217/cassandra/data/data/foo/bar-407c56f05e1a11e6835def64bf5c656e/mb-1-big-Data.db
> [..]
> {code}
> Now check with sstabledump -d. This works just fine.
> {code}
> $ cassandra/tools/bin/sstabledump -d 
> /home/kw217/cassandra/data/data/foo/bar-407c56f05e1a11e6835def64bf5c656e/mb-1-big-Data.db
> [bee]@0 deletedAt=1470737827008101, localDeletion=1470737827
> [bee]@0 Row[info=[ts=1470737832405510] ]:  | [str=beth ts=1470737832405510]
> [aye]@31 Row[info=[ts=1470737784401778] ]:  | [str=alpha ts=1470737784401778]
> {code}
> Now run sstabledump. This should work as well, but it fails as follows:
> {code}
> $ cassandra/tools/bin/sstabledump 
> /home/kw217/cassandra/data/data/foo/bar-407c56f05e1a11e6835def64bf5c656e/mb-1-big-Data.db
> ERROR 10:26:07 Fatal error parsing partition: aye
> org.codehaus.jackson.JsonGenerationException: Can not start an object, 
> expecting field name
>   at 
> org.codehaus.jackson.impl.JsonGeneratorBase._reportError(JsonGeneratorBase.java:480)
>  ~[jackson-core-asl-1.9.2.jar:1.9.2]
>   at 
> org.codehaus.jackson.impl.WriterBasedGenerator._verifyValueWrite(WriterBasedGenerator.java:836)
>  ~[jackson-core-asl-1.9.2.jar:1.9.2]
>   at 
> org.codehaus.jackson.impl.WriterBasedGenerator.writeStartObject(WriterBasedGenerator.java:273)
>  ~[jackson-core-asl-1.9.2.jar:1.9.2]
>   at 
> org.apache.cassandra.tools.JsonTransformer.serializePartition(JsonTransformer.java:181)
>  ~[main/:na]
>   at 
> java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:184) 
> ~[na:1.8.0_77]
>   at 
> java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:175) 
> ~[na:1.8.0_77]
>   at java.util.Iterator.forEachRemaining(Iterator.java:116) ~[na:1.8.0_77]
>   at 
> java.util.Spliterators$IteratorSpliterator.forEachRemaining(Spliterators.java:1801)
>  ~[na:1.8.0_77]
>   at 
> java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481) 
> ~[na:1.8.0_77]
>   at 
> java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471) 
> ~[na:1.8.0_77]
>   at 
> java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:151) 
> ~[na:1.8.0_77]
>   at 
> java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:174)
>  ~[na:1.8.0_77]
>   at 
> java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) 
> ~[na:1.8.0_77]
>   at 
> java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:418) 
> ~[na:1.8.0_77]
>   at 
> org.apache.cassandra.tools.JsonTransformer.toJson(JsonTransformer.java:99) 
> ~[main/:na]
>   at 
> org.apache.cassandra.tools.SSTableExport.main(SSTableExport.java:237) 
> ~[main/:na]
> [
>   {
> "partition" : {
>   "key" : [ "bee" ],
>   "position" : 0,
>   "deletion_info" : { "marked_deleted" : "2016-08-09T10:17:07.008101Z", 
> "local_delete_time" : "2016-08-09T10:17:07Z" }
> }
>   }
> ]org.codehaus.jackson.JsonGenerationException: Current context not an ARRAY 
> but OBJECT
>   at 
> org.codehaus.jackson.impl.JsonGeneratorBase._reportError(JsonGeneratorBase.java:480)
>   at 
> org.codehaus.jackson.impl.WriterBasedGenerator.writeEndArray(WriterBasedGenerator.java:257)
>   at 
> org.apache.cassandra.tools.Json
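
For reference, a minimal standalone sketch (class name hypothetical, using the 
Jackson 1.x codehaus API from the trace) of how the first exception arises: an 
object is started where the generator expects a field name.

{code}
import java.io.StringWriter;
import org.codehaus.jackson.JsonFactory;
import org.codehaus.jackson.JsonGenerator;

public class JsonContextSketch
{
    public static void main(String[] args) throws Exception
    {
        JsonGenerator g = new JsonFactory().createJsonGenerator(new StringWriter());
        g.writeStartObject();
        // Inside an object, a field name must come next; starting another
        // object here throws JsonGenerationException:
        // "Can not start an object, expecting field name"
        g.writeStartObject();
    }
}
{code}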

[jira] [Created] (CASSANDRA-12495) dtest failure in snapshot_test.TestArchiveCommitlog.test_archive_commitlog_point_in_time_with_active_commitlog_ln

2016-08-18 Thread Sean McCarthy (JIRA)
Sean McCarthy created CASSANDRA-12495:
-

 Summary: dtest failure in 
snapshot_test.TestArchiveCommitlog.test_archive_commitlog_point_in_time_with_active_commitlog_ln
 Key: CASSANDRA-12495
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12495
 Project: Cassandra
  Issue Type: Test
Reporter: Sean McCarthy
Assignee: DS Test Eng
 Attachments: node1.log, node1_debug.log, node1_gc.log

example failure:
http://cassci.datastax.com/job/trunk_offheap_dtest/376/testReport/snapshot_test/TestArchiveCommitlog/test_archive_commitlog_point_in_time_with_active_commitlog_ln/

{code}
Stacktrace

  File "/usr/lib/python2.7/unittest/case.py", line 329, in run
testMethod()
  File "/home/automaton/cassandra-dtest/snapshot_test.py", line 198, in 
test_archive_commitlog_point_in_time_with_active_commitlog_ln
self.run_archive_commitlog(restore_point_in_time=True, 
archive_active_commitlogs=True, archive_command='ln')
  File "/home/automaton/cassandra-dtest/snapshot_test.py", line 281, in 
run_archive_commitlog
set())
  File "/usr/lib/python2.7/unittest/case.py", line 522, in assertNotEqual
raise self.failureException(msg)
"set([]) == set([])
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-12494) dtest failure in topology_test.TestTopology.crash_during_decommission_test

2016-08-18 Thread Sean McCarthy (JIRA)
Sean McCarthy created CASSANDRA-12494:
-

 Summary: dtest failure in 
topology_test.TestTopology.crash_during_decommission_test
 Key: CASSANDRA-12494
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12494
 Project: Cassandra
  Issue Type: Test
Reporter: Sean McCarthy
Assignee: DS Test Eng
 Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, 
node2_debug.log, node2_gc.log, node3.log, node3_debug.log, node3_gc.log

example failure:

http://cassci.datastax.com/job/trunk_offheap_dtest/376/testReport/topology_test/TestTopology/crash_during_decommission_test

{code}
Stacktrace

  File "/usr/lib/python2.7/unittest/case.py", line 358, in run
self.tearDown()
  File "/home/automaton/cassandra-dtest/dtest.py", line 673, in tearDown
raise AssertionError('Unexpected error in log, see stdout')
"Unexpected error in log, see stdout
{code}

{code}
Standard Output

Unexpected error in node1 log, error: 
ERROR [RMI TCP Connection(2)-127.0.0.1] 2016-08-18 02:15:31,444 
StorageService.java:3719 - Error while decommissioning node 
org.apache.cassandra.streaming.StreamException: Stream failed
at 
org.apache.cassandra.streaming.StreamResultFuture.maybeComplete(StreamResultFuture.java:215)
 ~[main/:na]
at 
org.apache.cassandra.streaming.StreamResultFuture.handleSessionComplete(StreamResultFuture.java:191)
 ~[main/:na]
at 
org.apache.cassandra.streaming.StreamSession.closeSession(StreamSession.java:448)
 ~[main/:na]
at 
org.apache.cassandra.streaming.StreamSession.onError(StreamSession.java:551) 
~[main/:na]
at 
org.apache.cassandra.streaming.StreamSession.start(StreamSession.java:249) 
~[main/:na]
at 
org.apache.cassandra.streaming.StreamCoordinator$StreamSessionConnector.run(StreamCoordinator.java:263)
 ~[main/:na]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
~[na:1.8.0_45]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
~[na:1.8.0_45]
at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_45]
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-12493) dtest failure in auth_test.TestAuth.conditional_create_drop_user_test

2016-08-18 Thread Sean McCarthy (JIRA)
Sean McCarthy created CASSANDRA-12493:
-

 Summary: dtest failure in 
auth_test.TestAuth.conditional_create_drop_user_test
 Key: CASSANDRA-12493
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12493
 Project: Cassandra
  Issue Type: Test
Reporter: Sean McCarthy
Assignee: DS Test Eng


example failure:

http://cassci.datastax.com/job/trunk_novnode_dtest/458/testReport/auth_test/TestAuth/conditional_create_drop_user_test

{code}
Stacktrace

  File "/usr/lib/python2.7/unittest/case.py", line 329, in run
testMethod()
  File "/home/automaton/cassandra-dtest/auth_test.py", line 348, in 
conditional_create_drop_user_test
self.prepare()
  File "/home/automaton/cassandra-dtest/auth_test.py", line 978, in prepare
n = self.wait_for_any_log(self.cluster.nodelist(), 'Created default 
superuser', 25)
  File "/home/automaton/cassandra-dtest/dtest.py", line 760, in wait_for_any_log
found = node.grep_log(pattern, filename=filename)
  File "/home/automaton/ccm/ccmlib/node.py", line 347, in grep_log
with open(os.path.join(self.get_path(), 'logs', filename)) as f:
"[Errno 2] No such file or directory: 
'/tmp/dtest-XmnSYI/test/node1/logs/system.log'
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-12492) dtest failure in upgrade_tests.cql_tests.TestCQLNodes3RF3_Upgrade_current_3_0_x_To_indev_3_x.cql3_non_compound_range_tombstones_test

2016-08-18 Thread Sean McCarthy (JIRA)
Sean McCarthy created CASSANDRA-12492:
-

 Summary: dtest failure in 
upgrade_tests.cql_tests.TestCQLNodes3RF3_Upgrade_current_3_0_x_To_indev_3_x.cql3_non_compound_range_tombstones_test
 Key: CASSANDRA-12492
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12492
 Project: Cassandra
  Issue Type: Test
Reporter: Sean McCarthy
Assignee: DS Test Eng
 Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, 
node2_debug.log, node2_gc.log, node3.log, node3_debug.log, node3_gc.log

example failure:

http://cassci.datastax.com/job/cassandra-3.9_dtest_upgrade/25/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_Upgrade_current_3_0_x_To_indev_3_x/cql3_non_compound_range_tombstones_test

{code}
Stacktrace

  File "/usr/lib/python2.7/unittest/case.py", line 329, in run
testMethod()
  File "/home/automaton/cassandra-dtest/upgrade_tests/cql_tests.py", line 1562, 
in cql3_non_compound_range_tombstones_test
ThriftConsistencyLevel.ALL)
  File "/home/automaton/cassandra-dtest/thrift_bindings/v22/Cassandra.py", line 
1175, in batch_mutate
self.recv_batch_mutate()
  File "/home/automaton/cassandra-dtest/thrift_bindings/v22/Cassandra.py", line 
1201, in recv_batch_mutate
raise result.te
"TimedOutException(acknowledged_by=1, paxos_in_progress=None, 
acknowledged_by_batchlog=None)
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-12491) dtest failure in upgrade_tests.paging_test.TestPagingDataNodes2RF1_Upgrade_current_2_2_x_To_indev_3_x.static_columns_paging_test

2016-08-18 Thread Sean McCarthy (JIRA)
Sean McCarthy created CASSANDRA-12491:
-

 Summary: dtest failure in 
upgrade_tests.paging_test.TestPagingDataNodes2RF1_Upgrade_current_2_2_x_To_indev_3_x.static_columns_paging_test
 Key: CASSANDRA-12491
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12491
 Project: Cassandra
  Issue Type: Test
Reporter: Sean McCarthy
Assignee: DS Test Eng
 Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, 
node2_debug.log, node2_gc.log

example failure:
http://cassci.datastax.com/job/cassandra-3.9_dtest_upgrade/25/testReport/upgrade_tests.paging_test/TestPagingDataNodes2RF1_Upgrade_current_2_2_x_To_indev_3_x/static_columns_paging_test/

{code}
Stacktrace

  File "/usr/lib/python2.7/unittest/case.py", line 329, in run
testMethod()
  File "/home/automaton/cassandra-dtest/tools.py", line 290, in wrapped
f(obj)
  File "/home/automaton/cassandra-dtest/upgrade_tests/paging_test.py", line 
875, in static_columns_paging_test
self.assertEqual([0] * 4 + [1] * 4 + [2] * 4 + [3] * 4, sorted([r.a for r 
in results]))
  File "/usr/lib/python2.7/unittest/case.py", line 513, in assertEqual
assertion_func(first, second, msg=msg)
  File "/usr/lib/python2.7/unittest/case.py", line 742, in assertListEqual
self.assertSequenceEqual(list1, list2, msg, seq_type=list)
  File "/usr/lib/python2.7/unittest/case.py", line 724, in assertSequenceEqual
self.fail(msg)
  File "/usr/lib/python2.7/unittest/case.py", line 410, in fail
raise self.failureException(msg)
"Lists differ: [0, 0, 0, 0, 1, 1, 1, 1, 2, 2,... != [0, 0, 0, 0, 1, 1, 1, 1, 2, 
3,...\n\nFirst differing element 9:\n2\n3\n\nFirst list contains 3 additional 
elements.\nFirst extra element 13:\n3\n\n- [0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2, 
3, 3, 3, 3]\n? -\n\n+ [0, 0, 0, 0, 1, 1, 1, 
1, 2, 3, 3, 3, 3]
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12488) dtest failure in upgrade_tests.cql_tests.TestCQLNodes3RF3_Upgrade_current_3_0_x_To_indev_3_x.cql3_non_compound_range_tombstones_test

2016-08-18 Thread Jim Witschey (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Witschey updated CASSANDRA-12488:
-
Description: 
example failure:

http://cassci.datastax.com/job/cassandra-3.9_dtest_upgrade/25/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_Upgrade_current_3_0_x_To_indev_3_x/cql3_non_compound_range_tombstones_test/

It looks like this is failing with a TimedOutException.

  was:
example failure:

http://cassci.datastax.com/job/cassandra-3.9_dtest_upgrade/lastCompletedBuild/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_Upgrade_current_3_0_x_To_indev_3_x/cql3_non_compound_range_tombstones_test

It looks like this is failing with a TimedOutException.


> dtest failure in 
> upgrade_tests.cql_tests.TestCQLNodes3RF3_Upgrade_current_3_0_x_To_indev_3_x.cql3_non_compound_range_tombstones_test
> 
>
> Key: CASSANDRA-12488
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12488
> Project: Cassandra
>  Issue Type: Test
>  Components: Testing
>Reporter: Joel Knighton
>Assignee: DS Test Eng
>  Labels: dtest
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.9_dtest_upgrade/25/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_Upgrade_current_3_0_x_To_indev_3_x/cql3_non_compound_range_tombstones_test/
> It looks like this is failing with a TimedOutException.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12490) Add sequence distribution type to cassandra stress

2016-08-18 Thread Ben Slater (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ben Slater updated CASSANDRA-12490:
---
Fix Version/s: 3.x
   Status: Patch Available  (was: Open)

> Add sequence distribution type to cassandra stress
> --
>
> Key: CASSANDRA-12490
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12490
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Ben Slater
>Assignee: Ben Slater
>Priority: Minor
> Fix For: 3.x
>
> Attachments: 12490-trunk.patch
>
>
> When using the write command, cassandra stress sequentially generates seeds. 
> This ensures generated values don't overlap (unless the sequence wraps), 
> providing a more predictable number of inserted records (and generating a base 
> set of data without wasted writes).
> When using a yaml stress spec there is no sequenced distribution available. I 
> think it would be useful to have this for doing an initial load of data for 
> testing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12490) Add sequence distribution type to cassandra stress

2016-08-18 Thread Ben Slater (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ben Slater updated CASSANDRA-12490:
---
Attachment: 12490-trunk.patch

Patch attached

> Add sequence distribution type to cassandra stress
> --
>
> Key: CASSANDRA-12490
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12490
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Ben Slater
>Assignee: Ben Slater
>Priority: Minor
> Attachments: 12490-trunk.patch
>
>
> When using the write command, cassandra stress sequentially generates seeds. 
> This ensures generated values don't overlap (unless the sequence wraps), 
> providing a more predictable number of inserted records (and generating a base 
> set of data without wasted writes).
> When using a yaml stress spec there is no sequenced distribution available. I 
> think it would be useful to have this for doing an initial load of data for 
> testing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-12490) Add sequence distribution type to cassandra stress

2016-08-18 Thread Ben Slater (JIRA)
Ben Slater created CASSANDRA-12490:
--

 Summary: Add sequence distribution type to cassandra stress
 Key: CASSANDRA-12490
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12490
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Reporter: Ben Slater
Assignee: Ben Slater
Priority: Minor


When using the write command, cassandra stress sequentially generates seeds. 
This ensures generated values don't overlap (unless the sequence wraps), 
providing a more predictable number of inserted records (and generating a base 
set of data without wasted writes).

When using a yaml stress spec there is no sequenced distribution available. I 
think it would be useful to have this for doing an initial load of data for 
testing.
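
To illustrate the motivation, a minimal sketch (hypothetical helper, not the 
cassandra-stress API) contrasting a sequential seed with a uniform one: the 
sequential generator visits every value once before wrapping, while the uniform 
one produces duplicates, so the count of distinct partitions written is only 
probabilistic.

{code}
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.atomic.AtomicLong;

public class SeedSketch
{
    private static final AtomicLong counter = new AtomicLong();

    // Sequential: each value in [0, n) is produced exactly once per wrap,
    // so n generated rows mean n distinct partitions.
    static long nextSequential(long n)
    {
        return counter.getAndIncrement() % n;
    }

    // Uniform: duplicates are expected, so some writes land on partitions
    // that were already written ("wasted writes").
    static long nextUniform(long n)
    {
        return ThreadLocalRandom.current().nextLong(n);
    }
}
{code}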



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12374) Can't rebuild SASI index

2016-08-18 Thread Alex Petrov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15426258#comment-15426258
 ] 

Alex Petrov commented on CASSANDRA-12374:
-

Yes, sorry. I had only run this test individually, so I overlooked that it 
would fail in combination with other tests. Fixed, rebased and force-pushed; 
CI results are good.

> Can't rebuild SASI index
> 
>
> Key: CASSANDRA-12374
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12374
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Alex Petrov
>Assignee: Alex Petrov
>
> There's been no real requirement for that so far. 
> As [~beobal] has pointed out, it's not a big issue, since that could only be 
> needed when index files are lost, data corruption on disk (a hardware issue) 
> has occurred, or there was a bug that'd require an index rebuild.
> During the {{rebuild_index}} task, indexes are only "marked" as removed with 
> {{SecondaryIndexManager::markIndexRemoved}} and then {{buildIndexesBlocking}} 
> is called. However, since SASI keeps track of SSTables for the index, it's 
> going to filter them out with {{.filter((sstable) -> 
> !sasi.index.hasSSTable(sstable))}} in {{SASIIndexBuildingSupport}}.
> If I understand the logic correctly, we have to "invalidate" (drop data) 
> right before we re-index them. This is also a blocker for [CASSANDRA-11990], 
> since without it we can't have an upgrade path.
> I have a patch ready in a branch, but since it's a bug, it's better to have 
> it released earlier and for all affected branches.
> cc [~xedin]
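
A minimal sketch (hypothetical names, not the actual SecondaryIndexManager 
code) of the effect described above: while the index still tracks its SSTables, 
the rebuild filter drops every candidate, so invalidating first is what makes a 
rebuild possible.

{code}
import java.util.List;
import java.util.stream.Collectors;

public class SasiRebuildSketch
{
    interface TrackingIndex
    {
        boolean hasSSTable(String sstable);
        void invalidate(); // drop index data and forget tracked sstables
    }

    // Mirrors .filter((sstable) -> !sasi.index.hasSSTable(sstable)): after a
    // rebuild that only "marks" the index removed, every sstable is still
    // tracked and this returns an empty list, so nothing gets re-indexed.
    static List<String> candidates(TrackingIndex index, List<String> sstables)
    {
        return sstables.stream()
                       .filter(sstable -> !index.hasSSTable(sstable))
                       .collect(Collectors.toList());
    }

    static List<String> rebuildCandidates(TrackingIndex index, List<String> sstables)
    {
        index.invalidate(); // drop the data right before re-indexing
        return candidates(index, sstables);
    }
}
{code}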



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12457) dtest failure in upgrade_tests.cql_tests.TestCQLNodes2RF1_Upgrade_current_2_1_x_To_indev_2_2_x.bug_5732_test

2016-08-18 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15426158#comment-15426158
 ] 

Stefania commented on CASSANDRA-12457:
--

In 2.2 the problem is completely different (shadow sstables no longer exist). 
What's happening in 2.2 is that we fail to flush memtables during a drain, and 
therefore the resources allocated in {{BigTableWriter.openFinal()}} are leaked. 
Further, the shutdown hook in storage service doesn't report these errors 
because it just waits on the flushing futures; it doesn't check whether they 
succeeded.
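
A minimal sketch (plain java.util.concurrent, not the actual StorageService 
shutdown hook) of that difference: merely waiting on a future never surfaces 
its failure, whereas get() rethrows it.

{code}
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;

public class FlushWaitSketch
{
    // Waits for completion but swallows failures, like a hook that only
    // blocks on the flushing futures:
    static void waitOnly(List<Future<?>> flushes) throws InterruptedException
    {
        for (Future<?> f : flushes)
            while (!f.isDone())
                Thread.sleep(10);
    }

    // get() rethrows a failed flush as ExecutionException, so the error
    // can be reported instead of being silently dropped:
    static void waitAndCheck(List<Future<?>> flushes) throws InterruptedException
    {
        for (Future<?> f : flushes)
        {
            try
            {
                f.get();
            }
            catch (ExecutionException e)
            {
                System.err.println("flush failed: " + e.getCause());
            }
        }
    }
}
{code}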

> dtest failure in 
> upgrade_tests.cql_tests.TestCQLNodes2RF1_Upgrade_current_2_1_x_To_indev_2_2_x.bug_5732_test
> 
>
> Key: CASSANDRA-12457
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12457
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Craig Kodman
>Assignee: Stefania
>  Labels: dtest
> Fix For: 2.2.x
>
> Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-2.2_dtest_upgrade/16/testReport/upgrade_tests.cql_tests/TestCQLNodes2RF1_Upgrade_current_2_1_x_To_indev_2_2_x/bug_5732_test
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 358, in run
> self.tearDown()
>   File "/home/automaton/cassandra-dtest/upgrade_tests/upgrade_base.py", line 
> 216, in tearDown
> super(UpgradeTester, self).tearDown()
>   File "/home/automaton/cassandra-dtest/dtest.py", line 666, in tearDown
> raise AssertionError('Unexpected error in log, see stdout')
> "Unexpected error in log, see stdout\n >> begin captured 
> logging << \ndtest: DEBUG: Upgrade test beginning, 
> setting CASSANDRA_VERSION to 2.1.15, and jdk to 8. (Prior values will be 
> restored after test).\ndtest: DEBUG: cluster ccm directory: 
> /mnt/tmp/dtest-D8UF3i\ndtest: DEBUG: Done setting configuration options:\n{   
> 'initial_token': None,\n'num_tokens': '32',\n'phi_convict_threshold': 
> 5,\n'range_request_timeout_in_ms': 1,\n
> 'read_request_timeout_in_ms': 1,\n'request_timeout_in_ms': 1,\n   
>  'truncate_request_timeout_in_ms': 1,\n'write_request_timeout_in_ms': 
> 1}\ndtest: DEBUG: [[Row(table_name=u'ks', index_name=u'test.testindex')], 
> [Row(table_name=u'ks', index_name=u'test.testindex')]]\ndtest: DEBUG: 
> upgrading node1 to git:91f7387e1f785b18321777311a5c3416af0663c2\nccm: INFO: 
> Fetching Cassandra updates...\ndtest: DEBUG: Querying upgraded node\ndtest: 
> DEBUG: Querying old node\ndtest: DEBUG: removing ccm cluster test at: 
> /mnt/tmp/dtest-D8UF3i\ndtest: DEBUG: clearing ssl stores from 
> [/mnt/tmp/dtest-D8UF3i] directory\n- >> end captured 
> logging << -"
> {code}
> {code}
> Standard Output
> http://git-wip-us.apache.org/repos/asf/cassandra.git 
> git:91f7387e1f785b18321777311a5c3416af0663c2
> Unexpected error in node1 log, error: 
> ERROR [Reference-Reaper:1] 2016-08-13 01:34:34,581 Ref.java:199 - LEAK 
> DETECTED: a reference 
> (org.apache.cassandra.utils.concurrent.Ref$State@73deb57f) to class 
> org.apache.cassandra.io.sstable.SSTableReader$DescriptorTypeTidy@2098812276:/mnt/tmp/dtest-D8UF3i/test/node1/data1/system/schema_columns-296e9c049bec3085827dc17d3df2122a/system-schema_columns-ka-4
>  was not released before the reference was garbage collected
> Unexpected error in node1 log, error: 
> ERROR [Reference-Reaper:1] 2016-08-13 01:34:34,581 Ref.java:199 - LEAK 
> DETECTED: a reference 
> (org.apache.cassandra.utils.concurrent.Ref$State@7926de0f) to class 
> org.apache.cassandra.utils.concurrent.WrappedSharedCloseable$1@1009016655:[[OffHeapBitSet]]
>  was not released before the reference was garbage collected
> Unexpected error in node1 log, error: 
> ERROR [Reference-Reaper:1] 2016-08-13 01:34:34,581 Ref.java:199 - LEAK 
> DETECTED: a reference 
> (org.apache.cassandra.utils.concurrent.Ref$State@3a5760f9) to class 
> org.apache.cassandra.io.util.MmappedSegmentedFile$Cleanup@223486002:/mnt/tmp/dtest-D8UF3i/test/node1/data0/system/schema_columns-296e9c049bec3085827dc17d3df2122a/system-schema_columns-ka-3-Index.db
>  was not released before the reference was garbage collected
> Unexpected error in node1 log, error: 
> ERROR [Reference-Reaper:1] 2016-08-13 01:34:34,582 Ref.java:199 - LEAK 
> DETECTED: a reference 
> (org.apache.cassandra.utils.concurrent.Ref$State@42cb4131) to class 
> org.apache.cassandra.utils.concurrent.WrappedSharedCloseable$1@1544265728:[Memory@[0..4),
>  Memory@[0..a)] was not released before the reference was garbage collected
> Unexpected error in node1 log, error: 
> ERROR [Reference-Reaper:1] 2016-0

[jira] [Commented] (CASSANDRA-12257) data consistency of multi datacenter

2016-08-18 Thread stone (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15426141#comment-15426141
 ] 

stone commented on CASSANDRA-12257:
---

I don't think this is the same thing. CASSANDRA-11752 and CASSANDRA-11569 just 
measure the cross-datacenter latency; what I want is more than this.

1. I want to get the latency of a record being replicated to another datacenter.

2. I need to query this latency for a certain time, which means the latency 
value may need to be persisted.

3. I want to know the latency of cross-datacenter replication, so relying on 
the application may not work well. For instance, at a given time we may not 
have received replicated data from the other datacenter, but we don't know 
whether the reason is no write requests or high latency.
So we need a standalone mechanism to detect this.

> data consistency of multi datacenter 
> -
>
> Key: CASSANDRA-12257
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12257
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Coordination
>Reporter: stone
>
> Environment:
> several active-active cassandra datacenters.
> write consistency level=local_quorum is set to get a fast request response.
> Concern:
> we don't know when the data arrives at the other datacenter.
> we don't know the size of the data that needs to be transferred to the other 
> datacenter.
> Thought:
> 1. we can get the latency with ping or another monitoring tool, but that does 
> not represent the latency of cassandra data being transferred from one dc to 
> another dc.
> 2. we can measure the latency between dcs, but that is a value accumulated 
> since the node started; it cannot represent the latency now.
> https://issues.apache.org/jira/browse/CASSANDRA-11569
> Scenario:
> one project needs to collect information from sensors in different regions: 2 
> datacenters, DC1 and DC2; sensor1 and sensor2 are in the DC1 region, sensor3 
> and sensor4 in DC2.
> one client in DC1 pulls data every 10 minutes.
> 1. sensor3 in DC2 writes a record to DC2 at 8:59:55, and it arrives at DC1 at 
> 9:00:05.
> 2. the client in DC1 pulls data at 9:00; it should get the record, but it 
> cannot, as the record has not yet arrived at DC1.
> 3. the client in DC1 then pulls data at 9:10. It still cannot get the record, 
> as it pulls data from 9:00 to 9:10, but the record was created at 8:59:55.
> so we will miss the record.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-12280) nodetool repair hangs

2016-08-18 Thread Benjamin Roth (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Roth resolved CASSANDRA-12280.
---
   Resolution: Not A Bug
Reproduced In: 3.7, 3.0.8, 3.9  (was: 3.0.8, 3.7, 3.9)

Seems like this was a configuration and/or kernel issue.

For all those who are interested in what I did to fix that issue:

- Set tcp keepalive to these values: 
https://docs.datastax.com/en/cassandra/2.0/cassandra/troubleshooting/trblshootIdleFirewall.html.
 Before, the total timeout was at about 15min; the 90s from the article fails 
faster. Maybe that helps things resume faster, causing fewer hangs.
- otc_coalescing_strategy = DISABLED. I don't know if it had a lot of impact, 
but it had no negative effects. I should double-check that but ... I'm glad it 
works right now.
- Removed a suspicious host. That was the only host with a different kernel, 
and it seemed that streams from / to this host hung more often than others.

After these changes I did not notice any (obvious and noticeable) hangs any 
more.

> nodetool repair hangs
> -
>
> Key: CASSANDRA-12280
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12280
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Benjamin Roth
>
> nodetool repair hangs when repairing a keyspace, does not hang when 
> repairting table/mv by table/mv.
> Command executed (both variants make it hang):
> nodetool repair likes like dislike_by_source_mv like_by_contact_mv 
> match_valid_mv like_out dislike match match_by_contact_mv like_valid_mv 
> like_out_by_source_mv
> OR
> nodetool repair likes
> Logs:
> https://gist.github.com/brstgt/bf8b20fa1942d29ab60926ede7340b75
> Nodetool output:
> https://gist.github.com/brstgt/3aa73662da4b0190630ac1aad6c90a6f
> Schema:
> https://gist.github.com/brstgt/3fd59e0166f86f8065085532e3638097



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11198) Materialized view inconsistency

2016-08-18 Thread Benjamin Roth (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15426106#comment-15426106
 ] 

Benjamin Roth commented on CASSANDRA-11198:
---

[~kw217] my case is much simpler than yours. In contrast to mine, yours results 
in an exception, whereas mine did not (AFAIR). Yours sounds like a race 
condition between the delete and the update; it seems the delete overrides your 
update. But I cannot tell you whether it is better to create a separate ticket :D

> Materialized view inconsistency
> ---
>
> Key: CASSANDRA-11198
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11198
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Gábor Auth
>Assignee: Carl Yeksigian
> Attachments: CASSANDRA-11198.trace
>
>
> Here is a materialized view:
> {code}
> > DESCRIBE MATERIALIZED VIEW unit_by_transport ;
> CREATE MATERIALIZED VIEW unit_by_transport AS
> SELECT *
> FROM unit
> WHERE transportid IS NOT NULL AND type IS NOT NULL
> PRIMARY KEY (transportid, id)
> WITH CLUSTERING ORDER BY (id ASC)
> AND bloom_filter_fp_chance = 0.01
> AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
> AND comment = ''
> AND compaction = {'class': 
> 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
> 'max_threshold': '32', 'min_threshold': '4'}
> AND compression = {'chunk_length_in_kb': '64', 'class': 
> 'org.apache.cassandra.io.compress.LZ4Compressor'}
> AND crc_check_chance = 1.0
> AND dclocal_read_repair_chance = 0.1
> AND default_time_to_live = 0
> AND gc_grace_seconds = 864000
> AND max_index_interval = 2048
> AND memtable_flush_period_in_ms = 0
> AND min_index_interval = 128
> AND read_repair_chance = 0.0
> AND speculative_retry = '99PERCENTILE';
> {code}
> I cannot reproduce this but sometimes and somehow happened the same issue 
> (https://issues.apache.org/jira/browse/CASSANDRA-10910):
> {code}
> > SELECT transportid, id, type FROM unit_by_transport WHERE 
> > transportid=24f90d20-d61f-11e5-9d3c-8fc3ad6906e2 and 
> > id=99c05a70-d686-11e5-a169-97287061d5d1;
>  transportid  | id   
> | type
> --+--+--
>  24f90d20-d61f-11e5-9d3c-8fc3ad6906e2 | 99c05a70-d686-11e5-a169-97287061d5d1 
> | null
> (1 rows)
> > SELECT transportid, id, type FROM unit WHERE 
> > id=99c05a70-d686-11e5-a169-97287061d5d1;
>  transportid | id | type
> -++--
> (0 rows)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11198) Materialized view inconsistency

2016-08-18 Thread Keith Wansbrough (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15426093#comment-15426093
 ] 

Keith Wansbrough commented on CASSANDRA-11198:
--

I don't think CASSANDRA-11475 can have fixed my issue, because that's marked as 
fixed in 3.0.7 which is the version I'm running. Shall I raise a separate issue?

> Materialized view inconsistency
> ---
>
> Key: CASSANDRA-11198
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11198
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Gábor Auth
>Assignee: Carl Yeksigian
> Attachments: CASSANDRA-11198.trace
>
>
> Here is a materialized view:
> {code}
> > DESCRIBE MATERIALIZED VIEW unit_by_transport ;
> CREATE MATERIALIZED VIEW unit_by_transport AS
> SELECT *
> FROM unit
> WHERE transportid IS NOT NULL AND type IS NOT NULL
> PRIMARY KEY (transportid, id)
> WITH CLUSTERING ORDER BY (id ASC)
> AND bloom_filter_fp_chance = 0.01
> AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
> AND comment = ''
> AND compaction = {'class': 
> 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
> 'max_threshold': '32', 'min_threshold': '4'}
> AND compression = {'chunk_length_in_kb': '64', 'class': 
> 'org.apache.cassandra.io.compress.LZ4Compressor'}
> AND crc_check_chance = 1.0
> AND dclocal_read_repair_chance = 0.1
> AND default_time_to_live = 0
> AND gc_grace_seconds = 864000
> AND max_index_interval = 2048
> AND memtable_flush_period_in_ms = 0
> AND min_index_interval = 128
> AND read_repair_chance = 0.0
> AND speculative_retry = '99PERCENTILE';
> {code}
> I cannot reproduce this but sometimes and somehow happened the same issue 
> (https://issues.apache.org/jira/browse/CASSANDRA-10910):
> {code}
> > SELECT transportid, id, type FROM unit_by_transport WHERE 
> > transportid=24f90d20-d61f-11e5-9d3c-8fc3ad6906e2 and 
> > id=99c05a70-d686-11e5-a169-97287061d5d1;
>  transportid  | id   
> | type
> --+--+--
>  24f90d20-d61f-11e5-9d3c-8fc3ad6906e2 | 99c05a70-d686-11e5-a169-97287061d5d1 
> | null
> (1 rows)
> > SELECT transportid, id, type FROM unit WHERE 
> > id=99c05a70-d686-11e5-a169-97287061d5d1;
>  transportid | id | type
> -++--
> (0 rows)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-12092) dtest failure in consistency_test.TestAccuracy.test_simple_strategy_counters

2016-08-18 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15425952#comment-15425952
 ] 

Stefania edited comment on CASSANDRA-12092 at 8/18/16 8:23 AM:
---

Unfortunately there was still one 
[failure|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-dtest-multiplex/17/testReport/junit/node_3_iter_169.consistency_test/TestAccuracy/test_simple_strategy_counters/]
 out of 1000 attempts, and that was on trunk, not 2.1.

I've reduced the number of debug log messages to just one, which is essential 
to confirm that we are reading from the host we contacted. I've launched one 
more [multiplexed 
run|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-dtest-multiplex/20/]
 hoping that with one single debug log message we can reproduce the failure. 
This should tell us if the problem is with the test or in Cassandra. If we 
still cannot reproduce it, I propose to relax the test. I've already added one 
unit test to ensure that we can correctly read counter values that are updated 
by a different thread.


was (Author: stefania):
Unfortunately there was still one 
[failure|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-dtest-multiplex/17/testReport/junit/node_3_iter_169.consistency_test/TestAccuracy/test_simple_strategy_counters/]
 out of 1000 attempts.

I've reduced the number of debug log messages to just one, which is essential 
to confirm that we are reading from the host we contacted. I will perform one 
more multiplexed run hoping that with one single debug log message we can 
reproduce the failure. This would tell us if the problem is with the test or in 
Cassandra. If we still cannot reproduce it, I propose to relax the test for 
2.1. I've already added one unit test to ensure that we can correctly read 
counter values that are updated by a different thread.

> dtest failure in consistency_test.TestAccuracy.test_simple_strategy_counters
> 
>
> Key: CASSANDRA-12092
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12092
> Project: Cassandra
>  Issue Type: Test
>Reporter: Sean McCarthy
>Assignee: Stefania
>  Labels: dtest
> Attachments: node1.log, node2.log, node3.log
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-2.1_dtest/484/testReport/consistency_test/TestAccuracy/test_simple_strategy_counters
> Failed on CassCI build cassandra-2.1_dtest #484
> {code}
> Standard Error
> Traceback (most recent call last):
>   File "/home/automaton/cassandra-dtest/consistency_test.py", line 514, in run
> valid_fcn(v)
>   File "/home/automaton/cassandra-dtest/consistency_test.py", line 497, in 
> validate_counters
> check_all_sessions(s, n, c)
>   File "/home/automaton/cassandra-dtest/consistency_test.py", line 490, in 
> check_all_sessions
> "value of %s at key %d, instead got these values: %s" % (write_nodes, 
> val, n, results)
> AssertionError: Failed to read value from sufficient number of nodes, 
> required 2 nodes to have a counter value of 1 at key 200, instead got these 
> values: [0, 0, 1]
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12457) dtest failure in upgrade_tests.cql_tests.TestCQLNodes2RF1_Upgrade_current_2_1_x_To_indev_2_2_x.bug_5732_test

2016-08-18 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15426078#comment-15426078
 ] 

Stefania commented on CASSANDRA-12457:
--

Reproduced in 2.2 
[here|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-dtest-multiplex/18/testReport/junit/node_0_iter_024.upgrade_tests.cql_tests/TestCQLNodes2RF1_Upgrade_current_2_2_x_To_indev_3_x/bug_5732_test/]
 with a slightly different allocation call stack.

In 2.1 the leak should be caused by shadow sstables not being released:

|2.1|[patch|https://github.com/stef1927/cassandra/commits/12457-2.1]|[testall|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-12457-2.1-testall/]|[dtest|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-12457-2.1-dtest/]|[multiplexed|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-dtest-multiplex/19/]|


> dtest failure in 
> upgrade_tests.cql_tests.TestCQLNodes2RF1_Upgrade_current_2_1_x_To_indev_2_2_x.bug_5732_test
> 
>
> Key: CASSANDRA-12457
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12457
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Craig Kodman
>Assignee: Stefania
>  Labels: dtest
> Fix For: 2.2.x
>
> Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-2.2_dtest_upgrade/16/testReport/upgrade_tests.cql_tests/TestCQLNodes2RF1_Upgrade_current_2_1_x_To_indev_2_2_x/bug_5732_test
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 358, in run
> self.tearDown()
>   File "/home/automaton/cassandra-dtest/upgrade_tests/upgrade_base.py", line 
> 216, in tearDown
> super(UpgradeTester, self).tearDown()
>   File "/home/automaton/cassandra-dtest/dtest.py", line 666, in tearDown
> raise AssertionError('Unexpected error in log, see stdout')
> "Unexpected error in log, see stdout\n >> begin captured 
> logging << \ndtest: DEBUG: Upgrade test beginning, 
> setting CASSANDRA_VERSION to 2.1.15, and jdk to 8. (Prior values will be 
> restored after test).\ndtest: DEBUG: cluster ccm directory: 
> /mnt/tmp/dtest-D8UF3i\ndtest: DEBUG: Done setting configuration options:\n{   
> 'initial_token': None,\n'num_tokens': '32',\n'phi_convict_threshold': 
> 5,\n'range_request_timeout_in_ms': 1,\n
> 'read_request_timeout_in_ms': 1,\n'request_timeout_in_ms': 1,\n   
>  'truncate_request_timeout_in_ms': 1,\n'write_request_timeout_in_ms': 
> 1}\ndtest: DEBUG: [[Row(table_name=u'ks', index_name=u'test.testindex')], 
> [Row(table_name=u'ks', index_name=u'test.testindex')]]\ndtest: DEBUG: 
> upgrading node1 to git:91f7387e1f785b18321777311a5c3416af0663c2\nccm: INFO: 
> Fetching Cassandra updates...\ndtest: DEBUG: Querying upgraded node\ndtest: 
> DEBUG: Querying old node\ndtest: DEBUG: removing ccm cluster test at: 
> /mnt/tmp/dtest-D8UF3i\ndtest: DEBUG: clearing ssl stores from 
> [/mnt/tmp/dtest-D8UF3i] directory\n- >> end captured 
> logging << -"
> {code}
> {code}
> Standard Output
> http://git-wip-us.apache.org/repos/asf/cassandra.git 
> git:91f7387e1f785b18321777311a5c3416af0663c2
> Unexpected error in node1 log, error: 
> ERROR [Reference-Reaper:1] 2016-08-13 01:34:34,581 Ref.java:199 - LEAK 
> DETECTED: a reference 
> (org.apache.cassandra.utils.concurrent.Ref$State@73deb57f) to class 
> org.apache.cassandra.io.sstable.SSTableReader$DescriptorTypeTidy@2098812276:/mnt/tmp/dtest-D8UF3i/test/node1/data1/system/schema_columns-296e9c049bec3085827dc17d3df2122a/system-schema_columns-ka-4
>  was not released before the reference was garbage collected
> Unexpected error in node1 log, error: 
> ERROR [Reference-Reaper:1] 2016-08-13 01:34:34,581 Ref.java:199 - LEAK 
> DETECTED: a reference 
> (org.apache.cassandra.utils.concurrent.Ref$State@7926de0f) to class 
> org.apache.cassandra.utils.concurrent.WrappedSharedCloseable$1@1009016655:[[OffHeapBitSet]]
>  was not released before the reference was garbage collected
> Unexpected error in node1 log, error: 
> ERROR [Reference-Reaper:1] 2016-08-13 01:34:34,581 Ref.java:199 - LEAK 
> DETECTED: a reference 
> (org.apache.cassandra.utils.concurrent.Ref$State@3a5760f9) to class 
> org.apache.cassandra.io.util.MmappedSegmentedFile$Cleanup@223486002:/mnt/tmp/dtest-D8UF3i/test/node1/data0/system/schema_columns-296e9c049bec3085827dc17d3df2122a/system-schema_columns-ka-3-Index.db
>  was not released before the reference was garbage collected
> Unexpected error in node1 log, error: 
> ERROR [Reference-Reaper:1] 2016-08-13 01:34:34,582 Ref.java:199 - LEAK 
> DETECTED: a reference 
> (org.apache.cassa

[jira] [Commented] (CASSANDRA-12405) node health status inconsistent

2016-08-18 Thread Benjamin Roth (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15426077#comment-15426077
 ] 

Benjamin Roth commented on CASSANDRA-12405:
---

I guess this was due to a "suboptimal" configuration. The issue has not 
recurred since some tuning efforts.

> node health status inconsistent
> ---
>
> Key: CASSANDRA-12405
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12405
> Project: Cassandra
>  Issue Type: Bug
> Environment: Cassandra 3.9, Linux Xenial
>Reporter: Benjamin Roth
>
> At the moment we run a 4-node cluster with Cassandra 3.9.
> Due to another issue (hanging repairs) I am forced to restart nodes from time 
> to time. Before I restart a node, all nodes are listed as UP from every other 
> node.
> When I restart one node in the cluster, the health statuses of the other 
> nodes are affected as well.
> After restarting node "cas1", the "nodetool status" output on all nodes 
> looks like this during the startup phase of cas1: 
> https://gist.github.com/brstgt/9be77470814d2fd160617a1c06579804
> After cas1 is up again, I restart cas2. During the startup phase of cas2 the 
> status looks like this:
> https://gist.github.com/brstgt/d27ef540b2389b3a7d2d015ab83af547
> The nodetool output goes along with log messages like this:
> 2016-08-08T07:30:06+00:00 cas1 [GossipTasks: 1] 
> org.apache.cassandra.gms.Gossiper Convicting /10.23.71.3 with status NORMAL - 
> alive false
> 2016-08-08T07:30:06+00:00 cas1 [GossipTasks: 1] 
> org.apache.cassandra.gms.Gossiper Convicting /10.23.71.2 with status NORMAL - 
> alive false
> In extreme cases, nodes didn't even come up again after a restart, with an 
> error that there were no seed hosts (sorry, I don't have the error message in 
> the current logs), but the seed host(s) were definitely up and running. A 
> reboot fixed the issue; starting the node again and again did not help.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-12405) node health status inconsistent

2016-08-18 Thread Benjamin Roth (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12405?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Roth resolved CASSANDRA-12405.
---
Resolution: Not A Bug

> node health status inconsistent
> ---
>
> Key: CASSANDRA-12405
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12405
> Project: Cassandra
>  Issue Type: Bug
> Environment: Cassandra 3.9, Linux Xenial
>Reporter: Benjamin Roth
>
> At the moment we run a 4-node cluster with Cassandra 3.9.
> Due to another issue (hanging repairs) I am forced to restart nodes from time 
> to time. Before I restart a node, all nodes are listed as UP from every other 
> node.
> When I restart one node in the cluster, the health statuses of the other 
> nodes are affected as well.
> After restarting node "cas1", the "nodetool status" output on all nodes 
> looks like this during the startup phase of cas1: 
> https://gist.github.com/brstgt/9be77470814d2fd160617a1c06579804
> After cas1 is up again, I restart cas2. During the startup phase of cas2 the 
> status looks like this:
> https://gist.github.com/brstgt/d27ef540b2389b3a7d2d015ab83af547
> The nodetool output goes along with log messages like this:
> 2016-08-08T07:30:06+00:00 cas1 [GossipTasks: 1] 
> org.apache.cassandra.gms.Gossiper Convicting /10.23.71.3 with status NORMAL - 
> alive false
> 2016-08-08T07:30:06+00:00 cas1 [GossipTasks: 1] 
> org.apache.cassandra.gms.Gossiper Convicting /10.23.71.2 with status NORMAL - 
> alive false
> In extreme cases, nodes didn't even come up again after a restart, with an 
> error that there were no seed hosts (sorry, I don't have the error message in 
> the current logs), but the seed host(s) were definitely up and running. A 
> reboot fixed the issue; starting the node again and again did not help.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-12489) consecutive repairs of same range always finds 'out of sync' in sane cluster

2016-08-18 Thread Benjamin Roth (JIRA)
Benjamin Roth created CASSANDRA-12489:
-

 Summary: consecutive repairs of same range always finds 'out of 
sync' in sane cluster
 Key: CASSANDRA-12489
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12489
 Project: Cassandra
  Issue Type: Bug
  Components: Streaming and Messaging
Reporter: Benjamin Roth
 Attachments: trace_3_10.1.log.gz, trace_3_10.2.log.gz, 
trace_3_10.3.log.gz, trace_3_10.4.log.gz, trace_3_9.1.log.gz, trace_3_9.2.log.gz

No matter how often or when I run the same subrange repair, it ALWAYS tells me 
that some ranges are out of sync. Tested on 3.9 and 3.10 (git trunk of 
2016-08-17). The cluster is sane: all nodes are up and the cluster is not 
overloaded.
I guess this is not the desired behaviour. I'd expect a repair to do what it 
says, so a consecutive repair shouldn't report any ranges "out of sync" if the 
cluster is sane.
This puts a lot of pressure on tables with MVs in particular, as ranges are 
repaired over and over again.

See traces of different runs attached.
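
For what it's worth, here is a toy sketch of one way this could happen (an 
assumption on my part, not a confirmed diagnosis): repair compares Merkle 
trees whose hashes cover write timestamps as well as cell values, so replicas 
holding identical data under different timestamps would always compare as out 
of sync.

{code}
# Toy sketch, not Cassandra code: if range hashes cover (value, timestamp)
# pairs, then identical values carried by different write timestamps -- as I
# suspect may happen to MV updates regenerated during streaming -- hash
# differently, so every repair reports the range as out of sync.
import hashlib

def range_hash(cells):
    h = hashlib.md5()
    for value, write_ts in cells:
        h.update("{}:{}".format(value, write_ts).encode())
    return h.hexdigest()

replica_a = [("row1", 1000), ("row2", 1000)]
replica_b = [("row1", 1000), ("row2", 1003)]  # same value, different timestamp

print(range_hash(replica_a) == range_hash(replica_b))  # False -> "out of sync"
{code}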



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-11534) cqlsh fails to format collections when using aliases

2016-08-18 Thread Arunkumar M (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15426021#comment-15426021
 ] 

Arunkumar M edited comment on CASSANDRA-11534 at 8/18/16 7:29 AM:
--

This happens because, while formatting the row values, the corresponding 
"cql_types" (ex: list[text], text) are None for all aliased columns.

At /bin/cqlsh.py#L1339:

    cql_types = [CqlType(table_meta.columns[c].cql_type, ks_meta)
                 if c in table_meta.columns else None
                 for c in column_names]
    formatted_values = [map(self.myformat_value, row.values(), cql_types)
                        for row in rows]

"table_meta.columns" contains the actual column names, but "column_names" from 
the result query contains the aliases, so the cql_types list holds None for 
every alias.

Related ticket: https://issues.apache.org/jira/browse/CASSANDRA-11274


was (Author: arunkumar):
This happens because, while formatting the row values, the corresponding 
"cql_types" (ex: list[text], text) are None for all aliased columns.

At /bin/cqlsh.py#L1339:

    cql_types = [CqlType(table_meta.columns[c].cql_type, ks_meta)
                 if c in table_meta.columns else None
                 for c in column_names]

"table_meta.columns" contains the actual column names, but "column_names" from 
the result query contains the aliases, so the cql_types list holds None for 
every alias.

Related ticket: https://issues.apache.org/jira/browse/CASSANDRA-11274

> cqlsh fails to format collections when using aliases
> 
>
> Key: CASSANDRA-11534
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11534
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Robert Stupp
>Priority: Minor
>
> Given a simple table, selecting the columns without an alias works fine. 
> However, if the map is selected using an alias, cqlsh fails to format the 
> value.
> {code}
> create keyspace foo WITH replication = {'class': 'SimpleStrategy', 
> 'replication_factor': 1};
> CREATE TABLE foo.foo (id int primary key, m map<int, text>);
> insert into foo.foo (id, m) VALUES ( 1, {1: 'one', 2: 'two', 3:'three'});
> insert into foo.foo (id, m) VALUES ( 2, {1: '1one', 2: '2two', 3:'3three'});
> cqlsh> select id, m from foo.foo;
>  id | m
> ----+--------------------------------------
>   1 |{1: 'one', 2: 'two', 3: 'three'}
>   2 | {1: '1one', 2: '2two', 3: '3three'}
> (2 rows)
> cqlsh> select id, m as "weofjkewopf" from foo.foo;
>  id | weofjkewopf
> ----+------------------------------------------------------------------------
>   1 |OrderedMapSerializedKey([(1, u'one'), (2, u'two'), (3, u'three')])
>   2 | OrderedMapSerializedKey([(1, u'1one'), (2, u'2two'), (3, u'3three')])
> (2 rows)
> Failed to format value OrderedMapSerializedKey([(1, u'one'), (2, u'two'), (3, 
> u'three')]) : 'NoneType' object has no attribute 'sub_types'
> Failed to format value OrderedMapSerializedKey([(1, u'1one'), (2, u'2two'), 
> (3, u'3three')]) : 'NoneType' object has no attribute 'sub_types'
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-11534) cqlsh fails to format collections when using aliases

2016-08-18 Thread Arunkumar M (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15426021#comment-15426021
 ] 

Arunkumar M edited comment on CASSANDRA-11534 at 8/18/16 7:26 AM:
--

This happens because, while formatting the row values, the corresponding 
"cql_types" (ex: list[text], text) are None for all aliased columns.

At /bin/cqlsh.py#L1339:

    cql_types = [CqlType(table_meta.columns[c].cql_type, ks_meta)
                 if c in table_meta.columns else None
                 for c in column_names]

"table_meta.columns" contains the actual column names, but "column_names" from 
the result query contains the aliases, so the cql_types list holds None for 
every alias.

Related ticket: https://issues.apache.org/jira/browse/CASSANDRA-11274


was (Author: arunkumar):
This happens because, while formatting the row values, the corresponding 
"cql_types" (ex: list[text], text) are None for all aliased columns.

At /bin/cqlsh.py#L1339:

    cql_types = [CqlType(table_meta.columns[c].cql_type, ks_meta)
                 if c in table_meta.columns else None
                 for c in column_names]

"table_meta.columns" contains the actual column names, but "column_names" from 
the result query contains the aliases, so the cql_types list holds None for 
every alias.

> cqlsh fails to format collections when using aliases
> 
>
> Key: CASSANDRA-11534
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11534
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Robert Stupp
>Priority: Minor
>
> Given a simple table, selecting the columns without an alias works fine. 
> However, if the map is selected using an alias, cqlsh fails to format the 
> value.
> {code}
> create keyspace foo WITH replication = {'class': 'SimpleStrategy', 
> 'replication_factor': 1};
> CREATE TABLE foo.foo (id int primary key, m map<int, text>);
> insert into foo.foo (id, m) VALUES ( 1, {1: 'one', 2: 'two', 3:'three'});
> insert into foo.foo (id, m) VALUES ( 2, {1: '1one', 2: '2two', 3:'3three'});
> cqlsh> select id, m from foo.foo;
>  id | m
> ----+--------------------------------------
>   1 |{1: 'one', 2: 'two', 3: 'three'}
>   2 | {1: '1one', 2: '2two', 3: '3three'}
> (2 rows)
> cqlsh> select id, m as "weofjkewopf" from foo.foo;
>  id | weofjkewopf
> ----+------------------------------------------------------------------------
>   1 |OrderedMapSerializedKey([(1, u'one'), (2, u'two'), (3, u'three')])
>   2 | OrderedMapSerializedKey([(1, u'1one'), (2, u'2two'), (3, u'3three')])
> (2 rows)
> Failed to format value OrderedMapSerializedKey([(1, u'one'), (2, u'two'), (3, 
> u'three')]) : 'NoneType' object has no attribute 'sub_types'
> Failed to format value OrderedMapSerializedKey([(1, u'1one'), (2, u'2two'), 
> (3, u'3three')]) : 'NoneType' object has no attribute 'sub_types'
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11534) cqlsh fails to format collections when using aliases

2016-08-18 Thread Arunkumar M (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15426021#comment-15426021
 ] 

Arunkumar M commented on CASSANDRA-11534:
-

This happens because, while formatting the row values, the corresponding 
"cql_types" (ex: list[text], text) are None for all aliased columns.

At /bin/cqlsh.py#L1339:

    cql_types = [CqlType(table_meta.columns[c].cql_type, ks_meta)
                 if c in table_meta.columns else None
                 for c in column_names]

"table_meta.columns" contains the actual column names, but "column_names" from 
the result query contains the aliases, so the cql_types list holds None for 
every alias.
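
To make the failure mode concrete, here is a minimal sketch of the lookup 
(illustrative stand-in dictionaries, not the actual cqlsh data structures):

{code}
# Minimal sketch (assumed names, not actual cqlsh internals): the type lookup
# is keyed by the table's real column names, but the result set reports the
# alias, so aliased columns get no type and the formatter later dereferences
# None (hence "'NoneType' object has no attribute 'sub_types'").
table_columns = {'id': 'int', 'm': 'map<int, text>'}  # stand-in for table_meta.columns
column_names = ['id', 'weofjkewopf']                  # names reported by the result set

cql_types = [table_columns[c] if c in table_columns else None
             for c in column_names]
print(cql_types)  # ['int', None]
{code}

A fix would presumably need to map aliases back to their underlying columns, 
or take the types from the driver's result metadata instead of the table 
metadata.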

> cqlsh fails to format collections when using aliases
> 
>
> Key: CASSANDRA-11534
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11534
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Robert Stupp
>Priority: Minor
>
> Given a simple table, selecting the columns without an alias works fine. 
> However, if the map is selected using an alias, cqlsh fails to format the 
> value.
> {code}
> create keyspace foo WITH replication = {'class': 'SimpleStrategy', 
> 'replication_factor': 1};
> CREATE TABLE foo.foo (id int primary key, m map<int, text>);
> insert into foo.foo (id, m) VALUES ( 1, {1: 'one', 2: 'two', 3:'three'});
> insert into foo.foo (id, m) VALUES ( 2, {1: '1one', 2: '2two', 3:'3three'});
> cqlsh> select id, m from foo.foo;
>  id | m
> ----+--------------------------------------
>   1 |{1: 'one', 2: 'two', 3: 'three'}
>   2 | {1: '1one', 2: '2two', 3: '3three'}
> (2 rows)
> cqlsh> select id, m as "weofjkewopf" from foo.foo;
>  id | weofjkewopf
> ----+------------------------------------------------------------------------
>   1 |OrderedMapSerializedKey([(1, u'one'), (2, u'two'), (3, u'three')])
>   2 | OrderedMapSerializedKey([(1, u'1one'), (2, u'2two'), (3, u'3three')])
> (2 rows)
> Failed to format value OrderedMapSerializedKey([(1, u'one'), (2, u'two'), (3, 
> u'three')]) : 'NoneType' object has no attribute 'sub_types'
> Failed to format value OrderedMapSerializedKey([(1, u'1one'), (2, u'2two'), 
> (3, u'3three')]) : 'NoneType' object has no attribute 'sub_types'
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11534) cqlsh fails to format collections when using aliases

2016-08-18 Thread Arunkumar M (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arunkumar M updated CASSANDRA-11534:

Assignee: (was: Arunkumar M)

> cqlsh fails to format collections when using aliases
> 
>
> Key: CASSANDRA-11534
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11534
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Robert Stupp
>Priority: Minor
>
> Given a simple table, selecting the columns without an alias works fine. 
> However, if the map is selected using an alias, cqlsh fails to format the 
> value.
> {code}
> create keyspace foo WITH replication = {'class': 'SimpleStrategy', 
> 'replication_factor': 1};
> CREATE TABLE foo.foo (id int primary key, m map<int, text>);
> insert into foo.foo (id, m) VALUES ( 1, {1: 'one', 2: 'two', 3:'three'});
> insert into foo.foo (id, m) VALUES ( 2, {1: '1one', 2: '2two', 3:'3three'});
> cqlsh> select id, m from foo.foo;
>  id | m
> ----+--------------------------------------
>   1 |{1: 'one', 2: 'two', 3: 'three'}
>   2 | {1: '1one', 2: '2two', 3: '3three'}
> (2 rows)
> cqlsh> select id, m as "weofjkewopf" from foo.foo;
>  id | weofjkewopf
> ----+------------------------------------------------------------------------
>   1 |OrderedMapSerializedKey([(1, u'one'), (2, u'two'), (3, u'three')])
>   2 | OrderedMapSerializedKey([(1, u'1one'), (2, u'2two'), (3, u'3three')])
> (2 rows)
> Failed to format value OrderedMapSerializedKey([(1, u'one'), (2, u'two'), (3, 
> u'three')]) : 'NoneType' object has no attribute 'sub_types'
> Failed to format value OrderedMapSerializedKey([(1, u'1one'), (2, u'2two'), 
> (3, u'3three')]) : 'NoneType' object has no attribute 'sub_types'
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)