[jira] [Assigned] (CASSANDRA-8355) NPE when passing wrong argument in ALTER TABLE statement

2014-12-16 Thread Benjamin Lerer (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Lerer reassigned CASSANDRA-8355:
-

Assignee: Benjamin Lerer

 NPE when passing wrong argument in ALTER TABLE statement
 

 Key: CASSANDRA-8355
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8355
 Project: Cassandra
  Issue Type: Bug
 Environment: Cassandra 2.1.2
Reporter: Pierre Laporte
Assignee: Benjamin Lerer
Priority: Minor
 Fix For: 2.1.3


 When I tried to change the caching strategy of a table, I provided a wrong 
 argument {{'rows_per_partition' : ALL}} with unquoted ALL. Cassandra returned 
 a SyntaxError, which is good, but it seems it was because of a 
 NullPointerException.
 *Howto*
 {code}
 CREATE TABLE foo (k int primary key);
 ALTER TABLE foo WITH caching = {'keys' : 'all', 'rows_per_partition' : ALL};
 {code}
 *Output*
 {code}
 ErrorMessage code=2000 [Syntax error in CQL query] message=Failed parsing 
 statement: [ALTER TABLE foo WITH caching = {'keys' : 'all', 
 'rows_per_partition' : ALL};] reason: NullPointerException null
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (CASSANDRA-6750) Support for UPDATE predicates

2014-12-16 Thread Benjamin Lerer (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Lerer reassigned CASSANDRA-6750:
-

Assignee: Benjamin Lerer

 Support for UPDATE predicates
 -

 Key: CASSANDRA-6750
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6750
 Project: Cassandra
  Issue Type: New Feature
  Components: API, Core
 Environment: 2.0.3
Reporter: nivance
Assignee: Benjamin Lerer
Priority: Minor
  Labels: cql3, ponies

 cqlsh:spdatarp> UPDATE t_spdatarpro_ro SET amount = 10 WHERE messageid = 
 '123456';
 Bad Request: Non PRIMARY KEY messageid found in where clause
 In this case, messageid is a secondary index. I want to update all rows whose 
 messageid is '123456', but it failed.





[jira] [Assigned] (CASSANDRA-6173) Unable to delete multiple entries using In clause on clustering part of compound key

2014-12-16 Thread Benjamin Lerer (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Lerer reassigned CASSANDRA-6173:
-

Assignee: Benjamin Lerer

 Unable to delete multiple entries using In clause on clustering part of 
 compound key
 

 Key: CASSANDRA-6173
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6173
 Project: Cassandra
  Issue Type: New Feature
  Components: Core
Reporter: Ashot Golovenko
Assignee: Benjamin Lerer
Priority: Minor

 I have the following table:
 CREATE TABLE user_relation (
 u1 bigint,
 u2 bigint,
 mf int,
 i boolean,
 PRIMARY KEY (u1, u2));
 And I'm trying to delete two entries using In clause on clustering part of 
 compound key and I fail to do so:
 cqlsh:bm> DELETE from user_relation WHERE u1 = 755349113 and u2 in 
 (13404014120, 12537242743);
 Bad Request: Invalid operator IN for PRIMARY KEY part u2
 Although the select statement works just fine:
 cqlsh:bm> select * from user_relation WHERE u1 = 755349113 and u2 in 
 (13404014120, 12537242743);
  u1        | u2          | i    | mf
 -----------+-------------+------+----
  755349113 | 12537242743 | null | 27
  755349113 | 13404014120 | null |  0
 (2 rows)
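Until DELETE accepts IN on a clustering column, a client can expand the IN list into one DELETE per clustering value itself. A minimal sketch of that workaround (plain string formatting for illustration only; a real client should go through a driver with prepared statements):

```python
def expand_delete_in(table, partition_col, partition_key, clustering_col, values):
    """Expand a DELETE ... IN (...) on a clustering column into one
    DELETE per clustering value, which Cassandra does accept."""
    return [
        f"DELETE FROM {table} WHERE {partition_col} = {partition_key} "
        f"AND {clustering_col} = {v};"
        for v in values
    ]

stmts = expand_delete_in("user_relation", "u1", 755349113, "u2",
                         [13404014120, 12537242743])
for s in stmts:
    print(s)
```

Batching the generated statements keeps the expansion atomic per partition, at the cost of one statement per value.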





[jira] [Created] (CASSANDRA-8490) Tombstone stops paging through resultset when distinct keyword is used.

2014-12-16 Thread Frank Limstrand (JIRA)
Frank Limstrand created CASSANDRA-8490:
--

 Summary: Tombstone stops paging through resultset when distinct 
keyword is used.
 Key: CASSANDRA-8490
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8490
 Project: Cassandra
  Issue Type: Bug
 Environment: Driver version: 2.1.3.
Cassandra version: 2.0.11/2.1.2.
Reporter: Frank Limstrand


Using paging demo code from 
https://github.com/PatrickCallaghan/datastax-paging-demo

The code creates and populates a table with 1000 entries and pages through them 
with setFetchSize set to 100. If we then delete one entry with 'cqlsh':

cqlsh:datastax_paging_demo> delete from datastax_paging_demo.products where 
productId = 'P142'; (The specified productId is number 6 in the resultset.)

and run the same query (Select * from) again we get:

[com.datastax.paging.Main.main()] INFO  com.datastax.paging.Main - Paging demo 
took 0 secs. Total Products : 999

which is what we would expect.


If we then change the select statement in dao/ProductDao.java (line 70) from 
"Select * from ..." to "Select DISTINCT productid from ..." we get this result:

[com.datastax.paging.Main.main()] INFO  com.datastax.paging.Main - Paging demo 
took 0 secs. Total Products : 99

So it looks like the tombstone stops the paging behaviour. Is this a bug?

DEBUG [Native-Transport-Requests:788] 2014-12-16 10:09:13,431 Message.java 
(line 319) Received: QUERY Select DISTINCT productid from 
datastax_paging_demo.products, v=2
DEBUG [Native-Transport-Requests:788] 2014-12-16 10:09:13,434 
AbstractQueryPager.java (line 98) Fetched 99 live rows
DEBUG [Native-Transport-Requests:788] 2014-12-16 10:09:13,434 
AbstractQueryPager.java (line 115) Got result (99) smaller than page size 
(100), considering pager exhausted
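The log lines above suggest why this happens: the pager declares itself exhausted as soon as a page comes back with fewer live rows than the fetch size, but with DISTINCT a tombstoned partition is filtered out after the page is cut, so the first page legitimately holds 99 live rows with nine more pages still to read. A toy model of that heuristic (not the actual AbstractQueryPager Java code):

```python
def page_distinct(partitions, deleted, fetch_size):
    """Toy model of the pager heuristic: read pages of partition keys,
    filter out tombstoned ones, and stop as soon as a page comes back
    smaller than fetch_size -- even if more pages remain."""
    total, pos = 0, 0
    while pos < len(partitions):
        page = partitions[pos:pos + fetch_size]
        pos += fetch_size
        live = [p for p in page if p not in deleted]
        total += len(live)
        if len(live) < fetch_size:  # "considering pager exhausted"
            break
    return total

keys = [f"P{i}" for i in range(1000)]
# Deleting one key leaves 99 live rows in the first page of 100, so the
# heuristic stops after one page instead of reading all ten.
print(page_distinct(keys, {"P5"}, 100))  # -> 99
```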


with kind regards
Frank Limstrand





[jira] [Commented] (CASSANDRA-6750) Support for UPDATE predicates

2014-12-16 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14248078#comment-14248078
 ] 

Sylvain Lebresne commented on CASSANDRA-6750:
-

I'm not entirely sure we should support this at all because that would require 
a read-before-write and I'd prefer leaving that kind of thing explicit to the 
user (by issuing a select, and then an update). And before someone bring this 
up, I'm well aware that we already have a read-before-write for lists, but 
that's more an example of a mistake we made and shouldn't reproduce than 
anything else.




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8463) Constant compaction under LCS

2014-12-16 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8463?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson updated CASSANDRA-8463:
---
Attachment: 0001-make-sure-we-set-lastCompactedKey-properly.patch

Seems we don't set lastCompactedKeys properly, meaning we always start from the 
sstable with the smallest key when trying to find compaction candidates. This 
could explain the weird compaction candidate picking above (the leveling 
becomes very unbalanced)

Could you try it out [~rbranson]?

 Constant compaction under LCS
 -

 Key: CASSANDRA-8463
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8463
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Hardware is recent 2-socket, 16-core (x2 Hyperthreaded), 
 144G RAM, solid-state storage.
 Platform is Linux 3.2.51, Oracle JDK 64-bit 1.7.0_65.
 Heap is 32G total, 4G newsize.
 8G/8G on-heap/off-heap memtables, offheap_buffer allocator, 0.5 
 memtable_cleanup_threshold
 concurrent_compactors: 20
Reporter: Rick Branson
Assignee: Marcus Eriksson
 Fix For: 2.1.3

 Attachments: 0001-better-logging.patch, 
 0001-make-sure-we-set-lastCompactedKey-properly.patch, log-for-8463.txt


 It appears that tables configured with LCS will completely re-compact 
 themselves over some period of time after upgrading from 2.0 to 2.1 (2.0.11 
 -> 2.1.2, specifically). It starts out with 10 pending tasks for an hour or 
 so, then starts building up, now with 50-100 tasks pending across the cluster 
 after 12 hours. These nodes are under heavy write load, but were easily able 
 to keep up in 2.0 (they rarely had 5 pending compaction tasks), so I don't 
 think it's LCS in 2.1 actually being worse, just perhaps some different LCS 
 behavior that causes the layout of tables from 2.0 to prompt the compactor to 
 reorganize them?
 The nodes flushed ~11MB SSTables under 2.0. They're currently flushing ~36MB 
 SSTables due to the improved memtable setup in 2.1. Before I upgraded the 
 entire cluster to 2.1, I noticed the problem and tried several variations on 
 the flush size, thinking perhaps the larger tables in L0 were causing some 
 kind of cascading compactions. Even if they're sized roughly like the 2.0 
 flushes were, the same behavior occurs. I also tried both enabling & disabling 
 STCS in L0 with no real change other than L0 began to back up faster, so I 
 left the STCS in L0 enabled.
 Tables are configured with 32MB sstable_size_in_mb, which was found to be an 
 improvement on the 160MB table size for compaction performance. Maybe this is 
 wrong now? Otherwise, the tables are configured with defaults. Compaction has 
 been unthrottled to help them catch-up. The compaction threads stay very 
 busy, with the cluster-wide CPU at 45% nice time. No nodes have completely 
 caught up yet. I'll update JIRA with status about their progress if anything 
 interesting happens.
 From a node around 12 hours ago, around an hour after the upgrade, with 19 
 pending compaction tasks:
 SSTables in each level: [6/4, 10, 105/100, 268, 0, 0, 0, 0, 0]
 SSTables in each level: [6/4, 10, 106/100, 271, 0, 0, 0, 0, 0]
 SSTables in each level: [1, 16/10, 105/100, 269, 0, 0, 0, 0, 0]
 SSTables in each level: [5/4, 10, 103/100, 272, 0, 0, 0, 0, 0]
 SSTables in each level: [4, 11/10, 105/100, 270, 0, 0, 0, 0, 0]
 SSTables in each level: [1, 12/10, 105/100, 271, 0, 0, 0, 0, 0]
 SSTables in each level: [1, 14/10, 104/100, 267, 0, 0, 0, 0, 0]
 SSTables in each level: [9/4, 10, 103/100, 265, 0, 0, 0, 0, 0]
 Recently, with 41 pending compaction tasks:
 SSTables in each level: [4, 13/10, 106/100, 269, 0, 0, 0, 0, 0]
 SSTables in each level: [4, 12/10, 106/100, 273, 0, 0, 0, 0, 0]
 SSTables in each level: [5/4, 11/10, 106/100, 271, 0, 0, 0, 0, 0]
 SSTables in each level: [4, 12/10, 103/100, 275, 0, 0, 0, 0, 0]
 SSTables in each level: [2, 13/10, 106/100, 273, 0, 0, 0, 0, 0]
 SSTables in each level: [3, 10, 104/100, 275, 0, 0, 0, 0, 0]
 SSTables in each level: [6/4, 11/10, 103/100, 269, 0, 0, 0, 0, 0]
 SSTables in each level: [4, 16/10, 105/100, 264, 0, 0, 0, 0, 0]
 More information about the use case: writes are roughly uniform across these 
 tables. The data is sharded across these 8 tables by key to improve 
 compaction parallelism. Each node receives up to 75,000 writes/sec sustained 
 at peak, and a small number of reads. This is a pre-production cluster that's 
 being warmed up with new data, so the low volume of reads (~100/sec per node) 
 is just from automatic sampled data checks, otherwise we'd just use STCS :)
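The "SSTables in each level" lines above use the {{count/target}} notation from the LCS manifest logging: a bare number means the level is at or under its target sstable count, while "6/4" means 6 sstables where 4 are expected, i.e. the level is over threshold and owes compaction work. A small parser for that output (an illustrative sketch; reading the notation as count/target is my interpretation of the log format):

```python
def parse_levels(line):
    """Parse a 'SSTables in each level: [6/4, 10, 105/100, ...]' log line.
    'count/target' marks a level holding more sstables than its target;
    a bare count is at or under target (target unknown, stored as None)."""
    inside = line[line.index("[") + 1:line.index("]")]
    levels = []
    for i, tok in enumerate(inside.split(", ")):
        if "/" in tok:
            count, target = map(int, tok.split("/"))
        else:
            count, target = int(tok), None
        levels.append((i, count, target))
    return levels

def overfull(levels):
    """Return the levels that exceed their target sstable count."""
    return [(lvl, count, target) for lvl, count, target in levels
            if target is not None and count > target]

line = "SSTables in each level: [6/4, 10, 105/100, 268, 0, 0, 0, 0, 0]"
print(overfull(parse_levels(line)))  # levels 0 and 2 are over threshold
```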





[jira] [Updated] (CASSANDRA-6382) Allow indexing nested types

2014-12-16 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-6382:

Fix Version/s: (was: 2.1.3)
   3.0

 Allow indexing nested types
 ---

 Key: CASSANDRA-6382
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6382
 Project: Cassandra
  Issue Type: New Feature
  Components: API, Core
Reporter: Jonathan Ellis
Assignee: Sylvain Lebresne
  Labels: cql
 Fix For: 3.0








[jira] [Commented] (CASSANDRA-6382) Allow indexing nested types

2014-12-16 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14248152#comment-14248152
 ] 

Sylvain Lebresne commented on CASSANDRA-6382:
-

Yep, done. Though I'll note that while this ticket will not modify the secondary 
index API to allow multiple indexes on a single column (that is left to 
CASSANDRA-7771), until the latter is resolved we'll have the limitation that 
only one index can be created for a given column.









[jira] [Commented] (CASSANDRA-8477) CMS GC can not recycle objects

2014-12-16 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14248165#comment-14248165
 ] 

Sylvain Lebresne commented on CASSANDRA-8477:
-

As Benedict said, I think that's more of a modeling issue (which Philo seems to 
have corrected since then). This is really exactly the same problem as with 
the queue anti-pattern, but applied to CQL: instead of cell tombstones we 
have CQL row tombstones (so, in practice, range tombstones). And as with cell 
tombstones, I don't think we can prune those ranges, because we need them on 
the coordinator for reconciliation (they may shadow some row on another replica). 
So for 2.0/2.1, I'm not sure there is much to do, except maybe extend 
tombstone_warning_threshold to cover range tombstones too (since, again, 
that's exactly the same problem)?

Longer term, CASSANDRA-8099 should indeed fix that since it won't materialize 
the full resultSet on the replicas anymore.

 CMS GC can not recycle objects
 --

 Key: CASSANDRA-8477
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8477
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: 2.1.1 or 2.1.2-SNAPSHOT(after CASSANDRA-8459 resolved)
Reporter: Philo Yang
Assignee: Sylvain Lebresne
 Attachments: cassandra.yaml, histo.txt, jstack.txt, log1.txt, 
 log2.txt, system.log, system.log.2014-12-15_1226


 I have a problem in my cluster: CMS full GC cannot reduce the size of the 
 old gen. Days ago I posted this problem to the mailing list; people thought 
 it would be solved by tuning the GC settings, however that didn't work for me. 
 Then I saw a similar bug in CASSANDRA-8447, but [~benedict] thinks it is not 
 related. With the jstack on 
 https://gist.github.com/yangzhe1991/755ea2a10520be1fe59a, [~benedict] found a 
 bug and resolved it in CASSANDRA-8459. So I built the latest version of the 
 2.1 branch and ran the SNAPSHOT version on the nodes with the GC trouble. 
 However, the GC issue is still there, so I think opening a new ticket and 
 posting more information is a good idea. Thanks for helping me.





[jira] [Updated] (CASSANDRA-8491) Updating counter after previous removal doesn't work

2014-12-16 Thread mlowicki (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8491?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mlowicki updated CASSANDRA-8491:

Description: 
{code}
cqlsh:sync> desc table user_quota;

CREATE TABLE sync.user_quota (
    user_id text PRIMARY KEY,
    entities counter
) WITH bloom_filter_fp_chance = 0.01
    AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
    AND comment = ''
    AND compaction = {'min_threshold': '4', 'class': 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 'max_threshold': '32'}
    AND compression = {'sstable_compression': 'org.apache.cassandra.io.compress.LZ4Compressor'}
    AND dclocal_read_repair_chance = 0.1
    AND default_time_to_live = 0
    AND gc_grace_seconds = 864000
    AND max_index_interval = 2048
    AND memtable_flush_period_in_ms = 0
    AND min_index_interval = 128
    AND read_repair_chance = 0.0
    AND speculative_retry = '99.0PERCENTILE';

cqlsh:sync> select * from user_quota where user_id = 'test';

 user_id | entities
---------+----------

(0 rows)
cqlsh:sync> update user_quota set entities = entities + 100 where user_id = 'test';
cqlsh:sync> select * from user_quota where user_id = 'test';

 user_id | entities
---------+----------
    test |      100

(1 rows)
cqlsh:sync> update user_quota set entities = entities + 100 where user_id = 'test';
cqlsh:sync> select * from user_quota where user_id = 'test';

 user_id | entities
---------+----------
    test |      200

(1 rows)
cqlsh:sync> delete from user_quota where user_id = 'test';
cqlsh:sync> select * from user_quota where user_id = 'test';

 user_id | entities
---------+----------

(0 rows)
cqlsh:sync> update user_quota set entities = entities + 100 where user_id = 'test';
cqlsh:sync> select * from user_quota where user_id = 'test';

 user_id | entities
---------+----------

(0 rows)
{code}



 Updating counter after previous removal doesn't work
 

 Key: CASSANDRA-8491
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8491
 Project: Cassandra
  Issue Type: Bug
Reporter: mlowicki


[jira] [Created] (CASSANDRA-8491) Updating counter after previous removal doesn't work

2014-12-16 Thread mlowicki (JIRA)
mlowicki created CASSANDRA-8491:
---

 Summary: Updating counter after previous removal doesn't work
 Key: CASSANDRA-8491
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8491
 Project: Cassandra
  Issue Type: Bug
Reporter: mlowicki








[jira] [Updated] (CASSANDRA-8491) Updating counter after previous removal doesn't work

2014-12-16 Thread mlowicki (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8491?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mlowicki updated CASSANDRA-8491:

Environment: Cassandra 2.1.2

 Updating counter after previous removal doesn't work
 

 Key: CASSANDRA-8491
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8491
 Project: Cassandra
  Issue Type: Bug
 Environment: Cassandra 2.1.2
Reporter: mlowicki






[jira] [Resolved] (CASSANDRA-8491) Updating counter after previous removal doesn't work

2014-12-16 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8491?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko resolved CASSANDRA-8491.
--
Resolution: Not a Problem

Unfortunately, that's just how counters work. Deletes don't commute with 
updates.

See CASSANDRA-7346.
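Why deletes don't commute with counter updates can be shown with a toy replica model: the same set of operations produces different final states depending on delivery order, so there is no single correct answer once an increment races with a delete. This is a deliberate simplification for illustration, not Cassandra's actual counter-shard implementation:

```python
def apply(state, op):
    """Toy counter replica: state is the counter value, or None when the
    counter has been deleted. Increments add to the current value
    (resurrecting a deleted counter from zero); a delete wipes the state."""
    kind, arg = op
    if kind == "inc":
        return (state or 0) + arg
    return None  # delete

def replay(ops):
    """Apply operations in delivery order and return the final state."""
    state = None
    for op in ops:
        state = apply(state, op)
    return state

ops = [("inc", 100), ("del", None), ("inc", 100)]
print(replay(ops))                       # delete delivered in the middle -> 100
print(replay([ops[0], ops[2], ops[1]]))  # same ops, delete delivered last -> None
```

Since replicas may see the same operations in different orders, the post-delete increment in the reporter's session has no well-defined result, which is why re-using a deleted counter is unsupported.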

 Updating counter after previous removal doesn't work
 

 Key: CASSANDRA-8491
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8491
 Project: Cassandra
  Issue Type: Bug
 Environment: Cassandra 2.1.2
Reporter: mlowicki






[jira] [Commented] (CASSANDRA-8458) Don't give out positions in an sstable beyond its first/last tokens

2014-12-16 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14248232#comment-14248232
 ] 

Marcus Eriksson commented on CASSANDRA-8458:


bq. Doesn't it strike you as a bug to use Operation.GT, instead of Operation.GE?

Hmm, we use range.toRowBounds, which creates a maxKeyBound for both the left 
and the right end of the range, meaning it can never be equal. Say we have an 
sstable with tokens 1, 2, 3, 4, 5, and we request positions for the range 
(2, 4]: here we should not include the 2, we want the position for the key 
after it. Also, for the right bound, we want the start position of the key 
that is GT 4, so that we include the 4. Using GE would also work, since we use 
the max key bound, but since we actually want GT, I think we should keep it 
that way?

Your refactoring looks good, but we still need to check whether left == right 
(if there are no keys in the requested range, we should not include the 
positions). Pushed here (with GT instead of GE): 
https://github.com/krummas/cassandra/commits/marcuse/8458-2
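The bound semantics above can be sketched with Python's bisect module (an analogy for the sstable index search, not the actual Java code): for the half-open range (2, 4] over tokens 1..5, a strictly-greater-than search on both ends selects exactly the keys 3 and 4, and an empty result falls out when left == right.

```python
import bisect

def positions_for_range(tokens, left, right):
    """For a half-open token range (left, right], return the slice of the
    sorted tokens to include: strictly greater than left, up to and
    including right. Mirrors using a GT search on both bounds."""
    lo = bisect.bisect_right(tokens, left)   # first token GT left
    hi = bisect.bisect_right(tokens, right)  # first token GT right
    if lo == hi:
        return []  # no keys in the requested range: include no positions
    return tokens[lo:hi]

print(positions_for_range([1, 2, 3, 4, 5], 2, 4))  # -> [3, 4]
print(positions_for_range([1, 2, 3, 4, 5], 2, 2))  # -> []
```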

 Don't give out positions in an sstable beyond its first/last tokens
 ---

 Key: CASSANDRA-8458
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8458
 Project: Cassandra
  Issue Type: Bug
Reporter: Marcus Eriksson
Assignee: Marcus Eriksson
 Fix For: 2.1.3

 Attachments: 
 0001-Make-sure-we-don-t-give-out-positions-from-an-sstabl.patch


 Looks like we include tmplink sstables in streams in 2.1+, and when we do, 
 sometimes we get this error message on the receiving side: 
 {{java.io.IOException: Corrupt input data, block did not start with 2 byte 
 signature ('ZV') followed by type byte, 2-byte length)}}. I've only seen this 
 happen when a tmplink sstable is included in the stream.
 We can not just exclude the tmplink files when starting the stream - we need 
 to include the original file, which we might miss since we check if the 
 requested stream range intersects the sstable range.





[jira] [Commented] (CASSANDRA-6750) Support for UPDATE predicates

2014-12-16 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14248233#comment-14248233
 ] 

Aleksey Yeschenko commented on CASSANDRA-6750:
--

Agreed. We must not introduce any more implicit read-before-write ops. Some 
convenience is not worth breaking write request assumptions for.






[jira] [Updated] (CASSANDRA-8490) Tombstone stops paging through resultset when distinct keyword is used.

2014-12-16 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-8490:
---
Reproduced In: 2.1.2, 2.0.11
Fix Version/s: 2.1.3
   2.0.12
 Assignee: Tyler Hobbs

 Tombstone stops paging through resultset when distinct keyword is used.
 ---

 Key: CASSANDRA-8490
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8490
 Project: Cassandra
  Issue Type: Bug
 Environment: Driver version: 2.1.3.
 Cassandra version: 2.0.11/2.1.2.
Reporter: Frank Limstrand
Assignee: Tyler Hobbs
 Fix For: 2.0.12, 2.1.3


 Using paging demo code from 
 https://github.com/PatrickCallaghan/datastax-paging-demo
 The code creates and populates a table with 1000 entries and pages through 
 them with setFetchSize set to 100. If we then delete one entry with 'cqlsh':
 cqlsh:datastax_paging_demo delete from datastax_paging_demo.products  where 
 productId = 'P142'; (The specified productid is number 6 in the resultset.)
 and run the same query (Select * from) again we get:
 [com.datastax.paging.Main.main()] INFO  com.datastax.paging.Main - Paging 
 demo took 0 secs. Total Products : 999
 which is what we would expect.
 If we then change the select statement in dao/ProductDao.java (line 70) 
 from Select * from  to Select DISTINCT productid from  we get this result:
 [com.datastax.paging.Main.main()] INFO  com.datastax.paging.Main - Paging 
 demo took 0 secs. Total Products : 99
 So it looks like the tombstone stops the paging behaviour. Is this a bug?
 DEBUG [Native-Transport-Requests:788] 2014-12-16 10:09:13,431 Message.java 
 (line 319) Received: QUERY Select DISTINCT productid from 
 datastax_paging_demo.products, v=2
 DEBUG [Native-Transport-Requests:788] 2014-12-16 10:09:13,434 
 AbstractQueryPager.java (line 98) Fetched 99 live rows
 DEBUG [Native-Transport-Requests:788] 2014-12-16 10:09:13,434 
 AbstractQueryPager.java (line 115) Got result (99) smaller than page size 
 (100), considering pager exhausted
 with kind regards
 Frank Limstrand
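The symptom above matches a pager that treats any short page as the end of the result set. A minimal sketch of that logic in plain Python (all names, including `fetch_page`, are illustrative stand-ins for one server round trip, not Cassandra's actual code):

```python
def page_through(fetch_page, fetch_size):
    """Page through results, stopping as soon as a page comes back short --
    the "result smaller than page size, considering pager exhausted"
    behaviour shown in the AbstractQueryPager debug log above."""
    results, offset = [], 0
    while True:
        page = fetch_page(offset, fetch_size)
        results.extend(page)
        offset += fetch_size
        if len(page) < fetch_size:  # short page treated as "no more pages"
            break
    return results

# 1000 products, one of which has been deleted (tombstoned) server-side.
rows = ["P%d" % i for i in range(1000)]
tombstoned = {"P5"}

def fetch_page(offset, limit):
    # The server scans `limit` partitions but filters out the tombstoned
    # one, so the first page holds only 99 live rows.
    return [r for r in rows[offset:offset + limit] if r not in tombstoned]

total = len(page_through(fetch_page, 100))
print(total)
```

A pager that is robust to tombstones has to distinguish "short page" from "no more pages", for example by relying on the paging state the server returns rather than on the page length.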



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-6750) Support for UPDATE predicates

2014-12-16 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14248263#comment-14248263
 ] 

Benjamin Lerer commented on CASSANDRA-6750:
---

I think part of the problem here is that, for years, people have worked with 
SQL, where this operation is perfectly normal (I do not know of any serious 
relational database that does not support it). I know that we have other 
problems that relational databases do not have, but from a user's point of 
view that is not obvious, and it is really confusing.

What is the problem with read-before-write?

 Support for UPDATE predicates
 -

 Key: CASSANDRA-6750
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6750
 Project: Cassandra
  Issue Type: New Feature
  Components: API, Core
 Environment: 2.0.3
Reporter: nivance
Assignee: Benjamin Lerer
Priority: Minor
  Labels: cql3, ponies

 cqlsh:spdatarp> UPDATE t_spdatarpro_ro SET amount = 10 WHERE messageid = 
 '123456';
 Bad Request: Non PRIMARY KEY messageid found in where clause
 In this case, messageid is a secondary index. I want to update all rows whose 
 messageid is '123456', but it fails.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8399) Reference Counter exception when dropping user type

2014-12-16 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14248286#comment-14248286
 ] 

Marcus Eriksson commented on CASSANDRA-8399:


Patch is +1 from me. Could you make sure you can't repro after this, 
[~philipthompson]?

 Reference Counter exception when dropping user type
 ---

 Key: CASSANDRA-8399
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8399
 Project: Cassandra
  Issue Type: Bug
Reporter: Philip Thompson
Assignee: Joshua McKenzie
 Fix For: 2.1.3

 Attachments: node2.log, ubuntu-8399.log


 When running the dtest 
 {{user_types_test.py:TestUserTypes.test_type_keyspace_permission_isolation}} 
 with the current 2.1-HEAD code, very frequently, but not always, when 
 dropping a type, the following exception is seen:{code}
 ERROR [MigrationStage:1] 2014-12-01 13:54:54,824 CassandraDaemon.java:170 - 
 Exception in thread Thread[MigrationStage:1,5,main]
 java.lang.AssertionError: Reference counter -1 for 
 /var/folders/v3/z4wf_34n1q506_xjdy49gb78gn/T/dtest-eW2RXj/test/node2/data/system/schema_keyspaces-b0f2235744583cdb9631c43e59ce3676/system-sche
 ma_keyspaces-ka-14-Data.db
 at 
 org.apache.cassandra.io.sstable.SSTableReader.releaseReference(SSTableReader.java:1662)
  ~[main/:na]
 at 
 org.apache.cassandra.io.sstable.SSTableScanner.close(SSTableScanner.java:164) 
 ~[main/:na]
 at 
 org.apache.cassandra.utils.MergeIterator.close(MergeIterator.java:62) 
 ~[main/:na]
 at 
 org.apache.cassandra.db.ColumnFamilyStore$8.close(ColumnFamilyStore.java:1943)
  ~[main/:na]
 at 
 org.apache.cassandra.db.ColumnFamilyStore.filter(ColumnFamilyStore.java:2116) 
 ~[main/:na]
 at 
 org.apache.cassandra.db.ColumnFamilyStore.getRangeSlice(ColumnFamilyStore.java:2029)
  ~[main/:na]
 at 
 org.apache.cassandra.db.ColumnFamilyStore.getRangeSlice(ColumnFamilyStore.java:1963)
  ~[main/:na]
 at 
 org.apache.cassandra.db.SystemKeyspace.serializedSchema(SystemKeyspace.java:744)
  ~[main/:na]
 at 
 org.apache.cassandra.db.SystemKeyspace.serializedSchema(SystemKeyspace.java:731)
  ~[main/:na]
 at org.apache.cassandra.config.Schema.updateVersion(Schema.java:374) 
 ~[main/:na]
 at 
 org.apache.cassandra.config.Schema.updateVersionAndAnnounce(Schema.java:399) 
 ~[main/:na]
 at 
 org.apache.cassandra.db.DefsTables.mergeSchema(DefsTables.java:167) 
 ~[main/:na]
 at 
 org.apache.cassandra.db.DefinitionsUpdateVerbHandler$1.runMayThrow(DefinitionsUpdateVerbHandler.java:49)
  ~[main/:na]
 at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
 ~[main/:na]
 at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
 ~[na:1.7.0_67]
 at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
 ~[na:1.7.0_67]
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
  ~[na:1.7.0_67]
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
  [na:1.7.0_67]
 at java.lang.Thread.run(Thread.java:745) [na:1.7.0_67]{code}
 Log of the node with the error is attached.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8399) Reference Counter exception when dropping user type

2014-12-16 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8399?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-8399:
---
Tester: Philip Thompson

 Reference Counter exception when dropping user type
 ---

 Key: CASSANDRA-8399
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8399
 Project: Cassandra
  Issue Type: Bug
Reporter: Philip Thompson
Assignee: Joshua McKenzie
 Fix For: 2.1.3

 Attachments: node2.log, ubuntu-8399.log


 When running the dtest 
 {{user_types_test.py:TestUserTypes.test_type_keyspace_permission_isolation}} 
 with the current 2.1-HEAD code, very frequently, but not always, when 
 dropping a type, the following exception is seen:{code}
 ERROR [MigrationStage:1] 2014-12-01 13:54:54,824 CassandraDaemon.java:170 - 
 Exception in thread Thread[MigrationStage:1,5,main]
 java.lang.AssertionError: Reference counter -1 for 
 /var/folders/v3/z4wf_34n1q506_xjdy49gb78gn/T/dtest-eW2RXj/test/node2/data/system/schema_keyspaces-b0f2235744583cdb9631c43e59ce3676/system-sche
 ma_keyspaces-ka-14-Data.db
 at 
 org.apache.cassandra.io.sstable.SSTableReader.releaseReference(SSTableReader.java:1662)
  ~[main/:na]
 at 
 org.apache.cassandra.io.sstable.SSTableScanner.close(SSTableScanner.java:164) 
 ~[main/:na]
 at 
 org.apache.cassandra.utils.MergeIterator.close(MergeIterator.java:62) 
 ~[main/:na]
 at 
 org.apache.cassandra.db.ColumnFamilyStore$8.close(ColumnFamilyStore.java:1943)
  ~[main/:na]
 at 
 org.apache.cassandra.db.ColumnFamilyStore.filter(ColumnFamilyStore.java:2116) 
 ~[main/:na]
 at 
 org.apache.cassandra.db.ColumnFamilyStore.getRangeSlice(ColumnFamilyStore.java:2029)
  ~[main/:na]
 at 
 org.apache.cassandra.db.ColumnFamilyStore.getRangeSlice(ColumnFamilyStore.java:1963)
  ~[main/:na]
 at 
 org.apache.cassandra.db.SystemKeyspace.serializedSchema(SystemKeyspace.java:744)
  ~[main/:na]
 at 
 org.apache.cassandra.db.SystemKeyspace.serializedSchema(SystemKeyspace.java:731)
  ~[main/:na]
 at org.apache.cassandra.config.Schema.updateVersion(Schema.java:374) 
 ~[main/:na]
 at 
 org.apache.cassandra.config.Schema.updateVersionAndAnnounce(Schema.java:399) 
 ~[main/:na]
 at 
 org.apache.cassandra.db.DefsTables.mergeSchema(DefsTables.java:167) 
 ~[main/:na]
 at 
 org.apache.cassandra.db.DefinitionsUpdateVerbHandler$1.runMayThrow(DefinitionsUpdateVerbHandler.java:49)
  ~[main/:na]
 at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
 ~[main/:na]
 at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
 ~[na:1.7.0_67]
 at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
 ~[na:1.7.0_67]
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
  ~[na:1.7.0_67]
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
  [na:1.7.0_67]
 at java.lang.Thread.run(Thread.java:745) [na:1.7.0_67]{code}
 Log of the node with the error is attached.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7909) Do not exit nodetool repair when receiving JMX NOTIF_LOST

2014-12-16 Thread Razi Khaja (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14248303#comment-14248303
 ] 

Razi Khaja commented on CASSANDRA-7909:
---

This is still not fixed in version 2.1.2

 Do not exit nodetool repair when receiving JMX NOTIF_LOST
 -

 Key: CASSANDRA-7909
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7909
 Project: Cassandra
  Issue Type: Improvement
Reporter: Yuki Morishita
Assignee: Yuki Morishita
Priority: Trivial
 Fix For: 2.0.11, 2.1.1

 Attachments: 
 0001-Do-not-quit-nodetool-when-JMX-notif_lost-received.patch


 {{nodetool repair}} prints out 'Lost notification...' and exits when a JMX 
 NOTIF_LOST message is received. But we should not exit right away, since that 
 message only indicates that some notifications were lost because they arrived 
 faster than they could be delivered to the remote client (see 
 https://weblogs.java.net/blog/emcmanus/archive/2007/08/when_can_jmx_no.html). 
 We should instead keep listening for events until the repair finishes or the 
 connection is really closed.
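The proposed behaviour amounts to treating a lost-notification marker as a warning rather than a terminal event. A small illustrative sketch (hypothetical event names, not the real JMX listener API):

```python
def wait_for_repair(events):
    """Consume repair notifications until completion. A "notif_lost"
    marker is counted (it means some progress messages were dropped
    upstream) but does not stop the loop; only repair completion or the
    end of the event stream (connection closed) terminates it."""
    lost = 0
    for kind in events:
        if kind == "notif_lost":
            lost += 1  # messages were dropped; keep listening anyway
        elif kind == "complete":
            return ("done", lost)
    return ("connection_closed", lost)

print(wait_for_repair(["progress", "notif_lost", "progress", "complete"]))
```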



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8365) CamelCase name is used as index name instead of lowercase

2014-12-16 Thread Benjamin Lerer (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Lerer updated CASSANDRA-8365:
--
Attachment: CASSANDRA-8365.txt

The patch changes the way index names are handled in {{Cql.g}} and in 
{{cql3handling.py}}, and adds unit tests to verify the behaviour.

 CamelCase name is used as index name instead of lowercase
 -

 Key: CASSANDRA-8365
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8365
 Project: Cassandra
  Issue Type: Bug
Reporter: Pierre Laporte
Assignee: Benjamin Lerer
Priority: Minor
  Labels: cqlsh
 Fix For: 2.1.3

 Attachments: CASSANDRA-8365.txt


 In cqlsh, when I execute a CREATE INDEX FooBar ... statement, the CamelCase 
 name is used as the index name, even though it is unquoted. Trying to quote 
 the index name results in a syntax error.
 However, when I try to delete the index, I have to quote the index name, 
 otherwise I get an invalid-query error telling me that the index (lowercase) 
 does not exist.
 This seems inconsistent. Shouldn't the index name be lowercased before the 
 index is created?
 Here is the code to reproduce the issue:
 {code}
 cqlsh:schemabuilderit> CREATE TABLE IndexTest (a int primary key, b int);
 cqlsh:schemabuilderit> CREATE INDEX FooBar on indextest (b);
 cqlsh:schemabuilderit> DESCRIBE TABLE indextest ;
 CREATE TABLE schemabuilderit.indextest (
 a int PRIMARY KEY,
 b int
 ) ;
 CREATE INDEX FooBar ON schemabuilderit.indextest (b);
 cqlsh:schemabuilderit> DROP INDEX FooBar;
 code=2200 [Invalid query] message=Index 'foobar' could not be found in any 
 of the tables of keyspace 'schemabuilderit'
 {code}
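The case-folding rule the report expects, and which the CREATE INDEX path fails to apply, can be sketched as follows (illustrative Python, not the actual Cql.g grammar): unquoted identifiers fold to lowercase, double-quoted ones keep their exact case.

```python
def normalize_identifier(ident):
    """Fold an unquoted CQL identifier to lowercase; a double-quoted
    identifier keeps its exact case, with doubled quotes unescaped."""
    if len(ident) >= 2 and ident.startswith('"') and ident.endswith('"'):
        return ident[1:-1].replace('""', '"')  # strip quotes, keep case
    return ident.lower()

print(normalize_identifier("FooBar"))    # folded
print(normalize_identifier('"FooBar"'))  # preserved
```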



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8365) CamelCase name is used as index name instead of lowercase

2014-12-16 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14248330#comment-14248330
 ] 

Benjamin Lerer commented on CASSANDRA-8365:
---

[~thobbs] could you review? 

 CamelCase name is used as index name instead of lowercase
 -

 Key: CASSANDRA-8365
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8365
 Project: Cassandra
  Issue Type: Bug
Reporter: Pierre Laporte
Assignee: Benjamin Lerer
Priority: Minor
  Labels: cqlsh, docs
 Fix For: 2.1.3

 Attachments: CASSANDRA-8365.txt


 In cqlsh, when I execute a CREATE INDEX FooBar ... statement, the CamelCase 
 name is used as the index name, even though it is unquoted. Trying to quote 
 the index name results in a syntax error.
 However, when I try to delete the index, I have to quote the index name, 
 otherwise I get an invalid-query error telling me that the index (lowercase) 
 does not exist.
 This seems inconsistent. Shouldn't the index name be lowercased before the 
 index is created?
 Here is the code to reproduce the issue:
 {code}
 cqlsh:schemabuilderit> CREATE TABLE IndexTest (a int primary key, b int);
 cqlsh:schemabuilderit> CREATE INDEX FooBar on indextest (b);
 cqlsh:schemabuilderit> DESCRIBE TABLE indextest ;
 CREATE TABLE schemabuilderit.indextest (
 a int PRIMARY KEY,
 b int
 ) ;
 CREATE INDEX FooBar ON schemabuilderit.indextest (b);
 cqlsh:schemabuilderit> DROP INDEX FooBar;
 code=2200 [Invalid query] message=Index 'foobar' could not be found in any 
 of the tables of keyspace 'schemabuilderit'
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-8393) support quoted identifiers for index names

2014-12-16 Thread Benjamin Lerer (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Lerer resolved CASSANDRA-8393.
---
Resolution: Fixed

This problem was fixed as part of the patch for CASSANDRA-8365.

 support quoted identifiers for index names
 --

 Key: CASSANDRA-8393
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8393
 Project: Cassandra
  Issue Type: Bug
 Environment: v2.1.2
Reporter: Jonathan Halliday
Assignee: Benjamin Lerer
 Fix For: 2.1.3


 CREATE TABLE quoted_ident ...
 is valid in CQL, whilst
 CREATE INDEX quoted_ident ...
 is not.
 This is inconsistent and troublesome for frameworks or tooling that need to 
 sling around case-sensitive identifiers.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8477) CMS GC can not recycle objects

2014-12-16 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14248344#comment-14248344
 ] 

Jonathan Ellis commented on CASSANDRA-8477:
---

bq. for 2.0/2.1, I'm not sure there is much to do except maybe extend the 
tombstone_warning_threshold to include range tombstones too 

Yes, we should do that.

 CMS GC can not recycle objects
 --

 Key: CASSANDRA-8477
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8477
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: 2.1.1 or 2.1.2-SNAPSHOT(after CASSANDRA-8459 resolved)
Reporter: Philo Yang
Assignee: Sylvain Lebresne
 Attachments: cassandra.yaml, histo.txt, jstack.txt, log1.txt, 
 log2.txt, system.log, system.log.2014-12-15_1226


 I have a problem in my cluster: CMS full GC cannot reduce the size of the 
 old gen. Days ago I posted this problem to the mailing list; people thought 
 it could be solved by tuning the GC settings, but that has not worked for me. 
 Then I saw a similar bug in CASSANDRA-8447, but [~benedict] thinks it is not 
 related. With the jstack output at 
 https://gist.github.com/yangzhe1991/755ea2a10520be1fe59a, [~benedict] found a 
 bug and resolved it in CASSANDRA-8459. So I built the latest version of the 
 2.1 branch and ran that SNAPSHOT on the nodes with the GC trouble. 
 However, the GC issue is still there, so I think opening a new ticket and 
 posting more information is a good idea. Thanks for helping me.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8449) Allow zero-copy reads again

2014-12-16 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14248358#comment-14248358
 ] 

Jonathan Ellis commented on CASSANDRA-8449:
---

Jake, are you taking a stab at the OpOrder approach?

 Allow zero-copy reads again
 ---

 Key: CASSANDRA-8449
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8449
 Project: Cassandra
  Issue Type: Improvement
Reporter: T Jake Luciani
Assignee: T Jake Luciani
Priority: Minor
  Labels: performance
 Fix For: 3.0


 We disabled zero-copy reads in CASSANDRA-3179 due to in-flight reads 
 accessing a ByteBuffer when the data was unmapped by compaction. Currently 
 this code path is only used for uncompressed reads.
 The actual bytes are in fact copied to the client output buffers for both 
 netty and thrift before being sent over the wire, so the only real issue is 
 the time it takes to process the read internally.
 This patch adds a slow network read test and changes the tidy() method to 
 actually delete an sstable only once the readTimeout has elapsed, giving 
 plenty of time to serialize the read.
 Removing this copy causes significantly less GC on the read path and 
 improves the tail latencies:
 http://cstar.datastax.com/graph?stats=c0c8ce16-7fea-11e4-959d-42010af0688f&metric=gc_count&operation=2_read&smoothing=1&show_aggregates=true&xmin=0&xmax=109.34&ymin=0&ymax=5.5
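The deferred-delete idea in the patch (keep an obsolete sstable on disk until the read timeout has passed, so in-flight reads referencing its mapped buffers can finish) can be sketched like this. Illustrative Python with an injectable clock for determinism; the class and method names are hypothetical, not Cassandra's:

```python
class DeferredDelete:
    """Instead of unlinking an obsolete sstable immediately, remember when
    it became obsolete and delete it only after read_timeout has elapsed."""
    def __init__(self, read_timeout, delete_fn, clock):
        self.read_timeout = read_timeout
        self.delete_fn = delete_fn    # called with the path to remove
        self.clock = clock            # returns current time in seconds
        self.pending = {}             # path -> time it became obsolete

    def mark_obsolete(self, path):
        self.pending[path] = self.clock()

    def maybe_delete(self):
        now = self.clock()
        for path, when in list(self.pending.items()):
            if now - when >= self.read_timeout:
                self.delete_fn(path)
                del self.pending[path]

# Demo with a fake clock so the timing is deterministic.
deleted, t = [], [0.0]
dd = DeferredDelete(5.0, deleted.append, clock=lambda: t[0])
dd.mark_obsolete("sstable-ka-14-Data.db")
dd.maybe_delete()   # too early: nothing deleted yet
t[0] = 6.0
dd.maybe_delete()   # timeout elapsed: file is deleted
print(deleted)
```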



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8449) Allow zero-copy reads again

2014-12-16 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14248359#comment-14248359
 ] 

T Jake Luciani commented on CASSANDRA-8449:
---

Yes

 Allow zero-copy reads again
 ---

 Key: CASSANDRA-8449
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8449
 Project: Cassandra
  Issue Type: Improvement
Reporter: T Jake Luciani
Assignee: T Jake Luciani
Priority: Minor
  Labels: performance
 Fix For: 3.0


 We disabled zero-copy reads in CASSANDRA-3179 due to in-flight reads 
 accessing a ByteBuffer when the data was unmapped by compaction. Currently 
 this code path is only used for uncompressed reads.
 The actual bytes are in fact copied to the client output buffers for both 
 netty and thrift before being sent over the wire, so the only real issue is 
 the time it takes to process the read internally.
 This patch adds a slow network read test and changes the tidy() method to 
 actually delete an sstable only once the readTimeout has elapsed, giving 
 plenty of time to serialize the read.
 Removing this copy causes significantly less GC on the read path and 
 improves the tail latencies:
 http://cstar.datastax.com/graph?stats=c0c8ce16-7fea-11e4-959d-42010af0688f&metric=gc_count&operation=2_read&smoothing=1&show_aggregates=true&xmin=0&xmax=109.34&ymin=0&ymax=5.5



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8480) Update of primary key should be possible

2014-12-16 Thread Jason Kania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14248363#comment-14248363
 ] 

Jason Kania commented on CASSANDRA-8480:


Ultimately, as active contributors, you will decide whether you want to do 
this, but the issue is that the usability of the DB is seriously hampered. I 
have read many responses from the Cassandra development team stating that the 
schema modeling is incorrect, but without a really comprehensive set of 
examples explaining how one would model such scenarios, this will drive away 
users. I personally have no idea how I could model things other than the way 
I have, given the circular dependencies I stated above. Accommodating these 
restrictions has been the majority of my modeling effort, and if you look at 
the problems many people encounter and ask for assistance with, they are 
usually tied to these restrictions.

The problem I stated above (if you can update a column you can't search for 
it, and if you can search on a column you can't update it) shuts down many 
uses of the database.

 Update of primary key should be possible
 

 Key: CASSANDRA-8480
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8480
 Project: Cassandra
  Issue Type: Improvement
  Components: API
Reporter: Jason Kania

 While attempting to update a column in a row, I encountered the error
 PRIMARY KEY part thingy found in SET part
 The error is not helpful, as it doesn't state why this is a problem, so I 
 looked on Google and found many, many posts from people who have hit the 
 issue, including those with single-column tables who have to hack around it.
 After looking further in the documentation, I discovered that it is not 
 possible to update a primary key, but I still have not found a good 
 explanation. I suspect this is because it would change the indexing location 
 of the record, effectively requiring a delete followed by an insert. If the 
 question is one of guaranteeing no update to a deleted row, a client will 
 have the same issue.
 To me, this really should be handled behind the API because:
 1) updating any column is an expected capability in a database, and these 
 limitations only put off potential users, especially when they have to 
 discover the limitation after the fact;
 2) using a column in a WHERE clause requires it to be part of the primary 
 key, so this limitation means that if you can update a column you can't 
 search for it, and if you can search on a column you can't update it, which 
 leaves a serious gap in handling a wide number of use cases;
 3) deleting and inserting a row with an updated primary key means pulling 
 all the data in the row up to the client and sending it all back down, even 
 when only a single column in the primary key was updated.
 Why not document the issue but make the interface more usable by supporting 
 the operation?
 Jason



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8399) Reference Counter exception when dropping user type

2014-12-16 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14248365#comment-14248365
 ] 

Philip Thompson commented on CASSANDRA-8399:


+1 from me. So far I'm unable to reproduce after the patch.

 Reference Counter exception when dropping user type
 ---

 Key: CASSANDRA-8399
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8399
 Project: Cassandra
  Issue Type: Bug
Reporter: Philip Thompson
Assignee: Joshua McKenzie
 Fix For: 2.1.3

 Attachments: node2.log, ubuntu-8399.log


 When running the dtest 
 {{user_types_test.py:TestUserTypes.test_type_keyspace_permission_isolation}} 
 with the current 2.1-HEAD code, very frequently, but not always, when 
 dropping a type, the following exception is seen:{code}
 ERROR [MigrationStage:1] 2014-12-01 13:54:54,824 CassandraDaemon.java:170 - 
 Exception in thread Thread[MigrationStage:1,5,main]
 java.lang.AssertionError: Reference counter -1 for 
 /var/folders/v3/z4wf_34n1q506_xjdy49gb78gn/T/dtest-eW2RXj/test/node2/data/system/schema_keyspaces-b0f2235744583cdb9631c43e59ce3676/system-sche
 ma_keyspaces-ka-14-Data.db
 at 
 org.apache.cassandra.io.sstable.SSTableReader.releaseReference(SSTableReader.java:1662)
  ~[main/:na]
 at 
 org.apache.cassandra.io.sstable.SSTableScanner.close(SSTableScanner.java:164) 
 ~[main/:na]
 at 
 org.apache.cassandra.utils.MergeIterator.close(MergeIterator.java:62) 
 ~[main/:na]
 at 
 org.apache.cassandra.db.ColumnFamilyStore$8.close(ColumnFamilyStore.java:1943)
  ~[main/:na]
 at 
 org.apache.cassandra.db.ColumnFamilyStore.filter(ColumnFamilyStore.java:2116) 
 ~[main/:na]
 at 
 org.apache.cassandra.db.ColumnFamilyStore.getRangeSlice(ColumnFamilyStore.java:2029)
  ~[main/:na]
 at 
 org.apache.cassandra.db.ColumnFamilyStore.getRangeSlice(ColumnFamilyStore.java:1963)
  ~[main/:na]
 at 
 org.apache.cassandra.db.SystemKeyspace.serializedSchema(SystemKeyspace.java:744)
  ~[main/:na]
 at 
 org.apache.cassandra.db.SystemKeyspace.serializedSchema(SystemKeyspace.java:731)
  ~[main/:na]
 at org.apache.cassandra.config.Schema.updateVersion(Schema.java:374) 
 ~[main/:na]
 at 
 org.apache.cassandra.config.Schema.updateVersionAndAnnounce(Schema.java:399) 
 ~[main/:na]
 at 
 org.apache.cassandra.db.DefsTables.mergeSchema(DefsTables.java:167) 
 ~[main/:na]
 at 
 org.apache.cassandra.db.DefinitionsUpdateVerbHandler$1.runMayThrow(DefinitionsUpdateVerbHandler.java:49)
  ~[main/:na]
 at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
 ~[main/:na]
 at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
 ~[na:1.7.0_67]
 at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
 ~[na:1.7.0_67]
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
  ~[na:1.7.0_67]
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
  [na:1.7.0_67]
 at java.lang.Thread.run(Thread.java:745) [na:1.7.0_67]{code}
 Log of the node with the error is attached.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8480) Update of primary key should be possible

2014-12-16 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14248388#comment-14248388
 ] 

Aleksey Yeschenko commented on CASSANDRA-8480:
--

Don't get me wrong. We obviously want Cassandra to be as usable as possible, 
and CQL as expressive as possible, too.

Yet, there are things that are fundamentally anti-Cassandra, and this request 
is one of those things. One of the core principles of Cassandra is not 
introducing implicit reads to the write path, thus having the write path have 
consistent performance characteristics.

Another ticket similar to this one is CASSANDRA-6750. That one would also be 
good to have. We recognize that. That said, being a distributed database with a 
focus on scaling out, we have to restrict certain functionality that doesn't 
fit the overall direction of the project :(

 Update of primary key should be possible
 

 Key: CASSANDRA-8480
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8480
 Project: Cassandra
  Issue Type: Improvement
  Components: API
Reporter: Jason Kania

 While attempting to update a column in a row, I encountered the error
 PRIMARY KEY part thingy found in SET part
 The error is not helpful, as it doesn't state why this is a problem, so I 
 looked on Google and found many, many posts from people who have hit the 
 issue, including those with single-column tables who have to hack around it.
 After looking further in the documentation, I discovered that it is not 
 possible to update a primary key, but I still have not found a good 
 explanation. I suspect this is because it would change the indexing location 
 of the record, effectively requiring a delete followed by an insert. If the 
 question is one of guaranteeing no update to a deleted row, a client will 
 have the same issue.
 To me, this really should be handled behind the API because:
 1) updating any column is an expected capability in a database, and these 
 limitations only put off potential users, especially when they have to 
 discover the limitation after the fact;
 2) using a column in a WHERE clause requires it to be part of the primary 
 key, so this limitation means that if you can update a column you can't 
 search for it, and if you can search on a column you can't update it, which 
 leaves a serious gap in handling a wide number of use cases;
 3) deleting and inserting a row with an updated primary key means pulling 
 all the data in the row up to the client and sending it all back down, even 
 when only a single column in the primary key was updated.
 Why not document the issue but make the interface more usable by supporting 
 the operation?
 Jason



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8448) Comparison method violates its general contract in AbstractEndpointSnitch

2014-12-16 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14248389#comment-14248389
 ] 

Sylvain Lebresne commented on CASSANDRA-8448:
-

Pretty sure that's because DynamicEndpointSnitch scores can change in the 
middle of {{sortByProximity}}. We'd need to snapshot the scores somehow before 
the sorting. 
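The snapshot idea can be sketched as follows (illustrative Python; DynamicEndpointSnitch itself is Java, where comparing against a concurrently mutated score map mid-sort is what triggers the "comparison method violates its general contract" failure). Copy the mutable score map once, then sort against the copy so every comparison sees one consistent view:

```python
def sort_by_proximity(endpoints, live_scores):
    """Sort endpoints by their latency score. `live_scores` may be updated
    concurrently by the score updater, so we take a point-in-time snapshot
    and sort against that, guaranteeing a consistent total order for the
    duration of the sort."""
    snapshot = dict(live_scores)  # one consistent view for this sort
    return sorted(endpoints, key=lambda e: snapshot.get(e, 0.0))

print(sort_by_proximity(["10.0.0.1", "10.0.0.2", "10.0.0.3"],
                        {"10.0.0.1": 0.3, "10.0.0.2": 0.1, "10.0.0.3": 0.2}))
```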

 Comparison method violates its general contract in AbstractEndpointSnitch
 ---

 Key: CASSANDRA-8448
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8448
 Project: Cassandra
  Issue Type: Bug
Reporter: J.B. Langston

 Seen in both 1.2 and 2.0.  The error is occurring here: 
 https://github.com/apache/cassandra/blob/cassandra-2.0/src/java/org/apache/cassandra/locator/AbstractEndpointSnitch.java#L49
 {code}
 ERROR [Thrift:9] 2014-12-04 20:12:28,732 CustomTThreadPoolServer.java (line 
 219) Error occurred during processing of message.
 com.google.common.util.concurrent.UncheckedExecutionException: 
 java.lang.IllegalArgumentException: Comparison method violates its general 
 contract!
   at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2199)
   at com.google.common.cache.LocalCache.get(LocalCache.java:3932)
   at com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3936)
   at 
 com.google.common.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4806)
   at 
 org.apache.cassandra.service.ClientState.authorize(ClientState.java:352)
   at 
 org.apache.cassandra.service.ClientState.ensureHasPermission(ClientState.java:224)
   at 
 org.apache.cassandra.service.ClientState.hasAccess(ClientState.java:218)
   at 
 org.apache.cassandra.service.ClientState.hasColumnFamilyAccess(ClientState.java:202)
   at 
 org.apache.cassandra.thrift.CassandraServer.createMutationList(CassandraServer.java:822)
   at 
 org.apache.cassandra.thrift.CassandraServer.batch_mutate(CassandraServer.java:954)
   at com.datastax.bdp.server.DseServer.batch_mutate(DseServer.java:576)
   at 
[jira] [Updated] (CASSANDRA-8448) Comparison method violates its general contract in AbstractEndpointSnitch

2014-12-16 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-8448:

Assignee: Brandon Williams

 Comparison method violates its general contract in AbstractEndpointSnitch
 ---

 Key: CASSANDRA-8448
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8448
 Project: Cassandra
  Issue Type: Bug
Reporter: J.B. Langston
Assignee: Brandon Williams

 Seen in both 1.2 and 2.0.  The error is occurring here: 
 https://github.com/apache/cassandra/blob/cassandra-2.0/src/java/org/apache/cassandra/locator/AbstractEndpointSnitch.java#L49
 {code}
 ERROR [Thrift:9] 2014-12-04 20:12:28,732 CustomTThreadPoolServer.java (line 
 219) Error occurred during processing of message.
 com.google.common.util.concurrent.UncheckedExecutionException: 
 java.lang.IllegalArgumentException: Comparison method violates its general 
 contract!
   at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2199)
   at com.google.common.cache.LocalCache.get(LocalCache.java:3932)
   at com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3936)
   at 
 com.google.common.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4806)
   at 
 org.apache.cassandra.service.ClientState.authorize(ClientState.java:352)
   at 
 org.apache.cassandra.service.ClientState.ensureHasPermission(ClientState.java:224)
   at 
 org.apache.cassandra.service.ClientState.hasAccess(ClientState.java:218)
   at 
 org.apache.cassandra.service.ClientState.hasColumnFamilyAccess(ClientState.java:202)
   at 
 org.apache.cassandra.thrift.CassandraServer.createMutationList(CassandraServer.java:822)
   at 
 org.apache.cassandra.thrift.CassandraServer.batch_mutate(CassandraServer.java:954)
   at com.datastax.bdp.server.DseServer.batch_mutate(DseServer.java:576)
   at 
 org.apache.cassandra.thrift.Cassandra$Processor$batch_mutate.getResult(Cassandra.java:3922)
   at 
 org.apache.cassandra.thrift.Cassandra$Processor$batch_mutate.getResult(Cassandra.java:3906)
   at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
   at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
   at 
 org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:201)
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
   at java.lang.Thread.run(Thread.java:745)
 Caused by: java.lang.IllegalArgumentException: Comparison method violates its 
 general contract!
   at java.util.TimSort.mergeHi(TimSort.java:868)
   at java.util.TimSort.mergeAt(TimSort.java:485)
   at java.util.TimSort.mergeCollapse(TimSort.java:410)
   at java.util.TimSort.sort(TimSort.java:214)
   at java.util.TimSort.sort(TimSort.java:173)
   at java.util.Arrays.sort(Arrays.java:659)
   at java.util.Collections.sort(Collections.java:217)
   at 
 org.apache.cassandra.locator.AbstractEndpointSnitch.sortByProximity(AbstractEndpointSnitch.java:49)
   at 
 org.apache.cassandra.locator.DynamicEndpointSnitch.sortByProximityWithScore(DynamicEndpointSnitch.java:157)
   at 
 org.apache.cassandra.locator.DynamicEndpointSnitch.sortByProximityWithBadness(DynamicEndpointSnitch.java:186)
   at 
 org.apache.cassandra.locator.DynamicEndpointSnitch.sortByProximity(DynamicEndpointSnitch.java:151)
   at 
 org.apache.cassandra.service.StorageProxy.getLiveSortedEndpoints(StorageProxy.java:1408)
   at 
 org.apache.cassandra.service.StorageProxy.getLiveSortedEndpoints(StorageProxy.java:1402)
   at 
 org.apache.cassandra.service.AbstractReadExecutor.getReadExecutor(AbstractReadExecutor.java:148)
   at 
 org.apache.cassandra.service.StorageProxy.fetchRows(StorageProxy.java:1223)
   at 
 org.apache.cassandra.service.StorageProxy.read(StorageProxy.java:1165)
   at 
 org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:255)
   at 
 org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:225)
   at org.apache.cassandra.auth.Auth.selectUser(Auth.java:243)
   at org.apache.cassandra.auth.Auth.isSuperuser(Auth.java:84)
   at 
 org.apache.cassandra.auth.AuthenticatedUser.isSuper(AuthenticatedUser.java:50)
   at 
 org.apache.cassandra.auth.CassandraAuthorizer.authorize(CassandraAuthorizer.java:69)
   at org.apache.cassandra.service.ClientState$1.load(ClientState.java:338)
   at org.apache.cassandra.service.ClientState$1.load(ClientState.java:335)
   at 
 com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3522)
   at 
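For context, TimSort throws "Comparison method violates its general contract!" when the comparator is not a consistent total order, for example when the values it compares change while the sort is running. That is the hazard for a snitch sorting endpoints by live latency scores. A minimal sketch of the failure pattern and the snapshot fix (plain Python with hypothetical names, not Cassandra's actual code):

```python
from functools import cmp_to_key

def broken_sort(endpoints, scores):
    # Each comparison re-reads the live score map. If another thread
    # mutates `scores` mid-sort, the comparator stops being a consistent
    # total order, which is exactly what Java's TimSort detects as
    # "Comparison method violates its general contract!".
    cmp = lambda a, b: (scores[a] > scores[b]) - (scores[a] < scores[b])
    return sorted(endpoints, key=cmp_to_key(cmp))

def safe_sort(endpoints, scores):
    # Snapshot the scores once, so every comparison during the sort
    # sees the same values and the ordering stays consistent.
    snapshot = dict(scores)
    return sorted(endpoints, key=lambda e: snapshot[e])

scores = {"10.0.0.1": 0.3, "10.0.0.2": 0.1, "10.0.0.3": 0.2}
print(safe_sort(["10.0.0.1", "10.0.0.2", "10.0.0.3"], scores))
# -> ['10.0.0.2', '10.0.0.3', '10.0.0.1']
```

With a stable score map both variants agree; the difference only shows under concurrent mutation, which is why the exception is intermittent.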
 

[jira] [Commented] (CASSANDRA-8480) Update of primary key should be possible

2014-12-16 Thread Jason Kania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14248421#comment-14248421
 ] 

Jason Kania commented on CASSANDRA-8480:


Thanks for the response and explanation.

I am quite certain that with all your combined efforts to date you are looking 
to make Cassandra as usable as possible. It is just a question of what you rule 
out as possible versus documenting as performance impacting. I have worked in 
the capacity of performance architect on several large scale production systems 
and have had to balance user needs versus performance many times. My experience 
is that when choosing between stopping what users can do versus having the big 
flashing danger sign, the big flashing danger sign is usually what the end 
users are looking for.

I would suggest that it might be worth polling users about which approach would 
work versus falling back to core principles that may not be in the best 
interests of wider product adoption.

 Update of primary key should be possible
 

 Key: CASSANDRA-8480
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8480
 Project: Cassandra
  Issue Type: Improvement
  Components: API
Reporter: Jason Kania

 While attempting to update a column in a row, I encountered the error
 "PRIMARY KEY part thingy found in SET part".
 The error is not helpful as it doesn't state why this is a problem, so I 
 looked on Google and encountered many, many entries from people who have 
 experienced the issue, including those with single-column tables who have 
 to hack around this.
 After looking around further in the documentation, I discovered that it is 
 not possible to update a primary key, but I still have not found a good 
 explanation. I suspect that this is because it would change the indexing 
 location of the record, effectively requiring a delete followed by an 
 insert. If the question is one of guaranteeing no update to a deleted row, 
 a client will have the same issue.
 To me, this really should be handled behind the API because:
 1) it is an expected capability in a database to update all columns and 
 having these limitations only puts off potential users especially when they 
 have to discover the limitation after the fact
 2) being able to use a column in a WHERE clause requires it to be part of the 
 primary key so what this limitation means is if you can update a column, you 
 can't search for it, or if you can search on a column, you can't update it 
 which leaves a serious gap in handling a wide number of use cases.
 3) deleting and inserting a row with an updated primary key will mean sucking 
 in all the data from the row up to the client and sending it all back down 
 even when a single column in the primary key was all that was updated.
 Why not document the issue but make the interface more usable by supporting 
 the operation?
 Jason



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8303) Provide strict mode for CQL Queries

2014-12-16 Thread Mike Adamson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14248468#comment-14248468
 ] 

Mike Adamson commented on CASSANDRA-8303:
-

User-level management will become less tedious after CASSANDRA-7653, which 
will allow blocks of functionality (ALLOW FILTERING, etc.) to be authorized 
for roles instead of individual users.

 Provide strict mode for CQL Queries
 -

 Key: CASSANDRA-8303
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8303
 Project: Cassandra
  Issue Type: Improvement
Reporter: Anupam Arora
 Fix For: 3.0


 Please provide a strict mode option in cassandra that will kick out any CQL 
 queries that are expensive, e.g. any query with ALLOWS FILTERING, 
 multi-partition queries, secondary index queries, etc.
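A strict mode would boil down to classifying each statement before execution and rejecting the expensive shapes. A toy gate in Python (hypothetical function and flag names; a real implementation would inspect the parsed statement, not the query string):

```python
def check_strict(query, allow_filtering_ok=False):
    # Reject query shapes the ticket lists as expensive. String matching
    # here is only for illustration of the gating idea.
    q = query.upper()
    if "ALLOW FILTERING" in q and not allow_filtering_ok:
        raise ValueError("strict mode: ALLOW FILTERING is disabled")
    return True

check_strict("SELECT * FROM foo WHERE k = 1")  # passes
try:
    check_strict("SELECT * FROM t ALLOW FILTERING")
except ValueError as e:
    print(e)  # strict mode: ALLOW FILTERING is disabled
```

The per-call flag stands in for whatever mechanism (global option, role permission) would grant the exception.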



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (CASSANDRA-7814) enable describe on indices

2014-12-16 Thread Benjamin Lerer (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Lerer reassigned CASSANDRA-7814:
-

Assignee: Benjamin Lerer

 enable describe on indices
 --

 Key: CASSANDRA-7814
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7814
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: radha
Assignee: Benjamin Lerer
Priority: Minor

 Describe index should be supported, right now, the only way is to export the 
 schema and find what it really is before updating/dropping the index.
 verified in 
 [cqlsh 3.1.8 | Cassandra 1.2.18.1 | CQL spec 3.0.0 | Thrift protocol 19.36.2]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8480) Update of primary key should be possible

2014-12-16 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14248485#comment-14248485
 ] 

Aleksey Yeschenko commented on CASSANDRA-8480:
--

You are not wrong. But it's not just about core principles (which are 
important, but not the only things that matter). It's also about complexity.

Imagine a query like `UPDATE foo SET partition_key = 'bar' WHERE partition_key 
= 'baz'`. Executing it would involve reading the whole partition (potentially 
from many nodes, depending on CL), and very likely streaming it to a whole new 
set of replicas. This would require an entirely new write code path, and would 
break in spectacular ways from time to time. Operations like that are what 
Spark is for; they are not a good fit for CQL. Besides, it would not be 
idempotent, and writes must be idempotent in C* (with the exception of 
counters, and let's forget about lists for a second).

Ultimately, this is not an often-requested feature, if at all, and it's not 
something that can be implemented *well* on the Cassandra side, if at all, in a 
general way. So, on balance (complexity of implementation plus fundamental 
incompatibility with Cassandra write path vs. users' desire for the feature) 
the chances of this wish materializing are not high. Sorry.
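For what it's worth, the client-side equivalent of "updating" a partition key is exactly the dance described above: read the whole partition, insert it under the new key, delete the old one. A toy in-memory sketch (plain dicts standing in for a table; `move_partition` is a hypothetical helper, not driver API):

```python
def move_partition(table, old_key, new_key):
    # Read the entire partition up to the client, re-insert it under the
    # new key, then delete the old one. Non-atomic: a crash between the
    # insert and the delete leaves both copies behind, and a blind retry
    # of the whole sequence is not idempotent.
    rows = table[old_key]
    table[new_key] = rows
    del table[old_key]

table = {"baz": [{"col": 1}, {"col": 2}]}
move_partition(table, "baz", "bar")
print(table)  # {'bar': [{'col': 1}, {'col': 2}]}
```

Against a real cluster the same three steps span multiple replicas and consistency levels, which is where the "breaks in spectacular ways" part comes from.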

 Update of primary key should be possible
 

 Key: CASSANDRA-8480
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8480
 Project: Cassandra
  Issue Type: Improvement
  Components: API
Reporter: Jason Kania

 While attempting to update a column in a row, I encountered the error
 "PRIMARY KEY part thingy found in SET part".
 The error is not helpful as it doesn't state why this is a problem, so I 
 looked on Google and encountered many, many entries from people who have 
 experienced the issue, including those with single-column tables who have 
 to hack around this.
 After looking around further in the documentation, I discovered that it is 
 not possible to update a primary key, but I still have not found a good 
 explanation. I suspect that this is because it would change the indexing 
 location of the record, effectively requiring a delete followed by an 
 insert. If the question is one of guaranteeing no update to a deleted row, 
 a client will have the same issue.
 To me, this really should be handled behind the API because:
 1) it is an expected capability in a database to update all columns and 
 having these limitations only puts off potential users especially when they 
 have to discover the limitation after the fact
 2) being able to use a column in a WHERE clause requires it to be part of the 
 primary key so what this limitation means is if you can update a column, you 
 can't search for it, or if you can search on a column, you can't update it 
 which leaves a serious gap in handling a wide number of use cases.
 3) deleting and inserting a row with an updated primary key will mean sucking 
 in all the data from the row up to the client and sending it all back down 
 even when a single column in the primary key was all that was updated.
 Why not document the issue but make the interface more usable by supporting 
 the operation?
 Jason



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-8492) Support IF NOT EXISTS for ALTER TABLE ADD COLUMN

2014-12-16 Thread JIRA
Peter Mädel created CASSANDRA-8492:
--

 Summary: Support IF NOT EXISTS for ALTER TABLE ADD COLUMN
 Key: CASSANDRA-8492
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8492
 Project: Cassandra
  Issue Type: Improvement
Reporter: Peter Mädel
Priority: Minor


would enable creation of schema update scripts that can be repeatedly 
executed without having to worry about invalid query exceptions
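Until such syntax exists, a schema script can get the same effect by checking the schema before issuing the ALTER. A toy sketch against an in-memory schema map (`add_column_if_not_exists` is a hypothetical helper; a real script would query the schema tables or catch the invalid-query error):

```python
def add_column_if_not_exists(schema, table, column, cql_type):
    # Emulates "ALTER TABLE ... ADD IF NOT EXISTS": a no-op when the
    # column is already present, so the script can run repeatedly
    # without tripping an invalid-query error.
    cols = schema.setdefault(table, {})
    if column in cols:
        return False  # already there; nothing to do
    cols[column] = cql_type  # stands in for issuing the ALTER
    return True

schema = {"foo": {"k": "int"}}
print(add_column_if_not_exists(schema, "foo", "v", "text"))  # True
print(add_column_if_not_exists(schema, "foo", "v", "text"))  # False
```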



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8458) Don't give out positions in an sstable beyond its first/last tokens

2014-12-16 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14248491#comment-14248491
 ] 

Benedict commented on CASSANDRA-8458:
-

Good point! (and the left==right bit was just a cut/paste boundary)

Given that, though, we should just perform getPosition(last, 
Operator.EQ).position, since it's known to occur in both files - and it leaves 
me feeling more comfortable for some reason. It should be completely fine as 
is, though, so +1 either way. 

 Don't give out positions in an sstable beyond its first/last tokens
 ---

 Key: CASSANDRA-8458
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8458
 Project: Cassandra
  Issue Type: Bug
Reporter: Marcus Eriksson
Assignee: Marcus Eriksson
 Fix For: 2.1.3

 Attachments: 
 0001-Make-sure-we-don-t-give-out-positions-from-an-sstabl.patch


 Looks like we include tmplink sstables in streams in 2.1+, and when we do, 
 sometimes we get this error message on the receiving side: 
 {{java.io.IOException: Corrupt input data, block did not start with 2 byte 
 signature ('ZV') followed by type byte, 2-byte length)}}. I've only seen this 
 happen when a tmplink sstable is included in the stream.
 We can not just exclude the tmplink files when starting the stream - we need 
 to include the original file, which we might miss since we check if the 
 requested stream range intersects the sstable range.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-6750) Support for UPDATE predicates

2014-12-16 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14248492#comment-14248492
 ] 

Aleksey Yeschenko commented on CASSANDRA-6750:
--

In this case it's not just a regular read before write (which we used to do 
until 1.2? for index updates, out of necessity, which we still do for counters 
(locally) and for lists (a design mistake)). This would require a 2i read 
before write, potentially a very expensive one, and unpredictable in general. 
Such writes would also not be retriable in an idempotent fashion, which is a 
big no-no when it comes to C* writes.
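The read-before-write can be sketched concretely: an UPDATE keyed on a secondary index first has to scan the index for matching primary keys, then write each row. A toy model (dicts in place of a table and its index; hypothetical names):

```python
def update_by_index(rows, index_col, index_val, set_col, set_val):
    # Step 1: the expensive part, a secondary-index read to find the
    # primary keys to touch.
    matches = [pk for pk, row in rows.items() if row.get(index_col) == index_val]
    # Step 2: write each matched row. The set of rows touched depends on
    # when the read ran, so a retry after a partial failure may hit a
    # different set of rows: the operation is not idempotent.
    for pk in matches:
        rows[pk][set_col] = set_val
    return matches

rows = {
    1: {"messageid": "123456", "amount": 5},
    2: {"messageid": "999999", "amount": 7},
}
print(update_by_index(rows, "messageid", "123456", "amount", 10))  # [1]
```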

 Support for UPDATE predicates
 -

 Key: CASSANDRA-6750
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6750
 Project: Cassandra
  Issue Type: New Feature
  Components: API, Core
 Environment: 2.0.3
Reporter: nivance
Assignee: Benjamin Lerer
Priority: Minor
  Labels: cql3, ponies

 cqlsh:spdatarp UPDATE t_spdatarpro_ro SET amount = 10  WHERE messageid = 
 '123456';
 Bad Request: Non PRIMARY KEY messageid found in where clause
 In this case, messageid is the secondary index. I want to update all rows 
 whose messageid is '123456', but it failed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-8480) Update of primary key should be possible

2014-12-16 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko resolved CASSANDRA-8480.
--
Resolution: Won't Fix

Resolving as Won't Fix for now, since there is just no way to implement this 
acceptably with the current architecture.

If our architecture changes some day to make this possible, please feel free 
to reopen the ticket.

 Update of primary key should be possible
 

 Key: CASSANDRA-8480
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8480
 Project: Cassandra
  Issue Type: Improvement
  Components: API
Reporter: Jason Kania

 While attempting to update a column in a row, I encountered the error
 "PRIMARY KEY part thingy found in SET part".
 The error is not helpful as it doesn't state why this is a problem, so I 
 looked on Google and encountered many, many entries from people who have 
 experienced the issue, including those with single-column tables who have 
 to hack around this.
 After looking around further in the documentation, I discovered that it is 
 not possible to update a primary key, but I still have not found a good 
 explanation. I suspect that this is because it would change the indexing 
 location of the record, effectively requiring a delete followed by an 
 insert. If the question is one of guaranteeing no update to a deleted row, 
 a client will have the same issue.
 To me, this really should be handled behind the API because:
 1) it is an expected capability in a database to update all columns and 
 having these limitations only puts off potential users especially when they 
 have to discover the limitation after the fact
 2) being able to use a column in a WHERE clause requires it to be part of the 
 primary key so what this limitation means is if you can update a column, you 
 can't search for it, or if you can search on a column, you can't update it 
 which leaves a serious gap in handling a wide number of use cases.
 3) deleting and inserting a row with an updated primary key will mean sucking 
 in all the data from the row up to the client and sending it all back down 
 even when a single column in the primary key was all that was updated.
 Why not document the issue but make the interface more usable by supporting 
 the operation?
 Jason



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-6750) Support for UPDATE predicates

2014-12-16 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko resolved CASSANDRA-6750.
--
Resolution: Not a Problem

Closing as Not a Problem for now, for the reasons listed above.

I'm okay with reviving it once we have global indexes and RAMP done; maybe 
then we can implement it in a more or less acceptable manner, but not with 
the current local 2i implementation.

 Support for UPDATE predicates
 -

 Key: CASSANDRA-6750
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6750
 Project: Cassandra
  Issue Type: New Feature
  Components: API, Core
 Environment: 2.0.3
Reporter: nivance
Assignee: Benjamin Lerer
Priority: Minor
  Labels: cql3, ponies

 cqlsh:spdatarp UPDATE t_spdatarpro_ro SET amount = 10  WHERE messageid = 
 '123456';
 Bad Request: Non PRIMARY KEY messageid found in where clause
 In this case, messageid is the secondary index. I want to update all rows 
 whose messageid is '123456', but it failed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


cassandra git commit: Remove linux-centric references from windows launch scripts

2014-12-16 Thread jmckenzie
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 a6c6d0f60 -> d66ae7538


Remove linux-centric references from windows launch scripts

Patch by jmckenzie; reviewed by pthompson for CASSANDRA-8223


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/d66ae753
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/d66ae753
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/d66ae753

Branch: refs/heads/cassandra-2.1
Commit: d66ae75387e365eff2e83626fdef09438178c421
Parents: a6c6d0f
Author: Joshua McKenzie jmcken...@apache.org
Authored: Tue Dec 16 11:16:14 2014 -0600
Committer: Joshua McKenzie jmcken...@apache.org
Committed: Tue Dec 16 11:16:14 2014 -0600

--
 conf/cassandra-env.ps1 | 24 +++-
 1 file changed, 11 insertions(+), 13 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/d66ae753/conf/cassandra-env.ps1
--
diff --git a/conf/cassandra-env.ps1 b/conf/cassandra-env.ps1
index 5450ac8..9c6b6f4 100644
--- a/conf/cassandra-env.ps1
+++ b/conf/cassandra-env.ps1
@@ -134,14 +134,14 @@ Function CalculateHeapSizes
 }
 if (($env:MAX_HEAP_SIZE -and !$env:HEAP_NEWSIZE) -or (!$env:MAX_HEAP_SIZE 
-and $env:HEAP_NEWSIZE))
 {
-echo please set or unset MAX_HEAP_SIZE and HEAP_NEWSIZE in pairs
+echo Please set or unset MAX_HEAP_SIZE and HEAP_NEWSIZE in pairs.  
Aborting startup.
 exit 1
 }
 
 $memObject = Get-WMIObject -class win32_physicalmemory
 if ($memObject -eq $null)
 {
-echo WARNING!  Could not determine system memory.  Defaulting to 2G 
heap, 512M newgen.  Manually override in conf/cassandra-env.ps1 for different 
heap values.
+echo WARNING!  Could not determine system memory.  Defaulting to 2G 
heap, 512M newgen.  Manually override in conf\cassandra-env.ps1 for different 
heap values.
 $env:MAX_HEAP_SIZE = 2048M
 $env:HEAP_NEWSIZE = 512M
 return
@@ -323,14 +323,13 @@ Function SetCassandraEnvironment
 # enable thread priorities, primarily so we can give periodic tasks
 # a lower priority to avoid interfering with client workload
 $env:JVM_OPTS=$env:JVM_OPTS -XX:+UseThreadPriorities
-# allows lowering thread priority without being root.  see
-# http://tech.stolsvik.com/2010/01/linux-java-thread-priorities-workar
+# allows lowering thread priority without being root on linux - probably
+# not necessary on Windows but doesn't harm anything.
+# see http://tech.stolsvik.com/2010/01/linux-java-thread-priorities-workar
 $env:JVM_OPTS=$env:JVM_OPTS -XX:ThreadPriorityPolicy=42
 
 # min and max heap sizes should be set to the same value to avoid
-# stop-the-world GC pauses during resize, and so that we can lock the
-# heap in memory on startup to prevent any of it from being swapped
-# out.
+# stop-the-world GC pauses during resize.
 $env:JVM_OPTS=$env:JVM_OPTS -Xms$env:MAX_HEAP_SIZE
 $env:JVM_OPTS=$env:JVM_OPTS -Xmx$env:MAX_HEAP_SIZE
 $env:JVM_OPTS=$env:JVM_OPTS -Xmn$env:HEAP_NEWSIZE
@@ -369,17 +368,18 @@ Function SetCassandraEnvironment
 # $env:JVM_OPTS=$env:JVM_OPTS -XX:+PrintGCApplicationStoppedTime
 # $env:JVM_OPTS=$env:JVM_OPTS -XX:+PrintPromotionFailure
 # $env:JVM_OPTS=$env:JVM_OPTS -XX:PrintFLSStatistics=1
-# $env:JVM_OPTS=$env:JVM_OPTS -Xloggc:/var/log/cassandra/gc-`date 
+%s`.log
+# $currentDate = (Get-Date).ToString('.MM.dd')
+# $env:JVM_OPTS=$env:JVM_OPTS 
-Xloggc:$env:CASSANDRA_HOME/logs/gc-$currentDate.log
 
 # If you are using JDK 6u34 7u2 or later you can enable GC log rotation
 # don't stick the date in the log name if rotation is on.
-# $env:JVM_OPTS=$env:JVM_OPTS -Xloggc:/var/log/cassandra/gc.log
+# $env:JVM_OPTS=$env:JVM_OPTS -Xloggc:$env:CASSANDRA_HOME/logs/gc.log
 # $env:JVM_OPTS=$env:JVM_OPTS -XX:+UseGCLogFileRotation
 # $env:JVM_OPTS=$env:JVM_OPTS -XX:NumberOfGCLogFiles=10
 # $env:JVM_OPTS=$env:JVM_OPTS -XX:GCLogFileSize=10M
 
 # Configure the following for JEMallocAllocator and if jemalloc is not 
available in the system
-# library path (Example: /usr/local/lib/). Usually make install will do 
the right thing.
+# library path.
 # set LD_LIBRARY_PATH=JEMALLOC_HOME/lib/
 # $env:JVM_OPTS=$env:JVM_OPTS -Djava.library.path=JEMALLOC_HOME/lib/
 
@@ -403,10 +403,8 @@ Function SetCassandraEnvironment
 $env:JVM_OPTS=$env:JVM_OPTS -Dcom.sun.management.jmxremote.port=$JMX_PORT
 $env:JVM_OPTS=$env:JVM_OPTS -Dcom.sun.management.jmxremote.ssl=false
 $env:JVM_OPTS=$env:JVM_OPTS 
-Dcom.sun.management.jmxremote.authenticate=false
-#$env:JVM_OPTS=$env:JVM_OPTS 

[1/2] cassandra git commit: Remove linux-centric references from windows launch scripts

2014-12-16 Thread jmckenzie
Repository: cassandra
Updated Branches:
  refs/heads/trunk d0e3f4531 -> 847fef833


Remove linux-centric references from windows launch scripts

Patch by jmckenzie; reviewed by pthompson for CASSANDRA-8223


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/d66ae753
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/d66ae753
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/d66ae753

Branch: refs/heads/trunk
Commit: d66ae75387e365eff2e83626fdef09438178c421
Parents: a6c6d0f
Author: Joshua McKenzie jmcken...@apache.org
Authored: Tue Dec 16 11:16:14 2014 -0600
Committer: Joshua McKenzie jmcken...@apache.org
Committed: Tue Dec 16 11:16:14 2014 -0600

--
 conf/cassandra-env.ps1 | 24 +++-
 1 file changed, 11 insertions(+), 13 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/d66ae753/conf/cassandra-env.ps1
--
diff --git a/conf/cassandra-env.ps1 b/conf/cassandra-env.ps1
index 5450ac8..9c6b6f4 100644
--- a/conf/cassandra-env.ps1
+++ b/conf/cassandra-env.ps1
@@ -134,14 +134,14 @@ Function CalculateHeapSizes
 }
 if (($env:MAX_HEAP_SIZE -and !$env:HEAP_NEWSIZE) -or (!$env:MAX_HEAP_SIZE 
-and $env:HEAP_NEWSIZE))
 {
-echo please set or unset MAX_HEAP_SIZE and HEAP_NEWSIZE in pairs
+echo Please set or unset MAX_HEAP_SIZE and HEAP_NEWSIZE in pairs.  
Aborting startup.
 exit 1
 }
 
 $memObject = Get-WMIObject -class win32_physicalmemory
 if ($memObject -eq $null)
 {
-echo WARNING!  Could not determine system memory.  Defaulting to 2G 
heap, 512M newgen.  Manually override in conf/cassandra-env.ps1 for different 
heap values.
+echo WARNING!  Could not determine system memory.  Defaulting to 2G 
heap, 512M newgen.  Manually override in conf\cassandra-env.ps1 for different 
heap values.
 $env:MAX_HEAP_SIZE = 2048M
 $env:HEAP_NEWSIZE = 512M
 return
@@ -323,14 +323,13 @@ Function SetCassandraEnvironment
 # enable thread priorities, primarily so we can give periodic tasks
 # a lower priority to avoid interfering with client workload
 $env:JVM_OPTS=$env:JVM_OPTS -XX:+UseThreadPriorities
-# allows lowering thread priority without being root.  see
-# http://tech.stolsvik.com/2010/01/linux-java-thread-priorities-workar
+# allows lowering thread priority without being root on linux - probably
+# not necessary on Windows but doesn't harm anything.
+# see http://tech.stolsvik.com/2010/01/linux-java-thread-priorities-workar
 $env:JVM_OPTS=$env:JVM_OPTS -XX:ThreadPriorityPolicy=42
 
 # min and max heap sizes should be set to the same value to avoid
-# stop-the-world GC pauses during resize, and so that we can lock the
-# heap in memory on startup to prevent any of it from being swapped
-# out.
+# stop-the-world GC pauses during resize.
 $env:JVM_OPTS=$env:JVM_OPTS -Xms$env:MAX_HEAP_SIZE
 $env:JVM_OPTS=$env:JVM_OPTS -Xmx$env:MAX_HEAP_SIZE
 $env:JVM_OPTS=$env:JVM_OPTS -Xmn$env:HEAP_NEWSIZE
@@ -369,17 +368,18 @@ Function SetCassandraEnvironment
 # $env:JVM_OPTS=$env:JVM_OPTS -XX:+PrintGCApplicationStoppedTime
 # $env:JVM_OPTS=$env:JVM_OPTS -XX:+PrintPromotionFailure
 # $env:JVM_OPTS=$env:JVM_OPTS -XX:PrintFLSStatistics=1
-# $env:JVM_OPTS=$env:JVM_OPTS -Xloggc:/var/log/cassandra/gc-`date 
+%s`.log
+# $currentDate = (Get-Date).ToString('.MM.dd')
+# $env:JVM_OPTS=$env:JVM_OPTS 
-Xloggc:$env:CASSANDRA_HOME/logs/gc-$currentDate.log
 
 # If you are using JDK 6u34 7u2 or later you can enable GC log rotation
 # don't stick the date in the log name if rotation is on.
-# $env:JVM_OPTS=$env:JVM_OPTS -Xloggc:/var/log/cassandra/gc.log
+# $env:JVM_OPTS=$env:JVM_OPTS -Xloggc:$env:CASSANDRA_HOME/logs/gc.log
 # $env:JVM_OPTS=$env:JVM_OPTS -XX:+UseGCLogFileRotation
 # $env:JVM_OPTS=$env:JVM_OPTS -XX:NumberOfGCLogFiles=10
 # $env:JVM_OPTS=$env:JVM_OPTS -XX:GCLogFileSize=10M
 
 # Configure the following for JEMallocAllocator and if jemalloc is not 
available in the system
-# library path (Example: /usr/local/lib/). Usually make install will do 
the right thing.
+# library path.
 # set LD_LIBRARY_PATH=JEMALLOC_HOME/lib/
 # $env:JVM_OPTS=$env:JVM_OPTS -Djava.library.path=JEMALLOC_HOME/lib/
 
@@ -403,10 +403,8 @@ Function SetCassandraEnvironment
 $env:JVM_OPTS=$env:JVM_OPTS -Dcom.sun.management.jmxremote.port=$JMX_PORT
 $env:JVM_OPTS=$env:JVM_OPTS -Dcom.sun.management.jmxremote.ssl=false
 $env:JVM_OPTS=$env:JVM_OPTS 
-Dcom.sun.management.jmxremote.authenticate=false
-#$env:JVM_OPTS=$env:JVM_OPTS 
-Dcom.sun.management.jmxremote.password.file=/etc/cassandra/jmxremote.password
+

[2/2] cassandra git commit: Merge branch 'cassandra-2.1' into trunk

2014-12-16 Thread jmckenzie
Merge branch 'cassandra-2.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/847fef83
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/847fef83
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/847fef83

Branch: refs/heads/trunk
Commit: 847fef833670c9853546f97ce6930165245c5c85
Parents: d0e3f45 d66ae75
Author: Joshua McKenzie jmcken...@apache.org
Authored: Tue Dec 16 11:17:04 2014 -0600
Committer: Joshua McKenzie jmcken...@apache.org
Committed: Tue Dec 16 11:17:04 2014 -0600

--
 conf/cassandra-env.ps1 | 24 +++-
 1 file changed, 11 insertions(+), 13 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/847fef83/conf/cassandra-env.ps1
--



[jira] [Commented] (CASSANDRA-8463) Constant compaction under LCS

2014-12-16 Thread Rick Branson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8463?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14248518#comment-14248518
 ] 

Rick Branson commented on CASSANDRA-8463:
-

I applied that patch to one node. What should I be looking for? Is there any 
persisted state for that data? Shortly after restarting the daemon it did 
another one of those single table L1 compactions into L2. After finishing that, 
it immediately compacted that single new L2 file into 452 L3 tables. That 
pattern is what's causing the problem.

 Constant compaction under LCS
 -

 Key: CASSANDRA-8463
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8463
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Hardware is recent 2-socket, 16-core (x2 Hyperthreaded), 
 144G RAM, solid-state storage.
 Platform is Linux 3.2.51, Oracle JDK 64-bit 1.7.0_65.
 Heap is 32G total, 4G newsize.
 8G/8G on-heap/off-heap memtables, offheap_buffer allocator, 0.5 
 memtable_cleanup_threshold
 concurrent_compactors: 20
Reporter: Rick Branson
Assignee: Marcus Eriksson
 Fix For: 2.1.3

 Attachments: 0001-better-logging.patch, 
 0001-make-sure-we-set-lastCompactedKey-properly.patch, log-for-8463.txt


 It appears that tables configured with LCS will completely re-compact 
 themselves over some period of time after upgrading from 2.0 to 2.1 (2.0.11 
 -> 2.1.2, specifically). It starts out with 10 pending tasks for an hour or 
 so, then starts building up, now with 50-100 tasks pending across the cluster 
 after 12 hours. These nodes are under heavy write load, but were easily able 
 to keep up in 2.0 (they rarely had > 5 pending compaction tasks), so I don't 
 think it's LCS in 2.1 actually being worse, just perhaps some different LCS 
 behavior that causes the layout of tables from 2.0 to prompt the compactor to 
 reorganize them?
 The nodes flushed ~11MB SSTables under 2.0. They're currently flushing ~36MB 
 SSTables due to the improved memtable setup in 2.1. Before I upgraded the 
 entire cluster to 2.1, I noticed the problem and tried several variations on 
 the flush size, thinking perhaps the larger tables in L0 were causing some 
 kind of cascading compactions. Even if they're sized roughly like the 2.0 
 flushes were, the same behavior occurs. I also tried both enabling & disabling 
 STCS in L0 with no real change other than L0 began to back up faster, so I 
 left the STCS in L0 enabled.
 Tables are configured with 32MB sstable_size_in_mb, which was found to be an 
 improvement on the 160MB table size for compaction performance. Maybe this is 
 wrong now? Otherwise, the tables are configured with defaults. Compaction has 
 been unthrottled to help them catch-up. The compaction threads stay very 
 busy, with the cluster-wide CPU at 45% nice time. No nodes have completely 
 caught up yet. I'll update JIRA with status about their progress if anything 
 interesting happens.
 From a node around 12 hours ago, around an hour after the upgrade, with 19 
 pending compaction tasks:
 SSTables in each level: [6/4, 10, 105/100, 268, 0, 0, 0, 0, 0]
 SSTables in each level: [6/4, 10, 106/100, 271, 0, 0, 0, 0, 0]
 SSTables in each level: [1, 16/10, 105/100, 269, 0, 0, 0, 0, 0]
 SSTables in each level: [5/4, 10, 103/100, 272, 0, 0, 0, 0, 0]
 SSTables in each level: [4, 11/10, 105/100, 270, 0, 0, 0, 0, 0]
 SSTables in each level: [1, 12/10, 105/100, 271, 0, 0, 0, 0, 0]
 SSTables in each level: [1, 14/10, 104/100, 267, 0, 0, 0, 0, 0]
 SSTables in each level: [9/4, 10, 103/100, 265, 0, 0, 0, 0, 0]
 Recently, with 41 pending compaction tasks:
 SSTables in each level: [4, 13/10, 106/100, 269, 0, 0, 0, 0, 0]
 SSTables in each level: [4, 12/10, 106/100, 273, 0, 0, 0, 0, 0]
 SSTables in each level: [5/4, 11/10, 106/100, 271, 0, 0, 0, 0, 0]
 SSTables in each level: [4, 12/10, 103/100, 275, 0, 0, 0, 0, 0]
 SSTables in each level: [2, 13/10, 106/100, 273, 0, 0, 0, 0, 0]
 SSTables in each level: [3, 10, 104/100, 275, 0, 0, 0, 0, 0]
 SSTables in each level: [6/4, 11/10, 103/100, 269, 0, 0, 0, 0, 0]
 SSTables in each level: [4, 16/10, 105/100, 264, 0, 0, 0, 0, 0]
 More information about the use case: writes are roughly uniform across these 
 tables. The data is sharded across these 8 tables by key to improve 
 compaction parallelism. Each node receives up to 75,000 writes/sec sustained 
 at peak, and a small number of reads. This is a pre-production cluster that's 
 being warmed up with new data, so the low volume of reads (~100/sec per node) 
 is just from automatic sampled data checks, otherwise we'd just use STCS :)
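 For readers decoding the listings above: each bracketed entry is one level, and an entry written actual/max (e.g. 105/100) marks a level holding more sstables than LCS targets for it. A minimal sketch of that decoding (the helper name is ours, not part of Cassandra):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical helper that parses a nodetool "SSTables in each level" line
// and reports which levels are over their LCS target. An entry such as
// "105/100" means 105 sstables where the level's target is 100.
public class LevelReport {
    static List<Integer> overflowedLevels(String line) {
        List<Integer> result = new ArrayList<>();
        String body = line.substring(line.indexOf('[') + 1, line.indexOf(']'));
        String[] entries = body.split(",\\s*");
        for (int level = 0; level < entries.length; level++) {
            if (entries[level].contains("/")) // "actual/max" => over the target
                result.add(level);
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(overflowedLevels(
            "SSTables in each level: [6/4, 10, 105/100, 268, 0, 0, 0, 0, 0]"));
    }
}
```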



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8365) CamelCase name is used as index name instead of lowercase

2014-12-16 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-8365:
---
Reviewer: Tyler Hobbs

 CamelCase name is used as index name instead of lowercase
 -

 Key: CASSANDRA-8365
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8365
 Project: Cassandra
  Issue Type: Bug
Reporter: Pierre Laporte
Assignee: Benjamin Lerer
Priority: Minor
  Labels: cqlsh, docs
 Fix For: 2.1.3

 Attachments: CASSANDRA-8365.txt


 In cqlsh, when I execute a CREATE INDEX FooBar ... statement, the CamelCase 
 name is used as index name, even though it is unquoted. Trying to quote the 
 index name results in a syntax error.
 However, when I try to delete the index, I have to quote the index name, 
 otherwise I get an invalid-query error telling me that the index (lowercase) 
 does not exist.
 This seems inconsistent.  Shouldn't the index name be lowercased before the 
 index is created?
 Here is the code to reproduce the issue:
 {code}
 cqlsh:schemabuilderit> CREATE TABLE IndexTest (a int primary key, b int);
 cqlsh:schemabuilderit> CREATE INDEX FooBar on indextest (b);
 cqlsh:schemabuilderit> DESCRIBE TABLE indextest ;
 CREATE TABLE schemabuilderit.indextest (
 a int PRIMARY KEY,
 b int
 ) ;
 CREATE INDEX "FooBar" ON schemabuilderit.indextest (b);
 cqlsh:schemabuilderit> DROP INDEX FooBar;
 code=2200 [Invalid query] message="Index 'foobar' could not be found in any 
 of the tables of keyspace 'schemabuilderit'"
 {code}
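The expected behavior follows the usual CQL identifier rule: unquoted identifiers are case-insensitive and folded to lowercase, while double-quoted identifiers keep their case. A minimal sketch of that rule (the class and method names are illustrative, not Cassandra's API):

```java
import java.util.Locale;

// Sketch of the CQL identifier rule at issue: unquoted identifiers are
// folded to lowercase, double-quoted identifiers preserve their case.
public class CqlIdent {
    static String internalName(String ident) {
        if (ident.length() >= 2 && ident.startsWith("\"") && ident.endsWith("\""))
            return ident.substring(1, ident.length() - 1); // quoted: keep case
        return ident.toLowerCase(Locale.US);               // unquoted: fold
    }

    public static void main(String[] args) {
        System.out.println(internalName("FooBar"));     // folded to foobar
        System.out.println(internalName("\"FooBar\"")); // kept as FooBar
    }
}
```

Under this rule, the unquoted CREATE INDEX FooBar above should have produced an index named foobar, matching what DROP INDEX later looks up.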





[jira] [Commented] (CASSANDRA-7275) Errors in FlushRunnable may leave threads hung

2014-12-16 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14248554#comment-14248554
 ] 

Benedict commented on CASSANDRA-7275:
-

bq. This is not going to help if the problem data driven or external, you just 
going to trash flusher threads without doing any useful work.

Well, let's try and address each problem independently. A data induced bug that 
can occur across many nodes simultaneously is likely to occur repeatedly and 
cause the cluster to degrade probably quite rapidly, and will likely occur on 
all owners of a given token at once. Coupled with the stop-gap measures we're 
discussing might well run the risk of actual data loss or data corruption 
cross-cluster. Read repair would _not_ help for such a data bug, since none of 
the nodes would be in a safe state.

However the transient file system problems you're encountering would be helped 
by reattempting the flush. So, an initial and completely safe approach would be 
to retry a few times and _then_ crash the server (possibly with some random 
waiting involved to avoid a disastrous cascade of cluster-wide death). Wasting 
work isn't really a big problem if the system cannot make progress without this 
success, so I don't see a downside on that front. It's possible if, once this 
fails, we could negotiate a safe crash with our peers, so that if there is a 
data bug at most one replica dies, the operator is well aware of the problem, 
but the cluster continues to operate. Although this is difficult with vnodes, 
and perhaps a little contrived for the current state of c*.

Separately, we can look into perhaps weakening our constraints in various ways. 
The big issue you raise is that compaction is specifically held up. There seem 
to be two things we can do to help this:

1) We can make the dependency queue for marking commit log records unused 
table-specific, so that compactions only get held up if there has been an error 
on the compaction queue;
2) We can report these exceptions back to the waiter on the Future result, and 
this waiter can choose how to behave. If, say, the memtable of a system column 
family that can be worked-around fails to flush (for instance, 
compactions_in_progress) then instead of retrying, it can simply take some 
other action to ensure the system continues to make safe progress. If a data 
table fails to flush it can attempt to retry. 

Eventually, if it cannot recover safely, it should die though, as there will 
need to be some operator involvement and the reality is not everybody monitors 
their log files. I am very -1 on introducing a change that knowingly produces a 
complex failure condition that will not be widely known or understood, but I 
may be alone on that.
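
The retry-then-crash policy proposed above could be sketched roughly as follows (the names, retry bounds, and die() hook are illustrative; this is not Cassandra's actual flush path):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ThreadLocalRandom;

// Sketch of "retry the flush a few times, then crash the server", with a
// random wait between attempts to avoid a synchronized cluster-wide death
// cascade when many replicas hit the same transient error at once.
public class FlushRetry {
    static <T> T retryThenDie(Callable<T> flush, int attempts, Runnable die) {
        for (int i = 0; i < attempts; i++) {
            try {
                return flush.call();
            } catch (Exception e) {
                try { // random backoff before the next attempt
                    Thread.sleep(ThreadLocalRandom.current().nextLong(50, 200));
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                }
            }
        }
        die.run(); // retries exhausted: stop the server rather than hang
        return null;
    }

    public static void main(String[] args) {
        int[] tries = {0};
        Integer ok = retryThenDie(() -> {
            if (++tries[0] < 2) throw new RuntimeException("transient fs error");
            return 1;
        }, 3, () -> System.exit(1));
        System.out.println("flushed after " + tries[0] + " attempts: " + ok);
    }
}
```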

 Errors in FlushRunnable may leave threads hung
 --

 Key: CASSANDRA-7275
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7275
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Tyler Hobbs
Assignee: Pavel Yaskevich
Priority: Minor
 Fix For: 2.0.12

 Attachments: 0001-Move-latch.countDown-into-finally-block.patch, 
 7252-2.0-v2.txt, CASSANDRA-7275-flush-info.patch


 In Memtable.FlushRunnable, the CountDownLatch will never be counted down if 
 there are errors, which results in hanging any threads that are waiting for 
 the flush to complete.  For example, an error like this causes the problem:
 {noformat}
 ERROR [FlushWriter:474] 2014-05-20 12:10:31,137 CassandraDaemon.java (line 
 198) Exception in thread Thread[FlushWriter:474,5,main]
 java.lang.IllegalArgumentException
 at java.nio.Buffer.position(Unknown Source)
 at 
 org.apache.cassandra.db.marshal.AbstractCompositeType.getBytes(AbstractCompositeType.java:64)
 at 
 org.apache.cassandra.db.marshal.AbstractCompositeType.getWithShortLength(AbstractCompositeType.java:72)
 at 
 org.apache.cassandra.db.marshal.AbstractCompositeType.split(AbstractCompositeType.java:138)
 at 
 org.apache.cassandra.io.sstable.ColumnNameHelper.minComponents(ColumnNameHelper.java:103)
 at 
 org.apache.cassandra.db.ColumnFamily.getColumnStats(ColumnFamily.java:439)
 at 
 org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:194)
 at 
 org.apache.cassandra.db.Memtable$FlushRunnable.writeSortedContents(Memtable.java:397)
 at 
 org.apache.cassandra.db.Memtable$FlushRunnable.runWith(Memtable.java:350)
 at 
 org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
 at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
 at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
 at java.lang.Thread.run(Unknown Source)
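
The attached fix (0001-Move-latch.countDown-into-finally-block.patch) amounts to counting the latch down in a finally block so waiters are released even when the write throws; a stripped-down sketch, with runWith standing in for the real Memtable.FlushRunnable logic:

```java
import java.util.concurrent.CountDownLatch;

// Sketch of the latch fix: countDown() runs in a finally block, so threads
// awaiting the flush are released whether or not writing the sstable throws.
// This class stands in for Memtable.FlushRunnable, not the real internals.
public class FlushLatch {
    final CountDownLatch latch = new CountDownLatch(1);

    void runWith(Runnable writeSortedContents) {
        try {
            writeSortedContents.run();
        } finally {
            latch.countDown(); // always release waiters, error or not
        }
    }

    public static void main(String[] args) throws InterruptedException {
        FlushLatch f = new FlushLatch();
        try {
            f.runWith(() -> { throw new IllegalArgumentException("bad buffer position"); });
        } catch (IllegalArgumentException e) { /* flush failed */ }
        f.latch.await(); // returns immediately instead of hanging forever
        System.out.println("waiter released despite flush error");
    }
}
```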
 

[jira] [Updated] (CASSANDRA-8471) mapred/hive queries fail when there is just 1 node down & RF is > 1

2014-12-16 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8471?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Piotr Kołaczkowski updated CASSANDRA-8471:
--
Reviewer: Piotr Kołaczkowski

 mapred/hive queries fail when there is just 1 node down & RF is > 1
 -

 Key: CASSANDRA-8471
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8471
 Project: Cassandra
  Issue Type: Bug
  Components: Hadoop
Reporter: Artem Aliev
  Labels: easyfix, hadoop, patch
 Fix For: 2.0.12, 2.1.3

 Attachments: cassandra-2.0-8471.txt


 The hive and map reduce queries fail when just 1 node is down, even with RF=3 
 (in a 6 node cluster) and default consistency levels for Read and Write.
 The simplest way to reproduce it is to use the DataStax integrated hadoop 
 environment with hive.
 {quote}
 alter keyspace HiveMetaStore WITH replication = 
 {'class':'NetworkTopologyStrategy', 'DC1':3} ;
 alter keyspace cfs WITH replication = {'class':'NetworkTopologyStrategy', 
 'DC1':3} ;
 alter keyspace cfs_archive WITH replication = 
 {'class':'NetworkTopologyStrategy', 'DC1':3} ;
 CREATE KEYSPACE datamart WITH replication = {
   'class': 'NetworkTopologyStrategy',
   'DC1': '3'
 };
 CREATE TABLE users1 (
   id int,
   name text,
   PRIMARY KEY ((id))
 )
 {quote}
 Insert data.
 Shutdown one cassandra node.
 Run map reduce task. Hive in this case
 {quote}
 $ dse hive
 hive> use datamart;
 hive> select count(*) from users1;
 {quote}
 {quote}
 ...
 ...
 2014-12-10 18:33:53,090 Stage-1 map = 75%,  reduce = 25%, Cumulative CPU 6.39 
 sec
 2014-12-10 18:33:54,093 Stage-1 map = 75%,  reduce = 25%, Cumulative CPU 6.39 
 sec
 2014-12-10 18:33:55,096 Stage-1 map = 75%,  reduce = 25%, Cumulative CPU 6.39 
 sec
 2014-12-10 18:33:56,099 Stage-1 map = 75%,  reduce = 25%, Cumulative CPU 6.39 
 sec
 2014-12-10 18:33:57,102 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 
 6.39 sec
 MapReduce Total cumulative CPU time: 6 seconds 390 msec
 Ended Job = job_201412100017_0006 with errors
 Error during job, obtaining debugging information...
 Job Tracking URL: 
 http://i-9d0306706.c.eng-gce-support.internal:50030/jobdetails.jsp?jobid=job_201412100017_0006
 Examining task ID: task_201412100017_0006_m_05 (and more) from job 
 job_201412100017_0006
 Task with the most failures(4):
 -
 Task ID:
   task_201412100017_0006_m_01
 URL:
   
 http://i-9d0306706.c.eng-gce-support.internal:50030/taskdetails.jsp?jobid=job_201412100017_0006tipid=task_201412100017_0006_m_01
 -
 Diagnostic Messages for this Task:
 java.io.IOException: java.io.IOException: 
 com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
 tried for query failed (tried: 
 i-6ac985f7d.c.eng-gce-support.internal/10.240.124.16:9042 
 (com.datastax.driver.core.TransportException: 
 [i-6ac985f7d.c.eng-gce-support.internal/10.240.124.16:9042] Cannot connect))
   at 
 org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderCreationException(HiveIOExceptionHandlerChain.java:97)
   at 
 org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderCreationException(HiveIOExceptionHandlerUtil.java:57)
   at 
 org.apache.hadoop.hive.ql.io.HiveInputFormat.getRecordReader(HiveInputFormat.java:244)
   at 
 org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getRecordReader(CombineHiveInputFormat.java:538)
   at 
 org.apache.hadoop.mapred.MapTask$TrackedRecordReader.init(MapTask.java:197)
   at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:418)
   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:372)
   at org.apache.hadoop.mapred.Child$4.run(Child.java:266)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:415)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
   at org.apache.hadoop.mapred.Child.main(Child.java:260)
 Caused by: java.io.IOException: 
 com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
 tried for query failed (tried: 
 i-6ac985f7d.c.eng-gce-support.internal/10.240.124.16:9042 
 (com.datastax.driver.core.TransportException: 
 [i-6ac985f7d.c.eng-gce-support.internal/10.240.124.16:9042] Cannot connect))
   at 
 org.apache.hadoop.hive.cassandra.cql3.input.HiveCqlInputFormat.getRecordReader(HiveCqlInputFormat.java:206)
   at 
 org.apache.hadoop.hive.ql.io.HiveInputFormat.getRecordReader(HiveInputFormat.java:241)
   ... 9 more
 Caused by: com.datastax.driver.core.exceptions.NoHostAvailableException: All 
 host(s) tried for query failed (tried: 
 i-6ac985f7d.c.eng-gce-support.internal/10.240.124.16:9042 
 (com.datastax.driver.core.TransportException: 
 [i-6ac985f7d.c.eng-gce-support.internal/10.240.124.16:9042] 

[jira] [Commented] (CASSANDRA-8261) Clean up schema metadata classes

2014-12-16 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14248626#comment-14248626
 ] 

Tyler Hobbs commented on CASSANDRA-8261:


+1 on the 8261-isolate-serialization-code-v2.txt patch.

 Clean up schema metadata classes
 

 Key: CASSANDRA-8261
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8261
 Project: Cassandra
  Issue Type: Improvement
Reporter: Aleksey Yeschenko
Assignee: Aleksey Yeschenko
Priority: Minor
 Fix For: 3.0

 Attachments: 8261-isolate-hadcoded-system-tables.txt, 
 8261-isolate-serialization-code-v2.txt, 8261-isolate-serialization-code.txt, 
 8261-isolate-thrift-code.txt


 While working on CASSANDRA-6717, I've made some general cleanup changes to 
 schema metadata classes that are unrelated to its core purpose. Also, being 
 distracted from it by other things, every time I come back to it I get a 
 bit of a rebase hell.
 Thus I'm isolating those changes into a separate issue here, hoping to commit 
 them one by one, before I go back and finalize CASSANDRA-6717.
 The changes include:
 - moving all the toThrift/fromThrift conversion code to ThriftConversion, 
 where it belongs
 - moving the compiled system CFMetaData objects away from CFMetaData (to 
 SystemKeyspace and TracesKeyspace)
 - isolating legacy toSchema/fromSchema code into a separate class 
 (LegacySchemaTables - former DefsTables)
 - refactoring CFMetaData/KSMetaData fields to match CQL CREATE TABLE syntax, 
 and encapsulating more things in 
 CompactionOptions/CompressionOptions/ReplicationOptions classes
 - moving the definition classes to the new 'schema' package





[jira] [Commented] (CASSANDRA-6993) Windows: remove mmap'ed I/O for index files and force standard file access

2014-12-16 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14248620#comment-14248620
 ] 

Benedict commented on CASSANDRA-6993:
-

Might as well normalise the log message from "Non-unix" to "Windows". Otherwise 
+1.

 Windows: remove mmap'ed I/O for index files and force standard file access
 --

 Key: CASSANDRA-6993
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6993
 Project: Cassandra
  Issue Type: Improvement
Reporter: Joshua McKenzie
Assignee: Joshua McKenzie
Priority: Minor
  Labels: Windows
 Fix For: 3.0, 2.1.3

 Attachments: 6993_2.1_v1.txt, 6993_v1.txt, 6993_v2.txt, 6993_v3.txt


 Memory-mapped I/O on Windows causes issues with hard-links; we're unable to 
 delete hard-links to open files with memory-mapped segments even using nio.  
 We'll need to push for close to performance parity between mmap'ed I/O and 
 buffered going forward as the buffered / compressed path offers other 
 benefits.





[jira] [Created] (CASSANDRA-8493) nodetool options parsing doesn't allow -h hostname to be at the end anymore

2014-12-16 Thread Jeremiah Jordan (JIRA)
Jeremiah Jordan created CASSANDRA-8493:
--

 Summary: nodetool options parsing doesn't allow -h hostname to 
be at the end anymore
 Key: CASSANDRA-8493
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8493
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Reporter: Jeremiah Jordan
Priority: Minor


This used to work:

{noformat}
$ ./nodetool ring -h 127.0.0.1
Error: The keyspace 127.0.0.1, does not exist
{noformat}





[jira] [Commented] (CASSANDRA-8156) Fix validation of indexes on composite column components of COMPACT tables

2014-12-16 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14248654#comment-14248654
 ] 

Tyler Hobbs commented on CASSANDRA-8156:


[~slebresne] don't forget to commit this at some point.

 Fix validation of indexes on composite column components of COMPACT tables
 --

 Key: CASSANDRA-8156
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8156
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Denis Angilella
Assignee: Sylvain Lebresne
Priority: Trivial
 Fix For: 2.0.12

 Attachments: 8156.txt


 CASSANDRA-5125 added support of indexes on composite column components for 
 *non-compact* tables (see CASSANDRA-5125 comments for additional information).
 This is a follow up for *compact* tables.
 Using compact tables it is possible to CREATE INDEX on composite primary key 
 columns, but queries return no results for the tests below.
 {code:sql}
 CREATE TABLE users2 (
userID uuid,
fname text,
zip int,
state text,
   PRIMARY KEY ((userID, fname))
 ) WITH COMPACT STORAGE;
 CREATE INDEX ON users2 (userID);
 CREATE INDEX ON users2 (fname);
 INSERT INTO users2 (userID, fname, zip, state) VALUES 
 (b3e3bc33-b237-4b55-9337-3d41de9a5649, 'John', 10007, 'NY');
 -- the following queries return 0 rows, instead of 1 expected
 SELECT * FROM users2 WHERE fname='John'; 
 SELECT * FROM users2 WHERE userid=b3e3bc33-b237-4b55-9337-3d41de9a5649;
 SELECT * FROM users2 WHERE userid=b3e3bc33-b237-4b55-9337-3d41de9a5649 AND 
 fname='John';
 -- dropping 2ndary indexes restore normal behavior
 {code}





cassandra git commit: Fix validation of indexes on composite column components of COMPACT tables

2014-12-16 Thread slebresne
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.0 9dc9185f5 -> 345e69e1c


Fix validation of indexes on composite column components of COMPACT tables

patch by slebresne; reviewed by thobbs for CASSANDRA-8156


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/345e69e1
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/345e69e1
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/345e69e1

Branch: refs/heads/cassandra-2.0
Commit: 345e69e1c197faf73f303034c9a895b1dac65402
Parents: 9dc9185
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Tue Dec 16 20:32:08 2014 +0100
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Tue Dec 16 20:32:08 2014 +0100

--
 CHANGES.txt  | 1 +
 .../apache/cassandra/cql3/statements/CreateIndexStatement.java   | 4 ++--
 2 files changed, 3 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/345e69e1/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 6cecf99..d429f2e 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.0.12:
+ * Fix validation of indexes in COMPACT tables (CASSANDRA-8156)
  * Avoid StackOverflowError when a large list of IN values
is used for a clustering column (CASSANDRA-8410)
  * Fix NPE when writetime() or ttl() calls are wrapped by

http://git-wip-us.apache.org/repos/asf/cassandra/blob/345e69e1/src/java/org/apache/cassandra/cql3/statements/CreateIndexStatement.java
--
diff --git 
a/src/java/org/apache/cassandra/cql3/statements/CreateIndexStatement.java 
b/src/java/org/apache/cassandra/cql3/statements/CreateIndexStatement.java
index e173e8c..5710290 100644
--- a/src/java/org/apache/cassandra/cql3/statements/CreateIndexStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/CreateIndexStatement.java
@@ -85,8 +85,8 @@ public class CreateIndexStatement extends 
SchemaAlteringStatement
 properties.validate();
 
 // TODO: we could lift that limitation
-if (cfm.getCfDef().isCompact && cd.type != 
ColumnDefinition.Type.REGULAR)
-throw new InvalidRequestException(String.format("Secondary index 
on %s column %s is not yet supported for compact table", cd.type, columnName));
+if ((cfm.getCfDef().isCompact || !cfm.getCfDef().isComposite) && 
cd.type != ColumnDefinition.Type.REGULAR)
+throw new InvalidRequestException("Secondary indexes are not 
supported on PRIMARY KEY columns in COMPACT STORAGE tables");
 
 // It would be possible to support 2ndary index on static columns (but 
not without modifications of at least ExtendedFilter and
 // CompositesIndex) and maybe we should, but that means a query like:



[jira] [Created] (CASSANDRA-8494) incremental bootstrap

2014-12-16 Thread Jon Haddad (JIRA)
Jon Haddad created CASSANDRA-8494:
-

 Summary: incremental bootstrap
 Key: CASSANDRA-8494
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8494
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Jon Haddad


Current bootstrapping involves (to my knowledge) picking tokens and streaming 
data before the node is available for requests.  This can be problematic with 
fat nodes, since it may require 20TB of data to be streamed over before the 
machine can serve requests, leaving a massive window of time in which the node 
is not useful.

As a potential approach to mitigate the huge window of time before a node is 
available, I suggest modifying the bootstrap process to only acquire a single 
initial token before being marked UP.  This would likely be a configuration 
parameter incremental_bootstrap or something similar.

After the node is bootstrapped with this one token, it could go into UP state, 
and could then acquire additional tokens (one or a handful at a time), which 
would be streamed over while the node is active and serving requests.  The 
benefit here is that with the default 256 tokens a node could become an active 
part of the cluster with less than 1% of its final data streamed over.





[jira] [Commented] (CASSANDRA-8234) CTAS for COPY

2014-12-16 Thread Jon Haddad (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14248756#comment-14248756
 ] 

Jon Haddad commented on CASSANDRA-8234:
---

This is now solved pretty trivially with Spark.  I've got a trivial example 
here: 
https://github.com/rustyrazorblade/learn-spark-cassandra/blob/master/src/main/scala/DataMigration.scala

 CTAS for COPY
 -

 Key: CASSANDRA-8234
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8234
 Project: Cassandra
  Issue Type: New Feature
  Components: Tools
Reporter: Robin Schumacher
 Fix For: 3.1


 Continuous request from users is the ability to do CREATE TABLE AS SELECT... 
 The COPY command can be enhanced to perform simple and customized copies of 
 existing tables to satisfy the need. 
 - Simple copy is COPY table a TO new table b.
 - Custom copy can mimic Postgres: (e.g. COPY (SELECT * FROM country WHERE 
 country_name LIKE 'A%') TO …)





[jira] [Comment Edited] (CASSANDRA-8234) CTAS for COPY

2014-12-16 Thread Jon Haddad (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14248756#comment-14248756
 ] 

Jon Haddad edited comment on CASSANDRA-8234 at 12/16/14 7:35 PM:
-

This is now solved pretty trivially with Spark.  I've got an example here: 
https://github.com/rustyrazorblade/learn-spark-cassandra/blob/master/src/main/scala/DataMigration.scala


was (Author: rustyrazorblade):
This is now solved pretty trivially with Spark.  I've got a trivial example 
here: 
https://github.com/rustyrazorblade/learn-spark-cassandra/blob/master/src/main/scala/DataMigration.scala

 CTAS for COPY
 -

 Key: CASSANDRA-8234
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8234
 Project: Cassandra
  Issue Type: New Feature
  Components: Tools
Reporter: Robin Schumacher
 Fix For: 3.1


 Continuous request from users is the ability to do CREATE TABLE AS SELECT... 
 The COPY command can be enhanced to perform simple and customized copies of 
 existing tables to satisfy the need. 
 - Simple copy is COPY table a TO new table b.
 - Custom copy can mimic Postgres: (e.g. COPY (SELECT * FROM country WHERE 
 country_name LIKE 'A%') TO …)





[1/2] cassandra git commit: Fix validation of indexes on composite column components of COMPACT tables

2014-12-16 Thread slebresne
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 d66ae7538 -> b1b9cba2a


Fix validation of indexes on composite column components of COMPACT tables

patch by slebresne; reviewed by thobbs for CASSANDRA-8156


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/345e69e1
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/345e69e1
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/345e69e1

Branch: refs/heads/cassandra-2.1
Commit: 345e69e1c197faf73f303034c9a895b1dac65402
Parents: 9dc9185
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Tue Dec 16 20:32:08 2014 +0100
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Tue Dec 16 20:32:08 2014 +0100

--
 CHANGES.txt  | 1 +
 .../apache/cassandra/cql3/statements/CreateIndexStatement.java   | 4 ++--
 2 files changed, 3 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/345e69e1/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 6cecf99..d429f2e 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.0.12:
+ * Fix validation of indexes in COMPACT tables (CASSANDRA-8156)
  * Avoid StackOverflowError when a large list of IN values
is used for a clustering column (CASSANDRA-8410)
  * Fix NPE when writetime() or ttl() calls are wrapped by

http://git-wip-us.apache.org/repos/asf/cassandra/blob/345e69e1/src/java/org/apache/cassandra/cql3/statements/CreateIndexStatement.java
--
diff --git 
a/src/java/org/apache/cassandra/cql3/statements/CreateIndexStatement.java 
b/src/java/org/apache/cassandra/cql3/statements/CreateIndexStatement.java
index e173e8c..5710290 100644
--- a/src/java/org/apache/cassandra/cql3/statements/CreateIndexStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/CreateIndexStatement.java
@@ -85,8 +85,8 @@ public class CreateIndexStatement extends 
SchemaAlteringStatement
 properties.validate();
 
 // TODO: we could lift that limitation
-if (cfm.getCfDef().isCompact && cd.type != 
ColumnDefinition.Type.REGULAR)
-throw new InvalidRequestException(String.format("Secondary index 
on %s column %s is not yet supported for compact table", cd.type, columnName));
+if ((cfm.getCfDef().isCompact || !cfm.getCfDef().isComposite) && 
cd.type != ColumnDefinition.Type.REGULAR)
+throw new InvalidRequestException("Secondary indexes are not 
supported on PRIMARY KEY columns in COMPACT STORAGE tables");
 
 // It would be possible to support 2ndary index on static columns (but 
not without modifications of at least ExtendedFilter and
 // CompositesIndex) and maybe we should, but that means a query like:



[jira] [Comment Edited] (CASSANDRA-8234) CTAS for COPY

2014-12-16 Thread Jon Haddad (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14248756#comment-14248756
 ] 

Jon Haddad edited comment on CASSANDRA-8234 at 12/16/14 7:35 PM:
-

This is solved pretty trivially with Spark.  I've got an example here: 
https://github.com/rustyrazorblade/learn-spark-cassandra/blob/master/src/main/scala/DataMigration.scala


was (Author: rustyrazorblade):
This is now solved pretty trivially with Spark.  I've got an example here: 
https://github.com/rustyrazorblade/learn-spark-cassandra/blob/master/src/main/scala/DataMigration.scala

 CTAS for COPY
 -

 Key: CASSANDRA-8234
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8234
 Project: Cassandra
  Issue Type: New Feature
  Components: Tools
Reporter: Robin Schumacher
 Fix For: 3.1


 Continuous request from users is the ability to do CREATE TABLE AS SELECT... 
 The COPY command can be enhanced to perform simple and customized copies of 
 existing tables to satisfy the need. 
 - Simple copy is COPY table a TO new table b.
 - Custom copy can mimic Postgres: (e.g. COPY (SELECT * FROM country WHERE 
 country_name LIKE 'A%') TO …)





[2/2] cassandra git commit: Merge branch 'cassandra-2.0' into cassandra-2.1

2014-12-16 Thread slebresne
Merge branch 'cassandra-2.0' into cassandra-2.1

Conflicts:
CHANGES.txt
src/java/org/apache/cassandra/cql3/statements/CreateIndexStatement.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b1b9cba2
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b1b9cba2
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b1b9cba2

Branch: refs/heads/cassandra-2.1
Commit: b1b9cba2a57878a104ab31854e1be6293fbd71ee
Parents: d66ae75 345e69e
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Tue Dec 16 20:36:10 2014 +0100
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Tue Dec 16 20:36:10 2014 +0100

--
 CHANGES.txt  | 1 +
 .../apache/cassandra/cql3/statements/CreateIndexStatement.java   | 4 ++--
 2 files changed, 3 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/b1b9cba2/CHANGES.txt
--
diff --cc CHANGES.txt
index 142d5aa,d429f2e..964de54
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,32 -1,5 +1,33 @@@
 -2.0.12:
 +2.1.3
 + * Add auth support to cassandra-stress (CASSANDRA-7985)
 + * Fix ArrayIndexOutOfBoundsException when generating error message
 +   for some CQL syntax errors (CASSANDRA-8455)
 + * Scale memtable slab allocation logarithmically (CASSANDRA-7882)
 + * cassandra-stress simultaneous inserts over same seed (CASSANDRA-7964)
 + * Reduce cassandra-stress sampling memory requirements (CASSANDRA-7926)
 + * Ensure memtable flush cannot expire commit log entries from its future 
(CASSANDRA-8383)
 + * Make read defrag async to reclaim memtables (CASSANDRA-8459)
 + * Remove tmplink files for offline compactions (CASSANDRA-8321)
 + * Reduce maxHintsInProgress (CASSANDRA-8415)
 + * BTree updates may call provided update function twice (CASSANDRA-8018)
 + * Release sstable references after anticompaction (CASSANDRA-8386)
 + * Handle abort() in SSTableRewriter properly (CASSANDRA-8320)
 + * Fix high size calculations for prepared statements (CASSANDRA-8231)
 + * Centralize shared executors (CASSANDRA-8055)
 + * Fix filtering for CONTAINS (KEY) relations on frozen collection
 +   clustering columns when the query is restricted to a single
 +   partition (CASSANDRA-8203)
 + * Do more aggressive entire-sstable TTL expiry checks (CASSANDRA-8243)
 + * Add more log info if readMeter is null (CASSANDRA-8238)
 + * add check of the system wall clock time at startup (CASSANDRA-8305)
 + * Support for frozen collections (CASSANDRA-7859)
 + * Fix overflow on histogram computation (CASSANDRA-8028)
 + * Have paxos reuse the timestamp generation of normal queries 
(CASSANDRA-7801)
 + * Fix incremental repair not remove parent session on remote (CASSANDRA-8291)
 + * Improve JBOD disk utilization (CASSANDRA-7386)
 + * Log failed host when preparing incremental repair (CASSANDRA-8228)
 +Merged from 2.0:
+  * Fix validation of indexes in COMPACT tables (CASSANDRA-8156)
   * Avoid StackOverflowError when a large list of IN values
 is used for a clustering column (CASSANDRA-8410)
   * Fix NPE when writetime() or ttl() calls are wrapped by

http://git-wip-us.apache.org/repos/asf/cassandra/blob/b1b9cba2/src/java/org/apache/cassandra/cql3/statements/CreateIndexStatement.java
--
diff --cc 
src/java/org/apache/cassandra/cql3/statements/CreateIndexStatement.java
index 3032897,5710290..ac19f5c
--- a/src/java/org/apache/cassandra/cql3/statements/CreateIndexStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/CreateIndexStatement.java
@@@ -119,8 -85,8 +119,8 @@@ public class CreateIndexStatement exten
  properties.validate();
  
  // TODO: we could lift that limitation
- if (cfm.comparator.isDense() && cd.kind != ColumnDefinition.Kind.REGULAR)
- throw new InvalidRequestException(String.format("Secondary index on %s column %s is not yet supported for compact table", cd.kind, target.column));
 -if ((cfm.getCfDef().isCompact || !cfm.getCfDef().isComposite) && cd.type != ColumnDefinition.Type.REGULAR)
++if ((cfm.comparator.isDense() || !cfm.comparator.isCompound()) && cd.kind != ColumnDefinition.Kind.REGULAR)
+ throw new InvalidRequestException("Secondary indexes are not supported on PRIMARY KEY columns in COMPACT STORAGE tables");
  
  // It would be possible to support 2ndary index on static columns 
(but not without modifications of at least ExtendedFilter and
  // CompositesIndex) and maybe we should, but that means a query like:



[3/3] cassandra git commit: Merge branch 'cassandra-2.1' into trunk

2014-12-16 Thread slebresne
Merge branch 'cassandra-2.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/88478c6a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/88478c6a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/88478c6a

Branch: refs/heads/trunk
Commit: 88478c6a6098d6150cd7982c6949ffe28e5873a0
Parents: 847fef8 b1b9cba
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Tue Dec 16 20:37:18 2014 +0100
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Tue Dec 16 20:37:18 2014 +0100

--
 CHANGES.txt  | 1 +
 .../apache/cassandra/cql3/statements/CreateIndexStatement.java   | 4 ++--
 2 files changed, 3 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/88478c6a/CHANGES.txt
--



[1/3] cassandra git commit: Fix validation of indexes on composite column components of COMPACT tables

2014-12-16 Thread slebresne
Repository: cassandra
Updated Branches:
  refs/heads/trunk 847fef833 -> 88478c6a6


Fix validation of indexes on composite column components of COMPACT tables

patch by slebresne; reviewed by thobbs for CASSANDRA-8156


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/345e69e1
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/345e69e1
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/345e69e1

Branch: refs/heads/trunk
Commit: 345e69e1c197faf73f303034c9a895b1dac65402
Parents: 9dc9185
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Tue Dec 16 20:32:08 2014 +0100
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Tue Dec 16 20:32:08 2014 +0100

--
 CHANGES.txt  | 1 +
 .../apache/cassandra/cql3/statements/CreateIndexStatement.java   | 4 ++--
 2 files changed, 3 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/345e69e1/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 6cecf99..d429f2e 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.0.12:
+ * Fix validation of indexes in COMPACT tables (CASSANDRA-8156)
  * Avoid StackOverflowError when a large list of IN values
is used for a clustering column (CASSANDRA-8410)
  * Fix NPE when writetime() or ttl() calls are wrapped by

http://git-wip-us.apache.org/repos/asf/cassandra/blob/345e69e1/src/java/org/apache/cassandra/cql3/statements/CreateIndexStatement.java
--
diff --git 
a/src/java/org/apache/cassandra/cql3/statements/CreateIndexStatement.java 
b/src/java/org/apache/cassandra/cql3/statements/CreateIndexStatement.java
index e173e8c..5710290 100644
--- a/src/java/org/apache/cassandra/cql3/statements/CreateIndexStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/CreateIndexStatement.java
@@ -85,8 +85,8 @@ public class CreateIndexStatement extends 
SchemaAlteringStatement
 properties.validate();
 
 // TODO: we could lift that limitation
-if (cfm.getCfDef().isCompact && cd.type != ColumnDefinition.Type.REGULAR)
-throw new InvalidRequestException(String.format("Secondary index on %s column %s is not yet supported for compact table", cd.type, columnName));
+if ((cfm.getCfDef().isCompact || !cfm.getCfDef().isComposite) && cd.type != ColumnDefinition.Type.REGULAR)
+throw new InvalidRequestException("Secondary indexes are not supported on PRIMARY KEY columns in COMPACT STORAGE tables");
 
 // It would be possible to support 2ndary index on static columns (but 
not without modifications of at least ExtendedFilter and
 // CompositesIndex) and maybe we should, but that means a query like:
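The effect of widening the condition can be sanity-checked with a small truth-table sketch in plain Java. The class and method names below are ours, not Cassandra's; `isRegular` stands in for `cd.type == ColumnDefinition.Type.REGULAR`, and the two predicates mirror the pre- and post-patch checks from the hunk above:

```java
public class IndexValidationSketch
{
    // Pre-patch 2.0 check: reject index creation on a PRIMARY KEY
    // column only when the table is compact.
    static boolean oldRejects(boolean isCompact, boolean isRegular)
    {
        return isCompact && !isRegular;
    }

    // Patched check: also reject non-composite layouts, i.e. all
    // COMPACT STORAGE tables, when the column is part of the PRIMARY KEY.
    static boolean newRejects(boolean isCompact, boolean isComposite, boolean isRegular)
    {
        return (isCompact || !isComposite) && !isRegular;
    }

    public static void main(String[] args)
    {
        // The bug: a non-compact, non-composite table with an index on a
        // PRIMARY KEY column slipped through the old validation.
        System.out.println("old rejects: " + oldRejects(false, false));          // false (missed)
        System.out.println("new rejects: " + newRejects(false, false, false));   // true  (now rejected)
    }
}
```

The only input combination on which the two predicates disagree for a PRIMARY KEY column is exactly the non-compact, non-composite case this patch targets.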



[2/3] cassandra git commit: Merge branch 'cassandra-2.0' into cassandra-2.1

2014-12-16 Thread slebresne
Merge branch 'cassandra-2.0' into cassandra-2.1

Conflicts:
CHANGES.txt
src/java/org/apache/cassandra/cql3/statements/CreateIndexStatement.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b1b9cba2
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b1b9cba2
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b1b9cba2

Branch: refs/heads/trunk
Commit: b1b9cba2a57878a104ab31854e1be6293fbd71ee
Parents: d66ae75 345e69e
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Tue Dec 16 20:36:10 2014 +0100
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Tue Dec 16 20:36:10 2014 +0100

--
 CHANGES.txt  | 1 +
 .../apache/cassandra/cql3/statements/CreateIndexStatement.java   | 4 ++--
 2 files changed, 3 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/b1b9cba2/CHANGES.txt
--
diff --cc CHANGES.txt
index 142d5aa,d429f2e..964de54
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,32 -1,5 +1,33 @@@
 -2.0.12:
 +2.1.3
 + * Add auth support to cassandra-stress (CASSANDRA-7985)
 + * Fix ArrayIndexOutOfBoundsException when generating error message
 +   for some CQL syntax errors (CASSANDRA-8455)
 + * Scale memtable slab allocation logarithmically (CASSANDRA-7882)
 + * cassandra-stress simultaneous inserts over same seed (CASSANDRA-7964)
 + * Reduce cassandra-stress sampling memory requirements (CASSANDRA-7926)
 + * Ensure memtable flush cannot expire commit log entries from its future 
(CASSANDRA-8383)
 + * Make read defrag async to reclaim memtables (CASSANDRA-8459)
 + * Remove tmplink files for offline compactions (CASSANDRA-8321)
 + * Reduce maxHintsInProgress (CASSANDRA-8415)
 + * BTree updates may call provided update function twice (CASSANDRA-8018)
 + * Release sstable references after anticompaction (CASSANDRA-8386)
 + * Handle abort() in SSTableRewriter properly (CASSANDRA-8320)
 + * Fix high size calculations for prepared statements (CASSANDRA-8231)
 + * Centralize shared executors (CASSANDRA-8055)
 + * Fix filtering for CONTAINS (KEY) relations on frozen collection
 +   clustering columns when the query is restricted to a single
 +   partition (CASSANDRA-8203)
 + * Do more aggressive entire-sstable TTL expiry checks (CASSANDRA-8243)
 + * Add more log info if readMeter is null (CASSANDRA-8238)
 + * add check of the system wall clock time at startup (CASSANDRA-8305)
 + * Support for frozen collections (CASSANDRA-7859)
 + * Fix overflow on histogram computation (CASSANDRA-8028)
 + * Have paxos reuse the timestamp generation of normal queries 
(CASSANDRA-7801)
 + * Fix incremental repair not remove parent session on remote (CASSANDRA-8291)
 + * Improve JBOD disk utilization (CASSANDRA-7386)
 + * Log failed host when preparing incremental repair (CASSANDRA-8228)
 +Merged from 2.0:
+  * Fix validation of indexes in COMPACT tables (CASSANDRA-8156)
   * Avoid StackOverflowError when a large list of IN values
 is used for a clustering column (CASSANDRA-8410)
   * Fix NPE when writetime() or ttl() calls are wrapped by

http://git-wip-us.apache.org/repos/asf/cassandra/blob/b1b9cba2/src/java/org/apache/cassandra/cql3/statements/CreateIndexStatement.java
--
diff --cc 
src/java/org/apache/cassandra/cql3/statements/CreateIndexStatement.java
index 3032897,5710290..ac19f5c
--- a/src/java/org/apache/cassandra/cql3/statements/CreateIndexStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/CreateIndexStatement.java
@@@ -119,8 -85,8 +119,8 @@@ public class CreateIndexStatement exten
  properties.validate();
  
  // TODO: we could lift that limitation
- if (cfm.comparator.isDense() && cd.kind != ColumnDefinition.Kind.REGULAR)
- throw new InvalidRequestException(String.format("Secondary index on %s column %s is not yet supported for compact table", cd.kind, target.column));
 -if ((cfm.getCfDef().isCompact || !cfm.getCfDef().isComposite) && cd.type != ColumnDefinition.Type.REGULAR)
++if ((cfm.comparator.isDense() || !cfm.comparator.isCompound()) && cd.kind != ColumnDefinition.Kind.REGULAR)
+ throw new InvalidRequestException("Secondary indexes are not supported on PRIMARY KEY columns in COMPACT STORAGE tables");
  
  // It would be possible to support 2ndary index on static columns 
(but not without modifications of at least ExtendedFilter and
  // CompositesIndex) and maybe we should, but that means a query like:



[jira] [Commented] (CASSANDRA-8234) CTAS for COPY

2014-12-16 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14248763#comment-14248763
 ] 

Aleksey Yeschenko commented on CASSANDRA-8234:
--

[~rustyrazorblade] Using Spark is the plan, yes.

 CTAS for COPY
 -

 Key: CASSANDRA-8234
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8234
 Project: Cassandra
  Issue Type: New Feature
  Components: Tools
Reporter: Robin Schumacher
 Fix For: 3.1


 Continuous request from users is the ability to do CREATE TABLE AS SELECT... 
 The COPY command can be enhanced to perform simple and customized copies of 
 existing tables to satisfy the need. 
 - Simple copy is COPY table a TO new table b.
 - Custom copy can mimic Postgres: (e.g. COPY (SELECT * FROM country WHERE 
 country_name LIKE 'A%') TO …)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8234) CTAS for COPY

2014-12-16 Thread Robin Schumacher (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14248788#comment-14248788
 ] 

Robin Schumacher commented on CASSANDRA-8234:
-

Question for clarity: A DBA coming from Oracle, MySQL, etc., wants to do a CTAS 
in C*. Is step 1 "set up Spark node/cluster", followed by step 2, which is 
"write Scala routine"? 

 CTAS for COPY
 -

 Key: CASSANDRA-8234
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8234
 Project: Cassandra
  Issue Type: New Feature
  Components: Tools
Reporter: Robin Schumacher
 Fix For: 3.1


 Continuous request from users is the ability to do CREATE TABLE AS SELECT... 
 The COPY command can be enhanced to perform simple and customized copies of 
 existing tables to satisfy the need. 
 - Simple copy is COPY table a TO new table b.
 - Custom copy can mimic Postgres: (e.g. COPY (SELECT * FROM country WHERE 
 country_name LIKE 'A%') TO …)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8234) CTAS for COPY

2014-12-16 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14248796#comment-14248796
 ] 

Aleksey Yeschenko commented on CASSANDRA-8234:
--

[~schumacr] I meant using Spark internally. Once we start bundling it with 
Cassandra itself, it's only a matter of providing a nice frontend to it. One 
not involving writing Scala (:

 CTAS for COPY
 -

 Key: CASSANDRA-8234
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8234
 Project: Cassandra
  Issue Type: New Feature
  Components: Tools
Reporter: Robin Schumacher
 Fix For: 3.1


 Continuous request from users is the ability to do CREATE TABLE AS SELECT... 
 The COPY command can be enhanced to perform simple and customized copies of 
 existing tables to satisfy the need. 
 - Simple copy is COPY table a TO new table b.
 - Custom copy can mimic Postgres: (e.g. COPY (SELECT * FROM country WHERE 
 country_name LIKE 'A%') TO …)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-7713) CommitLogTest failure causes cascading unit test failures

2014-12-16 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7713?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-7713:
---
Reviewer: Joshua McKenzie

 CommitLogTest failure causes cascading unit test failures
 -

 Key: CASSANDRA-7713
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7713
 Project: Cassandra
  Issue Type: Test
Reporter: Michael Shuler
Assignee: Bogdan Kanivets
 Fix For: 2.0.12

 Attachments: CommitLogTest.system.log.txt, cassandra-2.0-7713.txt


 When CommitLogTest.testCommitFailurePolicy_stop fails or times out, 
 {{commitDir.setWritable(true)}} is never reached, so the 
 build/test/cassandra/commitlog directory is left without write permissions, 
 causing cascading failure of all subsequent tests.
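The fix amounts to performing the permission restore in a {{finally}} block so it runs whether the test body succeeds, fails, or times out. A minimal, self-contained sketch of that pattern (the names `runWithReadOnlyDir` and `demo` are ours, not the actual test's):

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;

public class CommitDirGuard
{
    // Revoke write permission for the duration of the test body, but
    // always restore it -- the restore is exactly what the original
    // test skips when it fails or times out before reaching it.
    static void runWithReadOnlyDir(File commitDir, Runnable testBody)
    {
        commitDir.setWritable(false);
        try
        {
            testBody.run();
        }
        finally
        {
            commitDir.setWritable(true);
        }
    }

    static boolean demo()
    {
        try
        {
            File dir = Files.createTempDirectory("commitlog-test").toFile();
            dir.deleteOnExit();
            boolean failed = false;
            try
            {
                runWithReadOnlyDir(dir, () -> { throw new RuntimeException("simulated failure"); });
            }
            catch (RuntimeException expected)
            {
                failed = true;
            }
            // Despite the failure, the directory is writable for later tests.
            return failed && dir.canWrite();
        }
        catch (IOException e)
        {
            return false;
        }
    }

    public static void main(String[] args)
    {
        System.out.println("writable after failed test: " + demo());
    }
}
```

(Note that `File.setWritable(false)` is advisory on some platforms, e.g. for directories on Windows or when running as root, but the structural point -- cleanup in `finally` -- holds regardless.)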



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8253) cassandra-stress 2.1 doesn't support LOCAL_ONE

2014-12-16 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8253?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-8253:
---
Reviewer: T Jake Luciani

 cassandra-stress 2.1 doesn't support LOCAL_ONE
 --

 Key: CASSANDRA-8253
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8253
 Project: Cassandra
  Issue Type: Bug
Reporter: J.B. Langston
Assignee: Liang Xie
Priority: Minor
 Fix For: 2.1.3

 Attachments: CASSANDRA-8253.txt


 Looks like a simple oversight in argument parsing:
 ➜  bin  ./cassandra-stress write cl=LOCAL_ONE
 Invalid value LOCAL_ONE; must match pattern 
 ONE|QUORUM|LOCAL_QUORUM|EACH_QUORUM|ALL|ANY
 Also, CASSANDRA-7077 argues that it should be using LOCAL_ONE by default.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8484) Value of variable JVM_OPTS is duplicated in command line arguments

2014-12-16 Thread Andrey Trubachev (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Trubachev updated CASSANDRA-8484:

Description: 
For example:
{noformat}
$ ps aux | grep cassandra
cassand+   322  100 27.3 14942216 8995592 ?SLl  16:28 117:30 java -ea 
-javaagent:/usr/share/cassandra/lib/jamm-0.2.8.jar 
-XX:+CMSClassUnloadingEnabled -XX:+UseThreadPriorities 
-XX:ThreadPriorityPolicy=42 -Xms8043M -Xmx8043M -Xmn1600M 
-XX:+HeapDumpOnOutOfMemoryError -Xss256k -XX:StringTableSize=103 
-XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:+CMSParallelRemarkEnabled 
-XX:SurvivorRatio=8 -XX:MaxTenuringThreshold=1 
-XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly 
-XX:+UseTLAB -XX:CompileCommandFile=/hotspot_compiler -XX:CMSWaitDuration=1 
-XX:+UseCondCardMark -Djava.net.preferIPv6Addresses=true 
-Dcom.sun.management.jmxremote.port=7199 
-Dcom.sun.management.jmxremote.rmi.port=7199 
-Dcom.sun.management.jmxremote.ssl=false 
-Dcom.sun.management.jmxremote.authenticate=false 
-Dcassandra.metricsReporterConfigFile=/etc/vs/cassandra/graphite.yaml -ea 
-javaagent:/usr/share/cassandra/lib/jamm-0.2.8.jar 
-XX:+CMSClassUnloadingEnabled -XX:+UseThreadPriorities 
-XX:ThreadPriorityPolicy=42 -Xms8043M -Xmx8043M -Xmn1600M 
-XX:+HeapDumpOnOutOfMemoryError -Xss256k -XX:StringTableSize=103 
-XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:+CMSParallelRemarkEnabled 
-XX:SurvivorRatio=8 -XX:MaxTenuringThreshold=1 
-XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly 
-XX:+UseTLAB -XX:CompileCommandFile=/etc/cassandra/hotspot_compiler 
-XX:CMSWaitDuration=1 -XX:+UseCondCardMark 
-Djava.net.preferIPv6Addresses=true -Dcom.sun.management.jmxremote.port=7199 
-Dcom.sun.management.jmxremote.rmi.port=7199 
-Dcom.sun.management.jmxremote.ssl=false 
-Dcom.sun.management.jmxremote.authenticate=false 
-Dcassandra.metricsReporterConfigFile=/etc/vs/cassandra/graphite.yaml 
-Dlogback.configurationFile=logback.xml -Dcassandra.logdir=/var/log/cassandra 
-Dcassandra.storagedir= -Dcassandra-pidfile=/var/run/cassandra/cassandra.pid 
-cp 
/etc/cassandra:/usr/share/cassandra/lib/airline-0.6.jar:/usr/share/cassandra/lib/antlr-runtime-3.5.2.jar:/usr/share/cassandra/lib/commons-cli-1.1.jar:/usr/share/cassandra/lib/commons-codec-1.2.jar:/usr/share/cassandra/lib/commons-lang3-3.1.jar:/usr/share/cassandra/lib/commons-math3-3.2.jar:/usr/share/cassandra/lib/compress-lzf-0.8.4.jar:/usr/share/cassandra/lib/concurrentlinkedhashmap-lru-1.4.jar:/usr/share/cassandra/lib/disruptor-3.0.1.jar:/usr/share/cassandra/lib/guava-16.0.jar:/usr/share/cassandra/lib/high-scale-lib-1.0.6.jar:/usr/share/cassandra/lib/jackson-core-asl-1.9.2.jar:/usr/share/cassandra/lib/jackson-mapper-asl-1.9.2.jar:/usr/share/cassandra/lib/jamm-0.2.8.jar:/usr/share/cassandra/lib/javax.inject.jar:/usr/share/cassandra/lib/jbcrypt-0.3m.jar:/usr/share/cassandra/lib/jline-1.0.jar:/usr/share/cassandra/lib/jna-4.0.0.jar:/usr/share/cassandra/lib/json-simple-1.1.jar:/usr/share/cassandra/lib/libthrift-0.9.1.jar:/usr/share/cassandra/lib/logback-classic-1.1.2.jar:/usr/share/cassandra/lib/logback-core-1.1.2.jar:/usr/share/cassandra/lib/lz4-1.2.0.jar:/usr/share/cassandra/lib/metrics-core-2.2.0.jar:/usr/share/cassandra/lib/metrics-graphite-2.2.0.jar:/usr/share/cassandra/lib/netty-all-4.0.23.Final.jar:/usr/share/cassandra/lib/reporter-config-2.2.0-SNAPSHOT.jar:/usr/share/cassandra/lib/slf4j-api-1.7.2.jar:/usr/share/cassandra/lib/snakeyaml-1.11.jar:/usr/share/cassandra/lib/snappy-java-1.0.5.2.jar:/usr/share/cassandra/lib/stream-2.5.2.jar:/usr/share/cassandra/lib/stringtemplate-4.0.2.jar:/usr/share/cassandra/lib/super-csv-2.1.0.jar:/usr/share/cassandra/lib/thrift-server-0.3.7.jar:/usr/share/cassandra/apache-cassandra-2.1.2.jar:/usr/share/cassandra/apache-cassandra.jar:/usr/share/cassandra/apache-cassandra-thrift-2.1.2.jar:/usr/share/cassandra/cassandra-driver-core-2.0.5.jar:/usr/share/cassandra/netty-3.9.0.Final.jar:/usr/share/cassandra/stress.jar:
 -XX:HeapDumpPath=/var/lib/cassandra/java_1418650085.hprof 
-XX:ErrorFile=/var/lib/cassandra/hs_err_1418650085.log 
org.apache.cassandra.service.CassandraDaemon
{noformat}
Variable JVM_OPTS isn't cleaned up in cassandra-env.sh.
And cassandra-env.sh is called twice: first time in /etc/init.d/cassandra and 
second time in /usr/sbin/cassandra.
{noformat}
$ fgrep cassandra-env.sh /etc/init.d/cassandra /usr/sbin/cassandra
/etc/init.d/cassandra:[ -e /etc/cassandra/cassandra-env.sh ] || exit 0
/etc/init.d/cassandra:. /etc/cassandra/cassandra-env.sh
/usr/sbin/cassandra:if [ -f $CASSANDRA_CONF/cassandra-env.sh ]; then
/usr/sbin/cassandra:. $CASSANDRA_CONF/cassandra-env.sh
{noformat}


  was:
For example:
{noformat}
$ ps aux | grep cassandra
cassand+   322  100 27.3 14942216 8995592 ?SLl  16:28 117:30 java -ea 
-javaagent:/usr/share/cassandra/lib/jamm-0.2.8.jar 
-XX:+CMSClassUnloadingEnabled -XX:+UseThreadPriorities 

[jira] [Commented] (CASSANDRA-8234) CTAS for COPY

2014-12-16 Thread Robin Schumacher (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14248833#comment-14248833
 ] 

Robin Schumacher commented on CASSANDRA-8234:
-

Ah, got it, thanks!

 CTAS for COPY
 -

 Key: CASSANDRA-8234
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8234
 Project: Cassandra
  Issue Type: New Feature
  Components: Tools
Reporter: Robin Schumacher
 Fix For: 3.1


 Continuous request from users is the ability to do CREATE TABLE AS SELECT... 
 The COPY command can be enhanced to perform simple and customized copies of 
 existing tables to satisfy the need. 
 - Simple copy is COPY table a TO new table b.
 - Custom copy can mimic Postgres: (e.g. COPY (SELECT * FROM country WHERE 
 country_name LIKE 'A%') TO …)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8473) Secondary index support for key-value pairs in CQL3 maps

2014-12-16 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14248854#comment-14248854
 ] 

Tyler Hobbs commented on CASSANDRA-8473:


Thanks for the quick turnaround on the patch.  Most of your changes and 
comments are good, I'll just comment on a few things here:

bq. Added a test. I think trunk is okay (although the error message is 
imperfect: Invalid STRING constant (foo) for myfrozenmap of type 
frozen<map<text, text>>). The 2.1 patch incorrectly gave no error for queries 
with this condition; that's been fixed.

Hmm, that error message indicates that the logic for frozen collections in 
{{toReceivers()}} still isn't quite correct.  I think it needs to specifically 
check for map-entry relations on frozen collections.  Right now, it's falling 
through to the default behavior of {{return 
Collections.singletonList(receiver);}}, so it's expecting a full collection as 
the value in the relation.

bq. The block has been removed. Note that it was an attempt to preserve the 
logic that existed in the MAP case previously if expr.isContainsKey() were 
false; since the only kind of expressions apart from CONTAINS KEY that were 
valid for map columns were CONTAINS relations, this seemed like the right 
choice. But your analysis appears to be correct. Is there some other way that 
code would have been reachable? (It looks like the code may have been intended 
to check map\[key\] = value conditions, but AFAIK there would have been no way 
to trigger its execution.)

You're right.  Thinking about this method a little more, the only things that 
should be handled after the {{isContains()}} check should be CONTAINS KEY and 
{{map\[key\] = value}}.  Frozen collections aren't tested with this method, and 
lists and sets only support CONTAINS.  So you could go ahead and simplify this 
whole method to reflect that.  (I think I may have failed to remove old code 
here when I added support for frozen collections.)

bq. Corrected both error messages. It's not obvious to me how to reorganize the 
switch to make it clearer (very likely because I wrote it) – did you have 
something specific in mind?

I think this could replace the switch statement (although I haven't tested it):
{code}
if (isFrozenCollection)
{
    if (target.type != IndexTarget.TargetType.FULL)
        throw new InvalidRequestException("Frozen collections currently only support full-collection indexes. " +
                                          "For example, 'CREATE INDEX ON table(full(columnName))'.");
}
else
{
    if (target.type == IndexTarget.TargetType.FULL)
        throw new InvalidRequestException("full() indexes can only be created on frozen collections");

    if (!cd.type.isMultiCell())
        throw new InvalidRequestException(String.format("Cannot create index on %s of column %s; only non-frozen collections support %s indexes",
                                                        target.type, target.column, target.type));

    if (target.type == IndexTarget.TargetType.KEYS || target.type == IndexTarget.TargetType.KEYS_AND_VALUES)
    {
        if (!isMap)
            throw new InvalidRequestException(String.format("Cannot create index on %s of column %s with non-map type",
                                                            target.type, target.column));
    }
}
{code}

bq.  I've also refactored the code for clarity; if you still think it needs 
comments, I'll certainly add them.

It looks perfectly clear as-is, thanks.

bq. I'm not positive the abstractions are quite right (is 
CompositesIndexOnCollectionKeyAndValue really a kind of 
CompositesIndexOnCollectionKey? Would it be better to use a common superclass?

I would be okay with a common abstract superclass.  I agree that the 
abstraction doesn't line up 100%.

Regarding 2.1, I understand your desire to not have to run a patched version of 
C*, but we have to weigh that against potentially destabilizing 2.1 for other 
users.  My preference is still to stick with a 3.0 target.  Maybe [~slebresne] 
can weigh in here?

 Secondary index support for key-value pairs in CQL3 maps
 

 Key: CASSANDRA-8473
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8473
 Project: Cassandra
  Issue Type: Improvement
Reporter: Samuel Klock
Assignee: Samuel Klock
 Fix For: 3.0

 Attachments: cassandra-2.1-8473-actual-v1.txt, 
 cassandra-2.1-8473-v2.txt, cassandra-2.1-8473.txt, trunk-8473-v2.txt


 CASSANDRA-4511 and CASSANDRA-6383 made substantial progress on secondary 
 indexes on CQL3 maps, but support for a natural use case is still missing: 
 queries to find rows with map columns containing some key-value 

cassandra git commit: Add missing consistency levels to cassandra-stress

2014-12-16 Thread jake
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 b1b9cba2a -> 93f679e00


Add missing consistency levels to cassandra-stress

Patch by Liang Xie; Reviewed by tjake for CASSANDRA-8253


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/93f679e0
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/93f679e0
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/93f679e0

Branch: refs/heads/cassandra-2.1
Commit: 93f679e00f0dbd81972acf641c5a47f3ee4727ee
Parents: b1b9cba
Author: T Jake Luciani j...@apache.org
Authored: Tue Dec 16 15:24:40 2014 -0500
Committer: T Jake Luciani j...@apache.org
Committed: Tue Dec 16 15:24:40 2014 -0500

--
 CHANGES.txt| 1 +
 .../org/apache/cassandra/stress/settings/SettingsCommand.java  | 2 +-
 .../src/org/apache/cassandra/stress/util/JavaDriverClient.java | 6 ++
 3 files changed, 8 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/93f679e0/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 964de54..6c0f2da 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.3
+ * Add missing ConsistencyLevels to cassandra-stress (CASSANDRA-8253)
  * Add auth support to cassandra-stress (CASSANDRA-7985)
  * Fix ArrayIndexOutOfBoundsException when generating error message
for some CQL syntax errors (CASSANDRA-8455)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/93f679e0/tools/stress/src/org/apache/cassandra/stress/settings/SettingsCommand.java
--
diff --git 
a/tools/stress/src/org/apache/cassandra/stress/settings/SettingsCommand.java 
b/tools/stress/src/org/apache/cassandra/stress/settings/SettingsCommand.java
index 8751dbf..e8b45ec 100644
--- a/tools/stress/src/org/apache/cassandra/stress/settings/SettingsCommand.java
+++ b/tools/stress/src/org/apache/cassandra/stress/settings/SettingsCommand.java
@@ -107,7 +107,7 @@ public abstract class SettingsCommand implements 
Serializable
 static abstract class Options extends GroupedOptions
 {
        final OptionSimple noWarmup = new OptionSimple("no-warmup", "", null, "Do not warmup the process", false);
-        final OptionSimple consistencyLevel = new OptionSimple("cl=", "ONE|QUORUM|LOCAL_QUORUM|EACH_QUORUM|ALL|ANY", "ONE", "Consistency level to use", false);
+        final OptionSimple consistencyLevel = new OptionSimple("cl=", "ONE|QUORUM|LOCAL_QUORUM|EACH_QUORUM|ALL|ANY|TWO|THREE|SERIAL|LOCAL_SERIAL|LOCAL_ONE", "ONE", "Consistency level to use", false);
 }
 
 static class Count extends Options
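The one-line pattern change can be verified with plain {{java.util.regex}}, using the old and patched value patterns from the hunk above (the class and method names here are illustrative):

```java
import java.util.regex.Pattern;

public class ClPatternCheck
{
    // The two consistency-level value patterns from the hunk above.
    static final Pattern OLD = Pattern.compile(
        "ONE|QUORUM|LOCAL_QUORUM|EACH_QUORUM|ALL|ANY");
    static final Pattern PATCHED = Pattern.compile(
        "ONE|QUORUM|LOCAL_QUORUM|EACH_QUORUM|ALL|ANY|TWO|THREE|SERIAL|LOCAL_SERIAL|LOCAL_ONE");

    static boolean accepts(Pattern p, String cl)
    {
        // matches() anchors the whole input, so LOCAL_ONE cannot
        // sneak through the old pattern via the ONE alternative.
        return p.matcher(cl).matches();
    }

    public static void main(String[] args)
    {
        System.out.println("old accepts LOCAL_ONE: " + accepts(OLD, "LOCAL_ONE"));         // false
        System.out.println("patched accepts LOCAL_ONE: " + accepts(PATCHED, "LOCAL_ONE")); // true
    }
}
```

This reproduces the reported rejection of {{cl=LOCAL_ONE}} under the old pattern and confirms the patched one accepts it.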

http://git-wip-us.apache.org/repos/asf/cassandra/blob/93f679e0/tools/stress/src/org/apache/cassandra/stress/util/JavaDriverClient.java
--
diff --git 
a/tools/stress/src/org/apache/cassandra/stress/util/JavaDriverClient.java 
b/tools/stress/src/org/apache/cassandra/stress/util/JavaDriverClient.java
index 7aa7257..43d9ddc 100644
--- a/tools/stress/src/org/apache/cassandra/stress/util/JavaDriverClient.java
+++ b/tools/stress/src/org/apache/cassandra/stress/util/JavaDriverClient.java
@@ -178,6 +178,12 @@ public class JavaDriverClient
 return com.datastax.driver.core.ConsistencyLevel.LOCAL_QUORUM;
 case EACH_QUORUM:
 return com.datastax.driver.core.ConsistencyLevel.EACH_QUORUM;
+case SERIAL:
+return com.datastax.driver.core.ConsistencyLevel.SERIAL;
+case LOCAL_SERIAL:
+return com.datastax.driver.core.ConsistencyLevel.LOCAL_SERIAL;
+case LOCAL_ONE:
+return com.datastax.driver.core.ConsistencyLevel.LOCAL_ONE;
 }
 throw new AssertionError();
 }



[2/2] cassandra git commit: Merge branch 'cassandra-2.1' into trunk

2014-12-16 Thread jake
Merge branch 'cassandra-2.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/dc1bc81f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/dc1bc81f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/dc1bc81f

Branch: refs/heads/trunk
Commit: dc1bc81f6ab528c7cd2ad89bf4dc9699aea2733e
Parents: 88478c6 93f679e
Author: T Jake Luciani j...@apache.org
Authored: Tue Dec 16 15:27:43 2014 -0500
Committer: T Jake Luciani j...@apache.org
Committed: Tue Dec 16 15:27:43 2014 -0500

--
 CHANGES.txt| 1 +
 .../org/apache/cassandra/stress/settings/SettingsCommand.java  | 2 +-
 .../src/org/apache/cassandra/stress/util/JavaDriverClient.java | 6 ++
 3 files changed, 8 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/dc1bc81f/CHANGES.txt
--
diff --cc CHANGES.txt
index e859150,6c0f2da..fc4d3f6
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,45 -1,5 +1,46 @@@
 +3.0
 + * Support for user-defined aggregation functions (CASSANDRA-8053)
 + * Fix NPE in SelectStatement with empty IN values (CASSANDRA-8419)
 + * Refactor SelectStatement, return IN results in natural order instead
 +   of IN value list order (CASSANDRA-7981)
 + * Support UDTs, tuples, and collections in user-defined
 +   functions (CASSANDRA-7563)
 + * Fix aggregate fn results on empty selection, result column name,
 +   and cqlsh parsing (CASSANDRA-8229)
 + * Mark sstables as repaired after full repair (CASSANDRA-7586)
 + * Extend Descriptor to include a format value and refactor reader/writer 
apis (CASSANDRA-7443)
 + * Integrate JMH for microbenchmarks (CASSANDRA-8151)
 + * Keep sstable levels when bootstrapping (CASSANDRA-7460)
 + * Add Sigar library and perform basic OS settings check on startup 
(CASSANDRA-7838)
 + * Support for aggregation functions (CASSANDRA-4914)
 + * Remove cassandra-cli (CASSANDRA-7920)
 + * Accept dollar quoted strings in CQL (CASSANDRA-7769)
 + * Make assassinate a first class command (CASSANDRA-7935)
 + * Support IN clause on any clustering column (CASSANDRA-4762)
 + * Improve compaction logging (CASSANDRA-7818)
 + * Remove YamlFileNetworkTopologySnitch (CASSANDRA-7917)
 + * Do anticompaction in groups (CASSANDRA-6851)
 + * Support pure user-defined functions (CASSANDRA-7395, 7526, 7562, 7740, 
7781, 7929,
 +   7924, 7812, 8063, 7813)
 + * Permit configurable timestamps with cassandra-stress (CASSANDRA-7416)
 + * Move sstable RandomAccessReader to nio2, which allows using the
 +   FILE_SHARE_DELETE flag on Windows (CASSANDRA-4050)
 + * Remove CQL2 (CASSANDRA-5918)
 + * Add Thrift get_multi_slice call (CASSANDRA-6757)
 + * Optimize fetching multiple cells by name (CASSANDRA-6933)
 + * Allow compilation in java 8 (CASSANDRA-7028)
 + * Make incremental repair default (CASSANDRA-7250)
 + * Enable code coverage thru JaCoCo (CASSANDRA-7226)
 + * Switch external naming of 'column families' to 'tables' (CASSANDRA-4369) 
 + * Shorten SSTable path (CASSANDRA-6962)
 + * Use unsafe mutations for most unit tests (CASSANDRA-6969)
 + * Fix race condition during calculation of pending ranges (CASSANDRA-7390)
 + * Fail on very large batch sizes (CASSANDRA-8011)
 + * Improve concurrency of repair (CASSANDRA-6455, 8208)
 +
 +
  2.1.3
+  * Add missing ConsistencyLevels to cassandra-stress (CASSANDRA-8253)
   * Add auth support to cassandra-stress (CASSANDRA-7985)
   * Fix ArrayIndexOutOfBoundsException when generating error message
 for some CQL syntax errors (CASSANDRA-8455)



[1/2] cassandra git commit: Add missing consistency levels to cassandra-stress

2014-12-16 Thread jake
Repository: cassandra
Updated Branches:
  refs/heads/trunk 88478c6a6 -> dc1bc81f6


Add missing consistency levels to cassandra-stress

Patch by Liang Xie; Reviewed by tjake for CASSANDRA-8253


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/93f679e0
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/93f679e0
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/93f679e0

Branch: refs/heads/trunk
Commit: 93f679e00f0dbd81972acf641c5a47f3ee4727ee
Parents: b1b9cba
Author: T Jake Luciani j...@apache.org
Authored: Tue Dec 16 15:24:40 2014 -0500
Committer: T Jake Luciani j...@apache.org
Committed: Tue Dec 16 15:24:40 2014 -0500

--
 CHANGES.txt| 1 +
 .../org/apache/cassandra/stress/settings/SettingsCommand.java  | 2 +-
 .../src/org/apache/cassandra/stress/util/JavaDriverClient.java | 6 ++
 3 files changed, 8 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/93f679e0/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 964de54..6c0f2da 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.3
+ * Add missing ConsistencyLevels to cassandra-stress (CASSANDRA-8253)
  * Add auth support to cassandra-stress (CASSANDRA-7985)
  * Fix ArrayIndexOutOfBoundsException when generating error message
for some CQL syntax errors (CASSANDRA-8455)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/93f679e0/tools/stress/src/org/apache/cassandra/stress/settings/SettingsCommand.java
--
diff --git 
a/tools/stress/src/org/apache/cassandra/stress/settings/SettingsCommand.java 
b/tools/stress/src/org/apache/cassandra/stress/settings/SettingsCommand.java
index 8751dbf..e8b45ec 100644
--- a/tools/stress/src/org/apache/cassandra/stress/settings/SettingsCommand.java
+++ b/tools/stress/src/org/apache/cassandra/stress/settings/SettingsCommand.java
@@ -107,7 +107,7 @@ public abstract class SettingsCommand implements 
Serializable
 static abstract class Options extends GroupedOptions
 {
 final OptionSimple noWarmup = new OptionSimple("no-warmup", "", null, "Do not warmup the process", false);
-final OptionSimple consistencyLevel = new OptionSimple("cl=", "ONE|QUORUM|LOCAL_QUORUM|EACH_QUORUM|ALL|ANY", "ONE", "Consistency level to use", false);
+final OptionSimple consistencyLevel = new OptionSimple("cl=", "ONE|QUORUM|LOCAL_QUORUM|EACH_QUORUM|ALL|ANY|TWO|THREE|SERIAL|LOCAL_SERIAL|LOCAL_ONE", "ONE", "Consistency level to use", false);
 }
 
 static class Count extends Options

http://git-wip-us.apache.org/repos/asf/cassandra/blob/93f679e0/tools/stress/src/org/apache/cassandra/stress/util/JavaDriverClient.java
--
diff --git 
a/tools/stress/src/org/apache/cassandra/stress/util/JavaDriverClient.java 
b/tools/stress/src/org/apache/cassandra/stress/util/JavaDriverClient.java
index 7aa7257..43d9ddc 100644
--- a/tools/stress/src/org/apache/cassandra/stress/util/JavaDriverClient.java
+++ b/tools/stress/src/org/apache/cassandra/stress/util/JavaDriverClient.java
@@ -178,6 +178,12 @@ public class JavaDriverClient
 return com.datastax.driver.core.ConsistencyLevel.LOCAL_QUORUM;
 case EACH_QUORUM:
 return com.datastax.driver.core.ConsistencyLevel.EACH_QUORUM;
+case SERIAL:
+return com.datastax.driver.core.ConsistencyLevel.SERIAL;
+case LOCAL_SERIAL:
+return com.datastax.driver.core.ConsistencyLevel.LOCAL_SERIAL;
+case LOCAL_ONE:
+return com.datastax.driver.core.ConsistencyLevel.LOCAL_ONE;
 }
 throw new AssertionError();
 }
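The hunks above extend the option's accepted names and the switch that maps the server-side `org.apache.cassandra.db.ConsistencyLevel` onto the driver's enum. The string-to-enum step can be sketched with a plain `valueOf` lookup (an illustration only: `ClParse` and its nested enum are invented names, not cassandra-stress classes):

```java
// ClParse and its nested enum are invented for illustration; the constants
// mirror the cl= values accepted by cassandra-stress after this patch.
public class ClParse {
    enum ConsistencyLevel {
        ONE, QUORUM, LOCAL_QUORUM, EACH_QUORUM, ALL, ANY,
        TWO, THREE, SERIAL, LOCAL_SERIAL, LOCAL_ONE
    }

    static ConsistencyLevel parse(String arg) {
        // strip an optional "cl=" prefix, then resolve by enum name
        String name = arg.startsWith("cl=") ? arg.substring(3) : arg;
        return ConsistencyLevel.valueOf(name.toUpperCase());
    }
}
```

With a lookup like this, adding a level is a matter of adding the enum constant, which is why the patch only touches the option's value pattern and the driver-side switch.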



[jira] [Commented] (CASSANDRA-8432) Standalone Scrubber broken for LCS

2014-12-16 Thread Carl Yeksigian (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8432?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14248862#comment-14248862
 ] 

Carl Yeksigian commented on CASSANDRA-8432:
---

I think it's useful to have output indicating that we're checking the leveled 
manifest; otherwise, +1.

 Standalone Scrubber broken for LCS
 --

 Key: CASSANDRA-8432
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8432
 Project: Cassandra
  Issue Type: Bug
Reporter: Carl Yeksigian
Assignee: Marcus Eriksson
Priority: Minor
 Fix For: 2.1.3

 Attachments: 0001-make-sstablescrub-work-again.patch


 After CASSANDRA-8004, the compaction strategy for a column family will no longer be 
 an instance of LeveledCompactionStrategy (StandaloneScrubber.java:100), so we 
 don't check the manifest.
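 The report boils down to an instanceof check that a wrapper defeats; a 
 self-contained illustration (class bodies here are placeholders, not 
 Cassandra's real implementations):

```java
// Placeholder classes mirroring the names in the report; bodies are not Cassandra's.
class AbstractCompactionStrategy {}
class LeveledCompactionStrategy extends AbstractCompactionStrategy {}

// After CASSANDRA-8004 the column family hands out a wrapping strategy
// rather than the LCS instance itself.
class WrappingCompactionStrategy extends AbstractCompactionStrategy {
    final AbstractCompactionStrategy wrapped = new LeveledCompactionStrategy();
}

public class ScrubCheck {
    public static void main(String[] args) {
        AbstractCompactionStrategy strategy = new WrappingCompactionStrategy();
        // The direct check at StandaloneScrubber.java:100 now never matches:
        System.out.println(strategy instanceof LeveledCompactionStrategy);   // false
        // so the leveled manifest is silently skipped unless the wrapped
        // strategy is inspected instead:
        System.out.println(((WrappingCompactionStrategy) strategy).wrapped
                           instanceof LeveledCompactionStrategy);            // true
    }
}
```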



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[2/2] cassandra git commit: Merge branch 'cassandra-2.1' into trunk

2014-12-16 Thread jmckenzie
Merge branch 'cassandra-2.1' into trunk

Conflicts:
src/java/org/apache/cassandra/config/DatabaseDescriptor.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e73fccdc
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e73fccdc
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e73fccdc

Branch: refs/heads/trunk
Commit: e73fccdcd1ee39e494fff55b53bb33c4440220fb
Parents: dc1bc81 9871914
Author: Joshua McKenzie jmcken...@apache.org
Authored: Tue Dec 16 14:34:14 2014 -0600
Committer: Joshua McKenzie jmcken...@apache.org
Committed: Tue Dec 16 14:34:14 2014 -0600

--
 CHANGES.txt  | 1 +
 src/java/org/apache/cassandra/config/DatabaseDescriptor.java | 2 +-
 2 files changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/e73fccdc/CHANGES.txt
--
diff --cc CHANGES.txt
index fc4d3f6,d95e02e..3571c1e
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,45 -1,5 +1,46 @@@
 +3.0
 + * Support for user-defined aggregation functions (CASSANDRA-8053)
 + * Fix NPE in SelectStatement with empty IN values (CASSANDRA-8419)
 + * Refactor SelectStatement, return IN results in natural order instead
 +   of IN value list order (CASSANDRA-7981)
 + * Support UDTs, tuples, and collections in user-defined
 +   functions (CASSANDRA-7563)
 + * Fix aggregate fn results on empty selection, result column name,
 +   and cqlsh parsing (CASSANDRA-8229)
 + * Mark sstables as repaired after full repair (CASSANDRA-7586)
 + * Extend Descriptor to include a format value and refactor reader/writer 
apis (CASSANDRA-7443)
 + * Integrate JMH for microbenchmarks (CASSANDRA-8151)
 + * Keep sstable levels when bootstrapping (CASSANDRA-7460)
 + * Add Sigar library and perform basic OS settings check on startup 
(CASSANDRA-7838)
 + * Support for aggregation functions (CASSANDRA-4914)
 + * Remove cassandra-cli (CASSANDRA-7920)
 + * Accept dollar quoted strings in CQL (CASSANDRA-7769)
 + * Make assassinate a first class command (CASSANDRA-7935)
 + * Support IN clause on any clustering column (CASSANDRA-4762)
 + * Improve compaction logging (CASSANDRA-7818)
 + * Remove YamlFileNetworkTopologySnitch (CASSANDRA-7917)
 + * Do anticompaction in groups (CASSANDRA-6851)
 + * Support pure user-defined functions (CASSANDRA-7395, 7526, 7562, 7740, 
7781, 7929,
 +   7924, 7812, 8063, 7813)
 + * Permit configurable timestamps with cassandra-stress (CASSANDRA-7416)
 + * Move sstable RandomAccessReader to nio2, which allows using the
 +   FILE_SHARE_DELETE flag on Windows (CASSANDRA-4050)
 + * Remove CQL2 (CASSANDRA-5918)
 + * Add Thrift get_multi_slice call (CASSANDRA-6757)
 + * Optimize fetching multiple cells by name (CASSANDRA-6933)
 + * Allow compilation in java 8 (CASSANDRA-7028)
 + * Make incremental repair default (CASSANDRA-7250)
 + * Enable code coverage thru JaCoCo (CASSANDRA-7226)
 + * Switch external naming of 'column families' to 'tables' (CASSANDRA-4369) 
 + * Shorten SSTable path (CASSANDRA-6962)
 + * Use unsafe mutations for most unit tests (CASSANDRA-6969)
 + * Fix race condition during calculation of pending ranges (CASSANDRA-7390)
 + * Fail on very large batch sizes (CASSANDRA-8011)
 + * Improve concurrency of repair (CASSANDRA-6455, 8208)
 +
 +
  2.1.3
+  * Disable mmap on Windows (CASSANDRA-6993)
   * Add missing ConsistencyLevels to cassandra-stress (CASSANDRA-8253)
   * Add auth support to cassandra-stress (CASSANDRA-7985)
   * Fix ArrayIndexOutOfBoundsException when generating error message

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e73fccdc/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
--



cassandra git commit: Disable mmap on Windows (2.1 branch)

2014-12-16 Thread jmckenzie
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 93f679e00 -> 987191418


Disable mmap on Windows (2.1 branch)

Patch by jmckenzie; reviewed by belliottsmith for CASSANDRA-6993


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/98719141
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/98719141
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/98719141

Branch: refs/heads/cassandra-2.1
Commit: 987191418849191258519464665223f48620920c
Parents: 93f679e
Author: Joshua McKenzie jmcken...@apache.org
Authored: Tue Dec 16 14:29:49 2014 -0600
Committer: Joshua McKenzie jmcken...@apache.org
Committed: Tue Dec 16 14:29:49 2014 -0600

--
 CHANGES.txt |  1 +
 .../cassandra/config/DatabaseDescriptor.java| 35 +---
 2 files changed, 24 insertions(+), 12 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/98719141/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 6c0f2da..d95e02e 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.3
+ * Disable mmap on Windows (CASSANDRA-6993)
  * Add missing ConsistencyLevels to cassandra-stress (CASSANDRA-8253)
  * Add auth support to cassandra-stress (CASSANDRA-7985)
  * Fix ArrayIndexOutOfBoundsException when generating error message

http://git-wip-us.apache.org/repos/asf/cassandra/blob/98719141/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
--
diff --git a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java 
b/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
index 2ec0172..83b3636 100644
--- a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
+++ b/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
@@ -195,23 +195,34 @@ public class DatabaseDescriptor
 if (conf.commitlog_total_space_in_mb == null)
 conf.commitlog_total_space_in_mb = hasLargeAddressSpace() ? 8192 : 
32;
 
-        /* evaluate the DiskAccessMode Config directive, which also affects indexAccessMode selection */
-        if (conf.disk_access_mode == Config.DiskAccessMode.auto)
-        {
-            conf.disk_access_mode = hasLargeAddressSpace() ? Config.DiskAccessMode.mmap : Config.DiskAccessMode.standard;
-            indexAccessMode = conf.disk_access_mode;
-            logger.info("DiskAccessMode 'auto' determined to be {}, indexAccessMode is {}", conf.disk_access_mode, indexAccessMode);
-        }
-        else if (conf.disk_access_mode == Config.DiskAccessMode.mmap_index_only)
+        // Always force standard mode access on Windows - CASSANDRA-6993. Windows won't allow deletion of hard-links to files that
+        // are memory-mapped which causes trouble with snapshots.
+        if (FBUtilities.isWindows())
         {
             conf.disk_access_mode = Config.DiskAccessMode.standard;
-            indexAccessMode = Config.DiskAccessMode.mmap;
-            logger.info("DiskAccessMode is {}, indexAccessMode is {}", conf.disk_access_mode, indexAccessMode);
+            indexAccessMode = conf.disk_access_mode;
+            logger.info("Windows environment detected.  DiskAccessMode set to {}, indexAccessMode {}", conf.disk_access_mode, indexAccessMode);
         }
         else
         {
-            indexAccessMode = conf.disk_access_mode;
-            logger.info("DiskAccessMode is {}, indexAccessMode is {}", conf.disk_access_mode, indexAccessMode);
+            /* evaluate the DiskAccessMode Config directive, which also affects indexAccessMode selection */
+            if (conf.disk_access_mode == Config.DiskAccessMode.auto)
+            {
+                conf.disk_access_mode = hasLargeAddressSpace() ? Config.DiskAccessMode.mmap : Config.DiskAccessMode.standard;
+                indexAccessMode = conf.disk_access_mode;
+                logger.info("DiskAccessMode 'auto' determined to be {}, indexAccessMode is {}", conf.disk_access_mode, indexAccessMode);
+            }
+            else if (conf.disk_access_mode == Config.DiskAccessMode.mmap_index_only)
+            {
+                conf.disk_access_mode = Config.DiskAccessMode.standard;
+                indexAccessMode = Config.DiskAccessMode.mmap;
+                logger.info("DiskAccessMode is {}, indexAccessMode is {}", conf.disk_access_mode, indexAccessMode);
+            }
+            else
+            {
+                indexAccessMode = conf.disk_access_mode;
+                logger.info("DiskAccessMode is {}, indexAccessMode is {}", conf.disk_access_mode, indexAccessMode);
+            }
         }
 
 /* Authentication and authorization backend, implementing 
IAuthenticator and 
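The branching the patch introduces is easy to misread in diff form; as a plain sketch (the enum, method, and parameter names here are illustrative, not Cassandra's actual API), the selection logic reduces to:

```java
// Illustrative names only: a standalone reduction of the disk-access-mode
// selection in the patch above, returning {dataAccessMode, indexAccessMode}.
enum DiskAccessMode { auto, mmap, mmap_index_only, standard }

public class DiskModeSelect {
    static DiskAccessMode[] select(DiskAccessMode configured,
                                   boolean isWindows, boolean largeAddressSpace) {
        DiskAccessMode data, index;
        if (isWindows) {
            // CASSANDRA-6993: mmap on Windows blocks deletion of hard-linked
            // snapshot files, so force standard mode for both data and index.
            data = DiskAccessMode.standard;
            index = data;
        } else if (configured == DiskAccessMode.auto) {
            // mmap only where the address space can take it
            data = largeAddressSpace ? DiskAccessMode.mmap : DiskAccessMode.standard;
            index = data;
        } else if (configured == DiskAccessMode.mmap_index_only) {
            data = DiskAccessMode.standard;
            index = DiskAccessMode.mmap;
        } else {
            data = configured;
            index = configured;
        }
        return new DiskAccessMode[]{ data, index };
    }
}
```

In other words, the Windows check is a new outermost guard; the pre-existing auto/mmap_index_only logic is unchanged underneath it.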

[1/2] cassandra git commit: Disable mmap on Windows (2.1 branch)

2014-12-16 Thread jmckenzie
Repository: cassandra
Updated Branches:
  refs/heads/trunk dc1bc81f6 -> e73fccdcd


Disable mmap on Windows (2.1 branch)

Patch by jmckenzie; reviewed by belliottsmith for CASSANDRA-6993


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/98719141
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/98719141
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/98719141

Branch: refs/heads/trunk
Commit: 987191418849191258519464665223f48620920c
Parents: 93f679e
Author: Joshua McKenzie jmcken...@apache.org
Authored: Tue Dec 16 14:29:49 2014 -0600
Committer: Joshua McKenzie jmcken...@apache.org
Committed: Tue Dec 16 14:29:49 2014 -0600

--
 CHANGES.txt |  1 +
 .../cassandra/config/DatabaseDescriptor.java| 35 +---
 2 files changed, 24 insertions(+), 12 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/98719141/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 6c0f2da..d95e02e 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.3
+ * Disable mmap on Windows (CASSANDRA-6993)
  * Add missing ConsistencyLevels to cassandra-stress (CASSANDRA-8253)
  * Add auth support to cassandra-stress (CASSANDRA-7985)
  * Fix ArrayIndexOutOfBoundsException when generating error message

http://git-wip-us.apache.org/repos/asf/cassandra/blob/98719141/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
--
diff --git a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java 
b/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
index 2ec0172..83b3636 100644
--- a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
+++ b/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
@@ -195,23 +195,34 @@ public class DatabaseDescriptor
 if (conf.commitlog_total_space_in_mb == null)
 conf.commitlog_total_space_in_mb = hasLargeAddressSpace() ? 8192 : 
32;
 
-        /* evaluate the DiskAccessMode Config directive, which also affects indexAccessMode selection */
-        if (conf.disk_access_mode == Config.DiskAccessMode.auto)
-        {
-            conf.disk_access_mode = hasLargeAddressSpace() ? Config.DiskAccessMode.mmap : Config.DiskAccessMode.standard;
-            indexAccessMode = conf.disk_access_mode;
-            logger.info("DiskAccessMode 'auto' determined to be {}, indexAccessMode is {}", conf.disk_access_mode, indexAccessMode);
-        }
-        else if (conf.disk_access_mode == Config.DiskAccessMode.mmap_index_only)
+        // Always force standard mode access on Windows - CASSANDRA-6993. Windows won't allow deletion of hard-links to files that
+        // are memory-mapped which causes trouble with snapshots.
+        if (FBUtilities.isWindows())
         {
             conf.disk_access_mode = Config.DiskAccessMode.standard;
-            indexAccessMode = Config.DiskAccessMode.mmap;
-            logger.info("DiskAccessMode is {}, indexAccessMode is {}", conf.disk_access_mode, indexAccessMode);
+            indexAccessMode = conf.disk_access_mode;
+            logger.info("Windows environment detected.  DiskAccessMode set to {}, indexAccessMode {}", conf.disk_access_mode, indexAccessMode);
         }
         else
         {
-            indexAccessMode = conf.disk_access_mode;
-            logger.info("DiskAccessMode is {}, indexAccessMode is {}", conf.disk_access_mode, indexAccessMode);
+            /* evaluate the DiskAccessMode Config directive, which also affects indexAccessMode selection */
+            if (conf.disk_access_mode == Config.DiskAccessMode.auto)
+            {
+                conf.disk_access_mode = hasLargeAddressSpace() ? Config.DiskAccessMode.mmap : Config.DiskAccessMode.standard;
+                indexAccessMode = conf.disk_access_mode;
+                logger.info("DiskAccessMode 'auto' determined to be {}, indexAccessMode is {}", conf.disk_access_mode, indexAccessMode);
+            }
+            else if (conf.disk_access_mode == Config.DiskAccessMode.mmap_index_only)
+            {
+                conf.disk_access_mode = Config.DiskAccessMode.standard;
+                indexAccessMode = Config.DiskAccessMode.mmap;
+                logger.info("DiskAccessMode is {}, indexAccessMode is {}", conf.disk_access_mode, indexAccessMode);
+            }
+            else
+            {
+                indexAccessMode = conf.disk_access_mode;
+                logger.info("DiskAccessMode is {}, indexAccessMode is {}", conf.disk_access_mode, indexAccessMode);
+            }
         }
 
 /* Authentication and authorization backend, implementing 
IAuthenticator and IAuthorizer */



[2/2] cassandra git commit: add missing file

2014-12-16 Thread jmckenzie
add missing file


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ffa80673
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ffa80673
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ffa80673

Branch: refs/heads/cassandra-2.1
Commit: ffa806733e7492d3d4b54957f911af501e043df9
Parents: 1fec4a4
Author: Joshua McKenzie jmcken...@apache.org
Authored: Tue Dec 16 15:06:02 2014 -0600
Committer: Joshua McKenzie jmcken...@apache.org
Committed: Tue Dec 16 15:06:02 2014 -0600

--
 .../cassandra/io/sstable/ISSTableScanner.java   | 34 
 1 file changed, 34 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ffa80673/src/java/org/apache/cassandra/io/sstable/ISSTableScanner.java
--
diff --git a/src/java/org/apache/cassandra/io/sstable/ISSTableScanner.java 
b/src/java/org/apache/cassandra/io/sstable/ISSTableScanner.java
new file mode 100644
index 0000000..b80bd87
--- /dev/null
+++ b/src/java/org/apache/cassandra/io/sstable/ISSTableScanner.java
@@ -0,0 +1,34 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.cassandra.io.sstable;
+
+import org.apache.cassandra.db.columniterator.OnDiskAtomIterator;
+import org.apache.cassandra.utils.CloseableIterator;
+
+/**
+ * An ISSTableScanner is an abstraction allowing multiple SSTableScanners to be
+ * chained together under the hood.  See LeveledCompactionStrategy.getScanners.
+ */
+public interface ISSTableScanner extends CloseableIterator<OnDiskAtomIterator>
+{
+public long getLengthInBytes();
+public long getCurrentPosition();
+public String getBackingFiles();
+}
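The chaining the javadoc mentions can be illustrated with a simplified, self-contained sketch (`SimpleScanner`, `ListScanner`, and `ChainedScanner` are invented stand-ins; the real interface iterates `OnDiskAtomIterator`s, not strings):

```java
import java.util.*;

// Invented stand-in for ISSTableScanner, iterating strings instead of atoms.
interface SimpleScanner extends Iterator<String> {
    long getLengthInBytes();
}

// One scanner backed by an in-memory list, standing in for a single sstable.
class ListScanner implements SimpleScanner {
    private final Iterator<String> rows;
    private final long length;
    ListScanner(List<String> rows, long length) { this.rows = rows.iterator(); this.length = length; }
    public long getLengthInBytes() { return length; }
    public boolean hasNext() { return rows.hasNext(); }
    public String next() { return rows.next(); }
}

// Several scanners chained behind the single-scanner interface, in the spirit
// of LeveledCompactionStrategy.getScanners chaining per-level scanners.
public class ChainedScanner implements SimpleScanner {
    private final List<SimpleScanner> parts;
    private int current = 0;

    public ChainedScanner(List<SimpleScanner> parts) { this.parts = parts; }

    public long getLengthInBytes() {
        long total = 0;
        for (SimpleScanner s : parts)
            total += s.getLengthInBytes();   // length is the sum of the chained parts
        return total;
    }

    public boolean hasNext() {
        while (current < parts.size() && !parts.get(current).hasNext())
            current++;                       // advance past exhausted scanners
        return current < parts.size();
    }

    public String next() {
        if (!hasNext()) throw new NoSuchElementException();
        return parts.get(current).next();
    }
}
```

The point of the abstraction is that callers such as compaction iterate one scanner without caring how many sstables sit behind it.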



[4/5] cassandra git commit: add missing file

2014-12-16 Thread jmckenzie
add missing file


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ffa80673
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ffa80673
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ffa80673

Branch: refs/heads/trunk
Commit: ffa806733e7492d3d4b54957f911af501e043df9
Parents: 1fec4a4
Author: Joshua McKenzie jmcken...@apache.org
Authored: Tue Dec 16 15:06:02 2014 -0600
Committer: Joshua McKenzie jmcken...@apache.org
Committed: Tue Dec 16 15:06:02 2014 -0600

--
 .../cassandra/io/sstable/ISSTableScanner.java   | 34 
 1 file changed, 34 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ffa80673/src/java/org/apache/cassandra/io/sstable/ISSTableScanner.java
--
diff --git a/src/java/org/apache/cassandra/io/sstable/ISSTableScanner.java 
b/src/java/org/apache/cassandra/io/sstable/ISSTableScanner.java
new file mode 100644
index 0000000..b80bd87
--- /dev/null
+++ b/src/java/org/apache/cassandra/io/sstable/ISSTableScanner.java
@@ -0,0 +1,34 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.cassandra.io.sstable;
+
+import org.apache.cassandra.db.columniterator.OnDiskAtomIterator;
+import org.apache.cassandra.utils.CloseableIterator;
+
+/**
+ * An ISSTableScanner is an abstraction allowing multiple SSTableScanners to be
+ * chained together under the hood.  See LeveledCompactionStrategy.getScanners.
+ */
+public interface ISSTableScanner extends CloseableIterator<OnDiskAtomIterator>
+{
+public long getLengthInBytes();
+public long getCurrentPosition();
+public String getBackingFiles();
+}



[1/2] cassandra git commit: Fix ref counting race between SSTableScanner and SSTR

2014-12-16 Thread jmckenzie
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 987191418 -> ffa806733


Fix ref counting race between SSTableScanner and SSTR

Patch by jmckenzie; reviewed by marcuse for CASSANDRA-8399


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/1fec4a42
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/1fec4a42
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/1fec4a42

Branch: refs/heads/cassandra-2.1
Commit: 1fec4a4281be94f8ef2f9f8a5eaccee56d70e87e
Parents: 9871914
Author: Joshua McKenzie jmcken...@apache.org
Authored: Tue Dec 16 14:37:07 2014 -0600
Committer: Joshua McKenzie jmcken...@apache.org
Committed: Tue Dec 16 14:37:07 2014 -0600

--
 .../compaction/AbstractCompactionIterable.java  |  7 +-
 .../compaction/AbstractCompactionStrategy.java  | 10 +--
 .../db/compaction/CompactionIterable.java   |  5 +-
 .../db/compaction/CompactionManager.java| 22 +++---
 .../db/compaction/ICompactionScanner.java   | 34 -
 .../compaction/LeveledCompactionStrategy.java   |  7 +-
 .../compaction/WrappingCompactionStrategy.java  |  4 +-
 .../cassandra/io/sstable/SSTableReader.java | 61 +++-
 .../cassandra/io/sstable/SSTableScanner.java| 73 +---
 .../apache/cassandra/tools/SSTableExport.java   |  2 +-
 .../db/compaction/AntiCompactionTest.java   |  9 +--
 .../db/compaction/CompactionsTest.java  |  7 +-
 .../LeveledCompactionStrategyTest.java  |  5 +-
 .../cassandra/db/compaction/TTLExpiryTest.java  |  4 +-
 .../cassandra/io/sstable/SSTableReaderTest.java |  3 +-
 .../io/sstable/SSTableRewriterTest.java | 24 +++
 .../io/sstable/SSTableScannerTest.java  | 17 +++--
 .../cassandra/io/sstable/SSTableUtils.java  |  4 +-
 18 files changed, 130 insertions(+), 168 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/1fec4a42/src/java/org/apache/cassandra/db/compaction/AbstractCompactionIterable.java
--
diff --git 
a/src/java/org/apache/cassandra/db/compaction/AbstractCompactionIterable.java 
b/src/java/org/apache/cassandra/db/compaction/AbstractCompactionIterable.java
index e9f063f..5ac2c8b 100644
--- 
a/src/java/org/apache/cassandra/db/compaction/AbstractCompactionIterable.java
+++ 
b/src/java/org/apache/cassandra/db/compaction/AbstractCompactionIterable.java
@@ -20,6 +20,7 @@ package org.apache.cassandra.db.compaction;
 import java.util.List;
 import java.util.concurrent.atomic.AtomicLong;
 
+import org.apache.cassandra.io.sstable.ISSTableScanner;
 import org.apache.cassandra.utils.CloseableIterator;
 
 public abstract class AbstractCompactionIterable extends CompactionInfo.Holder implements Iterable<AbstractCompactedRow>
@@ -28,7 +29,7 @@ public abstract class AbstractCompactionIterable extends 
CompactionInfo.Holder i
 protected final CompactionController controller;
 protected final long totalBytes;
 protected volatile long bytesRead;
-    protected final List<ICompactionScanner> scanners;
+    protected final List<ISSTableScanner> scanners;
 /*
  * counters for merged rows.
  * array index represents (number of merged rows - 1), so index 0 is 
counter for no merge (1 row),
@@ -36,7 +37,7 @@ public abstract class AbstractCompactionIterable extends 
CompactionInfo.Holder i
  */
 protected final AtomicLong[] mergeCounters;
 
-    public AbstractCompactionIterable(CompactionController controller, OperationType type, List<ICompactionScanner> scanners)
+    public AbstractCompactionIterable(CompactionController controller, OperationType type, List<ISSTableScanner> scanners)
 {
 this.controller = controller;
 this.type = type;
@@ -44,7 +45,7 @@ public abstract class AbstractCompactionIterable extends 
CompactionInfo.Holder i
 this.bytesRead = 0;
 
 long bytes = 0;
-for (ICompactionScanner scanner : scanners)
+for (ISSTableScanner scanner : scanners)
 bytes += scanner.getLengthInBytes();
 this.totalBytes = bytes;
 mergeCounters = new AtomicLong[scanners.size()];

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1fec4a42/src/java/org/apache/cassandra/db/compaction/AbstractCompactionStrategy.java
--
diff --git 
a/src/java/org/apache/cassandra/db/compaction/AbstractCompactionStrategy.java 
b/src/java/org/apache/cassandra/db/compaction/AbstractCompactionStrategy.java
index bf136b9..337657d 100644
--- 
a/src/java/org/apache/cassandra/db/compaction/AbstractCompactionStrategy.java
+++ 
b/src/java/org/apache/cassandra/db/compaction/AbstractCompactionStrategy.java
@@ -22,7 +22,6 @@ import java.util.*;
 import 

[2/5] cassandra git commit: Merge branch 'cassandra-2.1' into trunk

2014-12-16 Thread jmckenzie
http://git-wip-us.apache.org/repos/asf/cassandra/blob/bee53d72/src/java/org/apache/cassandra/io/sstable/format/big/BigTableReader.java
--
diff --cc 
src/java/org/apache/cassandra/io/sstable/format/big/BigTableReader.java
index 2488f86,0000000..fc346d1
mode 100644,000000..100644
--- a/src/java/org/apache/cassandra/io/sstable/format/big/BigTableReader.java
+++ b/src/java/org/apache/cassandra/io/sstable/format/big/BigTableReader.java
@@@ -1,256 -1,0 +1,251 @@@
 +/*
 + * Licensed to the Apache Software Foundation (ASF) under one
 + * or more contributor license agreements.  See the NOTICE file
 + * distributed with this work for additional information
 + * regarding copyright ownership.  The ASF licenses this file
  + * to you under the Apache License, Version 2.0 (the
  + * "License"); you may not use this file except in compliance
 + * with the License.  You may obtain a copy of the License at
 + *
 + * http://www.apache.org/licenses/LICENSE-2.0
 + *
 + * Unless required by applicable law or agreed to in writing, software
  + * distributed under the License is distributed on an "AS IS" BASIS,
 + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 + * See the License for the specific language governing permissions and
 + * limitations under the License.
 + */
 +package org.apache.cassandra.io.sstable.format.big;
 +
 +import com.google.common.util.concurrent.RateLimiter;
 +import org.apache.cassandra.cache.KeyCacheKey;
 +import org.apache.cassandra.config.CFMetaData;
 +import org.apache.cassandra.db.DataRange;
 +import org.apache.cassandra.db.DecoratedKey;
 +import org.apache.cassandra.db.RowIndexEntry;
 +import org.apache.cassandra.db.RowPosition;
 +import org.apache.cassandra.db.columniterator.OnDiskAtomIterator;
- import org.apache.cassandra.db.compaction.ICompactionScanner;
 +import org.apache.cassandra.db.composites.CellName;
 +import org.apache.cassandra.db.filter.ColumnSlice;
 +import org.apache.cassandra.dht.IPartitioner;
 +import org.apache.cassandra.dht.Range;
 +import org.apache.cassandra.dht.Token;
 +import org.apache.cassandra.io.sstable.Component;
 +import org.apache.cassandra.io.sstable.CorruptSSTableException;
 +import org.apache.cassandra.io.sstable.Descriptor;
++import org.apache.cassandra.io.sstable.ISSTableScanner;
 +import org.apache.cassandra.io.sstable.format.SSTableReader;
 +import org.apache.cassandra.io.sstable.metadata.StatsMetadata;
 +import org.apache.cassandra.io.util.FileDataInput;
 +import org.apache.cassandra.io.util.FileUtils;
 +import org.apache.cassandra.tracing.Tracing;
 +import org.apache.cassandra.utils.ByteBufferUtil;
 +import org.apache.cassandra.utils.Pair;
 +import org.slf4j.Logger;
 +import org.slf4j.LoggerFactory;
 +
 +import java.io.IOException;
 +import java.nio.ByteBuffer;
 +import java.util.*;
 +
 +/**
 + * SSTableReaders are open()ed by Keyspace.onStart; after that they are created by SSTableWriter.renameAndOpen.
 + * Do not re-call open() on existing SSTable files; use the references kept by ColumnFamilyStore post-start instead.
 + */
 +public class BigTableReader extends SSTableReader
 +{
 +    private static final Logger logger = LoggerFactory.getLogger(BigTableReader.class);
 +
 +    BigTableReader(Descriptor desc, Set<Component> components, CFMetaData metadata, IPartitioner partitioner, Long maxDataAge, StatsMetadata sstableMetadata, OpenReason openReason)
 +    {
 +        super(desc, components, metadata, partitioner, maxDataAge, sstableMetadata, openReason);
 +    }
 +
 +    public OnDiskAtomIterator iterator(DecoratedKey key, SortedSet<CellName> columns)
 +    {
 +        return new SSTableNamesIterator(this, key, columns);
 +    }
 +
 +    public OnDiskAtomIterator iterator(FileDataInput input, DecoratedKey key, SortedSet<CellName> columns, RowIndexEntry indexEntry)
 +    {
 +        return new SSTableNamesIterator(this, input, key, columns, indexEntry);
 +    }
 +
 +    public OnDiskAtomIterator iterator(DecoratedKey key, ColumnSlice[] slices, boolean reverse)
 +    {
 +        return new SSTableSliceIterator(this, key, slices, reverse);
 +    }
 +
 +    public OnDiskAtomIterator iterator(FileDataInput input, DecoratedKey key, ColumnSlice[] slices, boolean reverse, RowIndexEntry indexEntry)
 +    {
 +        return new SSTableSliceIterator(this, input, key, slices, reverse, indexEntry);
 +    }
 +    /**
 +     *
 +     * @param dataRange filter to use when reading the columns
 +     * @return A Scanner for seeking over the rows of the SSTable.
 +     */
-     public ICompactionScanner getScanner(DataRange dataRange, RateLimiter limiter)
++    public ISSTableScanner getScanner(DataRange dataRange, RateLimiter limiter)
 +    {
-         return new BigTableScanner(this, dataRange, limiter);
++        return BigTableScanner.getScanner(this, dataRange, limiter);
 +    }
 +
 +
 +/**
 + * Direct I/O SSTableScanner over a defined collection of ranges 

[5/5] cassandra git commit: Merge branch 'cassandra-2.1' into trunk

2014-12-16 Thread jmckenzie
Merge branch 'cassandra-2.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/32ac6af2
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/32ac6af2
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/32ac6af2

Branch: refs/heads/trunk
Commit: 32ac6af2ba936c23e61d2ac4ab9cd32175cb5176
Parents: bee53d7 ffa8067
Author: Joshua McKenzie jmcken...@apache.org
Authored: Tue Dec 16 15:06:32 2014 -0600
Committer: Joshua McKenzie jmcken...@apache.org
Committed: Tue Dec 16 15:06:32 2014 -0600

--

--




[1/5] cassandra git commit: Fix ref counting race between SSTableScanner and SSTR

2014-12-16 Thread jmckenzie
Repository: cassandra
Updated Branches:
  refs/heads/trunk e73fccdcd -> 32ac6af2b


Fix ref counting race between SSTableScanner and SSTR

Patch by jmckenzie; reviewed by marcuse for CASSANDRA-8399


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/1fec4a42
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/1fec4a42
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/1fec4a42

Branch: refs/heads/trunk
Commit: 1fec4a4281be94f8ef2f9f8a5eaccee56d70e87e
Parents: 9871914
Author: Joshua McKenzie jmcken...@apache.org
Authored: Tue Dec 16 14:37:07 2014 -0600
Committer: Joshua McKenzie jmcken...@apache.org
Committed: Tue Dec 16 14:37:07 2014 -0600

--
 .../compaction/AbstractCompactionIterable.java  |  7 +-
 .../compaction/AbstractCompactionStrategy.java  | 10 +--
 .../db/compaction/CompactionIterable.java   |  5 +-
 .../db/compaction/CompactionManager.java| 22 +++---
 .../db/compaction/ICompactionScanner.java   | 34 -
 .../compaction/LeveledCompactionStrategy.java   |  7 +-
 .../compaction/WrappingCompactionStrategy.java  |  4 +-
 .../cassandra/io/sstable/SSTableReader.java | 61 +++-
 .../cassandra/io/sstable/SSTableScanner.java| 73 +---
 .../apache/cassandra/tools/SSTableExport.java   |  2 +-
 .../db/compaction/AntiCompactionTest.java   |  9 +--
 .../db/compaction/CompactionsTest.java  |  7 +-
 .../LeveledCompactionStrategyTest.java  |  5 +-
 .../cassandra/db/compaction/TTLExpiryTest.java  |  4 +-
 .../cassandra/io/sstable/SSTableReaderTest.java |  3 +-
 .../io/sstable/SSTableRewriterTest.java | 24 +++
 .../io/sstable/SSTableScannerTest.java  | 17 +++--
 .../cassandra/io/sstable/SSTableUtils.java  |  4 +-
 18 files changed, 130 insertions(+), 168 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/1fec4a42/src/java/org/apache/cassandra/db/compaction/AbstractCompactionIterable.java
--
diff --git a/src/java/org/apache/cassandra/db/compaction/AbstractCompactionIterable.java b/src/java/org/apache/cassandra/db/compaction/AbstractCompactionIterable.java
index e9f063f..5ac2c8b 100644
--- a/src/java/org/apache/cassandra/db/compaction/AbstractCompactionIterable.java
+++ b/src/java/org/apache/cassandra/db/compaction/AbstractCompactionIterable.java
@@ -20,6 +20,7 @@ package org.apache.cassandra.db.compaction;
 import java.util.List;
 import java.util.concurrent.atomic.AtomicLong;
 
+import org.apache.cassandra.io.sstable.ISSTableScanner;
 import org.apache.cassandra.utils.CloseableIterator;
 
 public abstract class AbstractCompactionIterable extends CompactionInfo.Holder implements Iterable<AbstractCompactedRow>
@@ -28,7 +29,7 @@ public abstract class AbstractCompactionIterable extends CompactionInfo.Holder i
 protected final CompactionController controller;
 protected final long totalBytes;
 protected volatile long bytesRead;
-    protected final List<ICompactionScanner> scanners;
+    protected final List<ISSTableScanner> scanners;
 /*
  * counters for merged rows.
  * array index represents (number of merged rows - 1), so index 0 is 
counter for no merge (1 row),
@@ -36,7 +37,7 @@ public abstract class AbstractCompactionIterable extends CompactionInfo.Holder i
  */
 protected final AtomicLong[] mergeCounters;
 
-    public AbstractCompactionIterable(CompactionController controller, OperationType type, List<ICompactionScanner> scanners)
+    public AbstractCompactionIterable(CompactionController controller, OperationType type, List<ISSTableScanner> scanners)
 {
 this.controller = controller;
 this.type = type;
@@ -44,7 +45,7 @@ public abstract class AbstractCompactionIterable extends CompactionInfo.Holder i
 this.bytesRead = 0;
 
 long bytes = 0;
-for (ICompactionScanner scanner : scanners)
+for (ISSTableScanner scanner : scanners)
 bytes += scanner.getLengthInBytes();
 this.totalBytes = bytes;
 mergeCounters = new AtomicLong[scanners.size()];

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1fec4a42/src/java/org/apache/cassandra/db/compaction/AbstractCompactionStrategy.java
--
diff --git a/src/java/org/apache/cassandra/db/compaction/AbstractCompactionStrategy.java b/src/java/org/apache/cassandra/db/compaction/AbstractCompactionStrategy.java
index bf136b9..337657d 100644
--- a/src/java/org/apache/cassandra/db/compaction/AbstractCompactionStrategy.java
+++ b/src/java/org/apache/cassandra/db/compaction/AbstractCompactionStrategy.java
@@ -22,7 +22,6 @@ import java.util.*;
 import com.google.common.base.Throwables;
 import 

[3/5] cassandra git commit: Merge branch 'cassandra-2.1' into trunk

2014-12-16 Thread jmckenzie
Merge branch 'cassandra-2.1' into trunk

Conflicts:

src/java/org/apache/cassandra/db/compaction/AbstractCompactionStrategy.java
src/java/org/apache/cassandra/db/compaction/CompactionIterable.java
src/java/org/apache/cassandra/db/compaction/CompactionManager.java

src/java/org/apache/cassandra/db/compaction/LeveledCompactionStrategy.java

src/java/org/apache/cassandra/db/compaction/WrappingCompactionStrategy.java
src/java/org/apache/cassandra/io/sstable/format/SSTableReader.java
src/java/org/apache/cassandra/io/sstable/format/big/BigTableScanner.java
src/java/org/apache/cassandra/tools/SSTableExport.java
test/unit/org/apache/cassandra/db/compaction/AntiCompactionTest.java
test/unit/org/apache/cassandra/db/compaction/CompactionsTest.java

test/unit/org/apache/cassandra/db/compaction/LeveledCompactionStrategyTest.java
test/unit/org/apache/cassandra/db/compaction/TTLExpiryTest.java
test/unit/org/apache/cassandra/io/sstable/SSTableReaderTest.java
test/unit/org/apache/cassandra/io/sstable/SSTableRewriterTest.java
test/unit/org/apache/cassandra/io/sstable/SSTableScannerTest.java
test/unit/org/apache/cassandra/io/sstable/SSTableUtils.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/bee53d72
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/bee53d72
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/bee53d72

Branch: refs/heads/trunk
Commit: bee53d72a530aab2949d12bd9d2320b76811c85a
Parents: e73fccd 1fec4a4
Author: Joshua McKenzie jmcken...@apache.org
Authored: Tue Dec 16 15:03:05 2014 -0600
Committer: Joshua McKenzie jmcken...@apache.org
Committed: Tue Dec 16 15:03:05 2014 -0600

--
 .../compaction/AbstractCompactionIterable.java  |  7 +-
 .../compaction/AbstractCompactionStrategy.java  |  9 +--
 .../db/compaction/CompactionIterable.java   |  5 +-
 .../db/compaction/CompactionManager.java| 29 +++-
 .../db/compaction/ICompactionScanner.java   | 34 -
 .../compaction/LeveledCompactionStrategy.java   |  7 +-
 .../compaction/WrappingCompactionStrategy.java  |  4 +-
 .../cassandra/io/sstable/ISSTableScanner.java   | 34 +
 .../io/sstable/format/SSTableReader.java| 52 ++
 .../io/sstable/format/big/BigTableReader.java   | 15 ++--
 .../io/sstable/format/big/BigTableScanner.java  | 74 +---
 .../apache/cassandra/tools/SSTableExport.java   |  3 +-
 .../db/compaction/AntiCompactionTest.java   |  6 +-
 .../db/compaction/CompactionsTest.java  |  2 +-
 .../LeveledCompactionStrategyTest.java  |  5 +-
 .../cassandra/db/compaction/TTLExpiryTest.java  |  3 +-
 .../cassandra/io/sstable/SSTableReaderTest.java |  3 +-
 .../io/sstable/SSTableRewriterTest.java | 22 +++---
 .../io/sstable/SSTableScannerTest.java  | 17 +++--
 .../cassandra/io/sstable/SSTableUtils.java  |  5 +-
 20 files changed, 168 insertions(+), 168 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/bee53d72/src/java/org/apache/cassandra/db/compaction/AbstractCompactionStrategy.java
--
diff --cc src/java/org/apache/cassandra/db/compaction/AbstractCompactionStrategy.java
index 1d6d2e1,337657d..0f44b4b
--- a/src/java/org/apache/cassandra/db/compaction/AbstractCompactionStrategy.java
+++ b/src/java/org/apache/cassandra/db/compaction/AbstractCompactionStrategy.java
@@@ -34,6 -33,8 +34,7 @@@ import org.apache.cassandra.dht.Range
  import org.apache.cassandra.dht.Token;
  import org.apache.cassandra.exceptions.ConfigurationException;
  import org.apache.cassandra.io.sstable.Component;
+ import org.apache.cassandra.io.sstable.ISSTableScanner;
 -import org.apache.cassandra.io.sstable.SSTableReader;
  import org.apache.cassandra.utils.JVMStabilityInspector;
  
  /**

http://git-wip-us.apache.org/repos/asf/cassandra/blob/bee53d72/src/java/org/apache/cassandra/db/compaction/CompactionIterable.java
--
diff --cc src/java/org/apache/cassandra/db/compaction/CompactionIterable.java
index 86918bc,fdcec6e..cd08b81
--- a/src/java/org/apache/cassandra/db/compaction/CompactionIterable.java
+++ b/src/java/org/apache/cassandra/db/compaction/CompactionIterable.java
@@@ -24,7 -24,7 +24,8 @@@ import java.util.List
  import com.google.common.collect.ImmutableList;
  
  import org.apache.cassandra.db.columniterator.OnDiskAtomIterator;
 +import org.apache.cassandra.io.sstable.format.SSTableFormat;
+ import org.apache.cassandra.io.sstable.ISSTableScanner;
  import org.apache.cassandra.utils.CloseableIterator;
  import org.apache.cassandra.utils.MergeIterator;
  
@@@ -40,10 

[jira] [Updated] (CASSANDRA-8494) incremental bootstrap

2014-12-16 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-8494:
--
  Priority: Minor  (was: Major)
Issue Type: New Feature  (was: Improvement)

 incremental bootstrap
 -

 Key: CASSANDRA-8494
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8494
 Project: Cassandra
  Issue Type: New Feature
  Components: Core
Reporter: Jon Haddad
Priority: Minor

 Current bootstrapping involves (to my knowledge) picking tokens and streaming 
 data before the node is available for requests.  This can be problematic with 
 fat nodes, since it may require 20TB of data to be streamed over before the 
 machine can be useful.  This can result in a massive window of time before 
 the machine can do anything useful.
 As a potential approach to mitigate the huge window of time before a node is 
 available, I suggest modifying the bootstrap process to only acquire a single 
 initial token before being marked UP.  This would likely be a configuration 
 parameter incremental_bootstrap or something similar.
 After the node is bootstrapped with this one token, it could go into UP 
 state, and could then acquire additional tokens (one or a handful at a time), 
 which would be streamed over while the node is active and serving requests.  
 The benefit here is that with the default 256 tokens a node could become an 
 active part of the cluster with less than 1% of its final data streamed over.
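The proposal above can be sketched as a small simulation. This is purely hypothetical code illustrating the suggested flow (single initial token, mark UP, then stream remaining tokens in batches); no such API or option exists in Cassandra today, and `bootstrap`/`batchSize` are illustrative names.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Hypothetical sketch of the proposed incremental bootstrap: acquire one
// initial token, mark the node UP, then stream the remaining token ranges
// in small batches while the node is already serving requests.
public class IncrementalBootstrap {
    static List<Integer> bootstrap(int numTokens, int batchSize) {
        List<Integer> streamed = new ArrayList<>();
        Deque<Integer> pending = new ArrayDeque<>();
        for (int t = 0; t < numTokens; t++)
            pending.add(t);

        streamed.add(pending.poll());            // single initial token, then mark UP
        while (!pending.isEmpty())               // node is UP and serving requests here
            for (int i = 0; i < batchSize && !pending.isEmpty(); i++)
                streamed.add(pending.poll());    // stream one more token range
        return streamed;
    }

    public static void main(String[] args) {
        List<Integer> order = bootstrap(256, 8);
        System.out.println("UP after streaming token " + order.get(0)
                + "; " + (order.size() - 1) + " tokens streamed incrementally");
    }
}
```

With the default 256 vnodes, the node would be marked UP after streaming roughly 1/256th of its final data, which is the "less than 1%" figure quoted in the description.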



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8494) incremental bootstrap

2014-12-16 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14248999#comment-14248999
 ] 

Jonathan Ellis commented on CASSANDRA-8494:
---

Is this really a big problem in practice?  Yeah, it's aesthetically ugly, but 
is it worth rewriting and destabilizing bootstrap to address it?

 incremental bootstrap
 -

 Key: CASSANDRA-8494
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8494
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Jon Haddad

 Current bootstrapping involves (to my knowledge) picking tokens and streaming 
 data before the node is available for requests.  This can be problematic with 
 fat nodes, since it may require 20TB of data to be streamed over before the 
 machine can be useful.  This can result in a massive window of time before 
 the machine can do anything useful.
 As a potential approach to mitigate the huge window of time before a node is 
 available, I suggest modifying the bootstrap process to only acquire a single 
 initial token before being marked UP.  This would likely be a configuration 
 parameter incremental_bootstrap or something similar.
 After the node is bootstrapped with this one token, it could go into UP 
 state, and could then acquire additional tokens (one or a handful at a time), 
 which would be streamed over while the node is active and serving requests.  
 The benefit here is that with the default 256 tokens a node could become an 
 active part of the cluster with less than 1% of its final data streamed over.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-6696) Partition sstables by token range

2014-12-16 Thread Jon Haddad (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14249000#comment-14249000
 ] 

Jon Haddad commented on CASSANDRA-6696:
---

It's been pointed out to me that this would help Spark performance, since it 
achieves data locality by filtering on tokens.  We could skip entire sstables 
when reading in local data for bulk processing.

 Partition sstables by token range
 -

 Key: CASSANDRA-6696
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6696
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: sankalp kohli
Assignee: Marcus Eriksson
  Labels: compaction, correctness, performance
 Fix For: 3.0


 In JBOD, when someone gets a bad drive, the bad drive is replaced with a new 
 empty one and repair is run. 
 This can cause deleted data to come back in some cases. The same is true for 
 corrupt sstables, where we delete the corrupt sstable and run repair. 
 Here is an example:
 Say we have 3 nodes A,B and C and RF=3 and GC grace=10days. 
 row=sankalp col=sankalp is written 20 days back and successfully went to all 
 three nodes. 
 Then a delete/tombstone was written successfully for the same row column 15 
 days back. 
 Since this tombstone is older than gc grace, it was purged on nodes A and B 
 when it was compacted together with the actual data. So there is no trace of 
 this row column in nodes A and B.
 Now in node C, say the original data is in drive1 and tombstone is in drive2. 
 Compaction has not yet reclaimed the data and tombstone.  
 Drive2 becomes corrupt and was replaced with new empty drive. 
 Due to the replacement, the tombstone is now gone and row=sankalp col=sankalp 
 has come back to life. 
 Now after replacing the drive we run repair. This data will be propagated to 
 all nodes. 
 Note: This is still a problem even if we run repair every gc grace. 
  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8449) Allow zero-copy reads again

2014-12-16 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14249015#comment-14249015
 ] 

T Jake Luciani commented on CASSANDRA-8449:
---

Looks like there is already a CFS.readOrdering barrier in scheduleTidy.  

The problem is readOrdering is only started at the CFS.getTopLevelColumns() 
level, which means the data could be dropped before it's written to the output 
buffer.  
We could move this to the ReadCommand level which would cover internode 
messages via the verb handlers.  But for local read commands we would need to 
push this all the way to the netty/thrift level, perhaps pushing the opOrder into a 
wrapper around List<Row>.

On the other hand, this is a lot of effort for something that isn't widely used 
anymore.  CASSANDRA-8464 is much more relevant and doesn't have this issue.
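The idea of holding the read guard open at the outer (netty/thrift) level until the response is serialized can be sketched in miniature. This `ReadBarrier` is a simplified stand-in for illustration only, not Cassandra's actual `OpOrder`/`readOrdering` machinery:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Illustration (not real Cassandra code): an sstable may only be
// unmapped/deleted after every read that started before the tidy has
// finished serializing its response to the output buffer.
public class ReadBarrier {
    private final AtomicInteger inFlight = new AtomicInteger();

    // Called where the read begins, ideally at the netty/thrift boundary
    // rather than inside getTopLevelColumns().
    public AutoCloseable start() {
        inFlight.incrementAndGet();
        return () -> {
            synchronized (this) {
                inFlight.decrementAndGet();
                notifyAll();            // wake a tidy() waiting for reads to drain
            }
        };
    }

    // tidy() blocks here before unmapping; returns once all in-flight
    // reads that cover the mapped region have completed.
    public synchronized void awaitQuiesce() throws InterruptedException {
        while (inFlight.get() > 0)
            wait();
    }

    public static void main(String[] args) throws Exception {
        ReadBarrier barrier = new ReadBarrier();
        try (AutoCloseable read = barrier.start()) {
            // read rows and serialize them to the output buffer inside the guard
        }
        barrier.awaitQuiesce();         // now safe to unmap/delete the sstable
        System.out.println("quiesced");
    }
}
```

The point of the comment is where `start()` gets called: if the guard only spans `getTopLevelColumns()`, the mapped buffer can be dropped before serialization; pushing it outward keeps the data alive until the bytes are copied out.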


 Allow zero-copy reads again
 ---

 Key: CASSANDRA-8449
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8449
 Project: Cassandra
  Issue Type: Improvement
Reporter: T Jake Luciani
Assignee: T Jake Luciani
Priority: Minor
  Labels: performance
 Fix For: 3.0


 We disabled zero-copy reads in CASSANDRA-3179 due to in flight reads 
 accessing a ByteBuffer when the data was unmapped by compaction.  Currently 
 this code path is only used for uncompressed reads.
 The actual bytes are in fact copied to the client output buffers for both 
 netty and thrift before being sent over the wire, so the only issue really is 
 the time it takes to process the read internally.  
 This patch adds a slow network read test and changes the tidy() method to 
 actually delete a sstable once the readTimeout has elapsed giving plenty of 
 time to serialize the read.
 Removing this copy causes significantly less GC on the read path and improves 
 the tail latencies:
 http://cstar.datastax.com/graph?stats=c0c8ce16-7fea-11e4-959d-42010af0688f&metric=gc_count&operation=2_read&smoothing=1&show_aggregates=true&xmin=0&xmax=109.34&ymin=0&ymax=5.5



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8494) incremental bootstrap

2014-12-16 Thread Jon Haddad (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14249016#comment-14249016
 ] 

Jon Haddad commented on CASSANDRA-8494:
---

Well, it depends.  The issue is written on the assumption that we want to be 
able to increase node density, and that currently bootstrapping a 20TB node is 
problematic.  If we're not going to push node density, it might not be an 
issue, but I suspect sticking to no more than 1TB per node is going to fly 
less and less over time.  

 incremental bootstrap
 -

 Key: CASSANDRA-8494
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8494
 Project: Cassandra
  Issue Type: New Feature
  Components: Core
Reporter: Jon Haddad
Priority: Minor

 Current bootstrapping involves (to my knowledge) picking tokens and streaming 
 data before the node is available for requests.  This can be problematic with 
 fat nodes, since it may require 20TB of data to be streamed over before the 
 machine can be useful.  This can result in a massive window of time before 
 the machine can do anything useful.
 As a potential approach to mitigate the huge window of time before a node is 
 available, I suggest modifying the bootstrap process to only acquire a single 
 initial token before being marked UP.  This would likely be a configuration 
 parameter incremental_bootstrap or something similar.
 After the node is bootstrapped with this one token, it could go into UP 
 state, and could then acquire additional tokens (one or a handful at a time), 
 which would be streamed over while the node is active and serving requests.  
 The benefit here is that with the default 256 tokens a node could become an 
 active part of the cluster with less than 1% of its final data streamed over.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Issue Comment Deleted] (CASSANDRA-6696) Partition sstables by token range

2014-12-16 Thread Jon Haddad (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6696?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jon Haddad updated CASSANDRA-6696:
--
Comment: was deleted

(was: It's been pointed out to me that this would help Spark performance, since 
it achieves data locality by filtering on tokens.  We could skip entire 
sstables when reading in local data for bulk processing.)

 Partition sstables by token range
 -

 Key: CASSANDRA-6696
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6696
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: sankalp kohli
Assignee: Marcus Eriksson
  Labels: compaction, correctness, performance
 Fix For: 3.0


 In JBOD, when someone gets a bad drive, the bad drive is replaced with a new 
 empty one and repair is run. 
 This can cause deleted data to come back in some cases. The same is true for 
 corrupt sstables, where we delete the corrupt sstable and run repair. 
 Here is an example:
 Say we have 3 nodes A,B and C and RF=3 and GC grace=10days. 
 row=sankalp col=sankalp is written 20 days back and successfully went to all 
 three nodes. 
 Then a delete/tombstone was written successfully for the same row column 15 
 days back. 
 Since this tombstone is older than gc grace, it was purged on nodes A and B 
 when it was compacted together with the actual data. So there is no trace of 
 this row column in nodes A and B.
 Now in node C, say the original data is in drive1 and tombstone is in drive2. 
 Compaction has not yet reclaimed the data and tombstone.  
 Drive2 becomes corrupt and was replaced with new empty drive. 
 Due to the replacement, the tombstone is now gone and row=sankalp col=sankalp 
 has come back to life. 
 Now after replacing the drive we run repair. This data will be propagated to 
 all nodes. 
 Note: This is still a problem even if we run repair every gc grace. 
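The drive-replacement scenario above can be condensed into a toy model, with maps standing in for drives. This is an illustration of the failure mode only, not real Cassandra code; `repairView` and the `TOMBSTONE` marker are invented for the sketch.

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of tombstone resurrection under JBOD: node C holds the value on
// drive1 and the (gc-grace-expired) tombstone on drive2; nodes A and B have
// already purged both during compaction.
public class TombstoneResurrection {
    static final String TOMBSTONE = "<tombstone>";

    // Node C's merged view after drive2 is swapped for an empty drive.
    static Map<String, String> repairView() {
        Map<String, String> drive1 = new HashMap<>();  // holds the original cell
        Map<String, String> drive2 = new HashMap<>();  // holds the tombstone
        drive1.put("sankalp", "sankalp");
        drive2.put("sankalp", TOMBSTONE);

        drive2.clear();                                // drive replaced: tombstone lost

        Map<String, String> merged = new HashMap<>(drive1);
        merged.putAll(drive2);                         // nothing shadows the value now
        return merged;
    }

    public static void main(String[] args) {
        // Repair now propagates this apparently-live value back to A and B,
        // where both data and tombstone were already gone.
        System.out.println("repair streams: sankalp=" + repairView().get("sankalp"));
    }
}
```

Because the tombstone lived only on the failed drive, the surviving copy of the cell looks like ordinary live data to repair, which is exactly how the deleted row comes back on all replicas.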
  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8494) incremental bootstrap

2014-12-16 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14249031#comment-14249031
 ] 

Jonathan Ellis commented on CASSANDRA-8494:
---

Why is it a big deal if bootstrap takes a day or two given reasonable capacity 
planning?

 incremental bootstrap
 -

 Key: CASSANDRA-8494
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8494
 Project: Cassandra
  Issue Type: New Feature
  Components: Core
Reporter: Jon Haddad
Priority: Minor

 Current bootstrapping involves (to my knowledge) picking tokens and streaming 
 data before the node is available for requests.  This can be problematic with 
 fat nodes, since it may require 20TB of data to be streamed over before the 
 machine can be useful.  This can result in a massive window of time before 
 the machine can do anything useful.
 As a potential approach to mitigate the huge window of time before a node is 
 available, I suggest modifying the bootstrap process to only acquire a single 
 initial token before being marked UP.  This would likely be a configuration 
 parameter incremental_bootstrap or something similar.
 After the node is bootstrapped with this one token, it could go into UP 
 state, and could then acquire additional tokens (one or a handful at a time), 
 which would be streamed over while the node is active and serving requests.  
 The benefit here is that with the default 256 tokens a node could become an 
 active part of the cluster with less than 1% of its final data streamed over.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8494) incremental bootstrap

2014-12-16 Thread Ryan Svihla (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14249029#comment-14249029
 ] 

Ryan Svihla commented on CASSANDRA-8494:


I'm mixed on the push for density. I get that people _really_ want it, and this 
would substantially help Cassandra in that space, but I'm also convinced that just 
by physics the story for high density will always be worse than the story for a 
bunch of cheap low-density nodes (i.e. total cost, not just data center space 
costs).

Regardless, I think even in the case of more modest, say 1TB, nodes, this would be an 
impressive boost to handling overloaded clusters, where load can be moved off 
struggling nodes more quickly and gracefully. What we struggle with today in 
the field is people who don't monitor their clusters and don't realize until 
they're going OOM that they're in trouble. For those folks we always struggle 
to stream in new nodes as quickly as possible. I think this could really help 
with those scenarios, which are more common than you'd think.

 incremental bootstrap
 -

 Key: CASSANDRA-8494
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8494
 Project: Cassandra
  Issue Type: New Feature
  Components: Core
Reporter: Jon Haddad
Priority: Minor

 Current bootstrapping involves (to my knowledge) picking tokens and streaming 
 data before the node is available for requests.  This can be problematic with 
 fat nodes, since it may require 20TB of data to be streamed over before the 
 machine can be useful.  This can result in a massive window of time before 
 the machine can do anything useful.
 As a potential approach to mitigate the huge window of time before a node is 
 available, I suggest modifying the bootstrap process to only acquire a single 
 initial token before being marked UP.  This would likely be a configuration 
 parameter incremental_bootstrap or something similar.
 After the node is bootstrapped with this one token, it could go into UP 
 state, and could then acquire additional tokens (one or a handful at a time), 
 which would be streamed over while the node is active and serving requests.  
 The benefit here is that with the default 256 tokens a node could become an 
 active part of the cluster with less than 1% of its final data streamed over.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7275) Errors in FlushRunnable may leave threads hung

2014-12-16 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14249033#comment-14249033
 ] 

Jonathan Ellis commented on CASSANDRA-7275:
---

bq. Killing C* process is harmful as if we have code problem in 
writeSortedContents or replaceFlushed code it would potentially result in 
shutdown of the whole cluster or at least of all of the neighbors sharing 
replica range.

I'm much more comfortable with "things die if something goes catastrophically 
wrong" than "things start returning nonsense on reads", which is what happens if 
we mark something flushed that actually wasn't.

That said, I'd be okay using disk failure policy as a guide.  If people opt 
into best effort behavior and are okay with those implications, so be it.

 Errors in FlushRunnable may leave threads hung
 --

 Key: CASSANDRA-7275
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7275
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Tyler Hobbs
Assignee: Pavel Yaskevich
Priority: Minor
 Fix For: 2.0.12

 Attachments: 0001-Move-latch.countDown-into-finally-block.patch, 
 7252-2.0-v2.txt, CASSANDRA-7275-flush-info.patch


 In Memtable.FlushRunnable, the CountDownLatch will never be counted down if 
 there are errors, which results in hanging any threads that are waiting for 
 the flush to complete.  For example, an error like this causes the problem:
 {noformat}
 ERROR [FlushWriter:474] 2014-05-20 12:10:31,137 CassandraDaemon.java (line 
 198) Exception in thread Thread[FlushWriter:474,5,main]
 java.lang.IllegalArgumentException
 at java.nio.Buffer.position(Unknown Source)
 at 
 org.apache.cassandra.db.marshal.AbstractCompositeType.getBytes(AbstractCompositeType.java:64)
 at 
 org.apache.cassandra.db.marshal.AbstractCompositeType.getWithShortLength(AbstractCompositeType.java:72)
 at 
 org.apache.cassandra.db.marshal.AbstractCompositeType.split(AbstractCompositeType.java:138)
 at 
 org.apache.cassandra.io.sstable.ColumnNameHelper.minComponents(ColumnNameHelper.java:103)
 at 
 org.apache.cassandra.db.ColumnFamily.getColumnStats(ColumnFamily.java:439)
 at 
 org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:194)
 at 
 org.apache.cassandra.db.Memtable$FlushRunnable.writeSortedContents(Memtable.java:397)
 at 
 org.apache.cassandra.db.Memtable$FlushRunnable.runWith(Memtable.java:350)
 at 
 org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
 at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
 at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
 at java.lang.Thread.run(Unknown Source)
 {noformat}
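The attachment name ("Move-latch.countDown-into-finally-block.patch") names the fix: count the latch down in a `finally` block so waiters are released even when the flush throws. A minimal sketch of that pattern, where `FlushTask` is a hedged stand-in for `Memtable.FlushRunnable`, not the real class:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

// Sketch of the fix: without the finally block, an exception in the flush
// work leaves the latch un-counted and any waiting thread hung forever.
public class FlushTask implements Runnable {
    private final CountDownLatch latch;
    private final Runnable writeSortedContents;  // the flush work; may throw

    FlushTask(CountDownLatch latch, Runnable writeSortedContents) {
        this.latch = latch;
        this.writeSortedContents = writeSortedContents;
    }

    @Override
    public void run() {
        try {
            writeSortedContents.run();
        } catch (RuntimeException e) {
            // log / surface the failure to whoever awaits the flush result
        } finally {
            latch.countDown();                   // always release waiters
        }
    }

    // Runs the task on its own thread and waits for the latch to open.
    static boolean runAndAwait(Runnable work) throws InterruptedException {
        CountDownLatch latch = new CountDownLatch(1);
        new Thread(new FlushTask(latch, work)).start();
        return latch.await(5, TimeUnit.SECONDS);
    }

    public static void main(String[] args) throws InterruptedException {
        // Simulate the IllegalArgumentException from the reported stack trace:
        // the waiter is still released because countDown sits in finally.
        boolean released = runAndAwait(() -> { throw new IllegalArgumentException(); });
        System.out.println("waiter released: " + released);
    }
}
```

The same shape applies to any one-shot completion signal: the `countDown()` (or equivalent) belongs in `finally` so the error path and the success path both unblock waiters.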



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8494) incremental bootstrap

2014-12-16 Thread Ryan Svihla (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14249040#comment-14249040
 ] 

Ryan Svihla commented on CASSANDRA-8494:


I think there is an overlap between users who want high density and very slow, 
almost glacial planning. By the time they've requisitioned the hardware and 
gotten the nodes in place, their cluster may very well be far past overloaded. 
At the end of the day, this will help those who don't plan well the most.

I think it could make for a much better new-user experience if we get this right.

 incremental bootstrap
 -

 Key: CASSANDRA-8494
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8494
 Project: Cassandra
  Issue Type: New Feature
  Components: Core
Reporter: Jon Haddad
Priority: Minor

 Current bootstrapping involves (to my knowledge) picking tokens and streaming 
 data before the node is available for requests.  This can be problematic with 
 fat nodes, since it may require 20TB of data to be streamed over before the 
 machine can be useful.  This can result in a massive window of time before 
 the machine can do anything useful.
 As a potential approach to mitigate the huge window of time before a node is 
 available, I suggest modifying the bootstrap process to only acquire a single 
 initial token before being marked UP.  This would likely be a configuration 
 parameter incremental_bootstrap or something similar.
 After the node is bootstrapped with this one token, it could go into UP 
 state, and could then acquire additional tokens (one or a handful at a time), 
 which would be streamed over while the node is active and serving requests.  
 The benefit here is that with the default 256 tokens a node could become an 
 active part of the cluster with less than 1% of its final data streamed over.
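As a rough illustration of the proposed flow (a hypothetical sketch, not Cassandra code; all names here are made up): join with a single token, mark the node UP, then acquire the remaining tokens in small batches while serving requests.

```java
import java.util.ArrayList;
import java.util.List;

public class IncrementalBootstrapSketch {
    // Acquire one token up front (the node can then be marked UP), then the
    // rest in small batches while the node is live and serving requests.
    static List<Integer> bootstrap(int numTokens, int batchSize) {
        List<Integer> owned = new ArrayList<>();
        owned.add(0);                                  // initial token: stream it, then go UP
        while (owned.size() < numTokens) {
            int batchEnd = Math.min(owned.size() + batchSize, numTokens);
            while (owned.size() < batchEnd) {
                owned.add(owned.size());               // stream ranges for one more token
            }
        }
        return owned;
    }

    public static void main(String[] args) {
        // With the default 256 vnodes, the node serves reads after ~1/256 of
        // its data has arrived, instead of waiting for all of it.
        System.out.println(bootstrap(256, 8).size());
    }
}
```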





[jira] [Updated] (CASSANDRA-8494) incremental bootstrap

2014-12-16 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-8494:
--
Labels: density  (was: )

 incremental bootstrap
 -

 Key: CASSANDRA-8494
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8494
 Project: Cassandra
  Issue Type: New Feature
  Components: Core
Reporter: Jon Haddad
Priority: Minor
  Labels: density

 Current bootstrapping involves (to my knowledge) picking tokens and streaming 
 data before the node is available for requests.  This can be problematic with 
 fat nodes, since it may require 20TB of data to be streamed over before the 
 machine can be useful.  This can result in a massive window of time before 
 the machine can do anything useful.
 As a potential approach to mitigate the huge window of time before a node is 
 available, I suggest modifying the bootstrap process to only acquire a single 
 initial token before being marked UP.  This would likely be a configuration 
 parameter incremental_bootstrap or something similar.
 After the node is bootstrapped with this one token, it could go into UP 
 state, and could then acquire additional tokens (one or a handful at a time), 
 which would be streamed over while the node is active and serving requests.  
 The benefit here is that with the default 256 tokens a node could become an 
 active part of the cluster with less than 1% of its final data streamed over.





[jira] [Commented] (CASSANDRA-8494) incremental bootstrap

2014-12-16 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14249046#comment-14249046
 ] 

Jonathan Ellis commented on CASSANDRA-8494:
---

All right.  [~yukim] / [~brandon.williams], how feasible is this?

 incremental bootstrap
 -

 Key: CASSANDRA-8494
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8494
 Project: Cassandra
  Issue Type: New Feature
  Components: Core
Reporter: Jon Haddad
Priority: Minor
  Labels: density

 Current bootstrapping involves (to my knowledge) picking tokens and streaming 
 data before the node is available for requests.  This can be problematic with 
 fat nodes, since it may require 20TB of data to be streamed over before the 
 machine can be useful.  This can result in a massive window of time before 
 the machine can do anything useful.
 As a potential approach to mitigate the huge window of time before a node is 
 available, I suggest modifying the bootstrap process to only acquire a single 
 initial token before being marked UP.  This would likely be a configuration 
 parameter incremental_bootstrap or something similar.
 After the node is bootstrapped with this one token, it could go into UP 
 state, and could then acquire additional tokens (one or a handful at a time), 
 which would be streamed over while the node is active and serving requests.  
 The benefit here is that with the default 256 tokens a node could become an 
 active part of the cluster with less than 1% of its final data streamed over.





[jira] [Commented] (CASSANDRA-8463) Constant compaction under LCS

2014-12-16 Thread Rick Branson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8463?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14249047#comment-14249047
 ] 

Rick Branson commented on CASSANDRA-8463:
-

It did take some time to rebalance the tables within the levels, but nodes are 
able to clear their compaction log with the patch applied. +1

 Constant compaction under LCS
 -

 Key: CASSANDRA-8463
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8463
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Hardware is recent 2-socket, 16-core (x2 Hyperthreaded), 
 144G RAM, solid-state storage.
 Platform is Linux 3.2.51, Oracle JDK 64-bit 1.7.0_65.
 Heap is 32G total, 4G newsize.
 8G/8G on-heap/off-heap memtables, offheap_buffer allocator, 0.5 
 memtable_cleanup_threshold
 concurrent_compactors: 20
Reporter: Rick Branson
Assignee: Marcus Eriksson
 Fix For: 2.1.3

 Attachments: 0001-better-logging.patch, 
 0001-make-sure-we-set-lastCompactedKey-properly.patch, log-for-8463.txt


 It appears that tables configured with LCS will completely re-compact 
 themselves over some period of time after upgrading from 2.0 to 2.1 (2.0.11 -> 
 2.1.2, specifically). It starts out with 10 pending tasks for an hour or 
 so, then starts building up, now with 50-100 tasks pending across the cluster 
 after 12 hours. These nodes are under heavy write load, but were easily able 
 to keep up in 2.0 (they rarely had more than 5 pending compaction tasks), so I don't 
 think it's LCS in 2.1 actually being worse, just perhaps some different LCS 
 behavior that causes the layout of tables from 2.0 to prompt the compactor to 
 reorganize them?
 The nodes flushed ~11MB SSTables under 2.0. They're currently flushing ~36MB 
 SSTables due to the improved memtable setup in 2.1. Before I upgraded the 
 entire cluster to 2.1, I noticed the problem and tried several variations on 
 the flush size, thinking perhaps the larger tables in L0 were causing some 
 kind of cascading compactions. Even if they're sized roughly like the 2.0 
 flushes were, the same behavior occurs. I also tried both enabling and disabling 
 STCS in L0 with no real change other than L0 began to back up faster, so I 
 left the STCS in L0 enabled.
 Tables are configured with 32MB sstable_size_in_mb, which was found to be an 
 improvement on the 160MB table size for compaction performance. Maybe this is 
 wrong now? Otherwise, the tables are configured with defaults. Compaction has 
 been unthrottled to help them catch-up. The compaction threads stay very 
 busy, with the cluster-wide CPU at 45% nice time. No nodes have completely 
 caught up yet. I'll update JIRA with status about their progress if anything 
 interesting happens.
 From a node around 12 hours ago, around an hour after the upgrade, with 19 
 pending compaction tasks:
 SSTables in each level: [6/4, 10, 105/100, 268, 0, 0, 0, 0, 0]
 SSTables in each level: [6/4, 10, 106/100, 271, 0, 0, 0, 0, 0]
 SSTables in each level: [1, 16/10, 105/100, 269, 0, 0, 0, 0, 0]
 SSTables in each level: [5/4, 10, 103/100, 272, 0, 0, 0, 0, 0]
 SSTables in each level: [4, 11/10, 105/100, 270, 0, 0, 0, 0, 0]
 SSTables in each level: [1, 12/10, 105/100, 271, 0, 0, 0, 0, 0]
 SSTables in each level: [1, 14/10, 104/100, 267, 0, 0, 0, 0, 0]
 SSTables in each level: [9/4, 10, 103/100, 265, 0, 0, 0, 0, 0]
 Recently, with 41 pending compaction tasks:
 SSTables in each level: [4, 13/10, 106/100, 269, 0, 0, 0, 0, 0]
 SSTables in each level: [4, 12/10, 106/100, 273, 0, 0, 0, 0, 0]
 SSTables in each level: [5/4, 11/10, 106/100, 271, 0, 0, 0, 0, 0]
 SSTables in each level: [4, 12/10, 103/100, 275, 0, 0, 0, 0, 0]
 SSTables in each level: [2, 13/10, 106/100, 273, 0, 0, 0, 0, 0]
 SSTables in each level: [3, 10, 104/100, 275, 0, 0, 0, 0, 0]
 SSTables in each level: [6/4, 11/10, 103/100, 269, 0, 0, 0, 0, 0]
 SSTables in each level: [4, 16/10, 105/100, 264, 0, 0, 0, 0, 0]
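As I read the LCS log format, an {{x/y}} entry in these listings is a level holding more SSTables than its target ({{x}} actual vs. {{y}} target), so the overfull levels are exactly the entries containing a slash. A quick way to count them from such a line (a standalone sketch, not Cassandra code):

```java
import java.util.Arrays;

public class LevelFill {
    // Counts the "actual/target" entries in one "SSTables in each level" line;
    // an entry is printed with a slash only when the level is over its target.
    static long overfullLevels(String line) {
        String inner = line.substring(line.indexOf('[') + 1, line.indexOf(']'));
        return Arrays.stream(inner.split(",\\s*"))
                     .filter(tok -> tok.contains("/"))
                     .count();
    }

    public static void main(String[] args) {
        // L0 (6/4) and L2 (105/100) are over target here.
        System.out.println(overfullLevels(
            "SSTables in each level: [6/4, 10, 105/100, 268, 0, 0, 0, 0, 0]"));
    }
}
```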
 More information about the use case: writes are roughly uniform across these 
 tables. The data is sharded across these 8 tables by key to improve 
 compaction parallelism. Each node receives up to 75,000 writes/sec sustained 
 at peak, and a small number of reads. This is a pre-production cluster that's 
 being warmed up with new data, so the low volume of reads (~100/sec per node) 
 is just from automatic sampled data checks, otherwise we'd just use STCS :)





[jira] [Commented] (CASSANDRA-7275) Errors in FlushRunnable may leave threads hung

2014-12-16 Thread Pavel Yaskevich (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14249053#comment-14249053
 ] 

Pavel Yaskevich commented on CASSANDRA-7275:


I understand it might be hard for you, Benedict, but just consider that there 
could be a programming error in flushing the memtable, or in replacing the 
flushed one, which is only triggered when metadata about a compaction is written 
back at the end of that compaction (e.g. CompactionTask.runMayThrow() L225): for 
example the error mentioned in the description, a duplicate hard-link failure, 
or something similar that has nothing to do with the underlying (file-)system. 
In that case suggestion #1 is not going to help, because compaction is blocked 
in SystemKeyspace.finishCompaction(), and a flush retry is not going to help 
either, because it will just fail again and again trying to flush the same data. 
As an end user, I would prefer that nobody takes the decision to fail on the 
floor for me except me, because that means data loss even when the problem does 
not affect the actual write/read path. I would be fine, though, with failing on 
FS\{Read, Write\}Error if the user explicitly sets it to fail on I/O errors 
(e.g. disk_failure_policy; it is like your #2 but not exactly); otherwise I 
would rather get notified in the log and carry on, so I can take an informed 
decision about my next actions.

bq. Eventually, if it cannot recover safely, it should die though, as there 
will need to be some operator involvement and the reality is not everybody 
monitors their log files.

I'm going to ignore this argument until you actually have experience of running 
Cassandra in production, otherwise it's the same as talking to the wall.

bq. I'm much more comfortable with "things die if something goes 
catastrophically wrong" than "things start returning nonsense on reads", which 
is what happens if we mark something flushed that actually wasn't.

I remember it was already the same when the disk got full in DSE; did people 
actually have fun restoring the cluster after it went completely dark? I'm also 
*not* saying that we shouldn't fail on FS\{Read, Write\}Error if 
disk_failure_policy says otherwise.

 Errors in FlushRunnable may leave threads hung
 --

 Key: CASSANDRA-7275
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7275
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Tyler Hobbs
Assignee: Pavel Yaskevich
Priority: Minor
 Fix For: 2.0.12

 Attachments: 0001-Move-latch.countDown-into-finally-block.patch, 
 7252-2.0-v2.txt, CASSANDRA-7275-flush-info.patch


 In Memtable.FlushRunnable, the CountDownLatch will never be counted down if 
 there are errors, which results in hanging any threads that are waiting for 
 the flush to complete.  For example, an error like this causes the problem:
 {noformat}
 ERROR [FlushWriter:474] 2014-05-20 12:10:31,137 CassandraDaemon.java (line 198) Exception in thread Thread[FlushWriter:474,5,main]
 java.lang.IllegalArgumentException
 at java.nio.Buffer.position(Unknown Source)
 at org.apache.cassandra.db.marshal.AbstractCompositeType.getBytes(AbstractCompositeType.java:64)
 at org.apache.cassandra.db.marshal.AbstractCompositeType.getWithShortLength(AbstractCompositeType.java:72)
 at org.apache.cassandra.db.marshal.AbstractCompositeType.split(AbstractCompositeType.java:138)
 at org.apache.cassandra.io.sstable.ColumnNameHelper.minComponents(ColumnNameHelper.java:103)
 at org.apache.cassandra.db.ColumnFamily.getColumnStats(ColumnFamily.java:439)
 at org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:194)
 at org.apache.cassandra.db.Memtable$FlushRunnable.writeSortedContents(Memtable.java:397)
 at org.apache.cassandra.db.Memtable$FlushRunnable.runWith(Memtable.java:350)
 at org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
 at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
 at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
 at java.lang.Thread.run(Unknown Source)
 {noformat}
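The attachment name 0001-Move-latch.countDown-into-finally-block.patch suggests the shape of the fix: count the latch down in a finally block so waiters are released even when the flush throws. A minimal standalone sketch of that pattern (not the actual Memtable code; names are illustrative):

```java
import java.util.concurrent.CountDownLatch;

public class FlushLatchSketch {
    // Counting down in finally releases waiters on success AND on failure;
    // with countDown() only on the success path, an exception leaves the
    // latch at 1 and every thread in await() hangs forever.
    static void runFlush(CountDownLatch latch, Runnable writeSortedContents) {
        try {
            writeSortedContents.run();
        } finally {
            latch.countDown();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        CountDownLatch latch = new CountDownLatch(1);
        try {
            // Simulate the IllegalArgumentException from the stack trace above.
            runFlush(latch, () -> { throw new IllegalArgumentException("simulated flush error"); });
        } catch (IllegalArgumentException expected) {
            // The flush failed, but the latch was still counted down.
        }
        latch.await();  // returns immediately instead of hanging
        System.out.println("released");
    }
}
```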





[jira] [Commented] (CASSANDRA-7886) Coordinator should not wait for read timeouts when replicas hit Exceptions

2014-12-16 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14249056#comment-14249056
 ] 

Tyler Hobbs commented on CASSANDRA-7886:


bq. Hi Tyler Hobbs, sorry I kept you waiting for so long.

No worries, I know you're busy :)

bq. The commented code was meant as a preparation for WriteFailureExceptions. 
Does it perhaps make sense to fully add WriteFailureException? As a follow up 
ticket, we could implement it then for the different writes. Or do you want me 
to get rid of it?

I do think it's a good idea to implement something similar for writes, and 
splitting that into a second ticket would be good.  So go ahead and delete the 
comments for this patch.

{quote}
Just to make sure that we don't touch anything new here: TOEs are logged inside 
SliceQueryFilter.collectReducedColumns already. I simply took this catch block 
from the ReadVerbHandler/RangeSliceVerbHandler and put into 
StorageProxy/MessageDeliveryTask.
I don't like that either, but I did not want to touch it. Do you still want me 
to change it?
{quote}

Yes, go ahead and remove those other try/catch blocks as well.  I can't see a 
reason why they should be suppressed once the logging statement is removed.

bq. I merged ReadTimeoutException|ReadFailureException into a single catch 
block.

Cool.  The way you did it there looks perfect.  Further up in StorageProxy 
there's an almost identical chunk of code.  Can you condense that one as well?

bq. I also added the last cell-name to the TOE, so that an administrator can 
get an estimate of where to look for the tombstones. This doesn't really match 
the ticket's new name, but is related to my original issue 

The many implementations of CellName don't implement {{toString()}}, so I think 
you want {{container.getComparator().getString(cell.name())}} instead.

 Coordinator should not wait for read timeouts when replicas hit Exceptions
 --

 Key: CASSANDRA-7886
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7886
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
 Environment: Tested with Cassandra 2.0.8
Reporter: Christian Spriegel
Assignee: Christian Spriegel
Priority: Minor
  Labels: protocolv4
 Fix For: 3.0

 Attachments: 7886_v1.txt, 7886_v2_trunk.txt, 7886_v3_trunk.txt, 
 7886_v4_trunk.txt


 *Issue*
 When you have TombstoneOverwhelmingExceptions occurring in queries, this will 
 cause the query to be simply dropped on every data-node, but no response is 
 sent back to the coordinator. Instead the coordinator waits for the specified 
 read_request_timeout_in_ms.
 On the application side this can cause memory issues, since the application 
 is waiting for the timeout interval for every request. Therefore, if our 
 application runs into TombstoneOverwhelmingExceptions, then (sooner or later) 
 our entire application cluster goes down :-(
 *Proposed solution*
 I think the data nodes should send an error message to the coordinator when 
 they run into a TombstoneOverwhelmingException. Then the coordinator does not 
 have to wait for the timeout-interval.
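The proposal amounts to: when a replica hits an error, it reports the failure immediately, so the coordinator can fail the request fast instead of waiting out read_request_timeout_in_ms. A toy coordinator-side illustration (hypothetical names, not Cassandra's messaging code):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.TimeUnit;

public class EarlyFailureSketch {
    // A replica that reports its failure lets the coordinator fail fast;
    // a silent drop forces the caller to wait for the full timeout.
    static String awaitRead(CompletableFuture<String> replicaResponse, long timeoutMs) {
        try {
            return replicaResponse.get(timeoutMs, TimeUnit.MILLISECONDS);
        } catch (ExecutionException e) {
            return "failed-fast: " + e.getCause().getMessage();  // replica told us why
        } catch (Exception e) {
            return "timed-out";                                  // replica stayed silent
        }
    }

    public static void main(String[] args) {
        CompletableFuture<String> response = new CompletableFuture<>();
        // Replica hits a TombstoneOverwhelmingException and reports it at once.
        response.completeExceptionally(new RuntimeException("TombstoneOverwhelmingException"));
        System.out.println(awaitRead(response, 5000));  // returns immediately
    }
}
```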





[jira] [Commented] (CASSANDRA-7886) Coordinator should not wait for read timeouts when replicas hit Exceptions

2014-12-16 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14249075#comment-14249075
 ] 

Tyler Hobbs commented on CASSANDRA-7886:


Ah, I also forgot to mention that 
{{ErrorMessage.getBackwardsCompatibleException()}} has a code comment that 
still refers to Unavailable instead of ReadTimeout.

 Coordinator should not wait for read timeouts when replicas hit Exceptions
 --

 Key: CASSANDRA-7886
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7886
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
 Environment: Tested with Cassandra 2.0.8
Reporter: Christian Spriegel
Assignee: Christian Spriegel
Priority: Minor
  Labels: protocolv4
 Fix For: 3.0

 Attachments: 7886_v1.txt, 7886_v2_trunk.txt, 7886_v3_trunk.txt, 
 7886_v4_trunk.txt


 *Issue*
 When you have TombstoneOverwhelmingExceptions occurring in queries, this will 
 cause the query to be simply dropped on every data-node, but no response is 
 sent back to the coordinator. Instead the coordinator waits for the specified 
 read_request_timeout_in_ms.
 On the application side this can cause memory issues, since the application 
 is waiting for the timeout interval for every request. Therefore, if our 
 application runs into TombstoneOverwhelmingExceptions, then (sooner or later) 
 our entire application cluster goes down :-(
 *Proposed solution*
 I think the data nodes should send an error message to the coordinator when 
 they run into a TombstoneOverwhelmingException. Then the coordinator does not 
 have to wait for the timeout-interval.





[jira] [Commented] (CASSANDRA-7275) Errors in FlushRunnable may leave threads hung

2014-12-16 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14249077#comment-14249077
 ] 

Jonathan Ellis commented on CASSANDRA-7275:
---

bq. consider there could be a programming error in the flush of the memtable or 
replacing flushed one

I don't know that hand waving about potential bugs gets us anywhere.  There 
could be programming errors anywhere, including in "mark the segment flushed 
when it wasn't" panic mode.  The right solution to bugs is QA, not hoping that 
you can guess where unexpected exceptions will happen and provide a safety net.

 Errors in FlushRunnable may leave threads hung
 --

 Key: CASSANDRA-7275
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7275
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Tyler Hobbs
Assignee: Pavel Yaskevich
Priority: Minor
 Fix For: 2.0.12

 Attachments: 0001-Move-latch.countDown-into-finally-block.patch, 
 7252-2.0-v2.txt, CASSANDRA-7275-flush-info.patch


 In Memtable.FlushRunnable, the CountDownLatch will never be counted down if 
 there are errors, which results in hanging any threads that are waiting for 
 the flush to complete.  For example, an error like this causes the problem:
 {noformat}
 ERROR [FlushWriter:474] 2014-05-20 12:10:31,137 CassandraDaemon.java (line 198) Exception in thread Thread[FlushWriter:474,5,main]
 java.lang.IllegalArgumentException
 at java.nio.Buffer.position(Unknown Source)
 at org.apache.cassandra.db.marshal.AbstractCompositeType.getBytes(AbstractCompositeType.java:64)
 at org.apache.cassandra.db.marshal.AbstractCompositeType.getWithShortLength(AbstractCompositeType.java:72)
 at org.apache.cassandra.db.marshal.AbstractCompositeType.split(AbstractCompositeType.java:138)
 at org.apache.cassandra.io.sstable.ColumnNameHelper.minComponents(ColumnNameHelper.java:103)
 at org.apache.cassandra.db.ColumnFamily.getColumnStats(ColumnFamily.java:439)
 at org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:194)
 at org.apache.cassandra.db.Memtable$FlushRunnable.writeSortedContents(Memtable.java:397)
 at org.apache.cassandra.db.Memtable$FlushRunnable.runWith(Memtable.java:350)
 at org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
 at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
 at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
 at java.lang.Thread.run(Unknown Source)
 {noformat}




