[jira] [Commented] (CASSANDRA-6137) CQL3 SELECT IN CLAUSE inconsistent

2013-11-18 Thread Vito Giuliani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13825205#comment-13825205
 ] 

Vito Giuliani commented on CASSANDRA-6137:
--

I've hit this bug too in my local environment (using C* 2.0.2), both in the 
java driver (datastax's) and cqlsh:

{noformat}
SELECT code, date, price FROM inventory_price_by_day WHERE code='3853853853852' 
and date in ('2013-11-08', '2013-11-09', '2013-11-10', '2013-11-11', 
'2013-11-12', '2013-11-13', '2013-11-14', '2013-11-15');

 code  | date   | price
---++---
 3853853853852 | 2013-11-08 | 66.00
 3853853853852 | 2013-11-09 | 66.00
 3853853853852 | 2013-11-10 | 66.00
 3853853853852 | 2013-11-11 | 66.00
 3853853853852 | 2013-11-12 | 66.00
 3853853853852 | 2013-11-13 | 66.00
 3853853853852 | 2013-11-14 | 66.00
 3853853853852 | 2013-11-15 | 66.00

(8 rows)

SELECT code, date, price FROM inventory_price_by_day WHERE code='3853853853852' 
and date in ('2013-11-07', '2013-11-08', '2013-11-09', '2013-11-10', 
'2013-11-11', '2013-11-12', '2013-11-13', '2013-11-14', '2013-11-15');

 code  | date   | price
---++---
 3853853853852 | 2013-11-15 | 66.00

(1 rows)
{noformat}
(the only difference between the two queries is that the latter includes an 
additional day)

I tried to run flush / compact / keycacheinvalidate but they don't seem to have 
any kind of effect here.
Enabling tracing, there seems to be a difference in the way the two queries are 
executed:

{noformat}
 activity   

   | timestamp  
  | source| source_elapsed
---+--+---+


execute_cql3_query | 
09:23:28,987 | 127.0.0.1 |  0
 Parsing SELECT code, date, price FROM inventory_price_by_day WHERE 
code='3853853853852' and date in ('2013-11-08', '2013-11-09', '2013-11-10', 
'2013-11-11', '2013-11-12', '2013-11-13', '2013-11-14', '2013-11-15') LIMIT 
1; | 09:23:28,987 | 127.0.0.1 | 66


   Preparing statement | 
09:23:28,987 | 127.0.0.1 |161


Executing single-partition query on inventory_price_by_day | 
09:23:28,988 | 127.0.0.1 |467


  Acquiring sstable references | 
09:23:28,988 | 127.0.0.1 |488


   Merging memtable tombstones | 
09:23:28,988 | 127.0.0.1 |514


   Key cache hit for sstable 4 | 
09:23:28,988 | 127.0.0.1 |589


 Seeking to partition indexed section in data file | 
09:23:28,988 | 127.0.0.1 |604

 
Skipped 0/1 non-slice-intersecting sstables, included 0 due to tombstones | 
09:23:28,988 | 127.0.0.1 |870


Merging data from memtables and 1 sstables | 
09:23:28,988 | 127.0.0.1 |893


[jira] [Created] (CASSANDRA-6369) Fix prepared statement size computation

2013-11-18 Thread Sylvain Lebresne (JIRA)
Sylvain Lebresne created CASSANDRA-6369:
---

 Summary: Fix prepared statement size computation
 Key: CASSANDRA-6369
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6369
 Project: Cassandra
  Issue Type: Bug
Reporter: Sylvain Lebresne
Assignee: Sylvain Lebresne
 Fix For: 1.2.12, 2.0.3


When computed the size of CQLStatement to limit the prepared statement cache 
(CASSANDRA-6107), we overestimate the actual memory used because the statement 
include a reference to the table CFMetaData which measureDeep counts. And as it 
happens, that reference is big: on a simple test preparing a very trivial 
select statement, I was able to only prepare 87 statements before some started 
to be evicted because each statement was more than 93K big and more than 92K of 
that was the CFMetaData object. As it happens there is no reason to account the 
CFMetaData object at all since it's in memory anyway whether or not there is 
prepared statements or not.

Attaching a simple (if not extremely elegant) patch to remove what we don't 
care about of the computation. Another solution would be to use the 
MemoryMeter.withTrackerProvider option as we do in Memtable, but in the 
QueryProcessor case we currently use only one MemoryMeter, not one per CF, so 
it didn't felt necessarilly cleaner. We could create one-shot MemoryMeter 
object each time we need to measure a CQLStatement but that doesn't feel a lot 
simpler/cleaner either. But if someone feels religious about some other 
solution, I don't care.




--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (CASSANDRA-6369) Fix prepared statement size computation

2013-11-18 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-6369:


Attachment: 6369.txt

> Fix prepared statement size computation
> ---
>
> Key: CASSANDRA-6369
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6369
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sylvain Lebresne
>Assignee: Sylvain Lebresne
> Fix For: 1.2.12, 2.0.3
>
> Attachments: 6369.txt
>
>
> When computed the size of CQLStatement to limit the prepared statement cache 
> (CASSANDRA-6107), we overestimate the actual memory used because the 
> statement include a reference to the table CFMetaData which measureDeep 
> counts. And as it happens, that reference is big: on a simple test preparing 
> a very trivial select statement, I was able to only prepare 87 statements 
> before some started to be evicted because each statement was more than 93K 
> big and more than 92K of that was the CFMetaData object. As it happens there 
> is no reason to account the CFMetaData object at all since it's in memory 
> anyway whether or not there is prepared statements or not.
> Attaching a simple (if not extremely elegant) patch to remove what we don't 
> care about of the computation. Another solution would be to use the 
> MemoryMeter.withTrackerProvider option as we do in Memtable, but in the 
> QueryProcessor case we currently use only one MemoryMeter, not one per CF, so 
> it didn't felt necessarilly cleaner. We could create one-shot MemoryMeter 
> object each time we need to measure a CQLStatement but that doesn't feel a 
> lot simpler/cleaner either. But if someone feels religious about some other 
> solution, I don't care.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (CASSANDRA-6369) Fix prepared statement size computation

2013-11-18 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-6369:
--

Reviewer: Lyuben Todorov

> Fix prepared statement size computation
> ---
>
> Key: CASSANDRA-6369
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6369
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sylvain Lebresne
>Assignee: Sylvain Lebresne
> Fix For: 1.2.12, 2.0.3
>
> Attachments: 6369.txt
>
>
> When computed the size of CQLStatement to limit the prepared statement cache 
> (CASSANDRA-6107), we overestimate the actual memory used because the 
> statement include a reference to the table CFMetaData which measureDeep 
> counts. And as it happens, that reference is big: on a simple test preparing 
> a very trivial select statement, I was able to only prepare 87 statements 
> before some started to be evicted because each statement was more than 93K 
> big and more than 92K of that was the CFMetaData object. As it happens there 
> is no reason to account the CFMetaData object at all since it's in memory 
> anyway whether or not there is prepared statements or not.
> Attaching a simple (if not extremely elegant) patch to remove what we don't 
> care about of the computation. Another solution would be to use the 
> MemoryMeter.withTrackerProvider option as we do in Memtable, but in the 
> QueryProcessor case we currently use only one MemoryMeter, not one per CF, so 
> it didn't felt necessarilly cleaner. We could create one-shot MemoryMeter 
> object each time we need to measure a CQLStatement but that doesn't feel a 
> lot simpler/cleaner either. But if someone feels religious about some other 
> solution, I don't care.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Assigned] (CASSANDRA-6275) 2.0.x leaks file handles

2013-11-18 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson reassigned CASSANDRA-6275:
--

Assignee: (was: Marcus Eriksson)

> 2.0.x leaks file handles
> 
>
> Key: CASSANDRA-6275
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6275
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: java version "1.7.0_25"
> Java(TM) SE Runtime Environment (build 1.7.0_25-b15)
> Java HotSpot(TM) 64-Bit Server VM (build 23.25-b01, mixed mode)
> Linux cassandra-test1 2.6.32-279.el6.x86_64 #1 SMP Thu Jun 21 15:00:18 EDT 
> 2012 x86_64 x86_64 x86_64 GNU/Linux
>Reporter: Mikhail Mazursky
> Attachments: c_file-descriptors_strace.tbz, cassandra_jstack.txt, 
> leak.log, position_hints.tgz, slog.gz
>
>
> Looks like C* is leaking file descriptors when doing lots of CAS operations.
> {noformat}
> $ sudo cat /proc/15455/limits
> Limit Soft Limit   Hard Limit   Units
> Max cpu time  unlimitedunlimitedseconds  
> Max file size unlimitedunlimitedbytes
> Max data size unlimitedunlimitedbytes
> Max stack size10485760 unlimitedbytes
> Max core file size00bytes
> Max resident set  unlimitedunlimitedbytes
> Max processes 1024 unlimitedprocesses
> Max open files4096 4096 files
> Max locked memory unlimitedunlimitedbytes
> Max address space unlimitedunlimitedbytes
> Max file locksunlimitedunlimitedlocks
> Max pending signals   1463314633signals  
> Max msgqueue size 819200   819200   bytes
> Max nice priority 00   
> Max realtime priority 00   
> Max realtime timeout  unlimitedunlimitedus 
> {noformat}
> Looks like the problem is not in limits.
> Before load test:
> {noformat}
> cassandra-test0 ~]$ lsof -n | grep java | wc -l
> 166
> cassandra-test1 ~]$ lsof -n | grep java | wc -l
> 164
> cassandra-test2 ~]$ lsof -n | grep java | wc -l
> 180
> {noformat}
> After load test:
> {noformat}
> cassandra-test0 ~]$ lsof -n | grep java | wc -l
> 967
> cassandra-test1 ~]$ lsof -n | grep java | wc -l
> 1766
> cassandra-test2 ~]$ lsof -n | grep java | wc -l
> 2578
> {noformat}
> Most opened files have names like:
> {noformat}
> java  16890 cassandra 1636r  REG 202,17  88724987 
> 655520 /var/lib/cassandra/data/system/paxos/system-paxos-jb-644-Data.db
> java  16890 cassandra 1637r  REG 202,17 161158485 
> 655420 /var/lib/cassandra/data/system/paxos/system-paxos-jb-255-Data.db
> java  16890 cassandra 1638r  REG 202,17  88724987 
> 655520 /var/lib/cassandra/data/system/paxos/system-paxos-jb-644-Data.db
> java  16890 cassandra 1639r  REG 202,17 161158485 
> 655420 /var/lib/cassandra/data/system/paxos/system-paxos-jb-255-Data.db
> java  16890 cassandra 1640r  REG 202,17  88724987 
> 655520 /var/lib/cassandra/data/system/paxos/system-paxos-jb-644-Data.db
> java  16890 cassandra 1641r  REG 202,17 161158485 
> 655420 /var/lib/cassandra/data/system/paxos/system-paxos-jb-255-Data.db
> java  16890 cassandra 1642r  REG 202,17  88724987 
> 655520 /var/lib/cassandra/data/system/paxos/system-paxos-jb-644-Data.db
> java  16890 cassandra 1643r  REG 202,17 161158485 
> 655420 /var/lib/cassandra/data/system/paxos/system-paxos-jb-255-Data.db
> java  16890 cassandra 1644r  REG 202,17  88724987 
> 655520 /var/lib/cassandra/data/system/paxos/system-paxos-jb-644-Data.db
> java  16890 cassandra 1645r  REG 202,17 161158485 
> 655420 /var/lib/cassandra/data/system/paxos/system-paxos-jb-255-Data.db
> java  16890 cassandra 1646r  REG 202,17  88724987 
> 655520 /var/lib/cassandra/data/system/paxos/system-paxos-jb-644-Data.db
> java  16890 cassandra 1647r  REG 202,17 161158485 
> 655420 /var/lib/cassandra/data/system/paxos/system-paxos-jb-255-Data.db
> java  16890 cassandra 1648r  REG 202,17  88724987 
> 655520 /var/lib/cassandra/data/system/paxos/system-paxos-jb-644-Data.db
> java  16890 cassandra 1649r  REG 202,17 161158485 
> 

[jira] [Commented] (CASSANDRA-6181) Replaying a commit led to java.lang.StackOverflowError and node crash

2013-11-18 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13825319#comment-13825319
 ] 

Sylvain Lebresne commented on CASSANDRA-6181:
-

bq. What would the effective usage limits be before and after this patch

Before, it would crash with the exception in the description above, after it 
won't. This is really "just" a bug fix, if you don't run into it, there is 
nothing this patch will do for you. If you do, then it will fix the problem.

> Replaying a commit led to java.lang.StackOverflowError and node crash
> -
>
> Key: CASSANDRA-6181
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6181
> Project: Cassandra
>  Issue Type: Bug
> Environment: 1.2.8 & 1.2.10 - ubuntu 12.04
>Reporter: Jeffrey Damick
>Assignee: Sylvain Lebresne
>Priority: Critical
> Fix For: 1.2.12, 2.0.2
>
> Attachments: 6181.txt
>
>
> 2 of our nodes died after attempting to replay a commit.  I can attach the 
> commit log file if that helps.
> It was occurring on 1.2.8, after several failed attempts to start, we 
> attempted startup with 1.2.10.  This also yielded the same issue (below).  
> The only resolution was to physically move the commit log file out of the way 
> and then the nodes were able to start...  
> The replication factor was 3 so I'm hoping there was no data loss...
> {code}
>  INFO [main] 2013-10-11 14:50:35,891 CommitLogReplayer.java (line 119) 
> Replaying /ebs/cassandra/commitlog/CommitLog-2-1377542389560.log
> ERROR [MutationStage:18] 2013-10-11 14:50:37,387 CassandraDaemon.java (line 
> 191) Exception in thread Thread[MutationStage:18,5,main]
> java.lang.StackOverflowError
> at 
> org.apache.cassandra.db.marshal.TimeUUIDType.compareTimestampBytes(TimeUUIDType.java:68)
> at 
> org.apache.cassandra.db.marshal.TimeUUIDType.compare(TimeUUIDType.java:57)
> at 
> org.apache.cassandra.db.marshal.TimeUUIDType.compare(TimeUUIDType.java:29)
> at 
> org.apache.cassandra.db.marshal.AbstractType.compareCollectionMembers(AbstractType.java:229)
> at 
> org.apache.cassandra.db.marshal.AbstractCompositeType.compare(AbstractCompositeType.java:81)
> at 
> org.apache.cassandra.db.marshal.AbstractCompositeType.compare(AbstractCompositeType.java:31)
> at 
> org.apache.cassandra.db.RangeTombstoneList.insertAfter(RangeTombstoneList.java:439)
> at 
> org.apache.cassandra.db.RangeTombstoneList.insertFrom(RangeTombstoneList.java:405)
> at 
> org.apache.cassandra.db.RangeTombstoneList.weakInsertFrom(RangeTombstoneList.java:472)
> at 
> org.apache.cassandra.db.RangeTombstoneList.insertAfter(RangeTombstoneList.java:456)
> at 
> org.apache.cassandra.db.RangeTombstoneList.insertFrom(RangeTombstoneList.java:405)
> at 
> org.apache.cassandra.db.RangeTombstoneList.weakInsertFrom(RangeTombstoneList.java:472)
> at 
> org.apache.cassandra.db.RangeTombstoneList.insertAfter(RangeTombstoneList.java:456)
> at 
> org.apache.cassandra.db.RangeTombstoneList.insertFrom(RangeTombstoneList.java:405)
> at 
> org.apache.cassandra.db.RangeTombstoneList.weakInsertFrom(RangeTombstoneList.java:472)
>  etc over and over until 
> at 
> org.apache.cassandra.db.RangeTombstoneList.weakInsertFrom(RangeTombstoneList.java:472)
> at 
> org.apache.cassandra.db.RangeTombstoneList.insertAfter(RangeTombstoneList.java:456)
> at 
> org.apache.cassandra.db.RangeTombstoneList.insertFrom(RangeTombstoneList.java:405)
> at 
> org.apache.cassandra.db.RangeTombstoneList.add(RangeTombstoneList.java:144)
> at 
> org.apache.cassandra.db.RangeTombstoneList.addAll(RangeTombstoneList.java:186)
> at org.apache.cassandra.db.DeletionInfo.add(DeletionInfo.java:180)
> at 
> org.apache.cassandra.db.AtomicSortedColumns.addAllWithSizeDelta(AtomicSortedColumns.java:197)
> at 
> org.apache.cassandra.db.AbstractColumnContainer.addAllWithSizeDelta(AbstractColumnContainer.java:99)
> at org.apache.cassandra.db.Memtable.resolve(Memtable.java:207)
> at org.apache.cassandra.db.Memtable.put(Memtable.java:170)
> at 
> org.apache.cassandra.db.ColumnFamilyStore.apply(ColumnFamilyStore.java:745)
> at org.apache.cassandra.db.Table.apply(Table.java:388)
> at org.apache.cassandra.db.Table.apply(Table.java:353)
> at 
> org.apache.cassandra.db.commitlog.CommitLogReplayer$1.runMayThrow(CommitLogReplayer.java:258)
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
> 

[jira] [Commented] (CASSANDRA-4511) Secondary index support for CQL3 collections

2013-11-18 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13825335#comment-13825335
 ] 

Sylvain Lebresne commented on CASSANDRA-4511:
-

bq. I'm struggling to think of a use case for indexing map keys

One thing that comes into mind is tags, but where you want to attach some data 
to it. Say, when was the tag added, or maybe whom added it. In that case, you 
could imagine wanting to index both the keys (the tag itself, to know what 
object has tag X) and the values (for instance to know 'which object did user Y 
tagged').

And btw, technically there is not a whole lot of difficulty adding this, we 
just need to go a bit over the 2ndary index API to make sure we can add more 
than one index on a given name. But that API probably need some cleanup anyway.

bq. Also, it occurs to me that we don't need to add new syntax

True. But I'll note that 1) we don't, technically speaking, support this syntax 
currently, we only support 'IN ?' and 'IN (...)', so it save adding one token 
to the lexer but doesn't entirely save from updating the grammar and 2) 
internally, I still think we'd want to keep it a separate case from other IN 
because it has different rules anyway. Overall, I don't mind using IN over 
CONTAINS if we think that's a better syntax but I don't think one of the 
arguments should be "because it makes things easier internally" (didn't meant 
to imply this was your argument btw, just making sure we agree on why we would 
make the choice) because I don't think that's true.

In any case, as far as I'm concerned, I don't care a whole lot between CONTAINS 
and IN, expect maybe that it feels easier to extend the syntax to map keys with 
CONTAINS (using CONTAINS KEY).

> Secondary index support for CQL3 collections 
> -
>
> Key: CASSANDRA-4511
> URL: https://issues.apache.org/jira/browse/CASSANDRA-4511
> Project: Cassandra
>  Issue Type: Improvement
>Affects Versions: 1.2.0 beta 1
>Reporter: Sylvain Lebresne
>Assignee: Sylvain Lebresne
> Fix For: 2.1
>
> Attachments: 4511.txt
>
>
> We should allow to 2ndary index on collections. A typical use case would be 
> to add a 'tag set' to say a user profile and to query users based on 
> what tag they have.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (CASSANDRA-6370) Updating cql created table through cassandra-cli transform it into a compact storage table

2013-11-18 Thread Alain RODRIGUEZ (JIRA)
Alain RODRIGUEZ created CASSANDRA-6370:
--

 Summary: Updating cql created table through cassandra-cli 
transform it into a compact storage table
 Key: CASSANDRA-6370
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6370
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Alain RODRIGUEZ
Priority: Critical


To reproduce :

echo "CREATE TABLE test (aid int, period text, event text, viewer text, PRIMARY 
KEY (aid, period, event, viewer) );" | cqlsh -kmykeyspace;

echo "describe table test;" | cqlsh -kmykeyspace;

Output >
CREATE TABLE test (
  aid int,
  period text,
  event text,
  viewer text,
  PRIMARY KEY (aid, period, event, viewer)
) WITH
  bloom_filter_fp_chance=0.01 AND
  caching='KEYS_ONLY' AND
  comment='' AND
  dclocal_read_repair_chance=0.00 AND
  gc_grace_seconds=864000 AND
  read_repair_chance=0.10 AND
  replicate_on_write='true' AND
  populate_io_cache_on_flush='false' AND
  compaction={'class': 'SizeTieredCompactionStrategy'} AND
  compression={'sstable_compression': 'SnappyCompressor'};

Then do :

echo "update column family test with dclocal_read_repair_chance = 0.1;" | 
cassandra-cli -kmykeyspace

And finally again : echo "describe table test;" | cqlsh -kmykeyspace;

Output >

CREATE TABLE test (
  aid int,
  column1 text,
  column2 text,
  column3 text,
  column4 text,
  value blob,
  PRIMARY KEY (aid, column1, column2, column3, column4)
) WITH COMPACT STORAGE AND
  bloom_filter_fp_chance=0.01 AND
  caching='KEYS_ONLY' AND
  comment='' AND
  dclocal_read_repair_chance=0.10 AND
  gc_grace_seconds=864000 AND
  read_repair_chance=0.10 AND
  replicate_on_write='true' AND
  populate_io_cache_on_flush='false' AND
  compaction={'class': 'SizeTieredCompactionStrategy'} AND
  compression={'sstable_compression': 'SnappyCompressor'};

This is quite annoying in production. If it is happening to you: 
UPDATE system.schema_columnfamilies SET column_aliases = 
'["period","event","viewer"]' WHERE keyspace_name='mykeyspace' AND 
columnfamily_name='test'; should help restoring the table. (Thanks Sylvain for 
this information.)



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Resolved] (CASSANDRA-6370) Updating cql created table through cassandra-cli transform it into a compact storage table

2013-11-18 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-6370.
---

Resolution: Won't Fix

Why do you think cli does not list cql-created tables?

> Updating cql created table through cassandra-cli transform it into a compact 
> storage table
> --
>
> Key: CASSANDRA-6370
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6370
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Alain RODRIGUEZ
>Priority: Critical
>
> To reproduce :
> echo "CREATE TABLE test (aid int, period text, event text, viewer text, 
> PRIMARY KEY (aid, period, event, viewer) );" | cqlsh -kmykeyspace;
> echo "describe table test;" | cqlsh -kmykeyspace;
> Output >
> CREATE TABLE test (
>   aid int,
>   period text,
>   event text,
>   viewer text,
>   PRIMARY KEY (aid, period, event, viewer)
> ) WITH
>   bloom_filter_fp_chance=0.01 AND
>   caching='KEYS_ONLY' AND
>   comment='' AND
>   dclocal_read_repair_chance=0.00 AND
>   gc_grace_seconds=864000 AND
>   read_repair_chance=0.10 AND
>   replicate_on_write='true' AND
>   populate_io_cache_on_flush='false' AND
>   compaction={'class': 'SizeTieredCompactionStrategy'} AND
>   compression={'sstable_compression': 'SnappyCompressor'};
> Then do :
> echo "update column family test with dclocal_read_repair_chance = 0.1;" | 
> cassandra-cli -kmykeyspace
> And finally again : echo "describe table test;" | cqlsh -kmykeyspace;
> Output >
> CREATE TABLE test (
>   aid int,
>   column1 text,
>   column2 text,
>   column3 text,
>   column4 text,
>   value blob,
>   PRIMARY KEY (aid, column1, column2, column3, column4)
> ) WITH COMPACT STORAGE AND
>   bloom_filter_fp_chance=0.01 AND
>   caching='KEYS_ONLY' AND
>   comment='' AND
>   dclocal_read_repair_chance=0.10 AND
>   gc_grace_seconds=864000 AND
>   read_repair_chance=0.10 AND
>   replicate_on_write='true' AND
>   populate_io_cache_on_flush='false' AND
>   compaction={'class': 'SizeTieredCompactionStrategy'} AND
>   compression={'sstable_compression': 'SnappyCompressor'};
> This is quite annoying in production. If it is happening to you: 
> UPDATE system.schema_columnfamilies SET column_aliases = 
> '["period","event","viewer"]' WHERE keyspace_name='mykeyspace' AND 
> columnfamily_name='test'; should help restoring the table. (Thanks Sylvain 
> for this information.)



--
This message was sent by Atlassian JIRA
(v6.1#6144)


git commit: Improve error message

2013-11-18 Thread slebresne
Updated Branches:
  refs/heads/cassandra-1.2 2b1fb0ff4 -> a301ea142


Improve error message


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a301ea14
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a301ea14
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a301ea14

Branch: refs/heads/cassandra-1.2
Commit: a301ea14293f046dc4ff10c84aaef192fbc70c0a
Parents: 2b1fb0f
Author: Sylvain Lebresne 
Authored: Mon Nov 18 15:53:42 2013 +0100
Committer: Sylvain Lebresne 
Committed: Mon Nov 18 15:53:42 2013 +0100

--
 src/java/org/apache/cassandra/dht/Murmur3Partitioner.java | 9 -
 1 file changed, 8 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/a301ea14/src/java/org/apache/cassandra/dht/Murmur3Partitioner.java
--
diff --git a/src/java/org/apache/cassandra/dht/Murmur3Partitioner.java 
b/src/java/org/apache/cassandra/dht/Murmur3Partitioner.java
index 502f5cc..fe1c24f 100644
--- a/src/java/org/apache/cassandra/dht/Murmur3Partitioner.java
+++ b/src/java/org/apache/cassandra/dht/Murmur3Partitioner.java
@@ -185,7 +185,14 @@ public class Murmur3Partitioner extends 
AbstractPartitioner
 
 public Token fromString(String string)
 {
-return new LongToken(Long.valueOf(string));
+try
+{
+return new LongToken(Long.valueOf(string));
+}
+catch (NumberFormatException e)
+{
+throw new IllegalArgumentException(String.format("Invalid 
token for Murmur3Partitioner. Got %s but expected a long value (unsigned 8 
bytes integer)."));
+}
 }
 };
 



[jira] [Commented] (CASSANDRA-6368) java.lang.NumberFormatException: For input string: "140804036566258204771707954633792970268"

2013-11-18 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13825373#comment-13825373
 ] 

Sylvain Lebresne commented on CASSANDRA-6368:
-

Not sure what you think is the problem. If it's that the error message could be 
a bit better, then I don't entirely disagree and I've committed a slightly more 
user friendly in commit a301ea1.

But other than that, the input is definitively not a valid token for 
Murmur3Partitionner so it just sound like you either meant to use 
RandomPartitionner and you should fix that in your yaml, or you meant to use 
Murmur3Partitionner and there you probably have a bad token value configured 
(again in the yaml). 

If it's none of that, please be a bit more explicit.

> java.lang.NumberFormatException: For input string: 
> "140804036566258204771707954633792970268"
> 
>
> Key: CASSANDRA-6368
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6368
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Olivier Lamy (*$^¨%`£)
>
> ERROR 13:24:31,790 Error occurred during processing of message.
> java.lang.NumberFormatException: For input string: 
> "140804036566258204771707954633792970268"
>   at 
> java.lang.NumberFormatException.forInputString(NumberFormatException.java:48)
>   at java.lang.Long.parseLong(Long.java:422)
>   at java.lang.Long.valueOf(Long.java:525)
>   at 
> org.apache.cassandra.dht.Murmur3Partitioner$1.fromString(Murmur3Partitioner.java:188)
>   at 
> org.apache.cassandra.thrift.CassandraServer.get_range_slices(CassandraServer.java:936)
>   at 
> org.apache.cassandra.thrift.Cassandra$Processor$get_range_slices.getResult(Cassandra.java:3454)
>   at 
> org.apache.cassandra.thrift.Cassandra$Processor$get_range_slices.getResult(Cassandra.java:3442)
>   at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:32)
>   at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:34)
>   at 
> org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:199)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
>   at java.lang.Thread.run(Thread.java:695)



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Resolved] (CASSANDRA-6137) CQL3 SELECT IN CLAUSE inconsistent

2013-11-18 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne resolved CASSANDRA-6137.
-

   Resolution: Duplicate
Reproduced In: 2.0.1, 1.2.8  (was: 1.2.8, 2.0.1)

Pretty sure this is the same than CASSANDRA-6327 which has been fixed for 2.0.3.

> CQL3 SELECT IN CLAUSE inconsistent
> --
>
> Key: CASSANDRA-6137
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6137
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: Ubuntu AWS Cassandra 2.0.1 SINGLE NODE on EBS RAID 
> storage
> OSX Cassandra 1.2.8 on SSD storage
>Reporter: Constance Eustace
>Priority: Minor
>
> Possible Resolution:
> What seems to be key is to run a nodetool compact (possibly a nodetool flush) 
> after schema drops / schema creations / schema truncates and invalidate the 
> caches. This seems to align the data for new inserts/updates. From my 
> reproduction tests, I have been unable to generate the database corruption if 
> nodetool flush, nodetool compact, nodetool keycacheinvalidate (we have turned 
> off rowcache due to other bugs). Then, even after running a more stressful 
> test with 10x the inserts and five separate concurrent update threads the 
> corruption did not appear. 
> So I believe this is a tentative "fix" to this issue... in general, after any 
> manipulation to the schema, you should run nodetool compact and 
> keycacheinvalidate. I have not tested if a general compact on all keyspaces 
> and tables versus a more specific compact on the affected keyspace and/or 
> keyspace tables is all that is necessary (compact can be a very expensive 
> operation). 
> --
> Problem Encountered:
> We are encountering inconsistent results from CQL3 queries with column keys 
> using IN clause in WHERE. This has been reproduced in cqlsh and the jdbc 
> driver. Specifically, we are doing queries to pull a subset of column keys 
> for a specific row key. 
> We detect this corruption by selecting all the column keys for a row, and 
> then trying different subsets of column keys in WHERE  IN ( key subset list>). We see some of these column key subset queries not return 
> all the column keys, even though the select-all-column-keys query finds them. 
> It seems to appear when there is a large amount of raw insertion work 
> (non-updates / new ingested data) combined with simultaneous updates to 
> existing data. EDIT: this also seems to only happen with mass insert+updates 
> after schema changes / drops / table creation / table truncation. See the 
> Possible Resolution section above.
> --
> Details:
> Rowkey is e_entid
> Column key is p_prop
> This returns roughly 21 rows for 21 column keys that match p_prop.
> cqlsh> SELECT 
> e_entid,e_entname,e_enttype,p_prop,p_flags,p_propid,e_entlinks,p_proplinks,p_subents,p_val,p_vallinks,p_vars
>  FROM internal_submission.Entity_Job WHERE e_entid = 
> '845b38f1-2b91-11e3-854d-126aad0075d4-CJOB';
> These three queries each return one row for the requested single column key 
> in the IN clause:
> SELECT 
> e_entid,e_entname,e_enttype,p_prop,p_flags,p_propid,e_entlinks,p_proplinks,p_subents,p_val,p_vallinks,p_vars
>  FROM internal_submission.Entity_Job WHERE e_entid = 
> '845b38f1-2b91-11e3-854d-126aad0075d4-CJOB'  AND p_prop in 
> ('urn:bby:pcm:job:ingest:content:complete:count');
> SELECT 
> e_entid,e_entname,e_enttype,p_prop,p_flags,p_propid,e_entlinks,p_proplinks,p_subents,p_val,p_vallinks,p_vars
>  FROM internal_submission.Entity_Job WHERE e_entid = 
> '845b38f1-2b91-11e3-854d-126aad0075d4-CJOB'  AND p_prop in 
> ('urn:bby:pcm:job:ingest:content:all:count');
> SELECT 
> e_entid,e_entname,e_enttype,p_prop,p_flags,p_propid,e_entlinks,p_proplinks,p_subents,p_val,p_vallinks,p_vars
>  FROM internal_submission.Entity_Job WHERE e_entid = 
> '845b38f1-2b91-11e3-854d-126aad0075d4-CJOB'  AND p_prop in 
> ('urn:bby:pcm:job:ingest:content:fail:count');
> This query returns ONLY ONE ROW (one column key), not three as I would expect 
> from the three-column-key IN clause:
> cqlsh> SELECT 
> e_entid,e_entname,e_enttype,p_prop,p_flags,p_propid,e_entlinks,p_proplinks,p_subents,p_val,p_vallinks,p_vars
>  FROM internal_submission.Entity_Job WHERE e_entid = 
> '845b38f1-2b91-11e3-854d-126aad0075d4-CJOB'  AND p_prop in 
> ('urn:bby:pcm:job:ingest:content:complete:count','urn:bby:pcm:job:ingest:content:all:count','urn:bby:pcm:job:ingest:content:fail:count');
> This query does return two rows however for the requested two column keys:
> cqlsh> SELECT 
> e_entid,e_entname,e_enttype,p_prop,p_flags,p_propid,e_entlinks,p_proplinks,p_subents,p_val,p_vallinks,p_vars
>  FROM internal_submission.Entity_Job WHERE e_entid =

[jira] [Resolved] (CASSANDRA-6220) Unable to select multiple entries using In clause on clustering part of compound key

2013-11-18 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne resolved CASSANDRA-6220.
-

Resolution: Duplicate

As for CASSANDRA-6137, pretty sure this has been solved in CASSANDRA-6327.

> Unable to select multiple entries using In clause on clustering part of 
> compound key
> 
>
> Key: CASSANDRA-6220
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6220
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Ashot Golovenko
> Attachments: inserts.zip
>
>
> I have the following table:
> CREATE TABLE rating (
> id bigint,
> mid int,
> hid int,
> r double,
> PRIMARY KEY ((id, mid), hid));
> And I get really really strange result sets on the following queries:
> cqlsh:bm> SELECT hid, r FROM rating WHERE id  = 755349113 and mid = 201310 
> and hid = 201329320;
>  hid   | r
> ---+
>  201329320 | 45.476
> (1 rows)
> cqlsh:bm> SELECT hid, r FROM rating WHERE id  = 755349113 and mid = 201310 
> and hid = 201329220;
>  hid   | r
> ---+---
>  201329220 | 53.62
> (1 rows)
> cqlsh:bm> SELECT hid, r FROM rating WHERE id  = 755349113 and mid = 201310 
> and hid in (201329320, 201329220);
>  hid   | r
> ---+
>  201329320 | 45.476
> (1 rows)  <-- WRONG - should be two records
> As you can see although both records exist I'm not able the fetch all of them 
> using in clause. By now I have to cycle my requests which are about 30 and I 
> find it highly inefficient given that I query physically the same row. 
> More of that  - it doesn't happen all the time! For different id values 
> sometimes I get the correct dataset.
> Ideally I'd like the following select to work:
> SELECT hid, r FROM rating WHERE id  = 755349113 and mid in ? and hid in ?;
> Which doesn't work either.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6369) Fix prepared statement size computation

2013-11-18 Thread Lyuben Todorov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13825381#comment-13825381
 ] 

Lyuben Todorov commented on CASSANDRA-6369:
---

LGTM.

> Fix prepared statement size computation
> ---
>
> Key: CASSANDRA-6369
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6369
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sylvain Lebresne
>Assignee: Sylvain Lebresne
> Fix For: 1.2.12, 2.0.3
>
> Attachments: 6369.txt
>
>
> When computed the size of CQLStatement to limit the prepared statement cache 
> (CASSANDRA-6107), we overestimate the actual memory used because the 
> statement include a reference to the table CFMetaData which measureDeep 
> counts. And as it happens, that reference is big: on a simple test preparing 
> a very trivial select statement, I was able to only prepare 87 statements 
> before some started to be evicted because each statement was more than 93K 
> big and more than 92K of that was the CFMetaData object. As it happens there 
> is no reason to account the CFMetaData object at all since it's in memory 
> anyway whether or not there is prepared statements or not.
> Attaching a simple (if not extremely elegant) patch to remove what we don't 
> care about of the computation. Another solution would be to use the 
> MemoryMeter.withTrackerProvider option as we do in Memtable, but in the 
> QueryProcessor case we currently use only one MemoryMeter, not one per CF, so 
> it didn't felt necessarilly cleaner. We could create one-shot MemoryMeter 
> object each time we need to measure a CQLStatement but that doesn't feel a 
> lot simpler/cleaner either. But if someone feels religious about some other 
> solution, I don't care.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Reopened] (CASSANDRA-6370) Updating cql created table through cassandra-cli transform it into a compact storage table

2013-11-18 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne reopened CASSANDRA-6370:
-

  Assignee: Sylvain Lebresne

I'm going to reopen because while I agree that you should absolutely stick to 
cqlsh when dealing with CQL3 tables, I think it doesn't cost us much to either 
make sure it's not too easy to shoot yourself in the foot or at least disallow 
modifications of CQL3 table from thrift if that screw them up (especially since 
it's pretty damn hard to get back on your feet afterwards unless you're very 
familiar with the schema code).

> Updating cql created table through cassandra-cli transform it into a compact 
> storage table
> --
>
> Key: CASSANDRA-6370
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6370
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Alain RODRIGUEZ
>Assignee: Sylvain Lebresne
>Priority: Critical
>
> To reproduce :
> echo "CREATE TABLE test (aid int, period text, event text, viewer text, 
> PRIMARY KEY (aid, period, event, viewer) );" | cqlsh -kmykeyspace;
> echo "describe table test;" | cqlsh -kmykeyspace;
> Output >
> CREATE TABLE test (
>   aid int,
>   period text,
>   event text,
>   viewer text,
>   PRIMARY KEY (aid, period, event, viewer)
> ) WITH
>   bloom_filter_fp_chance=0.01 AND
>   caching='KEYS_ONLY' AND
>   comment='' AND
>   dclocal_read_repair_chance=0.00 AND
>   gc_grace_seconds=864000 AND
>   read_repair_chance=0.10 AND
>   replicate_on_write='true' AND
>   populate_io_cache_on_flush='false' AND
>   compaction={'class': 'SizeTieredCompactionStrategy'} AND
>   compression={'sstable_compression': 'SnappyCompressor'};
> Then do :
> echo "update column family test with dclocal_read_repair_chance = 0.1;" | 
> cassandra-cli -kmykeyspace
> And finally again : echo "describe table test;" | cqlsh -kmykeyspace;
> Output >
> CREATE TABLE test (
>   aid int,
>   column1 text,
>   column2 text,
>   column3 text,
>   column4 text,
>   value blob,
>   PRIMARY KEY (aid, column1, column2, column3, column4)
> ) WITH COMPACT STORAGE AND
>   bloom_filter_fp_chance=0.01 AND
>   caching='KEYS_ONLY' AND
>   comment='' AND
>   dclocal_read_repair_chance=0.10 AND
>   gc_grace_seconds=864000 AND
>   read_repair_chance=0.10 AND
>   replicate_on_write='true' AND
>   populate_io_cache_on_flush='false' AND
>   compaction={'class': 'SizeTieredCompactionStrategy'} AND
>   compression={'sstable_compression': 'SnappyCompressor'};
> This is quite annoying in production. If it is happening to you: 
> UPDATE system.schema_columnfamilies SET column_aliases = 
> '["period","event","viewer"]' WHERE keyspace_name='mykeyspace' AND 
> columnfamily_name='test'; should help restoring the table. (Thanks Sylvain 
> for this information.)



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6370) Updating cql created table through cassandra-cli transform it into a compact storage table

2013-11-18 Thread Russell Bradberry (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13825392#comment-13825392
 ] 

Russell Bradberry commented on CASSANDRA-6370:
--

I tend to agree. If there is any unexpected behavior that could arise then it 
should be prevented from happening, a big warning like "THIS WILL ALTER YOUR 
TABLE WITH COMPACT STORAGE ... Continue Y/N?", so the user is aware of what is 
happening.  Simply saying "it's hidden when you list it" is not a solution IMO.

> Updating cql created table through cassandra-cli transform it into a compact 
> storage table
> --
>
> Key: CASSANDRA-6370
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6370
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Alain RODRIGUEZ
>Assignee: Sylvain Lebresne
>Priority: Critical
>
> To reproduce :
> echo "CREATE TABLE test (aid int, period text, event text, viewer text, 
> PRIMARY KEY (aid, period, event, viewer) );" | cqlsh -kmykeyspace;
> echo "describe table test;" | cqlsh -kmykeyspace;
> Output >
> CREATE TABLE test (
>   aid int,
>   period text,
>   event text,
>   viewer text,
>   PRIMARY KEY (aid, period, event, viewer)
> ) WITH
>   bloom_filter_fp_chance=0.01 AND
>   caching='KEYS_ONLY' AND
>   comment='' AND
>   dclocal_read_repair_chance=0.00 AND
>   gc_grace_seconds=864000 AND
>   read_repair_chance=0.10 AND
>   replicate_on_write='true' AND
>   populate_io_cache_on_flush='false' AND
>   compaction={'class': 'SizeTieredCompactionStrategy'} AND
>   compression={'sstable_compression': 'SnappyCompressor'};
> Then do :
> echo "update column family test with dclocal_read_repair_chance = 0.1;" | 
> cassandra-cli -kmykeyspace
> And finally again : echo "describe table test;" | cqlsh -kmykeyspace;
> Output >
> CREATE TABLE test (
>   aid int,
>   column1 text,
>   column2 text,
>   column3 text,
>   column4 text,
>   value blob,
>   PRIMARY KEY (aid, column1, column2, column3, column4)
> ) WITH COMPACT STORAGE AND
>   bloom_filter_fp_chance=0.01 AND
>   caching='KEYS_ONLY' AND
>   comment='' AND
>   dclocal_read_repair_chance=0.10 AND
>   gc_grace_seconds=864000 AND
>   read_repair_chance=0.10 AND
>   replicate_on_write='true' AND
>   populate_io_cache_on_flush='false' AND
>   compaction={'class': 'SizeTieredCompactionStrategy'} AND
>   compression={'sstable_compression': 'SnappyCompressor'};
> This is quite annoying in production. If it is happening to you: 
> UPDATE system.schema_columnfamilies SET column_aliases = 
> '["period","event","viewer"]' WHERE keyspace_name='mykeyspace' AND 
> columnfamily_name='test'; should help restoring the table. (Thanks Sylvain 
> for this information.)



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6277) AE in PrecompactedRow.update(PrecompactedRow.java:171)

2013-11-18 Thread Viliam Holub (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13825400#comment-13825400
 ] 

Viliam Holub commented on CASSANDRA-6277:
-

My experience running 2.0.2: 21 AEs during repair on all 9 nodes

> AE in PrecompactedRow.update(PrecompactedRow.java:171)
> --
>
> Key: CASSANDRA-6277
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6277
> Project: Cassandra
>  Issue Type: Bug
> Environment: Linux, 12 nodes, 3 AZ EC2
> Cassandra version 2.0.2
>Reporter: Viliam Holub
>Assignee: Jonathan Ellis
>Priority: Minor
>  Labels: repair
> Fix For: 2.0.3
>
>
> Getting this AE on destination nodes during repair:
> ERROR [ValidationExecutor:78] 2013-10-31 04:35:31,243 CassandraDaemon.java 
> (line 187) Exception in thread Thread[ValidationExecutor:78,1,main]
> java.lang.AssertionError
> at 
> org.apache.cassandra.db.compaction.PrecompactedRow.update(PrecompactedRow.java:171)
> at org.apache.cassandra.repair.Validator.rowHash(Validator.java:198)
> at org.apache.cassandra.repair.Validator.add(Validator.java:151)
> at 
> org.apache.cassandra.db.compaction.CompactionManager.doValidationCompaction(CompactionManager.java:799)
> at 
> org.apache.cassandra.db.compaction.CompactionManager.access$600(CompactionManager.java:62)
> at 
> org.apache.cassandra.db.compaction.CompactionManager$8.call(CompactionManager.java:397)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:724)



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6348) TimeoutException throws if Cql query allows data filtering and index is too big and it can't find the data in base CF after filtering

2013-11-18 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13825401#comment-13825401
 ] 

Sylvain Lebresne commented on CASSANDRA-6348:
-

Hum, can't really reproduce on the cassandra-1.2 branch:
{noformat}
Connected to test at 127.0.0.1:9160.
[cqlsh 3.1.8 | Cassandra 1.2.11-SNAPSHOT | CQL spec 3.0.0 | Thrift protocol 
19.36.1]
Use HELP for help.
cqlsh> create KEYSPACE ks WITH replication = {'class': 'SimpleStrategy', 
'replication_factor': 1};
cqlsh> use ks;
cqlsh:ks>   create table test ( key1 int, key2 int , col1 int, col2 int, 
primary key (key1, key2));
cqlsh:ks>   create index col1 on test(col1);
cqlsh:ks>   create index col2 on test(col2);
cqlsh:ks> select * from test where col1=100 and col2 =1;
Bad Request: Cannot execute this query as it might involve data filtering and 
thus may have unpredictable performance. If you want to execute this query 
despite the performance unpredictability, use ALLOW FILTERING
{noformat}
I.e. ALLOW FILTERING does is required.

bq. We can either disable those kind of queries or WARN the user that data 
filtering might lead to timeout exception or OOM.

Just to make sure we agree, that's *exactly* what requiring ALLOW FILTERING is 
about, warning the user that C* does not execute the query smartly and that the 
performance will suck. You should *never* use ALLOW FILTERING in production 
unless you know very well what you do in particular.

bq. We should be able to auto page through 2i CF (for native protocol), so if 
the auto-paging ends in the middle of a index scanning

This is not really what the native protocol paging is about. If you ask pages 
of 1000 results, the native protocol paging will return you pages of 1000 
results until you're done paging. In that case, the point is that it takes a 
long time to find any results at all because the way we handle the query is 
dumb.  But I'll note that we do page internally the index scanning (which is 
why you can get a timeout but in theory not an OOM).

Note that I'm not saying we shouldn't improve the way we handle such queries, 
but that's a whole separate issue (CASSANDRA-6048).


> TimeoutException throws if Cql query allows data filtering and index is too 
> big and it can't find the data in base CF after filtering 
> --
>
> Key: CASSANDRA-6348
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6348
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Alex Liu
>Assignee: Alex Liu
>
> If index row is too big, and filtering can't find the match Cql row in base 
> CF, it keep scanning the index row and retrieving base CF until the index row 
> is scanned completely which may take too long and thrift server returns 
> TimeoutException. This is one of the reasons why we shouldn't index a column 
> if the index is too big.
> Multiple indexes merging can resolve the case where there are only EQUAL 
> clauses. (CASSANDRA-6048 addresses it).
> If the query has none-EQUAL clauses, we still need do data filtering which 
> might lead to timeout exception.
> We can either disable those kind of queries or WARN the user that data 
> filtering might lead to timeout exception or OOM.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6370) Updating cql created table through cassandra-cli transform it into a compact storage table

2013-11-18 Thread Alain RODRIGUEZ (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13825405#comment-13825405
 ] 

Alain RODRIGUEZ commented on CASSANDRA-6370:


I didn't try to list any table here.

I heard was thrift / cql were abstractions of the same data and that we could 
continuing to use both. I thought cassandra-cli was also compatible with cql 
tables. Even more, I had no error or warning while running my cassandra-cli 
command.

In my point of view (which was wrong at this time, I agree now) this was a 
"normal" usage of cassandra that resulted into a bug in my productions servers. 
My error is now fixed as you can read at the end of the description. My point 
here is to help the Cassandra team to make Cassandra more robust to avoid more 
issues of this kind.

Do whatever you want with this report, but there is no need of bashing me, this 
wasn't trivial, and the fact you hide the cql-created tables doesn't help in 
any way since I didn't list them. 

This happen to me for being an early Cassandra adopter who use to change the 
schema using cassandra-cli.

> Updating cql created table through cassandra-cli transform it into a compact 
> storage table
> --
>
> Key: CASSANDRA-6370
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6370
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Alain RODRIGUEZ
>Assignee: Sylvain Lebresne
>Priority: Critical
>
> To reproduce :
> echo "CREATE TABLE test (aid int, period text, event text, viewer text, 
> PRIMARY KEY (aid, period, event, viewer) );" | cqlsh -kmykeyspace;
> echo "describe table test;" | cqlsh -kmykeyspace;
> Output >
> CREATE TABLE test (
>   aid int,
>   period text,
>   event text,
>   viewer text,
>   PRIMARY KEY (aid, period, event, viewer)
> ) WITH
>   bloom_filter_fp_chance=0.01 AND
>   caching='KEYS_ONLY' AND
>   comment='' AND
>   dclocal_read_repair_chance=0.00 AND
>   gc_grace_seconds=864000 AND
>   read_repair_chance=0.10 AND
>   replicate_on_write='true' AND
>   populate_io_cache_on_flush='false' AND
>   compaction={'class': 'SizeTieredCompactionStrategy'} AND
>   compression={'sstable_compression': 'SnappyCompressor'};
> Then do :
> echo "update column family test with dclocal_read_repair_chance = 0.1;" | 
> cassandra-cli -kmykeyspace
> And finally again : echo "describe table test;" | cqlsh -kmykeyspace;
> Output >
> CREATE TABLE test (
>   aid int,
>   column1 text,
>   column2 text,
>   column3 text,
>   column4 text,
>   value blob,
>   PRIMARY KEY (aid, column1, column2, column3, column4)
> ) WITH COMPACT STORAGE AND
>   bloom_filter_fp_chance=0.01 AND
>   caching='KEYS_ONLY' AND
>   comment='' AND
>   dclocal_read_repair_chance=0.10 AND
>   gc_grace_seconds=864000 AND
>   read_repair_chance=0.10 AND
>   replicate_on_write='true' AND
>   populate_io_cache_on_flush='false' AND
>   compaction={'class': 'SizeTieredCompactionStrategy'} AND
>   compression={'sstable_compression': 'SnappyCompressor'};
> This is quite annoying in production. If it is happening to you: 
> UPDATE system.schema_columnfamilies SET column_aliases = 
> '["period","event","viewer"]' WHERE keyspace_name='mykeyspace' AND 
> columnfamily_name='test'; should help restoring the table. (Thanks Sylvain 
> for this information.)



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (CASSANDRA-6370) Updating cql created table through cassandra-cli transform it into a compact storage table

2013-11-18 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-6370:


Attachment: 6370.txt

Let's keep it simple, attaching a patch that just refuse modifications to CQL3 
tables from thrift. We don't allow to create them or list them so there's no 
good reason to allow modifying them and that way we make sure to avoid subtle 
screw-ups. And if you really want to shoot yourself in the foot by messing up 
with the underlying schema layout, that's what the System tables are for.

> Updating cql created table through cassandra-cli transform it into a compact 
> storage table
> --
>
> Key: CASSANDRA-6370
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6370
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Alain RODRIGUEZ
>Assignee: Sylvain Lebresne
>Priority: Critical
> Fix For: 1.2.12
>
> Attachments: 6370.txt
>
>
> To reproduce :
> echo "CREATE TABLE test (aid int, period text, event text, viewer text, 
> PRIMARY KEY (aid, period, event, viewer) );" | cqlsh -kmykeyspace;
> echo "describe table test;" | cqlsh -kmykeyspace;
> Output >
> CREATE TABLE test (
>   aid int,
>   period text,
>   event text,
>   viewer text,
>   PRIMARY KEY (aid, period, event, viewer)
> ) WITH
>   bloom_filter_fp_chance=0.01 AND
>   caching='KEYS_ONLY' AND
>   comment='' AND
>   dclocal_read_repair_chance=0.00 AND
>   gc_grace_seconds=864000 AND
>   read_repair_chance=0.10 AND
>   replicate_on_write='true' AND
>   populate_io_cache_on_flush='false' AND
>   compaction={'class': 'SizeTieredCompactionStrategy'} AND
>   compression={'sstable_compression': 'SnappyCompressor'};
> Then do :
> echo "update column family test with dclocal_read_repair_chance = 0.1;" | 
> cassandra-cli -kmykeyspace
> And finally again : echo "describe table test;" | cqlsh -kmykeyspace;
> Output >
> CREATE TABLE test (
>   aid int,
>   column1 text,
>   column2 text,
>   column3 text,
>   column4 text,
>   value blob,
>   PRIMARY KEY (aid, column1, column2, column3, column4)
> ) WITH COMPACT STORAGE AND
>   bloom_filter_fp_chance=0.01 AND
>   caching='KEYS_ONLY' AND
>   comment='' AND
>   dclocal_read_repair_chance=0.10 AND
>   gc_grace_seconds=864000 AND
>   read_repair_chance=0.10 AND
>   replicate_on_write='true' AND
>   populate_io_cache_on_flush='false' AND
>   compaction={'class': 'SizeTieredCompactionStrategy'} AND
>   compression={'sstable_compression': 'SnappyCompressor'};
> This is quite annoying in production. If it is happening to you: 
> UPDATE system.schema_columnfamilies SET column_aliases = 
> '["period","event","viewer"]' WHERE keyspace_name='mykeyspace' AND 
> columnfamily_name='test'; should help restoring the table. (Thanks Sylvain 
> for this information.)



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6370) Updating cql created table through cassandra-cli transform it into a compact storage table

2013-11-18 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13825412#comment-13825412
 ] 

Jonathan Ellis commented on CASSANDRA-6370:
---

+1

> Updating cql created table through cassandra-cli transform it into a compact 
> storage table
> --
>
> Key: CASSANDRA-6370
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6370
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Alain RODRIGUEZ
>Assignee: Sylvain Lebresne
>Priority: Critical
> Fix For: 1.2.12
>
> Attachments: 6370.txt
>
>
> To reproduce :
> echo "CREATE TABLE test (aid int, period text, event text, viewer text, 
> PRIMARY KEY (aid, period, event, viewer) );" | cqlsh -kmykeyspace;
> echo "describe table test;" | cqlsh -kmykeyspace;
> Output >
> CREATE TABLE test (
>   aid int,
>   period text,
>   event text,
>   viewer text,
>   PRIMARY KEY (aid, period, event, viewer)
> ) WITH
>   bloom_filter_fp_chance=0.01 AND
>   caching='KEYS_ONLY' AND
>   comment='' AND
>   dclocal_read_repair_chance=0.00 AND
>   gc_grace_seconds=864000 AND
>   read_repair_chance=0.10 AND
>   replicate_on_write='true' AND
>   populate_io_cache_on_flush='false' AND
>   compaction={'class': 'SizeTieredCompactionStrategy'} AND
>   compression={'sstable_compression': 'SnappyCompressor'};
> Then do :
> echo "update column family test with dclocal_read_repair_chance = 0.1;" | 
> cassandra-cli -kmykeyspace
> And finally again : echo "describe table test;" | cqlsh -kmykeyspace;
> Output >
> CREATE TABLE test (
>   aid int,
>   column1 text,
>   column2 text,
>   column3 text,
>   column4 text,
>   value blob,
>   PRIMARY KEY (aid, column1, column2, column3, column4)
> ) WITH COMPACT STORAGE AND
>   bloom_filter_fp_chance=0.01 AND
>   caching='KEYS_ONLY' AND
>   comment='' AND
>   dclocal_read_repair_chance=0.10 AND
>   gc_grace_seconds=864000 AND
>   read_repair_chance=0.10 AND
>   replicate_on_write='true' AND
>   populate_io_cache_on_flush='false' AND
>   compaction={'class': 'SizeTieredCompactionStrategy'} AND
>   compression={'sstable_compression': 'SnappyCompressor'};
> This is quite annoying in production. If it is happening to you: 
> UPDATE system.schema_columnfamilies SET column_aliases = 
> '["period","event","viewer"]' WHERE keyspace_name='mykeyspace' AND 
> columnfamily_name='test'; should help restoring the table. (Thanks Sylvain 
> for this information.)



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[2/2] git commit: Fix size computation of prepared statements

2013-11-18 Thread slebresne
Fix size computation of prepared statements

patch by slebresne; reviewed by lyubent for CASSANDRA-6369


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/0ffa5c20
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/0ffa5c20
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/0ffa5c20

Branch: refs/heads/cassandra-1.2
Commit: 0ffa5c20af381b697d25f19b7a987fef8fcc2e92
Parents: 34645c3
Author: Sylvain Lebresne 
Authored: Mon Nov 18 17:17:46 2013 +0100
Committer: Sylvain Lebresne 
Committed: Mon Nov 18 17:17:46 2013 +0100

--
 CHANGES.txt  |  1 +
 src/java/org/apache/cassandra/cql3/QueryProcessor.java   | 10 +++---
 .../apache/cassandra/cql3/statements/BatchStatement.java | 11 +++
 .../cassandra/cql3/statements/DeleteStatement.java   |  7 +++
 .../cassandra/cql3/statements/ModificationStatement.java |  2 +-
 .../cassandra/cql3/statements/SelectStatement.java   |  8 +++-
 .../cassandra/cql3/statements/UpdateStatement.java   |  7 +++
 7 files changed, 41 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/0ffa5c20/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 29e87d8..d7395a6 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -22,6 +22,7 @@
  * Make CL code for the native protocol match the one in C* 2.0
(CASSANDRA-6347)
  * Disallow altering CQL3 table from thrift (CASSANDRA-6370)
+ * Fix size computation of prepared statement (CASSANDRA-6369)
 
 
 1.2.11

http://git-wip-us.apache.org/repos/asf/cassandra/blob/0ffa5c20/src/java/org/apache/cassandra/cql3/QueryProcessor.java
--
diff --git a/src/java/org/apache/cassandra/cql3/QueryProcessor.java 
b/src/java/org/apache/cassandra/cql3/QueryProcessor.java
index 2d43bdc..40b9339 100644
--- a/src/java/org/apache/cassandra/cql3/QueryProcessor.java
+++ b/src/java/org/apache/cassandra/cql3/QueryProcessor.java
@@ -70,7 +70,7 @@ public class QueryProcessor
 private static final ConcurrentLinkedHashMap 
preparedStatements;
 private static final ConcurrentLinkedHashMap 
thriftPreparedStatements;
 
-static 
+static
 {
 if (MemoryMeter.isInitialized())
 {
@@ -96,7 +96,6 @@ public class QueryProcessor
 }
 }
 
-
 public static CQLStatement getPrepared(MD5Digest id)
 {
 return preparedStatements.get(id);
@@ -328,6 +327,11 @@ public class QueryProcessor
 
 private static long measure(Object key)
 {
-return MemoryMeter.isInitialized() ? meter.measureDeep(key) : 1;
+if (!MemoryMeter.isInitialized())
+return 1;
+
+return key instanceof MeasurableForPreparedCache
+ ? ((MeasurableForPreparedCache)key).measureForPreparedCache(meter)
+ : meter.measureDeep(key);
 }
 }

http://git-wip-us.apache.org/repos/asf/cassandra/blob/0ffa5c20/src/java/org/apache/cassandra/cql3/statements/BatchStatement.java
--
diff --git a/src/java/org/apache/cassandra/cql3/statements/BatchStatement.java 
b/src/java/org/apache/cassandra/cql3/statements/BatchStatement.java
index f93eb63..d211eb9 100644
--- a/src/java/org/apache/cassandra/cql3/statements/BatchStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/BatchStatement.java
@@ -20,6 +20,8 @@ package org.apache.cassandra.cql3.statements;
 import java.nio.ByteBuffer;
 import java.util.*;
 
+import org.github.jamm.MemoryMeter;
+
 import org.apache.cassandra.auth.Permission;
 import org.apache.cassandra.cql3.*;
 import org.apache.cassandra.db.ConsistencyLevel;
@@ -29,6 +31,7 @@ import org.apache.cassandra.db.RowMutation;
 import org.apache.cassandra.exceptions.*;
 import org.apache.cassandra.service.ClientState;
 import org.apache.cassandra.utils.Pair;
+import org.apache.cassandra.utils.ObjectSizes;
 
 /**
  * A BATCH statement parsed from a CQL query.
@@ -54,6 +57,14 @@ public class BatchStatement extends ModificationStatement
 this.statements = statements;
 }
 
+public long measureForPreparedCache(MemoryMeter meter)
+{
+long size = meter.measure(this) + meter.measure(statements);
+for (ModificationStatement stmt : statements)
+size += stmt.measureForPreparedCache(meter);
+return size;
+}
+
 @Override
 public void prepareKeyspace(ClientState state) throws 
InvalidRequestException
 {

http://git-wip-us.apache.org/repos/asf/cassandra/blob/0ffa5c20/src/java/org/apache/cassandra/cql3/statements/DeleteStatement.java
--
diff --git a/sr

[1/2] git commit: Disallow updating CQL3 tables from thrift

2013-11-18 Thread slebresne
Updated Branches:
  refs/heads/cassandra-1.2 a301ea142 -> 0ffa5c20a


Disallow updating CQL3 tables from thrift

patch by slebresne; reviewed by jbellis for CASSANDRA-6370


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/34645c37
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/34645c37
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/34645c37

Branch: refs/heads/cassandra-1.2
Commit: 34645c37d1e47b60cb20d3e3cd0e0376e8f92ae5
Parents: a301ea1
Author: Sylvain Lebresne 
Authored: Mon Nov 18 17:16:11 2013 +0100
Committer: Sylvain Lebresne 
Committed: Mon Nov 18 17:16:11 2013 +0100

--
 CHANGES.txt   | 1 +
 NEWS.txt  | 9 +
 src/java/org/apache/cassandra/thrift/CassandraServer.java | 3 +++
 3 files changed, 13 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/34645c37/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index a438f15..29e87d8 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -21,6 +21,7 @@
  * Fix AssertionError when doing set element deletion (CASSANDRA-6341)
  * Make CL code for the native protocol match the one in C* 2.0
(CASSANDRA-6347)
+ * Disallow altering CQL3 table from thrift (CASSANDRA-6370)
 
 
 1.2.11

http://git-wip-us.apache.org/repos/asf/cassandra/blob/34645c37/NEWS.txt
--
diff --git a/NEWS.txt b/NEWS.txt
index d803f02..915729a 100644
--- a/NEWS.txt
+++ b/NEWS.txt
@@ -14,6 +14,15 @@ restore snapshots created with the previous major version 
using the
 using the provided 'sstableupgrade' tool.
 
 
+1.2.12
+==
+
+Upgrading
+-
+- Altering CQL3 tables from Thrift is now rejected as this had the
+  potential of corrupting the schema. You should use cqlsh otherwise.
+
+
 1.2.11
 ==
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/34645c37/src/java/org/apache/cassandra/thrift/CassandraServer.java
--
diff --git a/src/java/org/apache/cassandra/thrift/CassandraServer.java 
b/src/java/org/apache/cassandra/thrift/CassandraServer.java
index 9063be1..5b9fbfd 100644
--- a/src/java/org/apache/cassandra/thrift/CassandraServer.java
+++ b/src/java/org/apache/cassandra/thrift/CassandraServer.java
@@ -1407,6 +1407,9 @@ public class CassandraServer implements Cassandra.Iface
 if (oldCfm == null)
 throw new InvalidRequestException("Could not find column 
family definition to modify.");
 
+if (!oldCfm.isThriftIncompatible())
+throw new InvalidRequestException("Cannot modify CQL3 table " 
+ oldCfm.cfName + " as it may break the schema. You should use cqlsh to modify 
CQL3 tables instead.");
+
 state().hasColumnFamilyAccess(cf_def.keyspace, cf_def.name, 
Permission.ALTER);
 
 CFMetaData.applyImplicitDefaults(cf_def);



[1/4] git commit: Improve error message

2013-11-18 Thread slebresne
Updated Branches:
  refs/heads/cassandra-2.0 a7a7edeaa -> 25471bac3


Improve error message


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a301ea14
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a301ea14
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a301ea14

Branch: refs/heads/cassandra-2.0
Commit: a301ea14293f046dc4ff10c84aaef192fbc70c0a
Parents: 2b1fb0f
Author: Sylvain Lebresne 
Authored: Mon Nov 18 15:53:42 2013 +0100
Committer: Sylvain Lebresne 
Committed: Mon Nov 18 15:53:42 2013 +0100

--
 src/java/org/apache/cassandra/dht/Murmur3Partitioner.java | 9 -
 1 file changed, 8 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/a301ea14/src/java/org/apache/cassandra/dht/Murmur3Partitioner.java
--
diff --git a/src/java/org/apache/cassandra/dht/Murmur3Partitioner.java 
b/src/java/org/apache/cassandra/dht/Murmur3Partitioner.java
index 502f5cc..fe1c24f 100644
--- a/src/java/org/apache/cassandra/dht/Murmur3Partitioner.java
+++ b/src/java/org/apache/cassandra/dht/Murmur3Partitioner.java
@@ -185,7 +185,14 @@ public class Murmur3Partitioner extends 
AbstractPartitioner
 
 public Token fromString(String string)
 {
-return new LongToken(Long.valueOf(string));
+try
+{
+return new LongToken(Long.valueOf(string));
+}
+catch (NumberFormatException e)
+{
+throw new IllegalArgumentException(String.format("Invalid 
token for Murmur3Partitioner. Got %s but expected a long value (unsigned 8 
bytes integer)."));
+}
 }
 };
 



[4/4] git commit: Merge branch 'cassandra-1.2' into cassandra-2.0

2013-11-18 Thread slebresne
Merge branch 'cassandra-1.2' into cassandra-2.0

Conflicts:
NEWS.txt
src/java/org/apache/cassandra/cql3/QueryProcessor.java
src/java/org/apache/cassandra/cql3/statements/BatchStatement.java
src/java/org/apache/cassandra/cql3/statements/DeleteStatement.java
src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
src/java/org/apache/cassandra/cql3/statements/UpdateStatement.java
src/java/org/apache/cassandra/thrift/CassandraServer.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/25471bac
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/25471bac
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/25471bac

Branch: refs/heads/cassandra-2.0
Commit: 25471bac3527c9fc54c815626f9266d5ea8508da
Parents: a7a7ede 0ffa5c2
Author: Sylvain Lebresne 
Authored: Mon Nov 18 17:30:15 2013 +0100
Committer: Sylvain Lebresne 
Committed: Mon Nov 18 17:30:15 2013 +0100

--
 CHANGES.txt |  2 ++
 .../cql3/MeasurableForPreparedCache.java| 26 
 .../apache/cassandra/cql3/QueryProcessor.java   |  9 +--
 .../cql3/statements/BatchStatement.java | 12 -
 .../cql3/statements/ModificationStatement.java  |  9 ++-
 .../cql3/statements/SelectStatement.java|  8 +-
 .../cassandra/dht/Murmur3Partitioner.java   |  9 ++-
 .../cassandra/thrift/CassandraServer.java   |  2 ++
 8 files changed, 71 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/25471bac/CHANGES.txt
--
diff --cc CHANGES.txt
index 57ad75d,d7395a6..7b2db56
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -48,42 -21,11 +48,44 @@@ Merged from 1.2
   * Fix AssertionError when doing set element deletion (CASSANDRA-6341)
   * Make CL code for the native protocol match the one in C* 2.0
 (CASSANDRA-6347)
+  * Disallow altering CQL3 table from thrift (CASSANDRA-6370)
+  * Fix size computation of prepared statement (CASSANDRA-6369)
  
  
 -1.2.11
 +2.0.2
 + * Update FailureDetector to use nanontime (CASSANDRA-4925)
 + * Fix FileCacheService regressions (CASSANDRA-6149)
 + * Never return WriteTimeout for CL.ANY (CASSANDRA-6032)
 + * Fix race conditions in bulk loader (CASSANDRA-6129)
 + * Add configurable metrics reporting (CASSANDRA-4430)
 + * drop queries exceeding a configurable number of tombstones (CASSANDRA-6117)
 + * Track and persist sstable read activity (CASSANDRA-5515)
 + * Fixes for speculative retry (CASSANDRA-5932, CASSANDRA-6194)
 + * Improve memory usage of metadata min/max column names (CASSANDRA-6077)
 + * Fix thrift validation refusing row markers on CQL3 tables (CASSANDRA-6081)
 + * Fix insertion of collections with CAS (CASSANDRA-6069)
 + * Correctly send metadata on SELECT COUNT (CASSANDRA-6080)
 + * Track clients' remote addresses in ClientState (CASSANDRA-6070)
 + * Create snapshot dir if it does not exist when migrating
 +   leveled manifest (CASSANDRA-6093)
 + * make sequential nodetool repair the default (CASSANDRA-5950)
 + * Add more hooks for compaction strategy implementations (CASSANDRA-6111)
 + * Fix potential NPE on composite 2ndary indexes (CASSANDRA-6098)
 + * Delete can potentially be skipped in batch (CASSANDRA-6115)
 + * Allow alter keyspace on system_traces (CASSANDRA-6016)
 + * Disallow empty column names in cql (CASSANDRA-6136)
 + * Use Java7 file-handling APIs and fix file moving on Windows 
(CASSANDRA-5383)
 + * Save compaction history to system keyspace (CASSANDRA-5078)
 + * Fix NPE if StorageService.getOperationMode() is executed before full 
startup (CASSANDRA-6166)
 + * CQL3: support pre-epoch longs for TimestampType (CASSANDRA-6212)
 + * Add reloadtriggers command to nodetool (CASSANDRA-4949)
 + * cqlsh: ignore empty 'value alias' in DESCRIBE (CASSANDRA-6139)
 + * Fix sstable loader (CASSANDRA-6205)
 + * Reject bootstrapping if the node already exists in gossip (CASSANDRA-5571)
 + * Fix NPE while loading paxos state (CASSANDRA-6211)
 + * cqlsh: add SHOW SESSION  command (CASSANDRA-6228)
 +Merged from 1.2:
 + * (Hadoop) Require CFRR batchSize to be at least 2 (CASSANDRA-6114)
   * Add a warning for small LCS sstable size (CASSANDRA-6191)
   * Add ability to list specific KS/CF combinations in nodetool cfstats 
(CASSANDRA-4191)
   * Mark CF clean if a mutation raced the drop and got it marked dirty 
(CASSANDRA-5946)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/25471bac/src/java/org/apache/cassandra/cql3/MeasurableForPreparedCache.java
--
diff --cc src/java/org/apache/cassandra/cql3/Meas

[2/4] git commit: Disallow updating CQL3 tables from thrift

2013-11-18 Thread slebresne
Disallow updating CQL3 tables from thrift

patch by slebresne; reviewed by jbellis for CASSANDRA-6370


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/34645c37
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/34645c37
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/34645c37

Branch: refs/heads/cassandra-2.0
Commit: 34645c37d1e47b60cb20d3e3cd0e0376e8f92ae5
Parents: a301ea1
Author: Sylvain Lebresne 
Authored: Mon Nov 18 17:16:11 2013 +0100
Committer: Sylvain Lebresne 
Committed: Mon Nov 18 17:16:11 2013 +0100

--
 CHANGES.txt   | 1 +
 NEWS.txt  | 9 +
 src/java/org/apache/cassandra/thrift/CassandraServer.java | 3 +++
 3 files changed, 13 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/34645c37/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index a438f15..29e87d8 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -21,6 +21,7 @@
  * Fix AssertionError when doing set element deletion (CASSANDRA-6341)
  * Make CL code for the native protocol match the one in C* 2.0
(CASSANDRA-6347)
+ * Disallow altering CQL3 table from thrift (CASSANDRA-6370)
 
 
 1.2.11

http://git-wip-us.apache.org/repos/asf/cassandra/blob/34645c37/NEWS.txt
--
diff --git a/NEWS.txt b/NEWS.txt
index d803f02..915729a 100644
--- a/NEWS.txt
+++ b/NEWS.txt
@@ -14,6 +14,15 @@ restore snapshots created with the previous major version 
using the
 using the provided 'sstableupgrade' tool.
 
 
+1.2.12
+==
+
+Upgrading
+-
+- Altering CQL3 tables from Thrift is now rejected as this had the
+  potential of corrupting the schema. You should use cqlsh otherwise.
+
+
 1.2.11
 ==
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/34645c37/src/java/org/apache/cassandra/thrift/CassandraServer.java
--
diff --git a/src/java/org/apache/cassandra/thrift/CassandraServer.java 
b/src/java/org/apache/cassandra/thrift/CassandraServer.java
index 9063be1..5b9fbfd 100644
--- a/src/java/org/apache/cassandra/thrift/CassandraServer.java
+++ b/src/java/org/apache/cassandra/thrift/CassandraServer.java
@@ -1407,6 +1407,9 @@ public class CassandraServer implements Cassandra.Iface
 if (oldCfm == null)
 throw new InvalidRequestException("Could not find column 
family definition to modify.");
 
+if (!oldCfm.isThriftIncompatible())
+throw new InvalidRequestException("Cannot modify CQL3 table " 
+ oldCfm.cfName + " as it may break the schema. You should use cqlsh to modify 
CQL3 tables instead.");
+
 state().hasColumnFamilyAccess(cf_def.keyspace, cf_def.name, 
Permission.ALTER);
 
 CFMetaData.applyImplicitDefaults(cf_def);



[3/4] git commit: Fix size computation of prepared statements

2013-11-18 Thread slebresne
Fix size computation of prepared statements

patch by slebresne; reviewed by lyubent for CASSANDRA-6369


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/0ffa5c20
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/0ffa5c20
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/0ffa5c20

Branch: refs/heads/cassandra-2.0
Commit: 0ffa5c20af381b697d25f19b7a987fef8fcc2e92
Parents: 34645c3
Author: Sylvain Lebresne 
Authored: Mon Nov 18 17:17:46 2013 +0100
Committer: Sylvain Lebresne 
Committed: Mon Nov 18 17:17:46 2013 +0100

--
 CHANGES.txt  |  1 +
 src/java/org/apache/cassandra/cql3/QueryProcessor.java   | 10 +++---
 .../apache/cassandra/cql3/statements/BatchStatement.java | 11 +++
 .../cassandra/cql3/statements/DeleteStatement.java   |  7 +++
 .../cassandra/cql3/statements/ModificationStatement.java |  2 +-
 .../cassandra/cql3/statements/SelectStatement.java   |  8 +++-
 .../cassandra/cql3/statements/UpdateStatement.java   |  7 +++
 7 files changed, 41 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/0ffa5c20/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 29e87d8..d7395a6 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -22,6 +22,7 @@
  * Make CL code for the native protocol match the one in C* 2.0
(CASSANDRA-6347)
  * Disallow altering CQL3 table from thrift (CASSANDRA-6370)
+ * Fix size computation of prepared statement (CASSANDRA-6369)
 
 
 1.2.11

http://git-wip-us.apache.org/repos/asf/cassandra/blob/0ffa5c20/src/java/org/apache/cassandra/cql3/QueryProcessor.java
--
diff --git a/src/java/org/apache/cassandra/cql3/QueryProcessor.java 
b/src/java/org/apache/cassandra/cql3/QueryProcessor.java
index 2d43bdc..40b9339 100644
--- a/src/java/org/apache/cassandra/cql3/QueryProcessor.java
+++ b/src/java/org/apache/cassandra/cql3/QueryProcessor.java
@@ -70,7 +70,7 @@ public class QueryProcessor
 private static final ConcurrentLinkedHashMap 
preparedStatements;
 private static final ConcurrentLinkedHashMap 
thriftPreparedStatements;
 
-static 
+static
 {
 if (MemoryMeter.isInitialized())
 {
@@ -96,7 +96,6 @@ public class QueryProcessor
 }
 }
 
-
 public static CQLStatement getPrepared(MD5Digest id)
 {
 return preparedStatements.get(id);
@@ -328,6 +327,11 @@ public class QueryProcessor
 
 private static long measure(Object key)
 {
-return MemoryMeter.isInitialized() ? meter.measureDeep(key) : 1;
+if (!MemoryMeter.isInitialized())
+return 1;
+
+return key instanceof MeasurableForPreparedCache
+ ? ((MeasurableForPreparedCache)key).measureForPreparedCache(meter)
+ : meter.measureDeep(key);
 }
 }

http://git-wip-us.apache.org/repos/asf/cassandra/blob/0ffa5c20/src/java/org/apache/cassandra/cql3/statements/BatchStatement.java
--
diff --git a/src/java/org/apache/cassandra/cql3/statements/BatchStatement.java 
b/src/java/org/apache/cassandra/cql3/statements/BatchStatement.java
index f93eb63..d211eb9 100644
--- a/src/java/org/apache/cassandra/cql3/statements/BatchStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/BatchStatement.java
@@ -20,6 +20,8 @@ package org.apache.cassandra.cql3.statements;
 import java.nio.ByteBuffer;
 import java.util.*;
 
+import org.github.jamm.MemoryMeter;
+
 import org.apache.cassandra.auth.Permission;
 import org.apache.cassandra.cql3.*;
 import org.apache.cassandra.db.ConsistencyLevel;
@@ -29,6 +31,7 @@ import org.apache.cassandra.db.RowMutation;
 import org.apache.cassandra.exceptions.*;
 import org.apache.cassandra.service.ClientState;
 import org.apache.cassandra.utils.Pair;
+import org.apache.cassandra.utils.ObjectSizes;
 
 /**
  * A BATCH statement parsed from a CQL query.
@@ -54,6 +57,14 @@ public class BatchStatement extends ModificationStatement
 this.statements = statements;
 }
 
+public long measureForPreparedCache(MemoryMeter meter)
+{
+long size = meter.measure(this) + meter.measure(statements);
+for (ModificationStatement stmt : statements)
+size += stmt.measureForPreparedCache(meter);
+return size;
+}
+
 @Override
 public void prepareKeyspace(ClientState state) throws 
InvalidRequestException
 {

http://git-wip-us.apache.org/repos/asf/cassandra/blob/0ffa5c20/src/java/org/apache/cassandra/cql3/statements/DeleteStatement.java
--
diff --git a/sr

git commit: Fix typo

2013-11-18 Thread slebresne
Updated Branches:
  refs/heads/cassandra-1.2 0ffa5c20a -> 1ac601980


Fix typo


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/1ac60198
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/1ac60198
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/1ac60198

Branch: refs/heads/cassandra-1.2
Commit: 1ac601980382516419bfe45b01ce1b8eccf4d9ce
Parents: 0ffa5c2
Author: Sylvain Lebresne 
Authored: Mon Nov 18 17:30:55 2013 +0100
Committer: Sylvain Lebresne 
Committed: Mon Nov 18 17:30:55 2013 +0100

--
 src/java/org/apache/cassandra/thrift/CassandraServer.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/1ac60198/src/java/org/apache/cassandra/thrift/CassandraServer.java
--
diff --git a/src/java/org/apache/cassandra/thrift/CassandraServer.java 
b/src/java/org/apache/cassandra/thrift/CassandraServer.java
index 5b9fbfd..883ab5a 100644
--- a/src/java/org/apache/cassandra/thrift/CassandraServer.java
+++ b/src/java/org/apache/cassandra/thrift/CassandraServer.java
@@ -1407,7 +1407,7 @@ public class CassandraServer implements Cassandra.Iface
 if (oldCfm == null)
 throw new InvalidRequestException("Could not find column 
family definition to modify.");
 
-if (!oldCfm.isThriftIncompatible())
+if (oldCfm.isThriftIncompatible())
 throw new InvalidRequestException("Cannot modify CQL3 table " 
+ oldCfm.cfName + " as it may break the schema. You should use cqlsh to modify 
CQL3 tables instead.");
 
 state().hasColumnFamilyAccess(cf_def.keyspace, cf_def.name, 
Permission.ALTER);



[2/2] git commit: Merge branch 'cassandra-1.2' into cassandra-2.0

2013-11-18 Thread slebresne
Merge branch 'cassandra-1.2' into cassandra-2.0

Conflicts:
src/java/org/apache/cassandra/thrift/CassandraServer.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/fd6f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/fd6f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/fd6f

Branch: refs/heads/cassandra-2.0
Commit: fd6ff7464bede6dbcc634ffd3ff8cf1650e7
Parents: 25471ba 1ac6019
Author: Sylvain Lebresne 
Authored: Mon Nov 18 17:31:49 2013 +0100
Committer: Sylvain Lebresne 
Committed: Mon Nov 18 17:31:49 2013 +0100

--

--




[1/2] git commit: Fix typo

2013-11-18 Thread slebresne
Updated Branches:
  refs/heads/cassandra-2.0 25471bac3 -> fd6ff


Fix typo


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/1ac60198
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/1ac60198
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/1ac60198

Branch: refs/heads/cassandra-2.0
Commit: 1ac601980382516419bfe45b01ce1b8eccf4d9ce
Parents: 0ffa5c2
Author: Sylvain Lebresne 
Authored: Mon Nov 18 17:30:55 2013 +0100
Committer: Sylvain Lebresne 
Committed: Mon Nov 18 17:30:55 2013 +0100

--
 src/java/org/apache/cassandra/thrift/CassandraServer.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/1ac60198/src/java/org/apache/cassandra/thrift/CassandraServer.java
--
diff --git a/src/java/org/apache/cassandra/thrift/CassandraServer.java 
b/src/java/org/apache/cassandra/thrift/CassandraServer.java
index 5b9fbfd..883ab5a 100644
--- a/src/java/org/apache/cassandra/thrift/CassandraServer.java
+++ b/src/java/org/apache/cassandra/thrift/CassandraServer.java
@@ -1407,7 +1407,7 @@ public class CassandraServer implements Cassandra.Iface
 if (oldCfm == null)
 throw new InvalidRequestException("Could not find column 
family definition to modify.");
 
-if (!oldCfm.isThriftIncompatible())
+if (oldCfm.isThriftIncompatible())
 throw new InvalidRequestException("Cannot modify CQL3 table " 
+ oldCfm.cfName + " as it may break the schema. You should use cqlsh to modify 
CQL3 tables instead.");
 
 state().hasColumnFamilyAccess(cf_def.keyspace, cf_def.name, 
Permission.ALTER);



[jira] [Updated] (CASSANDRA-6370) Updating cql created table through cassandra-cli transform it into a compact storage table

2013-11-18 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-6370:
-

Reviewer: Aleksey Yeschenko

+1

> Updating cql created table through cassandra-cli transform it into a compact 
> storage table
> --
>
> Key: CASSANDRA-6370
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6370
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Alain RODRIGUEZ
>Assignee: Sylvain Lebresne
>Priority: Critical
> Fix For: 1.2.12
>
> Attachments: 6370.txt
>
>
> To reproduce :
> echo "CREATE TABLE test (aid int, period text, event text, viewer text, 
> PRIMARY KEY (aid, period, event, viewer) );" | cqlsh -kmykeyspace;
> echo "describe table test;" | cqlsh -kmykeyspace;
> Output >
> CREATE TABLE test (
>   aid int,
>   period text,
>   event text,
>   viewer text,
>   PRIMARY KEY (aid, period, event, viewer)
> ) WITH
>   bloom_filter_fp_chance=0.01 AND
>   caching='KEYS_ONLY' AND
>   comment='' AND
>   dclocal_read_repair_chance=0.00 AND
>   gc_grace_seconds=864000 AND
>   read_repair_chance=0.10 AND
>   replicate_on_write='true' AND
>   populate_io_cache_on_flush='false' AND
>   compaction={'class': 'SizeTieredCompactionStrategy'} AND
>   compression={'sstable_compression': 'SnappyCompressor'};
> Then do :
> echo "update column family test with dclocal_read_repair_chance = 0.1;" | 
> cassandra-cli -kmykeyspace
> And finally again : echo "describe table test;" | cqlsh -kmykeyspace;
> Output >
> CREATE TABLE test (
>   aid int,
>   column1 text,
>   column2 text,
>   column3 text,
>   column4 text,
>   value blob,
>   PRIMARY KEY (aid, column1, column2, column3, column4)
> ) WITH COMPACT STORAGE AND
>   bloom_filter_fp_chance=0.01 AND
>   caching='KEYS_ONLY' AND
>   comment='' AND
>   dclocal_read_repair_chance=0.10 AND
>   gc_grace_seconds=864000 AND
>   read_repair_chance=0.10 AND
>   replicate_on_write='true' AND
>   populate_io_cache_on_flush='false' AND
>   compaction={'class': 'SizeTieredCompactionStrategy'} AND
>   compression={'sstable_compression': 'SnappyCompressor'};
> This is quite annoying in production. If it is happening to you: 
> UPDATE system.schema_columnfamilies SET column_aliases = 
> '["period","event","viewer"]' WHERE keyspace_name='mykeyspace' AND 
> columnfamily_name='test'; should help restoring the table. (Thanks Sylvain 
> for this information.)



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[5/7] git commit: Fix typo

2013-11-18 Thread slebresne
Fix typo


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/1ac60198
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/1ac60198
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/1ac60198

Branch: refs/heads/trunk
Commit: 1ac601980382516419bfe45b01ce1b8eccf4d9ce
Parents: 0ffa5c2
Author: Sylvain Lebresne 
Authored: Mon Nov 18 17:30:55 2013 +0100
Committer: Sylvain Lebresne 
Committed: Mon Nov 18 17:30:55 2013 +0100

--
 src/java/org/apache/cassandra/thrift/CassandraServer.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/1ac60198/src/java/org/apache/cassandra/thrift/CassandraServer.java
--
diff --git a/src/java/org/apache/cassandra/thrift/CassandraServer.java 
b/src/java/org/apache/cassandra/thrift/CassandraServer.java
index 5b9fbfd..883ab5a 100644
--- a/src/java/org/apache/cassandra/thrift/CassandraServer.java
+++ b/src/java/org/apache/cassandra/thrift/CassandraServer.java
@@ -1407,7 +1407,7 @@ public class CassandraServer implements Cassandra.Iface
 if (oldCfm == null)
 throw new InvalidRequestException("Could not find column 
family definition to modify.");
 
-if (!oldCfm.isThriftIncompatible())
+if (oldCfm.isThriftIncompatible())
 throw new InvalidRequestException("Cannot modify CQL3 table " 
+ oldCfm.cfName + " as it may break the schema. You should use cqlsh to modify 
CQL3 tables instead.");
 
 state().hasColumnFamilyAccess(cf_def.keyspace, cf_def.name, 
Permission.ALTER);



[2/7] git commit: Disallow updating CQL3 tables from thrift

2013-11-18 Thread slebresne
Disallow updating CQL3 tables from thrift

patch by slebresne; reviewed by jbellis for CASSANDRA-6370


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/34645c37
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/34645c37
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/34645c37

Branch: refs/heads/trunk
Commit: 34645c37d1e47b60cb20d3e3cd0e0376e8f92ae5
Parents: a301ea1
Author: Sylvain Lebresne 
Authored: Mon Nov 18 17:16:11 2013 +0100
Committer: Sylvain Lebresne 
Committed: Mon Nov 18 17:16:11 2013 +0100

--
 CHANGES.txt   | 1 +
 NEWS.txt  | 9 +
 src/java/org/apache/cassandra/thrift/CassandraServer.java | 3 +++
 3 files changed, 13 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/34645c37/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index a438f15..29e87d8 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -21,6 +21,7 @@
  * Fix AssertionError when doing set element deletion (CASSANDRA-6341)
  * Make CL code for the native protocol match the one in C* 2.0
(CASSANDRA-6347)
+ * Disallow altering CQL3 table from thrift (CASSANDRA-6370)
 
 
 1.2.11

http://git-wip-us.apache.org/repos/asf/cassandra/blob/34645c37/NEWS.txt
--
diff --git a/NEWS.txt b/NEWS.txt
index d803f02..915729a 100644
--- a/NEWS.txt
+++ b/NEWS.txt
@@ -14,6 +14,15 @@ restore snapshots created with the previous major version 
using the
 using the provided 'sstableupgrade' tool.
 
 
+1.2.12
+==
+
+Upgrading
+-
+- Altering CQL3 tables from Thrift is now rejected as this had the
+  potential of corrupting the schema. You should use cqlsh otherwise.
+
+
 1.2.11
 ==
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/34645c37/src/java/org/apache/cassandra/thrift/CassandraServer.java
--
diff --git a/src/java/org/apache/cassandra/thrift/CassandraServer.java 
b/src/java/org/apache/cassandra/thrift/CassandraServer.java
index 9063be1..5b9fbfd 100644
--- a/src/java/org/apache/cassandra/thrift/CassandraServer.java
+++ b/src/java/org/apache/cassandra/thrift/CassandraServer.java
@@ -1407,6 +1407,9 @@ public class CassandraServer implements Cassandra.Iface
 if (oldCfm == null)
 throw new InvalidRequestException("Could not find column 
family definition to modify.");
 
+if (!oldCfm.isThriftIncompatible())
+throw new InvalidRequestException("Cannot modify CQL3 table " 
+ oldCfm.cfName + " as it may break the schema. You should use cqlsh to modify 
CQL3 tables instead.");
+
 state().hasColumnFamilyAccess(cf_def.keyspace, cf_def.name, 
Permission.ALTER);
 
 CFMetaData.applyImplicitDefaults(cf_def);



[1/7] git commit: Improve error message

2013-11-18 Thread slebresne
Updated Branches:
  refs/heads/trunk e50d89dd2 -> 542d9c8d1


Improve error message


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a301ea14
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a301ea14
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a301ea14

Branch: refs/heads/trunk
Commit: a301ea14293f046dc4ff10c84aaef192fbc70c0a
Parents: 2b1fb0f
Author: Sylvain Lebresne 
Authored: Mon Nov 18 15:53:42 2013 +0100
Committer: Sylvain Lebresne 
Committed: Mon Nov 18 15:53:42 2013 +0100

--
 src/java/org/apache/cassandra/dht/Murmur3Partitioner.java | 9 -
 1 file changed, 8 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/a301ea14/src/java/org/apache/cassandra/dht/Murmur3Partitioner.java
--
diff --git a/src/java/org/apache/cassandra/dht/Murmur3Partitioner.java 
b/src/java/org/apache/cassandra/dht/Murmur3Partitioner.java
index 502f5cc..fe1c24f 100644
--- a/src/java/org/apache/cassandra/dht/Murmur3Partitioner.java
+++ b/src/java/org/apache/cassandra/dht/Murmur3Partitioner.java
@@ -185,7 +185,14 @@ public class Murmur3Partitioner extends 
AbstractPartitioner
 
 public Token fromString(String string)
 {
-return new LongToken(Long.valueOf(string));
+try
+{
+return new LongToken(Long.valueOf(string));
+}
+catch (NumberFormatException e)
+{
+throw new IllegalArgumentException(String.format("Invalid 
token for Murmur3Partitioner. Got %s but expected a long value (unsigned 8 
bytes integer)."));
+}
 }
 };
 



[4/7] git commit: Merge branch 'cassandra-1.2' into cassandra-2.0

2013-11-18 Thread slebresne
Merge branch 'cassandra-1.2' into cassandra-2.0

Conflicts:
NEWS.txt
src/java/org/apache/cassandra/cql3/QueryProcessor.java
src/java/org/apache/cassandra/cql3/statements/BatchStatement.java
src/java/org/apache/cassandra/cql3/statements/DeleteStatement.java
src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
src/java/org/apache/cassandra/cql3/statements/UpdateStatement.java
src/java/org/apache/cassandra/thrift/CassandraServer.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/25471bac
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/25471bac
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/25471bac

Branch: refs/heads/trunk
Commit: 25471bac3527c9fc54c815626f9266d5ea8508da
Parents: a7a7ede 0ffa5c2
Author: Sylvain Lebresne 
Authored: Mon Nov 18 17:30:15 2013 +0100
Committer: Sylvain Lebresne 
Committed: Mon Nov 18 17:30:15 2013 +0100

--
 CHANGES.txt |  2 ++
 .../cql3/MeasurableForPreparedCache.java| 26 
 .../apache/cassandra/cql3/QueryProcessor.java   |  9 +--
 .../cql3/statements/BatchStatement.java | 12 -
 .../cql3/statements/ModificationStatement.java  |  9 ++-
 .../cql3/statements/SelectStatement.java|  8 +-
 .../cassandra/dht/Murmur3Partitioner.java   |  9 ++-
 .../cassandra/thrift/CassandraServer.java   |  2 ++
 8 files changed, 71 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/25471bac/CHANGES.txt
--
diff --cc CHANGES.txt
index 57ad75d,d7395a6..7b2db56
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -48,42 -21,11 +48,44 @@@ Merged from 1.2
   * Fix AssertionError when doing set element deletion (CASSANDRA-6341)
   * Make CL code for the native protocol match the one in C* 2.0
 (CASSANDRA-6347)
+  * Disallow altering CQL3 table from thrift (CASSANDRA-6370)
+  * Fix size computation of prepared statement (CASSANDRA-6369)
  
  
 -1.2.11
 +2.0.2
 + * Update FailureDetector to use nanontime (CASSANDRA-4925)
 + * Fix FileCacheService regressions (CASSANDRA-6149)
 + * Never return WriteTimeout for CL.ANY (CASSANDRA-6032)
 + * Fix race conditions in bulk loader (CASSANDRA-6129)
 + * Add configurable metrics reporting (CASSANDRA-4430)
 + * drop queries exceeding a configurable number of tombstones (CASSANDRA-6117)
 + * Track and persist sstable read activity (CASSANDRA-5515)
 + * Fixes for speculative retry (CASSANDRA-5932, CASSANDRA-6194)
 + * Improve memory usage of metadata min/max column names (CASSANDRA-6077)
 + * Fix thrift validation refusing row markers on CQL3 tables (CASSANDRA-6081)
 + * Fix insertion of collections with CAS (CASSANDRA-6069)
 + * Correctly send metadata on SELECT COUNT (CASSANDRA-6080)
 + * Track clients' remote addresses in ClientState (CASSANDRA-6070)
 + * Create snapshot dir if it does not exist when migrating
 +   leveled manifest (CASSANDRA-6093)
 + * make sequential nodetool repair the default (CASSANDRA-5950)
 + * Add more hooks for compaction strategy implementations (CASSANDRA-6111)
 + * Fix potential NPE on composite 2ndary indexes (CASSANDRA-6098)
 + * Delete can potentially be skipped in batch (CASSANDRA-6115)
 + * Allow alter keyspace on system_traces (CASSANDRA-6016)
 + * Disallow empty column names in cql (CASSANDRA-6136)
 + * Use Java7 file-handling APIs and fix file moving on Windows 
(CASSANDRA-5383)
 + * Save compaction history to system keyspace (CASSANDRA-5078)
 + * Fix NPE if StorageService.getOperationMode() is executed before full 
startup (CASSANDRA-6166)
 + * CQL3: support pre-epoch longs for TimestampType (CASSANDRA-6212)
 + * Add reloadtriggers command to nodetool (CASSANDRA-4949)
 + * cqlsh: ignore empty 'value alias' in DESCRIBE (CASSANDRA-6139)
 + * Fix sstable loader (CASSANDRA-6205)
 + * Reject bootstrapping if the node already exists in gossip (CASSANDRA-5571)
 + * Fix NPE while loading paxos state (CASSANDRA-6211)
 + * cqlsh: add SHOW SESSION  command (CASSANDRA-6228)
 +Merged from 1.2:
 + * (Hadoop) Require CFRR batchSize to be at least 2 (CASSANDRA-6114)
   * Add a warning for small LCS sstable size (CASSANDRA-6191)
   * Add ability to list specific KS/CF combinations in nodetool cfstats 
(CASSANDRA-4191)
   * Mark CF clean if a mutation raced the drop and got it marked dirty 
(CASSANDRA-5946)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/25471bac/src/java/org/apache/cassandra/cql3/MeasurableForPreparedCache.java
--
diff --cc src/java/org/apache/cassandra/cql3/MeasurableFo

[6/7] git commit: Merge branch 'cassandra-1.2' into cassandra-2.0

2013-11-18 Thread slebresne
Merge branch 'cassandra-1.2' into cassandra-2.0

Conflicts:
src/java/org/apache/cassandra/thrift/CassandraServer.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/fd6f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/fd6f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/fd6f

Branch: refs/heads/trunk
Commit: fd6ff7464bede6dbcc634ffd3ff8cf1650e7
Parents: 25471ba 1ac6019
Author: Sylvain Lebresne 
Authored: Mon Nov 18 17:31:49 2013 +0100
Committer: Sylvain Lebresne 
Committed: Mon Nov 18 17:31:49 2013 +0100

--

--




[3/7] git commit: Fix size computation of prepared statements

2013-11-18 Thread slebresne
Fix size computation of prepared statements

patch by slebresne; reviewed by lyubent for CASSANDRA-6369


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/0ffa5c20
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/0ffa5c20
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/0ffa5c20

Branch: refs/heads/trunk
Commit: 0ffa5c20af381b697d25f19b7a987fef8fcc2e92
Parents: 34645c3
Author: Sylvain Lebresne 
Authored: Mon Nov 18 17:17:46 2013 +0100
Committer: Sylvain Lebresne 
Committed: Mon Nov 18 17:17:46 2013 +0100

--
 CHANGES.txt  |  1 +
 src/java/org/apache/cassandra/cql3/QueryProcessor.java   | 10 +++---
 .../apache/cassandra/cql3/statements/BatchStatement.java | 11 +++
 .../cassandra/cql3/statements/DeleteStatement.java   |  7 +++
 .../cassandra/cql3/statements/ModificationStatement.java |  2 +-
 .../cassandra/cql3/statements/SelectStatement.java   |  8 +++-
 .../cassandra/cql3/statements/UpdateStatement.java   |  7 +++
 7 files changed, 41 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/0ffa5c20/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 29e87d8..d7395a6 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -22,6 +22,7 @@
  * Make CL code for the native protocol match the one in C* 2.0
(CASSANDRA-6347)
  * Disallow altering CQL3 table from thrift (CASSANDRA-6370)
+ * Fix size computation of prepared statement (CASSANDRA-6369)
 
 
 1.2.11

http://git-wip-us.apache.org/repos/asf/cassandra/blob/0ffa5c20/src/java/org/apache/cassandra/cql3/QueryProcessor.java
--
diff --git a/src/java/org/apache/cassandra/cql3/QueryProcessor.java 
b/src/java/org/apache/cassandra/cql3/QueryProcessor.java
index 2d43bdc..40b9339 100644
--- a/src/java/org/apache/cassandra/cql3/QueryProcessor.java
+++ b/src/java/org/apache/cassandra/cql3/QueryProcessor.java
@@ -70,7 +70,7 @@ public class QueryProcessor
 private static final ConcurrentLinkedHashMap 
preparedStatements;
 private static final ConcurrentLinkedHashMap 
thriftPreparedStatements;
 
-static 
+static
 {
 if (MemoryMeter.isInitialized())
 {
@@ -96,7 +96,6 @@ public class QueryProcessor
 }
 }
 
-
 public static CQLStatement getPrepared(MD5Digest id)
 {
 return preparedStatements.get(id);
@@ -328,6 +327,11 @@ public class QueryProcessor
 
 private static long measure(Object key)
 {
-return MemoryMeter.isInitialized() ? meter.measureDeep(key) : 1;
+if (!MemoryMeter.isInitialized())
+return 1;
+
+return key instanceof MeasurableForPreparedCache
+ ? ((MeasurableForPreparedCache)key).measureForPreparedCache(meter)
+ : meter.measureDeep(key);
 }
 }

http://git-wip-us.apache.org/repos/asf/cassandra/blob/0ffa5c20/src/java/org/apache/cassandra/cql3/statements/BatchStatement.java
--
diff --git a/src/java/org/apache/cassandra/cql3/statements/BatchStatement.java 
b/src/java/org/apache/cassandra/cql3/statements/BatchStatement.java
index f93eb63..d211eb9 100644
--- a/src/java/org/apache/cassandra/cql3/statements/BatchStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/BatchStatement.java
@@ -20,6 +20,8 @@ package org.apache.cassandra.cql3.statements;
 import java.nio.ByteBuffer;
 import java.util.*;
 
+import org.github.jamm.MemoryMeter;
+
 import org.apache.cassandra.auth.Permission;
 import org.apache.cassandra.cql3.*;
 import org.apache.cassandra.db.ConsistencyLevel;
@@ -29,6 +31,7 @@ import org.apache.cassandra.db.RowMutation;
 import org.apache.cassandra.exceptions.*;
 import org.apache.cassandra.service.ClientState;
 import org.apache.cassandra.utils.Pair;
+import org.apache.cassandra.utils.ObjectSizes;
 
 /**
  * A BATCH statement parsed from a CQL query.
@@ -54,6 +57,14 @@ public class BatchStatement extends ModificationStatement
 this.statements = statements;
 }
 
+public long measureForPreparedCache(MemoryMeter meter)
+{
+long size = meter.measure(this) + meter.measure(statements);
+for (ModificationStatement stmt : statements)
+size += stmt.measureForPreparedCache(meter);
+return size;
+}
+
 @Override
 public void prepareKeyspace(ClientState state) throws 
InvalidRequestException
 {

http://git-wip-us.apache.org/repos/asf/cassandra/blob/0ffa5c20/src/java/org/apache/cassandra/cql3/statements/DeleteStatement.java
--
diff --git a/src/java/o

[7/7] git commit: Merge branch 'cassandra-2.0' into trunk

2013-11-18 Thread slebresne
Merge branch 'cassandra-2.0' into trunk

Conflicts:
src/java/org/apache/cassandra/cql3/statements/SelectStatement.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/542d9c8d
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/542d9c8d
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/542d9c8d

Branch: refs/heads/trunk
Commit: 542d9c8d15e25a26a22da10b88dbf72c491fbf91
Parents: e50d89d fd6
Author: Sylvain Lebresne 
Authored: Mon Nov 18 17:34:13 2013 +0100
Committer: Sylvain Lebresne 
Committed: Mon Nov 18 17:34:13 2013 +0100

--
 CHANGES.txt |  2 ++
 .../cql3/MeasurableForPreparedCache.java| 26 
 .../apache/cassandra/cql3/QueryProcessor.java   |  9 +--
 .../cql3/statements/BatchStatement.java | 12 -
 .../cql3/statements/ModificationStatement.java  |  9 ++-
 .../cql3/statements/SelectStatement.java|  8 +-
 .../cassandra/dht/Murmur3Partitioner.java   |  9 ++-
 .../cassandra/thrift/CassandraServer.java   |  2 ++
 8 files changed, 71 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/542d9c8d/CHANGES.txt
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/542d9c8d/src/java/org/apache/cassandra/cql3/QueryProcessor.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/542d9c8d/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
--
diff --cc 
src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
index 793b4c6,8833f34..25f59c7
--- a/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
@@@ -20,9 -20,10 +20,11 @@@ package org.apache.cassandra.cql3.state
  import java.nio.ByteBuffer;
  import java.util.*;
  
+ import org.github.jamm.MemoryMeter;
+ 
  import org.apache.cassandra.auth.Permission;
  import org.apache.cassandra.config.CFMetaData;
 +import org.apache.cassandra.config.ColumnDefinition;
  import org.apache.cassandra.cql3.*;
  import org.apache.cassandra.db.*;
  import org.apache.cassandra.db.filter.ColumnSlice;

http://git-wip-us.apache.org/repos/asf/cassandra/blob/542d9c8d/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
--
diff --cc src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
index 4b57766,b94e549..344e926
--- a/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
@@@ -24,7 -24,7 +24,8 @@@ import com.google.common.base.Objects
  import com.google.common.base.Predicate;
  import com.google.common.collect.AbstractIterator;
  import com.google.common.collect.Iterables;
 +import com.google.common.collect.Iterators;
+ import org.github.jamm.MemoryMeter;
  
  import org.apache.cassandra.auth.Permission;
  import org.apache.cassandra.cql3.*;
@@@ -108,6 -110,11 +109,11 @@@ public class SelectStatement implement
   : selection.getResultMetadata();
  }
  
+ public long measureForPreparedCache(MemoryMeter meter)
+ {
 -return meter.measureDeep(this) - meter.measureDeep(cfDef);
++return meter.measureDeep(this) - meter.measureDeep(cfm);
+ }
+ 
  public int getBoundsTerms()
  {
  return boundTerms;

http://git-wip-us.apache.org/repos/asf/cassandra/blob/542d9c8d/src/java/org/apache/cassandra/thrift/CassandraServer.java
--



[jira] [Updated] (CASSANDRA-6370) Updating cql created table through cassandra-cli transform it into a compact storage table

2013-11-18 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-6370:
-

Reviewer: Jonathan Ellis  (was: Aleksey Yeschenko)

> Updating cql created table through cassandra-cli transform it into a compact 
> storage table
> --
>
> Key: CASSANDRA-6370
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6370
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Alain RODRIGUEZ
>Assignee: Sylvain Lebresne
>Priority: Critical
> Fix For: 1.2.12
>
> Attachments: 6370.txt
>
>
> To reproduce :
> echo "CREATE TABLE test (aid int, period text, event text, viewer text, 
> PRIMARY KEY (aid, period, event, viewer) );" | cqlsh -kmykeyspace;
> echo "describe table test;" | cqlsh -kmykeyspace;
> Output >
> CREATE TABLE test (
>   aid int,
>   period text,
>   event text,
>   viewer text,
>   PRIMARY KEY (aid, period, event, viewer)
> ) WITH
>   bloom_filter_fp_chance=0.01 AND
>   caching='KEYS_ONLY' AND
>   comment='' AND
>   dclocal_read_repair_chance=0.00 AND
>   gc_grace_seconds=864000 AND
>   read_repair_chance=0.10 AND
>   replicate_on_write='true' AND
>   populate_io_cache_on_flush='false' AND
>   compaction={'class': 'SizeTieredCompactionStrategy'} AND
>   compression={'sstable_compression': 'SnappyCompressor'};
> Then do :
> echo "update column family test with dclocal_read_repair_chance = 0.1;" | 
> cassandra-cli -kmykeyspace
> And finally again : echo "describe table test;" | cqlsh -kmykeyspace;
> Output >
> CREATE TABLE test (
>   aid int,
>   column1 text,
>   column2 text,
>   column3 text,
>   column4 text,
>   value blob,
>   PRIMARY KEY (aid, column1, column2, column3, column4)
> ) WITH COMPACT STORAGE AND
>   bloom_filter_fp_chance=0.01 AND
>   caching='KEYS_ONLY' AND
>   comment='' AND
>   dclocal_read_repair_chance=0.10 AND
>   gc_grace_seconds=864000 AND
>   read_repair_chance=0.10 AND
>   replicate_on_write='true' AND
>   populate_io_cache_on_flush='false' AND
>   compaction={'class': 'SizeTieredCompactionStrategy'} AND
>   compression={'sstable_compression': 'SnappyCompressor'};
> This is quite annoying in production. If it is happening to you: 
> UPDATE system.schema_columnfamilies SET column_aliases = 
> '["period","event","viewer"]' WHERE keyspace_name='mykeyspace' AND 
> columnfamily_name='test'; should help restoring the table. (Thanks Sylvain 
> for this information.)



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Comment Edited] (CASSANDRA-6370) Updating cql created table through cassandra-cli transform it into a compact storage table

2013-11-18 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13825448#comment-13825448
 ] 

Aleksey Yeschenko edited comment on CASSANDRA-6370 at 11/18/13 4:33 PM:


Edit: d-oh.


was (Author: iamaleksey):
+1

> Updating cql created table through cassandra-cli transform it into a compact 
> storage table
> --
>
> Key: CASSANDRA-6370
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6370
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Alain RODRIGUEZ
>Assignee: Sylvain Lebresne
>Priority: Critical
> Fix For: 1.2.12
>
> Attachments: 6370.txt
>
>
> To reproduce :
> echo "CREATE TABLE test (aid int, period text, event text, viewer text, 
> PRIMARY KEY (aid, period, event, viewer) );" | cqlsh -kmykeyspace;
> echo "describe table test;" | cqlsh -kmykeyspace;
> Output >
> CREATE TABLE test (
>   aid int,
>   period text,
>   event text,
>   viewer text,
>   PRIMARY KEY (aid, period, event, viewer)
> ) WITH
>   bloom_filter_fp_chance=0.01 AND
>   caching='KEYS_ONLY' AND
>   comment='' AND
>   dclocal_read_repair_chance=0.00 AND
>   gc_grace_seconds=864000 AND
>   read_repair_chance=0.10 AND
>   replicate_on_write='true' AND
>   populate_io_cache_on_flush='false' AND
>   compaction={'class': 'SizeTieredCompactionStrategy'} AND
>   compression={'sstable_compression': 'SnappyCompressor'};
> Then do :
> echo "update column family test with dclocal_read_repair_chance = 0.1;" | 
> cassandra-cli -kmykeyspace
> And finally again : echo "describe table test;" | cqlsh -kmykeyspace;
> Output >
> CREATE TABLE test (
>   aid int,
>   column1 text,
>   column2 text,
>   column3 text,
>   column4 text,
>   value blob,
>   PRIMARY KEY (aid, column1, column2, column3, column4)
> ) WITH COMPACT STORAGE AND
>   bloom_filter_fp_chance=0.01 AND
>   caching='KEYS_ONLY' AND
>   comment='' AND
>   dclocal_read_repair_chance=0.10 AND
>   gc_grace_seconds=864000 AND
>   read_repair_chance=0.10 AND
>   replicate_on_write='true' AND
>   populate_io_cache_on_flush='false' AND
>   compaction={'class': 'SizeTieredCompactionStrategy'} AND
>   compression={'sstable_compression': 'SnappyCompressor'};
> This is quite annoying in production. If it is happening to you: 
> UPDATE system.schema_columnfamilies SET column_aliases = 
> '["period","event","viewer"]' WHERE keyspace_name='mykeyspace' AND 
> columnfamily_name='test'; should help restoring the table. (Thanks Sylvain 
> for this information.)



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6348) TimeoutException throws if Cql query allows data filtering and index is too big and it can't find the data in base CF after filtering

2013-11-18 Thread Alex Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13825496#comment-13825496
 ] 

Alex Liu commented on CASSANDRA-6348:
-

I forgot to put the "ALLOW FILTERING" in the clauses. The issue is raised 
during the Hadoop performance testing on indexed columns(The test case indexes 
on the columns which results in too big index). Hadoop Cql query uses "ALLOW 
FILTERING", user can provide user defined where clauses which might have data 
filtering on multiple columns. But the hadoop user may not understand fully 
what does data filtering work under the hood.

 Other than hadoop queries, It's common for user to query on multiple indexes, 
we should explain more detail about when the "ALLOW FILTERING" results in bad 
performance and which case leads to timeout exception in the following 
exception. 

{code}
Cannot execute this query as it might involve data filtering and thus may have 
unpredictable performance. If you want to execute this query despite the 
performance unpredictability, use ALLOW FILTERING
{code}

For most of the cases, "ALLOW FILTERING" improves performance. We can't assume 
that user can fully understand "ALLOW FILTERING" under the hood. I even spend 
quite some time on CASSANDRA-6048 to understand more about data filtering.



> TimeoutException throws if Cql query allows data filtering and index is too 
> big and it can't find the data in base CF after filtering 
> --
>
> Key: CASSANDRA-6348
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6348
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Alex Liu
>Assignee: Alex Liu
>
> If index row is too big, and filtering can't find the match Cql row in base 
> CF, it keep scanning the index row and retrieving base CF until the index row 
> is scanned completely which may take too long and thrift server returns 
> TimeoutException. This is one of the reasons why we shouldn't index a column 
> if the index is too big.
> Multiple indexes merging can resolve the case where there are only EQUAL 
> clauses. (CASSANDRA-6048 addresses it).
> If the query has none-EQUAL clauses, we still need do data filtering which 
> might lead to timeout exception.
> We can either disable those kind of queries or WARN the user that data 
> filtering might lead to timeout exception or OOM.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-4511) Secondary index support for CQL3 collections

2013-11-18 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13825504#comment-13825504
 ] 

Jonathan Ellis commented on CASSANDRA-4511:
---

How could this extend to user types?

> Secondary index support for CQL3 collections 
> -
>
> Key: CASSANDRA-4511
> URL: https://issues.apache.org/jira/browse/CASSANDRA-4511
> Project: Cassandra
>  Issue Type: Improvement
>Affects Versions: 1.2.0 beta 1
>Reporter: Sylvain Lebresne
>Assignee: Sylvain Lebresne
> Fix For: 2.1
>
> Attachments: 4511.txt
>
>
> We should allow to 2ndary index on collections. A typical use case would be 
> to add a 'tag set' to say a user profile and to query users based on 
> what tag they have.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6345) Endpoint cache invalidation causes CPU spike (on vnode rings?)

2013-11-18 Thread Rick Branson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13825614#comment-13825614
 ] 

Rick Branson commented on CASSANDRA-6345:
-

I like the simpler approach. I still think the callbacks for invalidation are 
asking for it ;) I also think perhaps the stampede lock should be more explicit 
than a synchronized lock on "this" to prevent unintended blocking from future 
modifications.

Either way, I think the only material concern I have is the order that 
TokenMetadata changes get applied to the caches in AbstractReplicationStrategy 
instances. Shouldn't the invalidation take place on all threads in all 
instances of AbstractReplicationStrategy before returning from an 
endpoint-mutating write operation in TokenMetadata? It seems as if just setting 
the cache to empty would allow a period of time where TokenMetadata write 
methods had returned but not all threads have seen the mutation yet because 
they are still holding onto the old clone of TM. This might be alright though, 
I'm not sure. Thoughts?

> Endpoint cache invalidation causes CPU spike (on vnode rings?)
> --
>
> Key: CASSANDRA-6345
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6345
> Project: Cassandra
>  Issue Type: Bug
> Environment: 30 nodes total, 2 DCs
> Cassandra 1.2.11
> vnodes enabled (256 per node)
>Reporter: Rick Branson
>Assignee: Jonathan Ellis
> Fix For: 1.2.12, 2.0.3
>
> Attachments: 6345-rbranson-v2.txt, 6345-rbranson.txt, 6345-v2.txt, 
> 6345-v3.txt, 6345.txt, half-way-thru-6345-rbranson-patch-applied.png
>
>
> We've observed that events which cause invalidation of the endpoint cache 
> (update keyspace, add/remove nodes, etc) in AbstractReplicationStrategy 
> result in several seconds of thundering herd behavior on the entire cluster. 
> A thread dump shows over a hundred threads (I stopped counting at that point) 
> with a backtrace like this:
> at java.net.Inet4Address.getAddress(Inet4Address.java:288)
> at 
> org.apache.cassandra.locator.TokenMetadata$1.compare(TokenMetadata.java:106)
> at 
> org.apache.cassandra.locator.TokenMetadata$1.compare(TokenMetadata.java:103)
> at java.util.TreeMap.getEntryUsingComparator(TreeMap.java:351)
> at java.util.TreeMap.getEntry(TreeMap.java:322)
> at java.util.TreeMap.get(TreeMap.java:255)
> at 
> com.google.common.collect.AbstractMultimap.put(AbstractMultimap.java:200)
> at 
> com.google.common.collect.AbstractSetMultimap.put(AbstractSetMultimap.java:117)
> at com.google.common.collect.TreeMultimap.put(TreeMultimap.java:74)
> at 
> com.google.common.collect.AbstractMultimap.putAll(AbstractMultimap.java:273)
> at com.google.common.collect.TreeMultimap.putAll(TreeMultimap.java:74)
> at 
> org.apache.cassandra.utils.SortedBiMultiValMap.create(SortedBiMultiValMap.java:60)
> at 
> org.apache.cassandra.locator.TokenMetadata.cloneOnlyTokenMap(TokenMetadata.java:598)
> at 
> org.apache.cassandra.locator.AbstractReplicationStrategy.getNaturalEndpoints(AbstractReplicationStrategy.java:104)
> at 
> org.apache.cassandra.service.StorageService.getNaturalEndpoints(StorageService.java:2671)
> at 
> org.apache.cassandra.service.StorageProxy.performWrite(StorageProxy.java:375)
> It looks like there's a large amount of cost in the 
> TokenMetadata.cloneOnlyTokenMap that 
> AbstractReplicationStrategy.getNaturalEndpoints is calling each time there is 
> a cache miss for an endpoint. It seems as if this would only impact clusters 
> with large numbers of tokens, so it's probably a vnodes-only issue.
> Proposal: In AbstractReplicationStrategy.getNaturalEndpoints(), cache the 
> cloned TokenMetadata instance returned by TokenMetadata.cloneOnlyTokenMap(), 
> wrapping it with a lock to prevent stampedes, and clearing it in 
> clearEndpointCache(). Thoughts?



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-5892) Support user defined where clause in cluster columns for CqlPagingRecordRead

2013-11-18 Thread Alex Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13825646#comment-13825646
 ] 

Alex Liu commented on CASSANDRA-5892:
-

This is auto-fixed in CASSANDRA-6311 which will be in C* 2.x release

> Support user defined where clause in cluster columns for CqlPagingRecordRead
> 
>
> Key: CASSANDRA-5892
> URL: https://issues.apache.org/jira/browse/CASSANDRA-5892
> Project: Cassandra
>  Issue Type: Bug
>  Components: Hadoop
>Reporter: Adam Masters
>Assignee: Alex Liu
>Priority: Minor
> Fix For: 2.0.3
>
> Attachments: 5892-1.2-branch.txt, 5892-v2-1.2-branch.txt
>
>
> When using CqlPagingRecordReader, specifying a custom where clause using 
> CqlConfigHelper.setInputWhereClauses() throws an exception when a GT (>) 
> comparator is used.
> Exception:
> java.lang.RuntimeException at 
> org.apache.cassandra.hadoop.cql3.CqlPagingRecordReader$RowIterator.executeQuery(CqlPagingRecordReader.java:646)
>  Caused by: InvalidRequestException(why:Invalid restrictions found on ts) at 
> org.apache.cassandra.thrift.Cassandra$prepare_cql3_query_result.read(Cassandra.java:39567)
> This is due to the paging mechanism inserting a GT comparator on the same 
> composite key as the custom where clause, resulting in an invalid CQL 
> statement. For example ("ts > '6349263850'" being the custom where 
> clause):
> SELECT * FROM "test_cf"
> WHERE token("key") = token( ? )  AND "ts" > ?
> AND ts > '6349263850' LIMIT 3 ALLOW FILTERING



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (CASSANDRA-6311) Update CqlPagingRecordReader to use the latest Cql pagination

2013-11-18 Thread Alex Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Liu updated CASSANDRA-6311:


Attachment: 6331-v2-2.0-branch.txt

Pig support for native reader

> Update CqlPagingRecordReader to use the latest Cql pagination
> -
>
> Key: CASSANDRA-6311
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6311
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Hadoop
>Reporter: Alex Liu
>Assignee: Alex Liu
> Fix For: 2.0.3
>
> Attachments: 6331-2.0-branch.txt, 6331-v2-2.0-branch.txt
>
>
> Since the latest Cql pagination is done and it should be more efficient, so 
> we need update CqlPagingRecordReader to use it instead of the custom thrift 
> paging.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Comment Edited] (CASSANDRA-6311) Update CqlPagingRecordReader to use the latest Cql pagination

2013-11-18 Thread Alex Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13825700#comment-13825700
 ] 

Alex Liu edited comment on CASSANDRA-6311 at 11/18/13 8:02 PM:
---

Pig support for native reader. The following parameters are added for Pig.

[&native_port=][&core_conns=][&max_conns=]
[&min_simult_reqs=][&max_simult_reqs=]
[&native_timeout=][&native_read_timeout=]
[&rec_buff_size=][&send_buff_size=][&solinger=]
[&tcp_nodelay=][&reuse_address=][&keep_alive=]
[&auth_provider=][&trust_store_path=]
[&key_store_path=][&trust_store_password=]
[&key_store_password=][&cipher_suites=]


was (Author: alexliu68):
Pig support for native reader

> Update CqlPagingRecordReader to use the latest Cql pagination
> -
>
> Key: CASSANDRA-6311
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6311
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Hadoop
>Reporter: Alex Liu
>Assignee: Alex Liu
> Fix For: 2.0.3
>
> Attachments: 6331-2.0-branch.txt, 6331-v2-2.0-branch.txt
>
>
> Since the latest Cql pagination is done and it should be more efficient, so 
> we need update CqlPagingRecordReader to use it instead of the custom thrift 
> paging.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6311) Update CqlPagingRecordReader to use the latest Cql pagination

2013-11-18 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13825716#comment-13825716
 ] 

Jonathan Ellis commented on CASSANDRA-6311:
---

We should take advantage of changing the class name to simplifying the API.  
With server-side paging, we should be able to just accept a CQL statement as 
our configuration rather than breaking it out into CF/columns/whereclause.

(Which in turn means we'll want to keep the old CPRR around for a version or 
two for compatibility.

> Update CqlPagingRecordReader to use the latest Cql pagination
> -
>
> Key: CASSANDRA-6311
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6311
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Hadoop
>Reporter: Alex Liu
>Assignee: Alex Liu
> Fix For: 2.0.3
>
> Attachments: 6331-2.0-branch.txt, 6331-v2-2.0-branch.txt
>
>
> Since the latest Cql pagination is done and it should be more efficient, so 
> we need update CqlPagingRecordReader to use it instead of the custom thrift 
> paging.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (CASSANDRA-6371) cassandra-driver-core-2.0.0-rc1.jar fails in this case

2013-11-18 Thread Jacob Rhoden (JIRA)
Jacob Rhoden created CASSANDRA-6371:
---

 Summary: cassandra-driver-core-2.0.0-rc1.jar fails in this case
 Key: CASSANDRA-6371
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6371
 Project: Cassandra
  Issue Type: Bug
  Components: Drivers (now out of tree)
Reporter: Jacob Rhoden


Testing this out switching over to the new driver. Its mostly working except 
for one particular query (or code?) is causing the following:

com.datastax.driver.core.exceptions.DriverInternalError: Tried to execute 
unknown prepared query 0x67dfcaa71c14d42a0a7f62406b41ea3e
   com.datastax.driver.core.exceptions.DriverInternalError.copy():42
   
com.datastax.driver.core.ResultSetFuture.extractCauseFromExecutionException():271
   com.datastax.driver.core.ResultSetFuture.getUninterruptibly():187
   com.datastax.driver.core.Session.execute():126
   tap.command.GetNewsFeed.execute():72
   tap.servlet.HomeServlet.doGet():38
   javax.servlet.http.HttpServlet.service():668

Anyone encounter this one before? Any suggestions? In case its relevant, line 
72 is the for loop statement:

PreparedStatement p = s.prepare(
"select uuid,to_uuid,to_first_name, 
to_last_name,from_uuid,from_first_name,from_last_name,action,message "+
"from news_feed " +
"where person_uuid = ? " +
"order by uuid desc " +
"limit 50");
PreparedStatement q = s.prepare("select count(*) from 
post_likes where post_uuid=?");
PreparedStatement c = s.prepare("select comments from 
post_counters where uuid=?");
PreparedStatement lq = s.prepare("select person_uuid from 
post_likes where post_uuid=? and person_uuid=?");

for(Row r : s.execute(p.bind(user.getPersonUuid( {
Message m = new Message();
Person to = new Person();
to.setUuid(r.getUUID(1));
to.setFirstName(r.getString(2));
to.setLastName(r.getString(3));
Person from = new Person();
from.setUuid(r.getUUID(4));
from.setFirstName(r.getString(5));
from.setLastName(r.getString(6));
m.setUuid(r.getUUID(0));
m.setTo(to);
m.setFrom(from);
m.setAction(r.getString(7));
m.setMessage(r.getString(8));
results.add(m);

m.setLikeCount((int)s.execute(q.bind(m.getUuid())).one().getLong(0));
for(Row r2 : s.execute(c.bind(m.getUuid( {
m.setCommentCount((int)r2.getLong(0));
}
m.setLiked(s.execute(lq.bind(m.getUuid(), 
user.getPersonUuid())).iterator().hasNext());

m.setFromMe(from.getUuid().equals(user.getPersonUuid()));
m.setToMe(to.getUuid().equals(user.getPersonUuid()));
}

Reworking the code as follows avoids the problem:

public List execute() throws IOException {
List results = new LinkedList<>();

Session s = api.getCassandraSession();
PreparedStatement p = s.prepare(
"select uuid,to_uuid,to_first_name, 
to_last_name,from_uuid,from_first_name,from_last_name,action,message "+
"from news_feed " +
"where person_uuid = ? " +
"order by uuid desc " +
"limit 50");

for(Row r : s.execute(p.bind(user.getPersonUuid( {
Message m = new Message();
Person to = new Person();
to.setUuid(r.getUUID(1));
to.setFirstName(r.getString(2));
to.setLastName(r.getString(3));
Person from = new Person();
from.setUuid(r.getUUID(4));
from.setFirstName(r.getString(5));
from.setLastName(r.getString(6));
m.setUuid(r.getUUID(0));
m.setTo(to);
m.setFrom(from);
m.setAction(r.getString(7));
m.setMessage(r.getString(8));
results.add(m);

m.setFromMe(from.getUuid().equals(user.getPersonUuid()));
m.setToMe(to.getUuid().equals(user.getPersonUuid()));
}

PreparedStatement q = s.prepare("select count(*)

[jira] [Created] (CASSANDRA-6372) cassandra-driver-core-2.0.0-rc1.jar fails in this case

2013-11-18 Thread Jacob Rhoden (JIRA)
Jacob Rhoden created CASSANDRA-6372:
---

 Summary: cassandra-driver-core-2.0.0-rc1.jar fails in this case
 Key: CASSANDRA-6372
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6372
 Project: Cassandra
  Issue Type: Bug
  Components: Drivers (now out of tree)
Reporter: Jacob Rhoden


Testing this out switching over to the new driver. Its mostly working except 
for one particular query (or code?) is causing the following:

com.datastax.driver.core.exceptions.DriverInternalError: Tried to execute 
unknown prepared query 0x67dfcaa71c14d42a0a7f62406b41ea3e
   com.datastax.driver.core.exceptions.DriverInternalError.copy():42
   
com.datastax.driver.core.ResultSetFuture.extractCauseFromExecutionException():271
   com.datastax.driver.core.ResultSetFuture.getUninterruptibly():187
   com.datastax.driver.core.Session.execute():126
   tap.command.GetNewsFeed.execute():72
   tap.servlet.HomeServlet.doGet():38
   javax.servlet.http.HttpServlet.service():668

Anyone encounter this one before? Any suggestions? In case its relevant, line 
72 is the for loop statement:

PreparedStatement p = s.prepare(
"select uuid,to_uuid,to_first_name, 
to_last_name,from_uuid,from_first_name,from_last_name,action,message "+
"from news_feed " +
"where person_uuid = ? " +
"order by uuid desc " +
"limit 50");
PreparedStatement q = s.prepare("select count(*) from 
post_likes where post_uuid=?");
PreparedStatement c = s.prepare("select comments from 
post_counters where uuid=?");
PreparedStatement lq = s.prepare("select person_uuid from 
post_likes where post_uuid=? and person_uuid=?");

for(Row r : s.execute(p.bind(user.getPersonUuid( {
Message m = new Message();
Person to = new Person();
to.setUuid(r.getUUID(1));
to.setFirstName(r.getString(2));
to.setLastName(r.getString(3));
Person from = new Person();
from.setUuid(r.getUUID(4));
from.setFirstName(r.getString(5));
from.setLastName(r.getString(6));
m.setUuid(r.getUUID(0));
m.setTo(to);
m.setFrom(from);
m.setAction(r.getString(7));
m.setMessage(r.getString(8));
results.add(m);

m.setLikeCount((int)s.execute(q.bind(m.getUuid())).one().getLong(0));
for(Row r2 : s.execute(c.bind(m.getUuid( {
m.setCommentCount((int)r2.getLong(0));
}
m.setLiked(s.execute(lq.bind(m.getUuid(), 
user.getPersonUuid())).iterator().hasNext());

m.setFromMe(from.getUuid().equals(user.getPersonUuid()));
m.setToMe(to.getUuid().equals(user.getPersonUuid()));
}

Reworking the code as follows avoids the problem:

public List execute() throws IOException {
List results = new LinkedList<>();

Session s = api.getCassandraSession();
PreparedStatement p = s.prepare(
"select uuid,to_uuid,to_first_name, 
to_last_name,from_uuid,from_first_name,from_last_name,action,message "+
"from news_feed " +
"where person_uuid = ? " +
"order by uuid desc " +
"limit 50");

for(Row r : s.execute(p.bind(user.getPersonUuid( {
Message m = new Message();
Person to = new Person();
to.setUuid(r.getUUID(1));
to.setFirstName(r.getString(2));
to.setLastName(r.getString(3));
Person from = new Person();
from.setUuid(r.getUUID(4));
from.setFirstName(r.getString(5));
from.setLastName(r.getString(6));
m.setUuid(r.getUUID(0));
m.setTo(to);
m.setFrom(from);
m.setAction(r.getString(7));
m.setMessage(r.getString(8));
results.add(m);

m.setFromMe(from.getUuid().equals(user.getPersonUuid()));
m.setToMe(to.getUuid().equals(user.getPersonUuid()));
}

PreparedStatement q = s.prepare("select count(*)

[jira] [Updated] (CASSANDRA-6372) cassandra-driver-core-2.0.0-rc1.jar fails in this case

2013-11-18 Thread Jacob Rhoden (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6372?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Rhoden updated CASSANDRA-6372:


Description: 
Testing this out switching over to the new driver. Its mostly working except 
for one particular query (or code?) is causing the following:

{quote}
com.datastax.driver.core.exceptions.DriverInternalError: Tried to execute 
unknown prepared query 0x67dfcaa71c14d42a0a7f62406b41ea3e
   com.datastax.driver.core.exceptions.DriverInternalError.copy():42
   
com.datastax.driver.core.ResultSetFuture.extractCauseFromExecutionException():271
   com.datastax.driver.core.ResultSetFuture.getUninterruptibly():187
   com.datastax.driver.core.Session.execute():126
   tap.command.GetNewsFeed.execute():72
   tap.servlet.HomeServlet.doGet():38
   javax.servlet.http.HttpServlet.service():668
{quote}

Anyone encounter this one before? Any suggestions? In case its relevant, line 
72 is the for loop statement:

{quote}
PreparedStatement p = s.prepare(
"select uuid,to_uuid,to_first_name, 
to_last_name,from_uuid,from_first_name,from_last_name,action,message "+
"from news_feed " +
"where person_uuid = ? " +
"order by uuid desc " +
"limit 50");
PreparedStatement q = s.prepare("select count(*) from 
post_likes where post_uuid=?");
PreparedStatement c = s.prepare("select comments from 
post_counters where uuid=?");
PreparedStatement lq = s.prepare("select person_uuid from 
post_likes where post_uuid=? and person_uuid=?");

for(Row r : s.execute(p.bind(user.getPersonUuid( {
Message m = new Message();
Person to = new Person();
to.setUuid(r.getUUID(1));
to.setFirstName(r.getString(2));
to.setLastName(r.getString(3));
Person from = new Person();
from.setUuid(r.getUUID(4));
from.setFirstName(r.getString(5));
from.setLastName(r.getString(6));
m.setUuid(r.getUUID(0));
m.setTo(to);
m.setFrom(from);
m.setAction(r.getString(7));
m.setMessage(r.getString(8));
results.add(m);

m.setLikeCount((int)s.execute(q.bind(m.getUuid())).one().getLong(0));
for(Row r2 : s.execute(c.bind(m.getUuid( {
m.setCommentCount((int)r2.getLong(0));
}
m.setLiked(s.execute(lq.bind(m.getUuid(), 
user.getPersonUuid())).iterator().hasNext());

m.setFromMe(from.getUuid().equals(user.getPersonUuid()));
m.setToMe(to.getUuid().equals(user.getPersonUuid()));
}
{quote}

Reworking the code as follows avoids the problem:

{quote}
public List execute() throws IOException {
List results = new LinkedList<>();

Session s = api.getCassandraSession();
PreparedStatement p = s.prepare(
"select uuid,to_uuid,to_first_name, 
to_last_name,from_uuid,from_first_name,from_last_name,action,message "+
"from news_feed " +
"where person_uuid = ? " +
"order by uuid desc " +
"limit 50");

for(Row r : s.execute(p.bind(user.getPersonUuid( {
Message m = new Message();
Person to = new Person();
to.setUuid(r.getUUID(1));
to.setFirstName(r.getString(2));
to.setLastName(r.getString(3));
Person from = new Person();
from.setUuid(r.getUUID(4));
from.setFirstName(r.getString(5));
from.setLastName(r.getString(6));
m.setUuid(r.getUUID(0));
m.setTo(to);
m.setFrom(from);
m.setAction(r.getString(7));
m.setMessage(r.getString(8));
results.add(m);

m.setFromMe(from.getUuid().equals(user.getPersonUuid()));
m.setToMe(to.getUuid().equals(user.getPersonUuid()));
}

PreparedStatement q = s.prepare("select count(*) from 
post_likes where post_uuid=?");
PreparedStatement c = s.prepare("select comments from 
post_counters where uuid=?");
 

[jira] [Commented] (CASSANDRA-6372) cassandra-driver-core-2.0.0-rc1.jar fails in this case

2013-11-18 Thread Mikhail Stepura (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13825764#comment-13825764
 ] 

Mikhail Stepura commented on CASSANDRA-6372:


Most likely the problem is https://datastax-oss.atlassian.net/browse/JAVA-213 

> cassandra-driver-core-2.0.0-rc1.jar fails in this case
> --
>
> Key: CASSANDRA-6372
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6372
> Project: Cassandra
>  Issue Type: Bug
>  Components: Drivers (now out of tree)
>Reporter: Jacob Rhoden
>
> Testing this out switching over to the new driver. Its mostly working except 
> for one particular query (or code?) is causing the following:
> {quote}
> com.datastax.driver.core.exceptions.DriverInternalError: Tried to execute 
> unknown prepared query 0x67dfcaa71c14d42a0a7f62406b41ea3e
>com.datastax.driver.core.exceptions.DriverInternalError.copy():42
>
> com.datastax.driver.core.ResultSetFuture.extractCauseFromExecutionException():271
>com.datastax.driver.core.ResultSetFuture.getUninterruptibly():187
>com.datastax.driver.core.Session.execute():126
>tap.command.GetNewsFeed.execute():72
>tap.servlet.HomeServlet.doGet():38
>javax.servlet.http.HttpServlet.service():668
> {quote}
> Anyone encounter this one before? Any suggestions? In case its relevant, line 
> 72 is the for loop statement:
> {quote}
>   PreparedStatement p = s.prepare(
>   "select uuid,to_uuid,to_first_name, 
> to_last_name,from_uuid,from_first_name,from_last_name,action,message "+
>   "from news_feed " +
>   "where person_uuid = ? " +
>   "order by uuid desc " +
>   "limit 50");
>   PreparedStatement q = s.prepare("select count(*) from 
> post_likes where post_uuid=?");
>   PreparedStatement c = s.prepare("select comments from 
> post_counters where uuid=?");
>   PreparedStatement lq = s.prepare("select person_uuid from 
> post_likes where post_uuid=? and person_uuid=?");
>   for(Row r : s.execute(p.bind(user.getPersonUuid( {
>   Message m = new Message();
>   Person to = new Person();
>   to.setUuid(r.getUUID(1));
>   to.setFirstName(r.getString(2));
>   to.setLastName(r.getString(3));
>   Person from = new Person();
>   from.setUuid(r.getUUID(4));
>   from.setFirstName(r.getString(5));
>   from.setLastName(r.getString(6));
>   m.setUuid(r.getUUID(0));
>   m.setTo(to);
>   m.setFrom(from);
>   m.setAction(r.getString(7));
>   m.setMessage(r.getString(8));
>   results.add(m);
>   
> m.setLikeCount((int)s.execute(q.bind(m.getUuid())).one().getLong(0));
>   for(Row r2 : s.execute(c.bind(m.getUuid( {
>   m.setCommentCount((int)r2.getLong(0));
>   }
>   m.setLiked(s.execute(lq.bind(m.getUuid(), 
> user.getPersonUuid())).iterator().hasNext());
>   
> m.setFromMe(from.getUuid().equals(user.getPersonUuid()));
>   m.setToMe(to.getUuid().equals(user.getPersonUuid()));
>   }
> {quote}
> Reworking the code as follows avoids the problem:
> {quote}
>   public List execute() throws IOException {
>   List results = new LinkedList<>();
>   Session s = api.getCassandraSession();
>   PreparedStatement p = s.prepare(
>   "select uuid,to_uuid,to_first_name, 
> to_last_name,from_uuid,from_first_name,from_last_name,action,message "+
>   "from news_feed " +
>   "where person_uuid = ? " +
>   "order by uuid desc " +
>   "limit 50");
>   for(Row r : s.execute(p.bind(user.getPersonUuid( {
>   Message m = new Message();
>   Person to = new Person();
>   to.setUuid(r.getUUID(1));
>   to.setFirstName(r.getString(2));
>   to.setLastName(r.getString(3));
>   Person from = new Person();
>   from.setUuid(r.getUUID(4));
>   from.setFirstName(r.getString(5));
>   from.setLastName(r.getString(6));
>   m.setUuid(r.getUUID(0));
>   m.setTo(to);
>   m.s

[jira] [Resolved] (CASSANDRA-6371) cassandra-driver-core-2.0.0-rc1.jar fails in this case

2013-11-18 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-6371.
---

Resolution: Invalid

Drivers are not part of the Apache project.

> cassandra-driver-core-2.0.0-rc1.jar fails in this case
> --
>
> Key: CASSANDRA-6371
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6371
> Project: Cassandra
>  Issue Type: Bug
>  Components: Drivers (now out of tree)
>Reporter: Jacob Rhoden
>
> Testing this out switching over to the new driver. Its mostly working except 
> for one particular query (or code?) is causing the following:
> com.datastax.driver.core.exceptions.DriverInternalError: Tried to execute 
> unknown prepared query 0x67dfcaa71c14d42a0a7f62406b41ea3e
>com.datastax.driver.core.exceptions.DriverInternalError.copy():42
>
> com.datastax.driver.core.ResultSetFuture.extractCauseFromExecutionException():271
>com.datastax.driver.core.ResultSetFuture.getUninterruptibly():187
>com.datastax.driver.core.Session.execute():126
>tap.command.GetNewsFeed.execute():72
>tap.servlet.HomeServlet.doGet():38
>javax.servlet.http.HttpServlet.service():668
> Anyone encounter this one before? Any suggestions? In case its relevant, line 
> 72 is the for loop statement:
>   PreparedStatement p = s.prepare(
>   "select uuid,to_uuid,to_first_name, 
> to_last_name,from_uuid,from_first_name,from_last_name,action,message "+
>   "from news_feed " +
>   "where person_uuid = ? " +
>   "order by uuid desc " +
>   "limit 50");
>   PreparedStatement q = s.prepare("select count(*) from 
> post_likes where post_uuid=?");
>   PreparedStatement c = s.prepare("select comments from 
> post_counters where uuid=?");
>   PreparedStatement lq = s.prepare("select person_uuid from 
> post_likes where post_uuid=? and person_uuid=?");
>   for(Row r : s.execute(p.bind(user.getPersonUuid( {
>   Message m = new Message();
>   Person to = new Person();
>   to.setUuid(r.getUUID(1));
>   to.setFirstName(r.getString(2));
>   to.setLastName(r.getString(3));
>   Person from = new Person();
>   from.setUuid(r.getUUID(4));
>   from.setFirstName(r.getString(5));
>   from.setLastName(r.getString(6));
>   m.setUuid(r.getUUID(0));
>   m.setTo(to);
>   m.setFrom(from);
>   m.setAction(r.getString(7));
>   m.setMessage(r.getString(8));
>   results.add(m);
>   
> m.setLikeCount((int)s.execute(q.bind(m.getUuid())).one().getLong(0));
>   for(Row r2 : s.execute(c.bind(m.getUuid( {
>   m.setCommentCount((int)r2.getLong(0));
>   }
>   m.setLiked(s.execute(lq.bind(m.getUuid(), 
> user.getPersonUuid())).iterator().hasNext());
>   
> m.setFromMe(from.getUuid().equals(user.getPersonUuid()));
>   m.setToMe(to.getUuid().equals(user.getPersonUuid()));
>   }
> Reworking the code as follows avoids the problem:
>   public List execute() throws IOException {
>   List results = new LinkedList<>();
>   Session s = api.getCassandraSession();
>   PreparedStatement p = s.prepare(
>   "select uuid,to_uuid,to_first_name, 
> to_last_name,from_uuid,from_first_name,from_last_name,action,message "+
>   "from news_feed " +
>   "where person_uuid = ? " +
>   "order by uuid desc " +
>   "limit 50");
>   for(Row r : s.execute(p.bind(user.getPersonUuid( {
>   Message m = new Message();
>   Person to = new Person();
>   to.setUuid(r.getUUID(1));
>   to.setFirstName(r.getString(2));
>   to.setLastName(r.getString(3));
>   Person from = new Person();
>   from.setUuid(r.getUUID(4));
>   from.setFirstName(r.getString(5));
>   from.setLastName(r.getString(6));
>   m.setUuid(r.getUUID(0));
>   m.setTo(to);
>   m.setFrom(from);
>   m.setAction(r.getString(7));
>   m.setMessage(r.getString(8

[jira] [Resolved] (CASSANDRA-6372) cassandra-driver-core-2.0.0-rc1.jar fails in this case

2013-11-18 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6372?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-6372.
---

Resolution: Invalid

Drivers are not part of the Apache project.

> cassandra-driver-core-2.0.0-rc1.jar fails in this case
> --
>
> Key: CASSANDRA-6372
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6372
> Project: Cassandra
>  Issue Type: Bug
>  Components: Drivers (now out of tree)
>Reporter: Jacob Rhoden
>
> Testing this out switching over to the new driver. Its mostly working except 
> for one particular query (or code?) is causing the following:
> {quote}
> com.datastax.driver.core.exceptions.DriverInternalError: Tried to execute 
> unknown prepared query 0x67dfcaa71c14d42a0a7f62406b41ea3e
>com.datastax.driver.core.exceptions.DriverInternalError.copy():42
>
> com.datastax.driver.core.ResultSetFuture.extractCauseFromExecutionException():271
>com.datastax.driver.core.ResultSetFuture.getUninterruptibly():187
>com.datastax.driver.core.Session.execute():126
>tap.command.GetNewsFeed.execute():72
>tap.servlet.HomeServlet.doGet():38
>javax.servlet.http.HttpServlet.service():668
> {quote}
> Anyone encounter this one before? Any suggestions? In case its relevant, line 
> 72 is the for loop statement:
> {quote}
>   PreparedStatement p = s.prepare(
>   "select uuid,to_uuid,to_first_name, 
> to_last_name,from_uuid,from_first_name,from_last_name,action,message "+
>   "from news_feed " +
>   "where person_uuid = ? " +
>   "order by uuid desc " +
>   "limit 50");
>   PreparedStatement q = s.prepare("select count(*) from 
> post_likes where post_uuid=?");
>   PreparedStatement c = s.prepare("select comments from 
> post_counters where uuid=?");
>   PreparedStatement lq = s.prepare("select person_uuid from 
> post_likes where post_uuid=? and person_uuid=?");
>   for(Row r : s.execute(p.bind(user.getPersonUuid( {
>   Message m = new Message();
>   Person to = new Person();
>   to.setUuid(r.getUUID(1));
>   to.setFirstName(r.getString(2));
>   to.setLastName(r.getString(3));
>   Person from = new Person();
>   from.setUuid(r.getUUID(4));
>   from.setFirstName(r.getString(5));
>   from.setLastName(r.getString(6));
>   m.setUuid(r.getUUID(0));
>   m.setTo(to);
>   m.setFrom(from);
>   m.setAction(r.getString(7));
>   m.setMessage(r.getString(8));
>   results.add(m);
>   
> m.setLikeCount((int)s.execute(q.bind(m.getUuid())).one().getLong(0));
>   for(Row r2 : s.execute(c.bind(m.getUuid( {
>   m.setCommentCount((int)r2.getLong(0));
>   }
>   m.setLiked(s.execute(lq.bind(m.getUuid(), 
> user.getPersonUuid())).iterator().hasNext());
>   
> m.setFromMe(from.getUuid().equals(user.getPersonUuid()));
>   m.setToMe(to.getUuid().equals(user.getPersonUuid()));
>   }
> {quote}
> Reworking the code as follows avoids the problem:
> {quote}
>   public List execute() throws IOException {
>   List results = new LinkedList<>();
>   Session s = api.getCassandraSession();
>   PreparedStatement p = s.prepare(
>   "select uuid,to_uuid,to_first_name, 
> to_last_name,from_uuid,from_first_name,from_last_name,action,message "+
>   "from news_feed " +
>   "where person_uuid = ? " +
>   "order by uuid desc " +
>   "limit 50");
>   for(Row r : s.execute(p.bind(user.getPersonUuid( {
>   Message m = new Message();
>   Person to = new Person();
>   to.setUuid(r.getUUID(1));
>   to.setFirstName(r.getString(2));
>   to.setLastName(r.getString(3));
>   Person from = new Person();
>   from.setUuid(r.getUUID(4));
>   from.setFirstName(r.getString(5));
>   from.setLastName(r.getString(6));
>   m.setUuid(r.getUUID(0));
>   m.setTo(to);
>   m.setFrom(from);
>   m.setAction(r.getString(7));

[jira] [Assigned] (CASSANDRA-6172) COPY TO command doesn't escape single quote in collections

2013-11-18 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis reassigned CASSANDRA-6172:
-

Assignee: Mikhail Stepura  (was: Aleksey Yeschenko)

> COPY TO command doesn't escape single quote in collections
> --
>
> Key: CASSANDRA-6172
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6172
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
> Environment: Cassandra 2.0.1, Linux
>Reporter: Ivan Mykhailov
>Assignee: Mikhail Stepura
>Priority: Minor
> Fix For: 2.0.3
>
>
> {code}
> CREATE TABLE test (key text PRIMARY KEY , testcollection set) ;
> INSERT INTO test (key, testcollection ) VALUES ( 'test', {'foo''bar'});
> COPY test TO '/tmp/test.csv';
> COPY test FROM '/tmp/test.csv';
> Bad Request: line 1:73 mismatched character '' expecting '''
> Aborting import at record #0 (line 1). Previously-inserted values still 
> present.
> {code}
> Content of generated '/tmp/test.csv':
> {code}
> test,{'foo'bar'}
> {code}
> Unfortunately, I didn't find workaround with any combination of COPY options 



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6275) 2.0.x leaks file handles

2013-11-18 Thread J. Ryan Earl (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13825856#comment-13825856
 ] 

J. Ryan Earl commented on CASSANDRA-6275:
-

We recently ran into this issue after upgrading to OpsCenter-4.0.0, it is quite 
easy to reproduce:
# Install Cassandra-2.0.2
# Install OpsCenter-4.0.0 on above cluster.

I upgraded OpsCenter on Friday, and by Sunday I had reached 1 Million open file 
handles.  I had to kill -9 the Cassandra processes as it wouldn't respond to 
sockets, DSC20 restart scripts reported successfully killing the processes but 
in fact did not.

{noformat}
[root@cassandra2 ~]# lsof -u cassandra|wc -l
175416
[root@cassandra2 ~]# lsof -u cassandra|grep -c OpsCenter
174474
{noformat}

> 2.0.x leaks file handles
> 
>
> Key: CASSANDRA-6275
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6275
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: java version "1.7.0_25"
> Java(TM) SE Runtime Environment (build 1.7.0_25-b15)
> Java HotSpot(TM) 64-Bit Server VM (build 23.25-b01, mixed mode)
> Linux cassandra-test1 2.6.32-279.el6.x86_64 #1 SMP Thu Jun 21 15:00:18 EDT 
> 2012 x86_64 x86_64 x86_64 GNU/Linux
>Reporter: Mikhail Mazursky
> Attachments: c_file-descriptors_strace.tbz, cassandra_jstack.txt, 
> leak.log, position_hints.tgz, slog.gz
>
>
> Looks like C* is leaking file descriptors when doing lots of CAS operations.
> {noformat}
> $ sudo cat /proc/15455/limits
> Limit Soft Limit   Hard Limit   Units
> Max cpu time  unlimitedunlimitedseconds  
> Max file size unlimitedunlimitedbytes
> Max data size unlimitedunlimitedbytes
> Max stack size10485760 unlimitedbytes
> Max core file size00bytes
> Max resident set  unlimitedunlimitedbytes
> Max processes 1024 unlimitedprocesses
> Max open files4096 4096 files
> Max locked memory unlimitedunlimitedbytes
> Max address space unlimitedunlimitedbytes
> Max file locksunlimitedunlimitedlocks
> Max pending signals   1463314633signals  
> Max msgqueue size 819200   819200   bytes
> Max nice priority 00   
> Max realtime priority 00   
> Max realtime timeout  unlimitedunlimitedus 
> {noformat}
> Looks like the problem is not in limits.
> Before load test:
> {noformat}
> cassandra-test0 ~]$ lsof -n | grep java | wc -l
> 166
> cassandra-test1 ~]$ lsof -n | grep java | wc -l
> 164
> cassandra-test2 ~]$ lsof -n | grep java | wc -l
> 180
> {noformat}
> After load test:
> {noformat}
> cassandra-test0 ~]$ lsof -n | grep java | wc -l
> 967
> cassandra-test1 ~]$ lsof -n | grep java | wc -l
> 1766
> cassandra-test2 ~]$ lsof -n | grep java | wc -l
> 2578
> {noformat}
> Most opened files have names like:
> {noformat}
> java  16890 cassandra 1636r  REG 202,17  88724987 
> 655520 /var/lib/cassandra/data/system/paxos/system-paxos-jb-644-Data.db
> java  16890 cassandra 1637r  REG 202,17 161158485 
> 655420 /var/lib/cassandra/data/system/paxos/system-paxos-jb-255-Data.db
> java  16890 cassandra 1638r  REG 202,17  88724987 
> 655520 /var/lib/cassandra/data/system/paxos/system-paxos-jb-644-Data.db
> java  16890 cassandra 1639r  REG 202,17 161158485 
> 655420 /var/lib/cassandra/data/system/paxos/system-paxos-jb-255-Data.db
> java  16890 cassandra 1640r  REG 202,17  88724987 
> 655520 /var/lib/cassandra/data/system/paxos/system-paxos-jb-644-Data.db
> java  16890 cassandra 1641r  REG 202,17 161158485 
> 655420 /var/lib/cassandra/data/system/paxos/system-paxos-jb-255-Data.db
> java  16890 cassandra 1642r  REG 202,17  88724987 
> 655520 /var/lib/cassandra/data/system/paxos/system-paxos-jb-644-Data.db
> java  16890 cassandra 1643r  REG 202,17 161158485 
> 655420 /var/lib/cassandra/data/system/paxos/system-paxos-jb-255-Data.db
> java  16890 cassandra 1644r  REG 202,17  88724987 
> 655520 /var/lib/cassandra/data/system/paxos/system-paxos-jb-644-Data.db
> java  16890 cassandra 1645r  REG 202,17 161158485 
> 655420 /var/lib/cassandra/data/sys

[jira] [Comment Edited] (CASSANDRA-6275) 2.0.x leaks file handles

2013-11-18 Thread J. Ryan Earl (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13825856#comment-13825856
 ] 

J. Ryan Earl edited comment on CASSANDRA-6275 at 11/18/13 10:26 PM:


We recently ran into this issue after upgrading to OpsCenter-4.0.0, it is quite 
easy to reproduce:
# Install Cassandra-2.0.2
# Install OpsCenter-4.0.0 on above cluster.

I upgraded OpsCenter on Friday, and by Sunday I had reached 1 Million open file 
handles.  I had to kill -9 the Cassandra processes as it wouldn't respond to 
sockets, DSC20 restart scripts reported successfully killing the processes but 
in fact did not.

{noformat}
[root@cassandra2 ~]# lsof -u cassandra|wc -l
175416
[root@cassandra2 ~]# lsof -u cassandra|grep -c OpsCenter
174474
{noformat}

Most of the handles show as "deleted"
{noformat}
[root@cassandra2 ~]# lsof -u cassandra|grep -c deleted
174449
{noformat}


was (Author: jre):
We recently ran into this issue after upgrading to OpsCenter-4.0.0, it is quite 
easy to reproduce:
# Install Cassandra-2.0.2
# Install OpsCenter-4.0.0 on above cluster.

I upgraded OpsCenter on Friday, and by Sunday I had reached 1 Million open file 
handles.  I had to kill -9 the Cassandra processes as it wouldn't respond to 
sockets, DSC20 restart scripts reported successfully killing the processes but 
in fact did not.

{noformat}
[root@cassandra2 ~]# lsof -u cassandra|wc -l
175416
[root@cassandra2 ~]# lsof -u cassandra|grep -c OpsCenter
174474
{noformat}

> 2.0.x leaks file handles
> 
>
> Key: CASSANDRA-6275
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6275
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: java version "1.7.0_25"
> Java(TM) SE Runtime Environment (build 1.7.0_25-b15)
> Java HotSpot(TM) 64-Bit Server VM (build 23.25-b01, mixed mode)
> Linux cassandra-test1 2.6.32-279.el6.x86_64 #1 SMP Thu Jun 21 15:00:18 EDT 
> 2012 x86_64 x86_64 x86_64 GNU/Linux
>Reporter: Mikhail Mazursky
> Attachments: c_file-descriptors_strace.tbz, cassandra_jstack.txt, 
> leak.log, position_hints.tgz, slog.gz
>
>
> Looks like C* is leaking file descriptors when doing lots of CAS operations.
> {noformat}
> $ sudo cat /proc/15455/limits
> Limit Soft Limit   Hard Limit   Units
> Max cpu time  unlimitedunlimitedseconds  
> Max file size unlimitedunlimitedbytes
> Max data size unlimitedunlimitedbytes
> Max stack size10485760 unlimitedbytes
> Max core file size00bytes
> Max resident set  unlimitedunlimitedbytes
> Max processes 1024 unlimitedprocesses
> Max open files4096 4096 files
> Max locked memory unlimitedunlimitedbytes
> Max address space unlimitedunlimitedbytes
> Max file locksunlimitedunlimitedlocks
> Max pending signals   1463314633signals  
> Max msgqueue size 819200   819200   bytes
> Max nice priority 00   
> Max realtime priority 00   
> Max realtime timeout  unlimitedunlimitedus 
> {noformat}
> Looks like the problem is not in limits.
> Before load test:
> {noformat}
> cassandra-test0 ~]$ lsof -n | grep java | wc -l
> 166
> cassandra-test1 ~]$ lsof -n | grep java | wc -l
> 164
> cassandra-test2 ~]$ lsof -n | grep java | wc -l
> 180
> {noformat}
> After load test:
> {noformat}
> cassandra-test0 ~]$ lsof -n | grep java | wc -l
> 967
> cassandra-test1 ~]$ lsof -n | grep java | wc -l
> 1766
> cassandra-test2 ~]$ lsof -n | grep java | wc -l
> 2578
> {noformat}
> Most opened files have names like:
> {noformat}
> java  16890 cassandra 1636r  REG 202,17  88724987 
> 655520 /var/lib/cassandra/data/system/paxos/system-paxos-jb-644-Data.db
> java  16890 cassandra 1637r  REG 202,17 161158485 
> 655420 /var/lib/cassandra/data/system/paxos/system-paxos-jb-255-Data.db
> java  16890 cassandra 1638r  REG 202,17  88724987 
> 655520 /var/lib/cassandra/data/system/paxos/system-paxos-jb-644-Data.db
> java  16890 cassandra 1639r  REG 202,17 161158485 
> 655420 /var/lib/cassandra/data/system/paxos/system-paxos-jb-255-Data.db
> java  16890 cassandra 1640r  REG 202,17  88724987 
> 655520 /var/lib/cassandra/

[jira] [Commented] (CASSANDRA-6275) 2.0.x leaks file handles

2013-11-18 Thread graham sanderson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13825892#comment-13825892
 ] 

graham sanderson commented on CASSANDRA-6275:
-

Note also, that most if not all of the deleted files are of the form

{code}
java14018 cassandra  586r   REG   8,33   8792499   1251 
/data/1/cassandra/OpsCenter/rollups60/OpsCenter-rollups60-jb-4656-Data.db 
(deleted)
java14018 cassandra  587r   REG   8,33  27303760   1254 
/data/1/cassandra/OpsCenter/rollups60/OpsCenter-rollups60-jb-4655-Data.db 
(deleted)
java14018 cassandra  588r   REG   8,33   8792499   1251 
/data/1/cassandra/OpsCenter/rollups60/OpsCenter-rollups60-jb-4656-Data.db 
(deleted)
java14018 cassandra  589r   REG   8,33  27303760   1254 
/data/1/cassandra/OpsCenter/rollups60/OpsCenter-rollups60-jb-4655-Data.db 
(deleted)
java14018 cassandra  590r   REG   8,33  10507214936 
/data/1/cassandra/OpsCenter/rollups60/OpsCenter-rollups60-jb-4657-Data.db 
(deleted)
{code}
We have 7 data disks (don't know if this contributes to the problem), and the 
number of such deleted files is very ill balanced with 93% on two of the 7 
disks (on this particular node)... the distribution of live data file size for 
OpsCenter/rollups60 is a little uneven with the same data mounts that have more 
deleted (but open) files having more actual live data, but the deleted file 
counts per mount point vary by several order of magnitudes whereas the data 
itself does not.

> 2.0.x leaks file handles
> 
>
> Key: CASSANDRA-6275
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6275
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: java version "1.7.0_25"
> Java(TM) SE Runtime Environment (build 1.7.0_25-b15)
> Java HotSpot(TM) 64-Bit Server VM (build 23.25-b01, mixed mode)
> Linux cassandra-test1 2.6.32-279.el6.x86_64 #1 SMP Thu Jun 21 15:00:18 EDT 
> 2012 x86_64 x86_64 x86_64 GNU/Linux
>Reporter: Mikhail Mazursky
> Attachments: c_file-descriptors_strace.tbz, cassandra_jstack.txt, 
> leak.log, position_hints.tgz, slog.gz
>
>
> Looks like C* is leaking file descriptors when doing lots of CAS operations.
> {noformat}
> $ sudo cat /proc/15455/limits
> Limit Soft Limit   Hard Limit   Units
> Max cpu time  unlimitedunlimitedseconds  
> Max file size unlimitedunlimitedbytes
> Max data size unlimitedunlimitedbytes
> Max stack size10485760 unlimitedbytes
> Max core file size00bytes
> Max resident set  unlimitedunlimitedbytes
> Max processes 1024 unlimitedprocesses
> Max open files4096 4096 files
> Max locked memory unlimitedunlimitedbytes
> Max address space unlimitedunlimitedbytes
> Max file locksunlimitedunlimitedlocks
> Max pending signals   1463314633signals  
> Max msgqueue size 819200   819200   bytes
> Max nice priority 00   
> Max realtime priority 00   
> Max realtime timeout  unlimitedunlimitedus 
> {noformat}
> Looks like the problem is not in limits.
> Before load test:
> {noformat}
> cassandra-test0 ~]$ lsof -n | grep java | wc -l
> 166
> cassandra-test1 ~]$ lsof -n | grep java | wc -l
> 164
> cassandra-test2 ~]$ lsof -n | grep java | wc -l
> 180
> {noformat}
> After load test:
> {noformat}
> cassandra-test0 ~]$ lsof -n | grep java | wc -l
> 967
> cassandra-test1 ~]$ lsof -n | grep java | wc -l
> 1766
> cassandra-test2 ~]$ lsof -n | grep java | wc -l
> 2578
> {noformat}
> Most opened files have names like:
> {noformat}
> java  16890 cassandra 1636r  REG 202,17  88724987 
> 655520 /var/lib/cassandra/data/system/paxos/system-paxos-jb-644-Data.db
> java  16890 cassandra 1637r  REG 202,17 161158485 
> 655420 /var/lib/cassandra/data/system/paxos/system-paxos-jb-255-Data.db
> java  16890 cassandra 1638r  REG 202,17  88724987 
> 655520 /var/lib/cassandra/data/system/paxos/system-paxos-jb-644-Data.db
> java  16890 cassandra 1639r  REG 202,17 161158485 
> 655420 /var/lib/cassandra/data/system/paxos/system-paxos-jb-255-Data.db
> java  16890 cassandra 1640r  REG   

[jira] [Comment Edited] (CASSANDRA-6275) 2.0.x leaks file handles

2013-11-18 Thread graham sanderson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13825892#comment-13825892
 ] 

graham sanderson edited comment on CASSANDRA-6275 at 11/18/13 10:53 PM:


Note also, that most if not all of the deleted files are of the form

{code}
java14018 cassandra  586r   REG   8,33   8792499   1251 
/data/1/cassandra/OpsCenter/rollups60/OpsCenter-rollups60-jb-4656-Data.db 
(deleted)
java14018 cassandra  587r   REG   8,33  27303760   1254 
/data/1/cassandra/OpsCenter/rollups60/OpsCenter-rollups60-jb-4655-Data.db 
(deleted)
java14018 cassandra  588r   REG   8,33   8792499   1251 
/data/1/cassandra/OpsCenter/rollups60/OpsCenter-rollups60-jb-4656-Data.db 
(deleted)
java14018 cassandra  589r   REG   8,33  27303760   1254 
/data/1/cassandra/OpsCenter/rollups60/OpsCenter-rollups60-jb-4655-Data.db 
(deleted)
java14018 cassandra  590r   REG   8,33  10507214936 
/data/1/cassandra/OpsCenter/rollups60/OpsCenter-rollups60-jb-4657-Data.db 
(deleted)
{code}
We have 7 data disks per node (don't know if this contributes to the problem), 
and the number of such (open but) deleted files is very ill balanced with 93% 
on two of the 7 disks (on this particular node)... the distribution of live 
data file size for OpsCenter/rollups60 is a little uneven with the same data 
mounts that have more deleted files having more actual live data, but the 
deleted file counts per mount point vary by several order of magnitudes whereas 
the data size itself does not.


was (Author: graham sanderson):
Note also, that most if not all of the deleted files are of the form

{code}
java14018 cassandra  586r   REG   8,33   8792499   1251 
/data/1/cassandra/OpsCenter/rollups60/OpsCenter-rollups60-jb-4656-Data.db 
(deleted)
java14018 cassandra  587r   REG   8,33  27303760   1254 
/data/1/cassandra/OpsCenter/rollups60/OpsCenter-rollups60-jb-4655-Data.db 
(deleted)
java14018 cassandra  588r   REG   8,33   8792499   1251 
/data/1/cassandra/OpsCenter/rollups60/OpsCenter-rollups60-jb-4656-Data.db 
(deleted)
java14018 cassandra  589r   REG   8,33  27303760   1254 
/data/1/cassandra/OpsCenter/rollups60/OpsCenter-rollups60-jb-4655-Data.db 
(deleted)
java14018 cassandra  590r   REG   8,33  10507214936 
/data/1/cassandra/OpsCenter/rollups60/OpsCenter-rollups60-jb-4657-Data.db 
(deleted)
{code}
We have 7 data disks (don't know if this contributes to the problem), and the 
number of such deleted files is very ill balanced with 93% on two of the 7 
disks (on this particular node)... the distribution of live data file size for 
OpsCenter/rollups60 is a little uneven with the same data mounts that have more 
deleted (but open) files having more actual live data, but the deleted file 
counts per mount point vary by several order of magnitudes whereas the data 
itself does not.

> 2.0.x leaks file handles
> 
>
> Key: CASSANDRA-6275
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6275
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: java version "1.7.0_25"
> Java(TM) SE Runtime Environment (build 1.7.0_25-b15)
> Java HotSpot(TM) 64-Bit Server VM (build 23.25-b01, mixed mode)
> Linux cassandra-test1 2.6.32-279.el6.x86_64 #1 SMP Thu Jun 21 15:00:18 EDT 
> 2012 x86_64 x86_64 x86_64 GNU/Linux
>Reporter: Mikhail Mazursky
> Attachments: c_file-descriptors_strace.tbz, cassandra_jstack.txt, 
> leak.log, position_hints.tgz, slog.gz
>
>
> Looks like C* is leaking file descriptors when doing lots of CAS operations.
> {noformat}
> $ sudo cat /proc/15455/limits
> Limit Soft Limit   Hard Limit   Units
> Max cpu time  unlimitedunlimitedseconds  
> Max file size unlimitedunlimitedbytes
> Max data size unlimitedunlimitedbytes
> Max stack size10485760 unlimitedbytes
> Max core file size00bytes
> Max resident set  unlimitedunlimitedbytes
> Max processes 1024 unlimitedprocesses
> Max open files4096 4096 files
> Max locked memory unlimitedunlimitedbytes
> Max address space unlimitedunlimitedbytes
> Max file locksunlimitedunlimitedlocks
> Max pending signals   1463314633signal

[jira] [Commented] (CASSANDRA-6181) Replaying a commit led to java.lang.StackOverflowError and node crash

2013-11-18 Thread Matt Jurik (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13825908#comment-13825908
 ] 

Matt Jurik commented on CASSANDRA-6181:
---

Sorry, I mean what sort of DELETE statements cause this to happen? From reading 
these comments, it seems that there's some threshold at which too many 
deletions cause this to occur? Does the patch raise or eliminate this limit?

> Replaying a commit led to java.lang.StackOverflowError and node crash
> -
>
> Key: CASSANDRA-6181
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6181
> Project: Cassandra
>  Issue Type: Bug
> Environment: 1.2.8 & 1.2.10 - ubuntu 12.04
>Reporter: Jeffrey Damick
>Assignee: Sylvain Lebresne
>Priority: Critical
> Fix For: 1.2.12, 2.0.2
>
> Attachments: 6181.txt
>
>
> 2 of our nodes died after attempting to replay a commit.  I can attach the 
> commit log file if that helps.
> It was occurring on 1.2.8, after several failed attempts to start, we 
> attempted startup with 1.2.10.  This also yielded the same issue (below).  
> The only resolution was to physically move the commit log file out of the way 
> and then the nodes were able to start...  
> The replication factor was 3 so I'm hoping there was no data loss...
> {code}
>  INFO [main] 2013-10-11 14:50:35,891 CommitLogReplayer.java (line 119) 
> Replaying /ebs/cassandra/commitlog/CommitLog-2-1377542389560.log
> ERROR [MutationStage:18] 2013-10-11 14:50:37,387 CassandraDaemon.java (line 
> 191) Exception in thread Thread[MutationStage:18,5,main]
> java.lang.StackOverflowError
> at 
> org.apache.cassandra.db.marshal.TimeUUIDType.compareTimestampBytes(TimeUUIDType.java:68)
> at 
> org.apache.cassandra.db.marshal.TimeUUIDType.compare(TimeUUIDType.java:57)
> at 
> org.apache.cassandra.db.marshal.TimeUUIDType.compare(TimeUUIDType.java:29)
> at 
> org.apache.cassandra.db.marshal.AbstractType.compareCollectionMembers(AbstractType.java:229)
> at 
> org.apache.cassandra.db.marshal.AbstractCompositeType.compare(AbstractCompositeType.java:81)
> at 
> org.apache.cassandra.db.marshal.AbstractCompositeType.compare(AbstractCompositeType.java:31)
> at 
> org.apache.cassandra.db.RangeTombstoneList.insertAfter(RangeTombstoneList.java:439)
> at 
> org.apache.cassandra.db.RangeTombstoneList.insertFrom(RangeTombstoneList.java:405)
> at 
> org.apache.cassandra.db.RangeTombstoneList.weakInsertFrom(RangeTombstoneList.java:472)
> at 
> org.apache.cassandra.db.RangeTombstoneList.insertAfter(RangeTombstoneList.java:456)
> at 
> org.apache.cassandra.db.RangeTombstoneList.insertFrom(RangeTombstoneList.java:405)
> at 
> org.apache.cassandra.db.RangeTombstoneList.weakInsertFrom(RangeTombstoneList.java:472)
> at 
> org.apache.cassandra.db.RangeTombstoneList.insertAfter(RangeTombstoneList.java:456)
> at 
> org.apache.cassandra.db.RangeTombstoneList.insertFrom(RangeTombstoneList.java:405)
> at 
> org.apache.cassandra.db.RangeTombstoneList.weakInsertFrom(RangeTombstoneList.java:472)
>  etc over and over until 
> at 
> org.apache.cassandra.db.RangeTombstoneList.weakInsertFrom(RangeTombstoneList.java:472)
> at 
> org.apache.cassandra.db.RangeTombstoneList.insertAfter(RangeTombstoneList.java:456)
> at 
> org.apache.cassandra.db.RangeTombstoneList.insertFrom(RangeTombstoneList.java:405)
> at 
> org.apache.cassandra.db.RangeTombstoneList.add(RangeTombstoneList.java:144)
> at 
> org.apache.cassandra.db.RangeTombstoneList.addAll(RangeTombstoneList.java:186)
> at org.apache.cassandra.db.DeletionInfo.add(DeletionInfo.java:180)
> at 
> org.apache.cassandra.db.AtomicSortedColumns.addAllWithSizeDelta(AtomicSortedColumns.java:197)
> at 
> org.apache.cassandra.db.AbstractColumnContainer.addAllWithSizeDelta(AbstractColumnContainer.java:99)
> at org.apache.cassandra.db.Memtable.resolve(Memtable.java:207)
> at org.apache.cassandra.db.Memtable.put(Memtable.java:170)
> at 
> org.apache.cassandra.db.ColumnFamilyStore.apply(ColumnFamilyStore.java:745)
> at org.apache.cassandra.db.Table.apply(Table.java:388)
> at org.apache.cassandra.db.Table.apply(Table.java:353)
> at 
> org.apache.cassandra.db.commitlog.CommitLogReplayer$1.runMayThrow(CommitLogReplayer.java:258)
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
> at java.util.concurrent.FutureTask.run(FutureTask.java:166)
> at 
> jav

[jira] [Commented] (CASSANDRA-4511) Secondary index support for CQL3 collections

2013-11-18 Thread Zachary Marcantel (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13825918#comment-13825918
 ] 

Zachary Marcantel commented on CASSANDRA-4511:
--

I can think of (and need this for) a few use cases. They do revolve around more 
of a filtering aspect, however.

It can be said that sometimes models/data/rows belong to a "theoretical list" 
that contain infinite possibilities. 

Naturally, you do not want to store all possibilities, but may want to filter 
on those which are true while ignoring all other possiblities.

Examples of these lists could be:
- Things of interest to a user
  -- LinkedIn calls these skills, Facebook has 'liked pages', etc
- Movies Watched
  -- Netflix surely doesn't (want to) have a USER x MOVIES sized table
  -- Nor do they want (user x movies) number of columnfamilies
- Places Visited
  -- set yes/no for EVERY location on Earth?

Benchmarks may prove me wrong, but theoretically the performance hit would be 
minimal if the data is truly partitioned well, collections are kept small, and 
secondary indexing used only as a filter and not data storage.

Dynamic columns may make some of my examples easier, but bring their own 
headaches (post-filling dynamically created columns, massively wide tables, 
largely unused data == disk bloat).

I'll give a couple examples and use [~jbellis] syntax, as well as a potential 
map-based indexing.

- Users contained in group(s):
-- Note: this could be done with columns, but if we assume groups can contain 
infinitely many possibilities (like Facebook groups)
- {code:sql}
SELECT * FROM main.users WHERE 'players' IN groups AND 'admins' NOT IN groups;
{code}

- Filter on toggle-based UI elements within user profiles:
{code:sql}
SELECT * FROM main.users WHERE notify['email'] = true;
{code}


Given a possibly endless list, map rows (or data pieces) onto items within that 
list:
For instance, user profiles often have 'interests' that could contain 
one/multiple of thousands if not millions of possibilities.
Currently, one would have to detail the entirety of the list that has been seen 
in one of two ways:
{code:sql}
CREATE TABLE main.interests (
interest_name TEXT PRIMARY KEY,
users LIST 
);
{code}

OR

{code:sql}
CREATE TABLE main.users (
id UUID PRIMARY KEY,
... other user fields ...
interests LIST 
);
{code}

Both of which would require post-result processing (map-reduce or similar) to 
find just the users containing a certain key/value.

Rather, with indexing:
{code:sql}
CREATE TABLE main.users (
id UUID PRIMARY KEY,
name TEXT,
age INT,
interests LIST 
);
{code}

where 'interests' is a relatively small (~10-25 elements) list that can be 
filtered by:

{code:sql}
SELECT * FROM main.users WHERE 'baseball' IN interests AND 'soccer' IN 
interests;
{code}

> Secondary index support for CQL3 collections 
> -
>
> Key: CASSANDRA-4511
> URL: https://issues.apache.org/jira/browse/CASSANDRA-4511
> Project: Cassandra
>  Issue Type: Improvement
>Affects Versions: 1.2.0 beta 1
>Reporter: Sylvain Lebresne
>Assignee: Sylvain Lebresne
> Fix For: 2.1
>
> Attachments: 4511.txt
>
>
> We should allow to 2ndary index on collections. A typical use case would be 
> to add a 'tag set' to say a user profile and to query users based on 
> what tag they have.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-4511) Secondary index support for CQL3 collections

2013-11-18 Thread Jeremiah Jordan (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13825926#comment-13825926
 ] 

Jeremiah Jordan commented on CASSANDRA-4511:


I definitely think being able to index the key of a map is useful.  There are 
many times it would have been nice for me to be able to query "what rows have 
this dynamic column", I actually built my own 2i's so I could do that...  Since 
maps are basically one of the big replacements for dynamic columns, I think it 
would be very useful to index the keys.

> Secondary index support for CQL3 collections 
> -
>
> Key: CASSANDRA-4511
> URL: https://issues.apache.org/jira/browse/CASSANDRA-4511
> Project: Cassandra
>  Issue Type: Improvement
>Affects Versions: 1.2.0 beta 1
>Reporter: Sylvain Lebresne
>Assignee: Sylvain Lebresne
> Fix For: 2.1
>
> Attachments: 4511.txt
>
>
> We should allow to 2ndary index on collections. A typical use case would be 
> to add a 'tag set' to say a user profile and to query users based on 
> what tag they have.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (CASSANDRA-6373) describe_ring hangs with hsha thrift server

2013-11-18 Thread Nick Bailey (JIRA)
Nick Bailey created CASSANDRA-6373:
--

 Summary: describe_ring hangs with hsha thrift server
 Key: CASSANDRA-6373
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6373
 Project: Cassandra
  Issue Type: Bug
Reporter: Nick Bailey
 Fix For: 2.0.3
 Attachments: describe_ring_failure.patch

There is a strange bug with the thrift hsha server in 2.0 (we switched to lmax 
disruptor server).

The bug is that the first call to describe_ring from one connection will hang 
indefinitely when the client is not connecting from localhost (or it at least 
looks like the client is not on the same host). When connecting from localhost 
the first call will work as expected. And in either case subsequent calls from 
the same connection will work as expected. According to git bisect the bad 
commit is the switch to the lmax disruptor server:

https://github.com/apache/cassandra/commit/98eec0a223251ecd8fec7ecc9e46b05497d631c6

I've attached the patch I used to reproduce the error in the unit tests. The 
command to reproduce is: 

{noformat}
PYTHONPATH=test nosetests 
--tests=system.test_thrift_server:TestMutations.test_describe_ring
{noformat}

I reproduced on ec2 and a single machine by having the server bind to the 
private ip on ec2 and the client connect to the public ip (so it appears as if 
the client is non local). I've also reproduced with two different vms though.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (CASSANDRA-6373) describe_ring hangs with hsha thrift server

2013-11-18 Thread Nick Bailey (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6373?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Bailey updated CASSANDRA-6373:
---

Attachment: describe_ring_failure.patch

> describe_ring hangs with hsha thrift server
> ---
>
> Key: CASSANDRA-6373
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6373
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Nick Bailey
> Fix For: 2.0.3
>
> Attachments: describe_ring_failure.patch
>
>
> There is a strange bug with the thrift hsha server in 2.0 (we switched to 
> lmax disruptor server).
> The bug is that the first call to describe_ring from one connection will hang 
> indefinitely when the client is not connecting from localhost (or it at least 
> looks like the client is not on the same host). When connecting from 
> localhost the first call will work as expected. And in either case subsequent 
> calls from the same connection will work as expected. According to git bisect 
> the bad commit is the switch to the lmax disruptor server:
> https://github.com/apache/cassandra/commit/98eec0a223251ecd8fec7ecc9e46b05497d631c6
> I've attached the patch I used to reproduce the error in the unit tests. The 
> command to reproduce is: 
> {noformat}
> PYTHONPATH=test nosetests 
> --tests=system.test_thrift_server:TestMutations.test_describe_ring
> {noformat}
> I reproduced on ec2 and a single machine by having the server bind to the 
> private ip on ec2 and the client connect to the public ip (so it appears as 
> if the client is non local). I've also reproduced with two different vms 
> though.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (CASSANDRA-6373) describe_ring hangs with hsha thrift server

2013-11-18 Thread Nick Bailey (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6373?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Bailey updated CASSANDRA-6373:
---

Description: 
There is a strange bug with the thrift hsha server in 2.0 (we switched to lmax 
disruptor server).

The bug is that the first call to describe_ring from one connection will hang 
indefinitely when the client is not connecting from localhost (or it at least 
looks like the client is not on the same host). Additionally the cluster must 
be using vnodes. When connecting from localhost the first call will work as 
expected. And in either case subsequent calls from the same connection will 
work as expected. According to git bisect the bad commit is the switch to the 
lmax disruptor server:

https://github.com/apache/cassandra/commit/98eec0a223251ecd8fec7ecc9e46b05497d631c6

I've attached the patch I used to reproduce the error in the unit tests. The 
command to reproduce is: 

{noformat}
PYTHONPATH=test nosetests 
--tests=system.test_thrift_server:TestMutations.test_describe_ring
{noformat}

I reproduced on ec2 and a single machine by having the server bind to the 
private ip on ec2 and the client connect to the public ip (so it appears as if 
the client is non local). I've also reproduced with two different vms though.

  was:
There is a strange bug with the thrift hsha server in 2.0 (we switched to lmax 
disruptor server).

The bug is that the first call to describe_ring from one connection will hang 
indefinitely when the client is not connecting from localhost (or it at least 
looks like the client is not on the same host). When connecting from localhost 
the first call will work as expected. And in either case subsequent calls from 
the same connection will work as expected. According to git bisect the bad 
commit is the switch to the lmax disruptor server:

https://github.com/apache/cassandra/commit/98eec0a223251ecd8fec7ecc9e46b05497d631c6

I've attached the patch I used to reproduce the error in the unit tests. The 
command to reproduce is: 

{noformat}
PYTHONPATH=test nosetests 
--tests=system.test_thrift_server:TestMutations.test_describe_ring
{noformat}

I reproduced on ec2 and a single machine by having the server bind to the 
private ip on ec2 and the client connect to the public ip (so it appears as if 
the client is non local). I've also reproduced with two different vms though.


> describe_ring hangs with hsha thrift server
> ---
>
> Key: CASSANDRA-6373
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6373
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Nick Bailey
> Fix For: 2.0.3
>
> Attachments: describe_ring_failure.patch
>
>
> There is a strange bug with the thrift hsha server in 2.0 (we switched to 
> lmax disruptor server).
> The bug is that the first call to describe_ring from one connection will hang 
> indefinitely when the client is not connecting from localhost (or it at least 
> looks like the client is not on the same host). Additionally the cluster must 
> be using vnodes. When connecting from localhost the first call will work as 
> expected. And in either case subsequent calls from the same connection will 
> work as expected. According to git bisect the bad commit is the switch to the 
> lmax disruptor server:
> https://github.com/apache/cassandra/commit/98eec0a223251ecd8fec7ecc9e46b05497d631c6
> I've attached the patch I used to reproduce the error in the unit tests. The 
> command to reproduce is: 
> {noformat}
> PYTHONPATH=test nosetests 
> --tests=system.test_thrift_server:TestMutations.test_describe_ring
> {noformat}
> I reproduced on ec2 and a single machine by having the server bind to the 
> private ip on ec2 and the client connect to the public ip (so it appears as 
> if the client is non local). I've also reproduced with two different vms 
> though.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-3814) [patch] add assertion message for deserializing columns assertion failure

2013-11-18 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-3814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13825940#comment-13825940
 ] 

Jonathan Ellis commented on CASSANDRA-3814:
---

Clearing out the archives...

Is this still a problem?

> [patch] add assertion message for deserializing columns assertion failure
> -
>
> Key: CASSANDRA-3814
> URL: https://issues.apache.org/jira/browse/CASSANDRA-3814
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Dave Brosius
>Priority: Trivial
> Attachments: assert_msg.diff
>
>
> use was doing
> create column family report_by_account_content with comparator=UTF8Type;
> update column family report_by_account_content with comparator=UTF8Type and 
> column_metadata = [{ column_name:'meta:account-id', 
> validation_class:UTF8Type,index_type:KEYS},{ column_name:'meta:filter-hash', 
> validation_class:UTF8Type,index_type:KEYS}];
> assert was generated but not represented to client. adding message:
> // column name format ::
> String[] components = 
> columns.getComparator().getString(column.name()).split(":");
> assert components.length == 3 : "Number of Comparator components not 3: " + 
> Arrays.toString(components);



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-4665) Add Sanity Checks for User Modifiable Settings

2013-11-18 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4665?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13825941#comment-13825941
 ] 

Jonathan Ellis commented on CASSANDRA-4665:
---

[~pmcfadin] do you have a list of Ways Users Screwed Themselves for us here?

> Add Sanity Checks for User Modifiable Settings
> --
>
> Key: CASSANDRA-4665
> URL: https://issues.apache.org/jira/browse/CASSANDRA-4665
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Benjamin Coverston
>Priority: Trivial
>
> I've looked at a several environments that have had some problems that may 
> have been more easily identified if there was some type of warning or error 
> thrown if some of the user modifiable settings had some sanity check 
> associated with it.
> In one instance I saw a cluster that was having memory issues. After looking 
> at the YAML they had:
> index_interval: 32
> I'm not sure why anyone would set this lower than the default. They didn't 
> even know what the setting did. Perhaps if we issued a warning on startup in 
> cases like this it could have been resolved more quickly and easily.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-5818) Duplicated error messages on directory creation error at startup

2013-11-18 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13825949#comment-13825949
 ] 

Jonathan Ellis commented on CASSANDRA-5818:
---

Are you still working on this, [~ksaritek]?

> Duplicated error messages on directory creation error at startup
> 
>
> Key: CASSANDRA-5818
> URL: https://issues.apache.org/jira/browse/CASSANDRA-5818
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Michaël Figuière
>Assignee: koray sariteke
>Priority: Trivial
> Fix For: 2.1
>
> Attachments: patch.diff, trunk-5818.patch
>
>
> When I start Cassandra without the appropriate OS access rights to the 
> default Cassandra directories, I get a flood of {{ERROR}} messages at 
> startup, whereas one per directory would be more appropriate. See bellow:
> {code}
> ERROR 13:37:39,792 Failed to create 
> /var/lib/cassandra/data/system/schema_triggers directory
> ERROR 13:37:39,797 Failed to create 
> /var/lib/cassandra/data/system/schema_triggers directory
> ERROR 13:37:39,798 Failed to create 
> /var/lib/cassandra/data/system/schema_triggers directory
> ERROR 13:37:39,798 Failed to create 
> /var/lib/cassandra/data/system/schema_triggers directory
> ERROR 13:37:39,799 Failed to create 
> /var/lib/cassandra/data/system/schema_triggers directory
> ERROR 13:37:39,800 Failed to create /var/lib/cassandra/data/system/batchlog 
> directory
> ERROR 13:37:39,801 Failed to create /var/lib/cassandra/data/system/batchlog 
> directory
> ERROR 13:37:39,801 Failed to create /var/lib/cassandra/data/system/batchlog 
> directory
> ERROR 13:37:39,802 Failed to create /var/lib/cassandra/data/system/batchlog 
> directory
> ERROR 13:37:39,802 Failed to create 
> /var/lib/cassandra/data/system/peer_events directory
> ERROR 13:37:39,803 Failed to create 
> /var/lib/cassandra/data/system/peer_events directory
> ERROR 13:37:39,803 Failed to create 
> /var/lib/cassandra/data/system/peer_events directory
> ERROR 13:37:39,804 Failed to create 
> /var/lib/cassandra/data/system/compactions_in_progress directory
> ERROR 13:37:39,805 Failed to create 
> /var/lib/cassandra/data/system/compactions_in_progress directory
> ERROR 13:37:39,805 Failed to create 
> /var/lib/cassandra/data/system/compactions_in_progress directory
> ERROR 13:37:39,806 Failed to create 
> /var/lib/cassandra/data/system/compactions_in_progress directory
> ERROR 13:37:39,807 Failed to create 
> /var/lib/cassandra/data/system/compactions_in_progress directory
> ERROR 13:37:39,808 Failed to create /var/lib/cassandra/data/system/hints 
> directory
> ERROR 13:37:39,809 Failed to create /var/lib/cassandra/data/system/hints 
> directory
> ERROR 13:37:39,809 Failed to create /var/lib/cassandra/data/system/hints 
> directory
> ERROR 13:37:39,811 Failed to create /var/lib/cassandra/data/system/hints 
> directory
> ERROR 13:37:39,811 Failed to create /var/lib/cassandra/data/system/hints 
> directory
> ERROR 13:37:39,812 Failed to create 
> /var/lib/cassandra/data/system/schema_keyspaces directory
> ERROR 13:37:39,812 Failed to create 
> /var/lib/cassandra/data/system/schema_keyspaces directory
> ERROR 13:37:39,813 Failed to create 
> /var/lib/cassandra/data/system/schema_keyspaces directory
> ERROR 13:37:39,814 Failed to create 
> /var/lib/cassandra/data/system/schema_keyspaces directory
> ERROR 13:37:39,814 Failed to create 
> /var/lib/cassandra/data/system/schema_keyspaces directory
> ERROR 13:37:39,815 Failed to create 
> /var/lib/cassandra/data/system/range_xfers directory
> ERROR 13:37:39,816 Failed to create 
> /var/lib/cassandra/data/system/range_xfers directory
> ERROR 13:37:39,817 Failed to create 
> /var/lib/cassandra/data/system/range_xfers directory
> ERROR 13:37:39,817 Failed to create 
> /var/lib/cassandra/data/system/schema_columnfamilies directory
> ERROR 13:37:39,818 Failed to create 
> /var/lib/cassandra/data/system/schema_columnfamilies directory
> ERROR 13:37:39,818 Failed to create 
> /var/lib/cassandra/data/system/schema_columnfamilies directory
> ERROR 13:37:39,820 Failed to create 
> /var/lib/cassandra/data/system/schema_columnfamilies directory
> ERROR 13:37:39,821 Failed to create 
> /var/lib/cassandra/data/system/schema_columnfamilies directory
> ERROR 13:37:39,821 Failed to create 
> /var/lib/cassandra/data/system/schema_columnfamilies directory
> ERROR 13:37:39,822 Failed to create 
> /var/lib/cassandra/data/system/schema_columnfamilies directory
> ERROR 13:37:39,822 Failed to create 
> /var/lib/cassandra/data/system/schema_columnfamilies directory
> ERROR 13:37:39,823 Failed to create 
> /var/lib/cassandra/data/system/schema_columnfamilies directory
> ERROR 13:37:39,824 Failed to create 
> /var/lib/cassandra/data/system/schema_columnfamilies directory
> ERROR 13:37:39,824 Failed

[jira] [Resolved] (CASSANDRA-4888) store created by and last modified by usernames for keyspaces and column families

2013-11-18 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-4888.
---

Resolution: Later

> store created by and last modified by usernames for keyspaces and column 
> families 
> --
>
> Key: CASSANDRA-4888
> URL: https://issues.apache.org/jira/browse/CASSANDRA-4888
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Matthew F. Dennis
>Priority: Trivial
>
> would be useful to store created by and last modified by users for keyspaces 
> and column families, assuming there was a logged in user that did the 
> creation/modification



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Comment Edited] (CASSANDRA-4511) Secondary index support for CQL3 collections

2013-11-18 Thread Zachary Marcantel (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13825918#comment-13825918
 ] 

Zachary Marcantel edited comment on CASSANDRA-4511 at 11/19/13 12:04 AM:
-

I can think of (and need this for) a few use cases. They do revolve around more 
of a filtering aspect, however.

It can be said that sometimes models/data/rows belong to a "theoretical list" 
that contain infinite possibilities. 

Naturally, you do not want to store all possibilities, but may want to filter 
on those which are true while ignoring all other possiblities.

Examples of these lists could be:
- Things of interest to a user
  -- LinkedIn calls these skills, Facebook has 'liked pages', etc
- Movies Watched
  -- Netflix surely doesn't (want to) have a USER x MOVIES sized table
  -- Nor do they want (user x movies) number of columnfamilies
- Places Visited
  -- set yes/no for EVERY location on Earth?

Benchmarks may prove me wrong, but theoretically the performance hit would be 
minimal if the data is truly partitioned well, collections are kept small, and 
secondary indexing used only as a filter and not data storage.

Dynamic columns may make some of my examples easier, but bring their own 
headaches (post-filling dynamically created columns, massively wide tables, 
largely unused data == disk bloat).

I'll give a couple examples and use [~jbellis] syntax, as well as a potential 
map-based indexing.

- Users contained in group(s):
-- Note: this could be done with columns, but if we assume groups can contain 
infinitely many possibilities (like Facebook groups), this becomes an issue)
- {code:sql}
SELECT * FROM main.users WHERE 'players' IN groups AND 'admins' NOT IN groups;
{code}

- Filter on toggle-based UI elements within user profiles:
{code:sql}
SELECT * FROM main.users WHERE notify['email'] = true;
{code}


Currently, one would have to detail the entirety of the list that has been seen 
in one of three ways:

{code:sql}
CREATE TABLE main.interests (
interest_name TEXT PRIMARY KEY,
users LIST 
);
{code}

OR

{code:sql}
CREATE TABLE main.users (
id UUID PRIMARY KEY,
... other user fields ...
interests LIST 
);
{code}

OR

{code:SQL}
CREATE TABLE main.users (
id UUID PRIMARY KEY,
a BOOLEAN,
b BOOLEAN,
... Iterate through possibilities ...
z BOOLEAN
);
{code}

The first two would require post-result processing (map-reduce or similar) to 
find just the users containing a certain key/value. The last example would 
require much wasted disk space and post-filling of dynamically created columns. 

Rather, with indexing:
{code:sql}
CREATE TABLE main.users (
id UUID PRIMARY KEY,
name TEXT,
age INT,
interests LIST 
);
{code}

where 'interests' is a relatively small (~10-25 elements) list that can be 
filtered by:

{code:sql}
SELECT * FROM main.users WHERE 'baseball' IN interests AND 'soccer' IN 
interests;
{code}


was (Author: zmarcantel):
I can think of (and need this for) a few use cases. They do revolve around more 
of a filtering aspect, however.

It can be said that sometimes models/data/rows belong to a "theoretical list" 
that contain infinite possibilities. 

Naturally, you do not want to store all possibilities, but may want to filter 
on those which are true while ignoring all other possiblities.

Examples of these lists could be:
- Things of interest to a user
  -- LinkedIn calls these skills, Facebook has 'liked pages', etc
- Movies Watched
  -- Netflix surely doesn't (want to) have a USER x MOVIES sized table
  -- Nor do they want (user x movies) number of columnfamilies
- Places Visited
  -- set yes/no for EVERY location on Earth?

Benchmarks may prove me wrong, but theoretically the performance hit would be 
minimal if the data is truly partitioned well, collections are kept small, and 
secondary indexing used only as a filter and not data storage.

Dynamic columns may make some of my examples easier, but bring their own 
headaches (post-filling dynamically created columns, massively wide tables, 
largely unused data == disk bloat).

I'll give a couple examples and use [~jbellis] syntax, as well as a potential 
map-based indexing.

- Users contained in group(s):
-- Note: this could be done with columns, but if we assume groups can contain 
infinitely many possibilities (like Facebook groups)
- {code:sql}
SELECT * FROM main.users WHERE 'players' IN groups AND 'admins' NOT IN groups;
{code}

- Filter on toggle-based UI elements within user profiles:
{code:sql}
SELECT * FROM main.users WHERE notify['email'] = true;
{code}


Given a possibly endless list, map rows (or data pieces) onto items within that 
list:
For instance, user profiles often have 'interests' that could contain 
one/multiple of thousands if not millions of possibilities.
Currently, one would have to detail the entirety of the list that has been s

[jira] [Resolved] (CASSANDRA-3814) [patch] add assertion message for deserializing columns assertion failure

2013-11-18 Thread Dave Brosius (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dave Brosius resolved CASSANDRA-3814.
-

Resolution: Won't Fix

> [patch] add assertion message for deserializing columns assertion failure
> -
>
> Key: CASSANDRA-3814
> URL: https://issues.apache.org/jira/browse/CASSANDRA-3814
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Dave Brosius
>Priority: Trivial
> Attachments: assert_msg.diff
>
>
> use was doing
> create column family report_by_account_content with comparator=UTF8Type;
> update column family report_by_account_content with comparator=UTF8Type and 
> column_metadata = [{ column_name:'meta:account-id', 
> validation_class:UTF8Type,index_type:KEYS},{ column_name:'meta:filter-hash', 
> validation_class:UTF8Type,index_type:KEYS}];
> assert was generated but not represented to client. adding message:
> // column name format ::
> String[] components = 
> columns.getComparator().getString(column.name()).split(":");
> assert components.length == 3 : "Number of Comparator components not 3: " + 
> Arrays.toString(components);



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-4511) Secondary index support for CQL3 collections

2013-11-18 Thread Alex Cruise (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13825993#comment-13825993
 ] 

Alex Cruise commented on CASSANDRA-4511:


FWIW I had a strong use case for indexing both map keys and values. The 
indexing difficulties (not just in collections) are a big part of the reason I 
ended up going back to postgres, at least temporarily. :)

My data is tagged with customer-supplied arbitrary name/values, and I need to 
be able to search on both quickly.

> Secondary index support for CQL3 collections 
> -
>
> Key: CASSANDRA-4511
> URL: https://issues.apache.org/jira/browse/CASSANDRA-4511
> Project: Cassandra
>  Issue Type: Improvement
>Affects Versions: 1.2.0 beta 1
>Reporter: Sylvain Lebresne
>Assignee: Sylvain Lebresne
> Fix For: 2.1
>
> Attachments: 4511.txt
>
>
> We should allow to 2ndary index on collections. A typical use case would be 
> to add a 'tag set' to say a user profile and to query users based on 
> what tag they have.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6340) Provide a mechanism for retrieving all replicas

2013-11-18 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13826028#comment-13826028
 ] 

Brandon Williams commented on CASSANDRA-6340:
-

To clarify, the idea here is to do a read with RR disabled, so nothing in the 
cluster gets mutated by the read in order to instrument the number of damaged 
replicas.

> Provide a mechanism for retrieving all replicas
> ---
>
> Key: CASSANDRA-6340
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6340
> Project: Cassandra
>  Issue Type: Improvement
> Environment: Production 
>Reporter: Ahmed Bashir
>
> In order to facilitate problem diagnosis, there should exist some mechanism 
> to retrieve all copies of specific columns



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (CASSANDRA-6354) No cleanup of excess storage connections

2013-11-18 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams updated CASSANDRA-6354:


Summary: No cleanup of excess storage connections  (was: No cleanup of 
excess gossip connections)

> No cleanup of excess storage connections
> 
>
> Key: CASSANDRA-6354
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6354
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Rick Branson
>Priority: Minor
>
> While trying to cut off communication between two nodes, I noticed a 
> production node had >300 connections active established to another node on 
> the storage port. It looks like there's no check to keep these limited, so 
> they'll just sit around forever.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


git commit: add missing String.format parameter

2013-11-18 Thread dbrosius
Updated Branches:
  refs/heads/cassandra-1.2 1ac601980 -> b678035ed


add missing String.format parameter


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b678035e
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b678035e
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b678035e

Branch: refs/heads/cassandra-1.2
Commit: b678035ed1a9f0046ea16e78205996d8f58c5ecd
Parents: 1ac6019
Author: Dave Brosius 
Authored: Mon Nov 18 21:29:40 2013 -0500
Committer: Dave Brosius 
Committed: Mon Nov 18 21:29:40 2013 -0500

--
 src/java/org/apache/cassandra/dht/Murmur3Partitioner.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/b678035e/src/java/org/apache/cassandra/dht/Murmur3Partitioner.java
--
diff --git a/src/java/org/apache/cassandra/dht/Murmur3Partitioner.java 
b/src/java/org/apache/cassandra/dht/Murmur3Partitioner.java
index fe1c24f..5afa820 100644
--- a/src/java/org/apache/cassandra/dht/Murmur3Partitioner.java
+++ b/src/java/org/apache/cassandra/dht/Murmur3Partitioner.java
@@ -191,7 +191,7 @@ public class Murmur3Partitioner extends 
AbstractPartitioner
 }
 catch (NumberFormatException e)
 {
-throw new IllegalArgumentException(String.format("Invalid 
token for Murmur3Partitioner. Got %s but expected a long value (unsigned 8 
bytes integer)."));
+throw new IllegalArgumentException(String.format("Invalid 
token for Murmur3Partitioner. Got %s but expected a long value (unsigned 8 
bytes integer).", string));
 }
 }
 };



[2/2] git commit: Merge branch 'cassandra-1.2' into cassandra-2.0

2013-11-18 Thread dbrosius
Merge branch 'cassandra-1.2' into cassandra-2.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ade99b91
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ade99b91
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ade99b91

Branch: refs/heads/cassandra-2.0
Commit: ade99b918f8902b735219f2b2434cca2cccf7698
Parents: fd6 b678035
Author: Dave Brosius 
Authored: Mon Nov 18 21:32:46 2013 -0500
Committer: Dave Brosius 
Committed: Mon Nov 18 21:32:46 2013 -0500

--
 src/java/org/apache/cassandra/dht/Murmur3Partitioner.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ade99b91/src/java/org/apache/cassandra/dht/Murmur3Partitioner.java
--



[1/3] git commit: add missing String.format parameter

2013-11-18 Thread dbrosius
Updated Branches:
  refs/heads/trunk 542d9c8d1 -> 52cc7efb2


add missing String.format parameter


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b678035e
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b678035e
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b678035e

Branch: refs/heads/trunk
Commit: b678035ed1a9f0046ea16e78205996d8f58c5ecd
Parents: 1ac6019
Author: Dave Brosius 
Authored: Mon Nov 18 21:29:40 2013 -0500
Committer: Dave Brosius 
Committed: Mon Nov 18 21:29:40 2013 -0500

--
 src/java/org/apache/cassandra/dht/Murmur3Partitioner.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/b678035e/src/java/org/apache/cassandra/dht/Murmur3Partitioner.java
--
diff --git a/src/java/org/apache/cassandra/dht/Murmur3Partitioner.java 
b/src/java/org/apache/cassandra/dht/Murmur3Partitioner.java
index fe1c24f..5afa820 100644
--- a/src/java/org/apache/cassandra/dht/Murmur3Partitioner.java
+++ b/src/java/org/apache/cassandra/dht/Murmur3Partitioner.java
@@ -191,7 +191,7 @@ public class Murmur3Partitioner extends 
AbstractPartitioner
 }
 catch (NumberFormatException e)
 {
-throw new IllegalArgumentException(String.format("Invalid 
token for Murmur3Partitioner. Got %s but expected a long value (unsigned 8 
bytes integer)."));
+throw new IllegalArgumentException(String.format("Invalid 
token for Murmur3Partitioner. Got %s but expected a long value (unsigned 8 
bytes integer).", string));
 }
 }
 };



[3/3] git commit: Merge branch 'cassandra-2.0' into trunk

2013-11-18 Thread dbrosius
Merge branch 'cassandra-2.0' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/52cc7efb
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/52cc7efb
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/52cc7efb

Branch: refs/heads/trunk
Commit: 52cc7efb2bcd47285148da85c089d796cb20734a
Parents: 542d9c8 ade99b9
Author: Dave Brosius 
Authored: Mon Nov 18 21:34:33 2013 -0500
Committer: Dave Brosius 
Committed: Mon Nov 18 21:34:33 2013 -0500

--
 src/java/org/apache/cassandra/dht/Murmur3Partitioner.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--




[2/3] git commit: Merge branch 'cassandra-1.2' into cassandra-2.0

2013-11-18 Thread dbrosius
Merge branch 'cassandra-1.2' into cassandra-2.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ade99b91
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ade99b91
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ade99b91

Branch: refs/heads/trunk
Commit: ade99b918f8902b735219f2b2434cca2cccf7698
Parents: fd6 b678035
Author: Dave Brosius 
Authored: Mon Nov 18 21:32:46 2013 -0500
Committer: Dave Brosius 
Committed: Mon Nov 18 21:32:46 2013 -0500

--
 src/java/org/apache/cassandra/dht/Murmur3Partitioner.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ade99b91/src/java/org/apache/cassandra/dht/Murmur3Partitioner.java
--



[1/2] git commit: add missing String.format parameter

2013-11-18 Thread dbrosius
Updated Branches:
  refs/heads/cassandra-2.0 fd6ff -> ade99b918


add missing String.format parameter


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b678035e
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b678035e
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b678035e

Branch: refs/heads/cassandra-2.0
Commit: b678035ed1a9f0046ea16e78205996d8f58c5ecd
Parents: 1ac6019
Author: Dave Brosius 
Authored: Mon Nov 18 21:29:40 2013 -0500
Committer: Dave Brosius 
Committed: Mon Nov 18 21:29:40 2013 -0500

--
 src/java/org/apache/cassandra/dht/Murmur3Partitioner.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/b678035e/src/java/org/apache/cassandra/dht/Murmur3Partitioner.java
--
diff --git a/src/java/org/apache/cassandra/dht/Murmur3Partitioner.java 
b/src/java/org/apache/cassandra/dht/Murmur3Partitioner.java
index fe1c24f..5afa820 100644
--- a/src/java/org/apache/cassandra/dht/Murmur3Partitioner.java
+++ b/src/java/org/apache/cassandra/dht/Murmur3Partitioner.java
@@ -191,7 +191,7 @@ public class Murmur3Partitioner extends 
AbstractPartitioner
 }
 catch (NumberFormatException e)
 {
-throw new IllegalArgumentException(String.format("Invalid 
token for Murmur3Partitioner. Got %s but expected a long value (unsigned 8 
bytes integer)."));
+throw new IllegalArgumentException(String.format("Invalid 
token for Murmur3Partitioner. Got %s but expected a long value (unsigned 8 
bytes integer).", string));
 }
 }
 };



[jira] [Commented] (CASSANDRA-6340) Provide a mechanism for retrieving all replicas

2013-11-18 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13826120#comment-13826120
 ] 

Jonathan Ellis commented on CASSANDRA-6340:
---

Well, you can already do CL.ALL reads with RR disabled, so I assume it must 
actually be more involved than that?

> Provide a mechanism for retrieving all replicas
> ---
>
> Key: CASSANDRA-6340
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6340
> Project: Cassandra
>  Issue Type: Improvement
> Environment: Production 
>Reporter: Ahmed Bashir
>
> In order to facilitate problem diagnosis, there should exist some mechanism 
> to retrieve all copies of specific columns



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6283) Windows 7 data files keept open / can't be deleted after compaction.

2013-11-18 Thread Mikhail Stepura (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13826145#comment-13826145
 ] 

Mikhail Stepura commented on CASSANDRA-6283:


It could be related to 
http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=4831749, which means the 
only way to "fix" it is to set {{disk_access_mode: standard}}

> Windows 7 data files keept open / can't be deleted after compaction.
> 
>
> Key: CASSANDRA-6283
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6283
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: Windows 7 (32) / Java 1.7.0.45
>Reporter: Andreas Schnitzerling
>Priority: Critical
>  Labels: newbie, patch, test
> Fix For: 2.0.3, 2.1
>
> Attachments: screenshot-1.jpg, system.log
>
>
> Files cannot be deleted, patch CASSANDRA-5383 (Win7 deleting problem) doesn't 
> help on Win-7 on Cassandra 2.0.2. Even 2.1 Snapshot is not running. The cause 
> is: Opened file handles seem to be lost and not closed properly. Win 7 
> blames, that another process is still using the file (but its obviously 
> cassandra). Only restart of the server makes the files deleted. But after 
> heavy using (changes) of tables, there are about 24K files in the data folder 
> (instead of 35 after every restart) and Cassandra crashes. I experiminted and 
> I found out, that a finalizer fixes the problem. So after GC the files will 
> be deleted (not optimal, but working fine). It runs now 2 days continously 
> without problem. Possible fix/test:
> I wrote the following finalizer at the end of class 
> org.apache.cassandra.io.util.RandomAccessReader:
>   @Override
>   protected void finalize() throws Throwable {
>   deallocate();
>   super.finalize();
>   }
> Can somebody test / develop / patch it? Thx.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6283) Windows 7 data files keept open / can't be deleted after compaction.

2013-11-18 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13826176#comment-13826176
 ] 

Jonathan Ellis commented on CASSANDRA-6283:
---

That could be, but we try to munmap before deleting.  Is there a bug there?

> Windows 7 data files keept open / can't be deleted after compaction.
> 
>
> Key: CASSANDRA-6283
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6283
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: Windows 7 (32) / Java 1.7.0.45
>Reporter: Andreas Schnitzerling
>Priority: Critical
>  Labels: newbie, patch, test
> Fix For: 2.0.3, 2.1
>
> Attachments: screenshot-1.jpg, system.log
>
>
> Files cannot be deleted, patch CASSANDRA-5383 (Win7 deleting problem) doesn't 
> help on Win-7 on Cassandra 2.0.2. Even 2.1 Snapshot is not running. The cause 
> is: Opened file handles seem to be lost and not closed properly. Win 7 
> blames, that another process is still using the file (but its obviously 
> cassandra). Only restart of the server makes the files deleted. But after 
> heavy using (changes) of tables, there are about 24K files in the data folder 
> (instead of 35 after every restart) and Cassandra crashes. I experiminted and 
> I found out, that a finalizer fixes the problem. So after GC the files will 
> be deleted (not optimal, but working fine). It runs now 2 days continously 
> without problem. Possible fix/test:
> I wrote the following finalizer at the end of class 
> org.apache.cassandra.io.util.RandomAccessReader:
>   @Override
>   protected void finalize() throws Throwable {
>   deallocate();
>   super.finalize();
>   }
> Can somebody test / develop / patch it? Thx.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6340) Provide a mechanism for retrieving all replicas

2013-11-18 Thread Ahmed Bashir (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13826233#comment-13826233
 ] 

Ahmed Bashir commented on CASSANDRA-6340:
-

I'm looking for a way to get all copies of the data, not just one reconciled 
version

> Provide a mechanism for retrieving all replicas
> ---
>
> Key: CASSANDRA-6340
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6340
> Project: Cassandra
>  Issue Type: Improvement
> Environment: Production 
>Reporter: Ahmed Bashir
>
> In order to facilitate problem diagnosis, there should exist some mechanism 
> to retrieve all copies of specific columns



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (CASSANDRA-6374) AssertionError for rows with zero columns

2013-11-18 Thread Anton Gorbunov (JIRA)
Anton Gorbunov created CASSANDRA-6374:
-

 Summary: AssertionError for rows with zero columns
 Key: CASSANDRA-6374
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6374
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Anton Gorbunov


After upgrading from 1.2.5 to 1.2.9 and then to 2.0.2 we've got those 
exceptions:
{code}
ERROR [FlushWriter:1] 2013-11-18 16:14:36,305 CassandraDaemon.java (line 187) 
Exception in thread Thread[FlushWriter:1,5,main]
java.lang.AssertionError
at 
org.apache.cassandra.io.sstable.SSTableWriter.rawAppend(SSTableWriter.java:198)
at 
org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:186)
at 
org.apache.cassandra.db.Memtable$FlushRunnable.writeSortedContents(Memtable.java:360)
at 
org.apache.cassandra.db.Memtable$FlushRunnable.runWith(Memtable.java:315)
at 
org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
at 
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
at java.lang.Thread.run(Thread.java:722)
{code}

Also found similar issue in this thread:
http://www.mail-archive.com/user@cassandra.apache.org/msg32875.html
There Aaron Morton said that its caused by leaving rows with zero columns - 
that's exactly what we do in some CFs (using Thrift & Astyanax).




--
This message was sent by Atlassian JIRA
(v6.1#6144)


git commit: Add missing file

2013-11-18 Thread slebresne
Updated Branches:
  refs/heads/cassandra-1.2 b678035ed -> 582a16eff


Add missing file


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/582a16ef
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/582a16ef
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/582a16ef

Branch: refs/heads/cassandra-1.2
Commit: 582a16eff5fb7f94f936c9d6163f0a526c6ec4e4
Parents: b678035
Author: Sylvain Lebresne 
Authored: Tue Nov 19 08:30:08 2013 +0100
Committer: Sylvain Lebresne 
Committed: Tue Nov 19 08:30:08 2013 +0100

--
 .../cql3/MeasurableForPreparedCache.java| 26 
 1 file changed, 26 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/582a16ef/src/java/org/apache/cassandra/cql3/MeasurableForPreparedCache.java
--
diff --git a/src/java/org/apache/cassandra/cql3/MeasurableForPreparedCache.java 
b/src/java/org/apache/cassandra/cql3/MeasurableForPreparedCache.java
new file mode 100644
index 000..6b3b4b5
--- /dev/null
+++ b/src/java/org/apache/cassandra/cql3/MeasurableForPreparedCache.java
@@ -0,0 +1,26 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.cassandra.cql3;
+
+import org.github.jamm.MemoryMeter;
+
+public interface MeasurableForPreparedCache
+{
+public long measureForPreparedCache(MemoryMeter meter);
+}



[3/3] git commit: Merge branch 'cassandra-2.0' into trunk

2013-11-18 Thread slebresne
Merge branch 'cassandra-2.0' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/88fbdb11
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/88fbdb11
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/88fbdb11

Branch: refs/heads/trunk
Commit: 88fbdb11e7b1c0df46f79d738468221378068d48
Parents: 52cc7ef f651567
Author: Sylvain Lebresne 
Authored: Tue Nov 19 08:32:29 2013 +0100
Committer: Sylvain Lebresne 
Committed: Tue Nov 19 08:32:29 2013 +0100

--

--




[2/3] git commit: Merge branch 'cassandra-1.2' into cassandra-2.0

2013-11-18 Thread slebresne
Merge branch 'cassandra-1.2' into cassandra-2.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f6515673
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f6515673
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f6515673

Branch: refs/heads/trunk
Commit: f65156733a11e05e313d9e0a53edf5bf3b120180
Parents: ade99b9 582a16e
Author: Sylvain Lebresne 
Authored: Tue Nov 19 08:31:53 2013 +0100
Committer: Sylvain Lebresne 
Committed: Tue Nov 19 08:31:53 2013 +0100

--

--




[2/2] git commit: Merge branch 'cassandra-1.2' into cassandra-2.0

2013-11-18 Thread slebresne
Merge branch 'cassandra-1.2' into cassandra-2.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f6515673
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f6515673
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f6515673

Branch: refs/heads/cassandra-2.0
Commit: f65156733a11e05e313d9e0a53edf5bf3b120180
Parents: ade99b9 582a16e
Author: Sylvain Lebresne 
Authored: Tue Nov 19 08:31:53 2013 +0100
Committer: Sylvain Lebresne 
Committed: Tue Nov 19 08:31:53 2013 +0100

--

--




[1/3] git commit: Add missing file

2013-11-18 Thread slebresne
Updated Branches:
  refs/heads/trunk 52cc7efb2 -> 88fbdb11e


Add missing file


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/582a16ef
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/582a16ef
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/582a16ef

Branch: refs/heads/trunk
Commit: 582a16eff5fb7f94f936c9d6163f0a526c6ec4e4
Parents: b678035
Author: Sylvain Lebresne 
Authored: Tue Nov 19 08:30:08 2013 +0100
Committer: Sylvain Lebresne 
Committed: Tue Nov 19 08:30:08 2013 +0100

--
 .../cql3/MeasurableForPreparedCache.java| 26 
 1 file changed, 26 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/582a16ef/src/java/org/apache/cassandra/cql3/MeasurableForPreparedCache.java
--
diff --git a/src/java/org/apache/cassandra/cql3/MeasurableForPreparedCache.java 
b/src/java/org/apache/cassandra/cql3/MeasurableForPreparedCache.java
new file mode 100644
index 000..6b3b4b5
--- /dev/null
+++ b/src/java/org/apache/cassandra/cql3/MeasurableForPreparedCache.java
@@ -0,0 +1,26 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.cassandra.cql3;
+
+import org.github.jamm.MemoryMeter;
+
+public interface MeasurableForPreparedCache
+{
+public long measureForPreparedCache(MemoryMeter meter);
+}



[1/2] git commit: Add missing file

2013-11-18 Thread slebresne
Updated Branches:
  refs/heads/cassandra-2.0 ade99b918 -> f65156733


Add missing file


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/582a16ef
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/582a16ef
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/582a16ef

Branch: refs/heads/cassandra-2.0
Commit: 582a16eff5fb7f94f936c9d6163f0a526c6ec4e4
Parents: b678035
Author: Sylvain Lebresne 
Authored: Tue Nov 19 08:30:08 2013 +0100
Committer: Sylvain Lebresne 
Committed: Tue Nov 19 08:30:08 2013 +0100

--
 .../cql3/MeasurableForPreparedCache.java| 26 
 1 file changed, 26 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/582a16ef/src/java/org/apache/cassandra/cql3/MeasurableForPreparedCache.java
--
diff --git a/src/java/org/apache/cassandra/cql3/MeasurableForPreparedCache.java 
b/src/java/org/apache/cassandra/cql3/MeasurableForPreparedCache.java
new file mode 100644
index 000..6b3b4b5
--- /dev/null
+++ b/src/java/org/apache/cassandra/cql3/MeasurableForPreparedCache.java
@@ -0,0 +1,26 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.cassandra.cql3;
+
+import org.github.jamm.MemoryMeter;
+
+public interface MeasurableForPreparedCache
+{
+public long measureForPreparedCache(MemoryMeter meter);
+}