[jira] [Commented] (CASSANDRA-4405) SELECT FIRST [N] * does not return KEY

2012-07-03 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13405645#comment-13405645
 ] 

Jonathan Ellis commented on CASSANDRA-4405:
---

FIRST doesn't exist in cql3.

 SELECT FIRST [N] * does not return KEY
 --

 Key: CASSANDRA-4405
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4405
 Project: Cassandra
  Issue Type: Bug
  Components: Drivers
Affects Versions: 1.1.1
 Environment: CQL version 1.0.10
 Cassandra version 1.1.1
 cqlsh version 2.2.0
 Ubuntu 11.04
Reporter: Blake Visin
Assignee: paul cannon
  Labels: cql, cqlsh

 cqlsh:ovg> CREATE COLUMNFAMILY 'testing_bug' (KEY text PRIMARY KEY);
 cqlsh:ovg> UPDATE testing_bug SET 'test_col' = 'test_row' where KEY = '1';
 cqlsh:ovg> UPDATE testing_bug SET 'test_col_1' = 'test_row_1' where KEY = '1';
 cqlsh:ovg> UPDATE testing_bug SET 'test_col_2' = 'test_row_2' where KEY = '1';
 cqlsh:ovg> SELECT * FROM testing_bug WHERE KEY = '1';
  KEY | test_col | test_col_1 | test_col_2
 -----+----------+------------+------------
    1 | test_row | test_row_1 | test_row_2
 cqlsh:ovg> SELECT FIRST 1 * FROM testing_bug WHERE KEY = '1';
  test_col
 ----------
  test_row
 See that KEY is not returned in the second result.  This becomes a problem 
 when combined with IN, as we don't know what the row key is.
 cqlsh:ovg> SELECT * FROM testing_bug WHERE KEY IN ('1', '2', '3');
  KEY,1 | test_col,test_row | test_col_1,test_row_1 | test_col_2,test_row_2
  KEY,2
  KEY,3
 This may also be another problem:
 cqlsh:ovg> SELECT FIRST 1 * FROM testing_bug WHERE KEY IN ('1', '2', '3');
  test_col,test_row
 need more than 0 values to unpack

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (CASSANDRA-3647) Support set and map value types in CQL

2012-07-03 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-3647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13405794#comment-13405794
 ] 

Sylvain Lebresne commented on CASSANDRA-3647:
-

I've pushed a v3 at https://github.com/pcmanus/cassandra/commits/3647-3 that is 
rebased and adds one last patch to address some of the remarks. More precisely:

bq. Wouldn't it be a good idea to add a boolean isCollectionType() method 
which by default would return false?

I don't know, actually. That's definitely an option, but on the other hand I 
wonder what it would really get us. I know instanceof has a bad reputation, and 
I certainly agree that we shouldn't overuse it, but I don't think we should 
avoid it at all costs either. In most instanceof usages for CollectionType and 
CompositeType, a cast follows (and in most cases I don't see how to refactor to 
avoid those casts without being uber ugly, though I'm open to suggestions), and 
if you're going to cast, I think testing with instanceof is actually safer than 
using a boolean method.
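The safety argument above can be illustrated with a minimal, hypothetical sketch (the classes below are stand-ins, not Cassandra's real AbstractType/CollectionType): a boolean flag method and the actual class can drift out of sync, while instanceof guarantees exactly the cast it guards.

```java
// Hypothetical minimal types; not Cassandra's real class hierarchy.
abstract class AbstractType {
    // Flag-based style: a caller trusting this flag must still cast, and a
    // subclass could override it in a way that no longer matches its class.
    boolean isCollectionType() { return false; }
}

class CollectionType extends AbstractType {
    @Override boolean isCollectionType() { return true; }
    int arity() { return 2; }
}

class InstanceofVsFlag {
    // instanceof-based style: the test and the cast cannot disagree, which is
    // the safety argument made in the comment above.
    static int arityOf(AbstractType t) {
        if (t instanceof CollectionType)
            return ((CollectionType) t).arity();
        return 0;
    }
}
```

The flag-based alternative would read `if (t.isCollectionType()) { CollectionType ct = (CollectionType) t; ... }`, where a mismatched override would surface only as a ClassCastException at the cast site.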

bq. when to append(',') could be distinguished with a condition instead of code 
duplication

Changed.

{quote}
* *Value* I think we should add a way to distinguish between different types of 
values without using instanceof on all of them
* *UpdateStatement* starting from line 199 - only difference between 
instanceof cases is pre-validation which could be added to the separate 
method in Value
{quote}

If I understand correctly those are related. I've refactored Value.java and 
UpdateStatement a bit to merge the code dealing with the different literals. It 
does not eliminate *all* the instanceof and casts, but I think the remaining 
ones are ok (that is, I don't see a clearly better way to do the same thing 
without the instanceof).

bq. *UpdateStatement* mutationForKey method - do we need to enforce using 
group as a last parameter all the time, even when we set it to null ?

I don't understand what you are suggesting.

bq. definition should be definitions

Fixed.

bq. *CollectionType* line 89 *ListType* line 147 argument should be 
arguments

I didn't find any instance of argument on those lines. Maybe they were in the 
first patches but removed by later ones?


 Support set and map value types in CQL
 --

 Key: CASSANDRA-3647
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3647
 Project: Cassandra
  Issue Type: New Feature
  Components: API, Core
Reporter: Jonathan Ellis
Assignee: Sylvain Lebresne
  Labels: cql
 Fix For: 1.2


 Composite columns introduce the ability to have arbitrarily nested data in a 
 Cassandra row.  We should expose this through CQL.





[jira] [Created] (CASSANDRA-4406) Update stress for CQL3

2012-07-03 Thread Sylvain Lebresne (JIRA)
Sylvain Lebresne created CASSANDRA-4406:
---

 Summary: Update stress for CQL3
 Key: CASSANDRA-4406
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4406
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Affects Versions: 1.1.0
Reporter: Sylvain Lebresne
 Fix For: 1.2


Stress does not support CQL3. We should add support for it so that:
# we can benchmark CQL3
# we can benchmark CASSANDRA-2478





[jira] [Commented] (CASSANDRA-2698) Instrument repair to be able to assess its efficiency (precision)

2012-07-03 Thread Jason Wee (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-2698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13405822#comment-13405822
 ] 

Jason Wee commented on CASSANDRA-2698:
--

Hello, I've been using Cassandra and developing a client application that 
interfaces (CRUD) with it. It has been great software, and I would like to 
contribute back. I've read 
http://wiki.apache.org/cassandra/HowToContribute 
which linked me to 
https://issues.apache.org/jira/secure/IssueNavigator.jspa?reset=true&jqlQuery=project+%3D+12310865+AND+labels+%3D+lhf+AND+status+!%3D+resolved
 . Since this is all fresh to me, I'm not sure if this is the right place to 
start contributing, and I hope you can advise on where and how I should 
contribute.
Thank you.

 Instrument repair to be able to assess its efficiency (precision)
 --

 Key: CASSANDRA-2698
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2698
 Project: Cassandra
  Issue Type: Improvement
Reporter: Sylvain Lebresne
Priority: Minor
  Labels: lhf

 Some reports indicate that repair sometimes transfers huge amounts of data. 
 One hypothesis is that the merkle tree precision may deteriorate too much at 
 some data size. To check this hypothesis, it would be reasonable to gather 
 statistics during merkle tree building on how many rows each merkle tree 
 range accounts for (and the size that this represents). It is probably an 
 interesting statistic to have anyway.
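A minimal sketch of the kind of instrumentation the description asks for, assuming a hypothetical tracker fed once per merkle tree range (the names are illustrative; this is not Cassandra's Validator code):

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative tracker: counts rows and bytes attributed to each merkle tree
// range, so precision loss (many rows per range) becomes visible.
class RangeStats {
    static class RangeStat {
        long rows, bytes;
    }

    final List<RangeStat> perRange = new ArrayList<>();

    // Call when the tree builder moves to the next range.
    void beginRange() { perRange.add(new RangeStat()); }

    // Call once per row hashed into the current range.
    void addRow(long serializedSize) {
        RangeStat s = perRange.get(perRange.size() - 1);
        s.rows++;
        s.bytes += serializedSize;
    }

    // A large average suggests each tree hash covers too much data, so any
    // mismatch forces a large transfer during repair.
    double avgRowsPerRange() {
        long total = 0;
        for (RangeStat s : perRange) total += s.rows;
        return perRange.isEmpty() ? 0 : (double) total / perRange.size();
    }
}
```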





[jira] [Comment Edited] (CASSANDRA-2698) Instrument repair to be able to assess its efficiency (precision)

2012-07-03 Thread Jason Wee (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-2698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13405822#comment-13405822
 ] 

Jason Wee edited comment on CASSANDRA-2698 at 7/3/12 11:52 AM:
---

Hello, I've been using Cassandra and developing a client (Hector) application 
that interfaces (CRUD) with it. It has been great software, and I would like 
to contribute back. I've read 
http://wiki.apache.org/cassandra/HowToContribute 
which linked me to 
https://issues.apache.org/jira/secure/IssueNavigator.jspa?reset=true&jqlQuery=project+%3D+12310865+AND+labels+%3D+lhf+AND+status+!%3D+resolved
 . Since this is all fresh to me, I'm not sure if this is the right place to 
start contributing, and I hope you can advise on where and how I should 
contribute.
Thank you.

  was (Author: jasonwee):
Hello, I've been using cassandra and develop client application to 
interfacing (crud) to the cassandra. It has been a great software and I would 
like to contribute back to cassandra and I've read 
http://wiki.apache.org/cassandra/HowToContribute 
which link me to 
https://issues.apache.org/jira/secure/IssueNavigator.jspa?reset=true&jqlQuery=project+%3D+12310865+AND+labels+%3D+lhf+AND+status+!%3D+resolved
 . Since this is a fresh for me, I'm not sure if this is a right place to start 
contributing and I hope you can response where and how should I able to 
contribute.
Thank you.
  
 Instrument repair to be able to assess its efficiency (precision)
 --

 Key: CASSANDRA-2698
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2698
 Project: Cassandra
  Issue Type: Improvement
Reporter: Sylvain Lebresne
Priority: Minor
  Labels: lhf

 Some reports indicate that repair sometimes transfers huge amounts of data. 
 One hypothesis is that the merkle tree precision may deteriorate too much at 
 some data size. To check this hypothesis, it would be reasonable to gather 
 statistics during merkle tree building on how many rows each merkle tree 
 range accounts for (and the size that this represents). It is probably an 
 interesting statistic to have anyway.





[jira] [Updated] (CASSANDRA-3763) compactionstats throws ArithmeticException: / by zero

2012-07-03 Thread Zenek Kraweznik (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3763?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zenek Kraweznik updated CASSANDRA-3763:
---

Affects Version/s: 1.1.1
   1.1.2

 compactionstats throws ArithmeticException: / by zero
 -

 Key: CASSANDRA-3763
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3763
 Project: Cassandra
  Issue Type: Bug
  Components: Core, Tools
Affects Versions: 1.0.7, 1.0.8, 1.0.9, 1.0.10, 1.1.0, 1.1.1, 1.1.2
 Environment: debian linux - openvz kernel, oracle java 1.6.0.26
Reporter: Zenek Kraweznik
Priority: Trivial

 compactionstats looks like this:
 # nodetool -h localhost compactionstats
 Exception in thread "main" java.lang.ArithmeticException: / by zero
 at 
 org.apache.cassandra.db.compaction.LeveledManifest.getEstimatedTasks(LeveledManifest.java:435)
 at 
 org.apache.cassandra.db.compaction.LeveledCompactionStrategy.getEstimatedRemainingTasks(LeveledCompactionStrategy.java:128)
 at 
 org.apache.cassandra.db.compaction.CompactionManager.getPendingTasks(CompactionManager.java:1060)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at 
 com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:93)
 at 
 com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:27)
 at 
 com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:208)
 at 
 com.sun.jmx.mbeanserver.PerInterface.getAttribute(PerInterface.java:65)
 at 
 com.sun.jmx.mbeanserver.MBeanSupport.getAttribute(MBeanSupport.java:216)
 at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:666)
 at 
 com.sun.jmx.mbeanserver.JmxMBeanServer.getAttribute(JmxMBeanServer.java:638)
 at 
 javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1404)
 at 
 javax.management.remote.rmi.RMIConnectionImpl.access$200(RMIConnectionImpl.java:72)
 at 
 javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1265)
 at 
 javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1360)
 at 
 javax.management.remote.rmi.RMIConnectionImpl.getAttribute(RMIConnectionImpl.java:600)
 at sun.reflect.GeneratedMethodAccessor10.invoke(Unknown Source)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:305)
 at sun.rmi.transport.Transport$1.run(Transport.java:159)
 at java.security.AccessController.doPrivileged(Native Method)
 at sun.rmi.transport.Transport.serviceCall(Transport.java:155)
 at 
 sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:535)
 at 
 sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:790)
 at 
 sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:649)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
 at java.lang.Thread.run(Thread.java:662)
 #
 nodetool is working fine in other actions:
 # nodetool -h localhost netstats
 Mode: NORMAL
 Not sending any streams.
 Not receiving any streams.
 Pool Name                    Active   Pending      Completed
 Commands                        n/a         0              2
 Responses                       n/a         0           1810
 #
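For reference, a minimal sketch of the guard pattern that avoids this kind of division by zero, assuming the task estimate divides remaining work by some per-task size that can legitimately be zero, e.g. for an empty or freshly created manifest (the method and parameters here are illustrative, not LeveledManifest's actual code):

```java
// Illustrative guard: short-circuit before dividing instead of letting an
// ArithmeticException propagate out of a JMX getter as seen above.
class EstimatedTasks {
    static long estimateTasks(long remainingBytes, long bytesPerTask) {
        if (bytesPerTask <= 0)
            return 0; // nothing to compact, or manifest not initialized yet
        // Ceiling division: partially filled tasks still count as one task.
        return (remainingBytes + bytesPerTask - 1) / bytesPerTask;
    }
}
```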





[jira] [Commented] (CASSANDRA-3881) reduce computational complexity of processing topology changes

2012-07-03 Thread Sam Overton (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-3881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13405917#comment-13405917
 ] 

Sam Overton commented on CASSANDRA-3881:


It's all merged in now, so the patch links in the ticket description are up to 
date.

There was one more place in some tests where TMD needed to be cloned: 
https://github.com/acunu/cassandra/commit/08620c55a77ba1dc257b853610386297ab0c379b


 reduce computational complexity of processing topology changes
 --

 Key: CASSANDRA-3881
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3881
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Peter Schuller
Assignee: Sam Overton
  Labels: vnodes

 This constitutes follow-up work from CASSANDRA-3831 where a partial 
 improvement was committed, but the fundamental issue was not fixed. The 
 maximum practical cluster size was significantly improved, but further work 
 is expected to be necessary as cluster sizes grow.
 _Edit0: Appended patch information._
 h3. Patches
 ||Compare||Raw diff||Description||
 |[00_snitch_topology|https://github.com/acunu/cassandra/compare/refs/top-bases/p/3881/00_snitch_topology...p/3881/00_snitch_topology]|[00_snitch_topology.patch|https://github.com/acunu/cassandra/compare/refs/top-bases/p/3881/00_snitch_topology...p/3881/00_snitch_topology.diff]|Adds
  some functionality to TokenMetadata to track which endpoints and racks exist 
 in a DC.|
 |[01_calc_natural_endpoints|https://github.com/acunu/cassandra/compare/refs/top-bases/p/3881/01_calc_natural_endpoints...p/3881/01_calc_natural_endpoints]|[01_calc_natural_endpoints.patch|https://github.com/acunu/cassandra/compare/refs/top-bases/p/3881/01_calc_natural_endpoints...p/3881/01_calc_natural_endpoints.diff]|Rewritten
  O(logN) implementation of calculateNaturalEndpoints using the topology 
 information from the tokenMetadata.|
 
 _Note: These are branches managed with TopGit. If you are applying the patch 
 output manually, you will either need to filter the TopGit metadata files 
 (i.e. {{wget -O - url | filterdiff -x*.topdeps -x*.topmsg | patch -p1}}), 
 or remove them afterward ({{rm .topmsg .topdeps}})._





[jira] [Commented] (CASSANDRA-4337) Data insertion fails because of commitlog rename failure

2012-07-03 Thread Patrycjusz Matuszak (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13405929#comment-13405929
 ] 

Patrycjusz Matuszak commented on CASSANDRA-4337:


I've tested with attached patch and couldn't reproduce this bug.

 Data insertion fails because of commitlog rename failure
 

 Key: CASSANDRA-4337
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4337
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.1.1
 Environment: - Node 1:
Hardware: Intel Xeon 2.83 GHz (4 cores), 24GB RAM, Dell VIRTUAL DISK SCSI 
 500GB
System: Windows Server 2008 R2 x64
Java version: 7 update 4 x64
 - Node 2:
 Hardware: Intel Xeon 2.83 GHz (4 cores), 8GB RAM, Dell VIRTUAL DISK SCSI 
 500GB
 System: Windows Server 2008 R2 x64
   Java version: 7 update 4 x64
Reporter: Patrycjusz Matuszak
Assignee: Jonathan Ellis
  Labels: commitlog
 Fix For: 1.1.3

 Attachments: 4337-poc.txt, system-node1-stress-test.log, 
 system-node1.log, system-node2-stress-test.log, system-node2.log


 h3. Configuration
 Cassandra server configuration:
 {noformat}heap size: 4 GB
 seed_provider:
 - class_name: org.apache.cassandra.locator.SimpleSeedProvider
   parameters:
   - seeds: xxx.xxx.xxx.10,xxx.xxx.xxx.11
 listen_address: xxx.xxx.xxx.10
 rpc_address: 0.0.0.0
 rpc_port: 9160
 rpc_timeout_in_ms: 2
 endpoint_snitch: PropertyFileSnitch{noformat}
 cassandra-topology.properties
 {noformat}xxx.xxx.xxx.10=datacenter1:rack1
 xxx.xxx.xxx.11=datacenter1:rack1
 default=datacenter1:rack1{noformat}
 Ring configuration:
 {noformat}Address         DC          Rack   Status State  Load      Effective-Ownership Token
                                                                                         85070591730234615865843651857942052864
 xxx.xxx.xxx.10  datacenter1 rack1  Up     Normal 23,11 kB  100,00%             0
 xxx.xxx.xxx.11  datacenter1 rack1  Up     Normal 23,25 kB  100,00%             85070591730234615865843651857942052864{noformat}
 h3. Problem
 I created a keyspace and column family using CLI commands:
 {noformat}create keyspace testks with placement_strategy = 
 'org.apache.cassandra.locator.NetworkTopologyStrategy' and strategy_options = 
 {datacenter1:2};
 use testks;
 create column family testcf;{noformat}
 Then I started my Java application, which inserts 50 000 000 rows into the 
 created column family using the Hector client. The client is connected to 
 node 1.
 After about 30 seconds (160 000 rows inserted), the Cassandra server on node 
 1 throws an exception:
 {noformat}ERROR [COMMIT-LOG-ALLOCATOR] 2012-06-13 10:26:38,393 
 AbstractCassandraDaemon.java (line 134) Exception in thread 
 Thread[COMMIT-LOG-ALLOCATOR,5,main]
 java.io.IOError: java.io.IOException: Rename from 
 c:\apache-cassandra\storage\commitlog\CommitLog-7345742389552.log to 
 7475933520374 failed
   at 
 org.apache.cassandra.db.commitlog.CommitLogSegment.init(CommitLogSegment.java:127)
   at 
 org.apache.cassandra.db.commitlog.CommitLogSegment.recycle(CommitLogSegment.java:204)
   at 
 org.apache.cassandra.db.commitlog.CommitLogAllocator$2.run(CommitLogAllocator.java:166)
   at 
 org.apache.cassandra.db.commitlog.CommitLogAllocator$1.runMayThrow(CommitLogAllocator.java:95)
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:30)
   at java.lang.Thread.run(Thread.java:722)
 Caused by: java.io.IOException: Rename from 
 c:\apache-cassandra\storage\commitlog\CommitLog-7345742389552.log to 
 7475933520374 failed
   at 
 org.apache.cassandra.db.commitlog.CommitLogSegment.init(CommitLogSegment.java:105)
   ... 5 more{noformat}
   
 Then, a few seconds later, the Cassandra server on node 2 throws the same exception:
 {noformat}ERROR [COMMIT-LOG-ALLOCATOR] 2012-06-14 10:26:44,005 
 AbstractCassandraDaemon.java (line 134) Exception in thread 
 Thread[COMMIT-LOG-ALLOCATOR,5,main]
 java.io.IOError: java.io.IOException: Rename from 
 c:\apache-cassandra\storage\commitlog\CommitLog-7320337904033.log to 
 7437675489307 failed
   at 
 org.apache.cassandra.db.commitlog.CommitLogSegment.init(CommitLogSegment.java:127)
   at 
 org.apache.cassandra.db.commitlog.CommitLogSegment.recycle(CommitLogSegment.java:204)
   at 
 org.apache.cassandra.db.commitlog.CommitLogAllocator$2.run(CommitLogAllocator.java:166)
   at 
 org.apache.cassandra.db.commitlog.CommitLogAllocator$1.runMayThrow(CommitLogAllocator.java:95)
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:30)
   at java.lang.Thread.run(Unknown Source)
 Caused by: java.io.IOException: Rename from 
 

[jira] [Commented] (CASSANDRA-2478) Custom CQL protocol/transport

2012-07-03 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-2478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13405935#comment-13405935
 ] 

Sylvain Lebresne commented on CASSANDRA-2478:
-

bq. All of protocol related classes are placed under 
org.apache.cassandra.cql3.transport, but I think it'd be better to keep them 
under org.apache.cassandra.transport.

Makes sense.

bq. Native transport service starts by default along with thrift service, but 
isn't it better to turn off by default to indicate for developing client only?

I agree. And in any case, most people won't want to run both servers at the 
same time anyway, so I've added flags for both thrift and the new native server 
in the config file to choose whether to start them. You can still override 
those with startup flags, but I believe most of the time the config setting 
will be more convenient. And the default is to not start the native transport 
while it's still beta.

I've rebased the branch and added the two changes above in 
https://github.com/pcmanus/cassandra/commits/2478-3. I also made a small 
modification due to a remark made by Norman on github. Talking of that, 
@Norman, was that your only remark, or are you just not done looking at this?

bq. And not necessary at this time, but it is nice to have unit test like 
CliTest, based on debug client.

Not sure what you mean by that.


 Custom CQL protocol/transport
 -

 Key: CASSANDRA-2478
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2478
 Project: Cassandra
  Issue Type: New Feature
  Components: API, Core
Reporter: Eric Evans
Assignee: Sylvain Lebresne
Priority: Minor
  Labels: cql
 Attachments: cql_binary_protocol, cql_binary_protocol-v2


 A custom wire protocol would give us the flexibility to optimize for our 
 specific use-cases, and eliminate a troublesome dependency (I'm referring to 
 Thrift, but none of the others would be significantly better).  Additionally, 
 RPC is a bad fit here, and we'd do better to move in the direction of something 
 that natively supports streaming.
 I don't think this is as daunting as it might seem initially.  Utilizing an 
 existing server framework like Netty, combined with some copy-and-paste of 
 bits from other FLOSS projects would probably get us 80% of the way there.





[jira] [Commented] (CASSANDRA-3647) Support set and map value types in CQL

2012-07-03 Thread Pavel Yaskevich (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-3647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13405943#comment-13405943
 ] 

Pavel Yaskevich commented on CASSANDRA-3647:


bq. I don't know, actually. That's definitely an option, but on the other hand 
I wonder what it would really get us. I know instanceof has a bad reputation, 
and I certainly agree that we shouldn't overuse it, but I don't think we should 
avoid it at all costs either. In most instanceof usages for CollectionType and 
CompositeType, a cast follows (and in most cases I don't see how to refactor to 
avoid those casts without being uber ugly, though I'm open to suggestions), and 
if you're going to cast, I think testing with instanceof is actually safer than 
using a boolean method.

This is the main problem in my opinion: we need casts/instanceof checks, which 
is a chronic problem of the type hierarchy of the o.a.c.db.marshal package 
related to the Composite and Collection types. I think we should reflect their 
most commonly used functionality in AbstractType.

bq. If I understand correctly those are related. I've refactored Value.java and 
UpdateStatement a bit to merge the code dealing with the different literals. It 
does not eliminate all the instanceof and casts, but I think the remaining ones 
are ok (that is, I don't see a clearly better way to do the same thing without 
the instanceof).

I like what you did there. How about we go further and move 
validateType(CFDefinition.Name) and constructionFunction() to Value, and remove 
(or rename Value to) Literal, since all of the classes implement only that one 
interface? It would be something like Literal.{List, Set, Map}, and you would 
be able to assert instanceof Literal in UpdateStatement...
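A minimal sketch of the shape being suggested here, with illustrative names and methods (not the actual Cassandra classes): a single Literal interface with nested implementations, so that callers like UpdateStatement can program against Literal alone.

```java
import java.util.Arrays;

// Illustrative only: one interface, nested per-collection implementations,
// giving the Literal.{List, Set, Map} naming proposed in the comment above.
interface Literal {
    String asCql();

    final class List implements Literal {
        final java.util.List<String> elems;
        List(String... e) { elems = Arrays.asList(e); }
        public String asCql() { return "[" + String.join(", ", elems) + "]"; }
    }

    final class Set implements Literal {
        final java.util.List<String> elems;
        Set(String... e) { elems = Arrays.asList(e); }
        public String asCql() { return "{" + String.join(", ", elems) + "}"; }
    }
}
```

With this arrangement, a statement class can accept any `Literal` and assert `value instanceof Literal` once, instead of branching per concrete class.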

bq. definition should be definitions

There are a couple of the same typos left in UpdateStatement - lines _341_, 
_349_ and _373_.

bq. UpdateStatement mutationForKey method - do we need to enforce using 
group as a last parameter all the time, even when we set it to null ?

That was fixed in the last commits, so no problem.

bq. CollectionType line 89 ListType line 147 argument should be arguments

That seems to have been removed too; sorry, I saw them in the first commits. 



 Support set and map value types in CQL
 --

 Key: CASSANDRA-3647
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3647
 Project: Cassandra
  Issue Type: New Feature
  Components: API, Core
Reporter: Jonathan Ellis
Assignee: Sylvain Lebresne
  Labels: cql
 Fix For: 1.2


 Composite columns introduce the ability to have arbitrarily nested data in a 
 Cassandra row.  We should expose this through CQL.





[1/3] git commit: merge from 1.1

2012-07-03 Thread jbellis
Updated Branches:
  refs/heads/cassandra-1.1 867478493 -> 67dec69f5
  refs/heads/trunk 76ada1106 -> 602e383d6


merge from 1.1


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/602e383d
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/602e383d
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/602e383d

Branch: refs/heads/trunk
Commit: 602e383d670f1f2a7dd6c220482b6dc6ba57ba4b
Parents: 76ada11 67dec69
Author: Jonathan Ellis jbel...@apache.org
Authored: Tue Jul 3 11:49:34 2012 -0500
Committer: Jonathan Ellis jbel...@apache.org
Committed: Tue Jul 3 11:49:34 2012 -0500

--
 .../cassandra/db/AbstractColumnContainer.java  |8 ++-
 .../db/AbstractThreadUnsafeSortedColumns.java  |6 +++
 .../apache/cassandra/db/AtomicSortedColumns.java   |   31 +-
 .../org/apache/cassandra/db/ISortedColumns.java|7 +++
 src/java/org/apache/cassandra/db/Memtable.java |   24 +--
 5 files changed, 57 insertions(+), 19 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/602e383d/src/java/org/apache/cassandra/db/AbstractColumnContainer.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/602e383d/src/java/org/apache/cassandra/db/AbstractThreadUnsafeSortedColumns.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/602e383d/src/java/org/apache/cassandra/db/AtomicSortedColumns.java
--
diff --cc src/java/org/apache/cassandra/db/AtomicSortedColumns.java
index a85b41e,9cb44d2..c8f5f3a
--- a/src/java/org/apache/cassandra/db/AtomicSortedColumns.java
+++ b/src/java/org/apache/cassandra/db/AtomicSortedColumns.java
@@@ -172,8 -176,11 +179,9 @@@ public class AtomicSortedColumns implem
  main_loop:
  do
  {
+ sizeDelta = 0;
  current = ref.get();
 -DeletionInfo newDelInfo = current.deletionInfo;
 -if (newDelInfo.markedForDeleteAt > 
cm.getDeletionInfo().markedForDeleteAt)
 -newDelInfo = cm.getDeletionInfo();
 +DeletionInfo newDelInfo = 
current.deletionInfo.add(cm.getDeletionInfo());
  modified = new Holder(current.map.clone(), newDelInfo);
  
  for (IColumn column : cm.getSortedColumns())
@@@ -329,12 -343,22 +339,22 @@@
  {
  ByteBuffer name = column.name();
  IColumn oldColumn;
- while ((oldColumn = map.putIfAbsent(name, column)) != null)
+ long sizeDelta = 0;
+ while (true)
  {
+ oldColumn = map.putIfAbsent(name, column);
+ if (oldColumn == null)
+ {
 -sizeDelta += column.serializedSize();
++sizeDelta += column.dataSize();
+ break;
+ }
+ 
  if (oldColumn instanceof SuperColumn)
  {
  assert column instanceof SuperColumn;
 -long previousSize = oldColumn.serializedSize();
++long previousSize = oldColumn.dataSize();
  ((SuperColumn) oldColumn).putColumn((SuperColumn)column, 
allocator);
 -sizeDelta += oldColumn.serializedSize() - previousSize;
++sizeDelta += oldColumn.dataSize() - previousSize;
  break;  // Delegated to SuperColumn
  }
  else
@@@ -342,7 -366,10 +362,10 @@@
  // calculate reconciled col from old (existing) col and 
new col
  IColumn reconciledColumn = column.reconcile(oldColumn, 
allocator);
  if (map.replace(name, oldColumn, reconciledColumn))
+ {
 -sizeDelta += reconciledColumn.serializedSize() - 
oldColumn.serializedSize();
++sizeDelta += reconciledColumn.dataSize() - 
oldColumn.dataSize();
  break;
+ }
  
  // We failed to replace column due to a concurrent update 
or a concurrent removal. Keep trying.
  // (Currently, concurrent removal should not happen (only 
updates), but let us support that anyway.)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/602e383d/src/java/org/apache/cassandra/db/ISortedColumns.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/602e383d/src/java/org/apache/cassandra/db/Memtable.java
--
diff --cc 

[2/3] git commit: use data size ratio in liveRatio instead of live size : serialized throughput patch by jbellis; reviewed by slebresne for CASSANDRA-4399

2012-07-03 Thread jbellis
use data size ratio in liveRatio instead of live size : serialized throughput
patch by jbellis; reviewed by slebresne for CASSANDRA-4399


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/67dec69f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/67dec69f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/67dec69f

Branch: refs/heads/trunk
Commit: 67dec69f53d2bfd3818fea4ede40e5d5a6b2356b
Parents: 8674784
Author: Jonathan Ellis jbel...@apache.org
Authored: Mon Jul 2 01:40:38 2012 -0500
Committer: Jonathan Ellis jbel...@apache.org
Committed: Tue Jul 3 11:44:46 2012 -0500

--
 .../cassandra/db/AbstractColumnContainer.java  |8 ++-
 .../db/AbstractThreadUnsafeSortedColumns.java  |6 +++
 .../apache/cassandra/db/AtomicSortedColumns.java   |   31 +-
 .../org/apache/cassandra/db/ISortedColumns.java|7 +++
 src/java/org/apache/cassandra/db/Memtable.java |   24 +--
 5 files changed, 57 insertions(+), 19 deletions(-)
--
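The commit message describes switching liveRatio's denominator: heap consumption is now compared against accumulated data size rather than serialized write throughput. A toy model of that accounting (class, fields, and the 10.0 starting default are our assumptions for illustration, not Cassandra's `Memtable` internals):

```java
public class LiveRatioSketch {
    private long dataSize; // logical bytes of column data added (sum of dataSize deltas)
    private long heapSize; // heap bytes measured for holding that data

    public void add(long dataDelta, long heapDelta) {
        dataSize += dataDelta;
        heapSize += heapDelta;
    }

    // Ratio of heap used per byte of *data size*, per the commit,
    // instead of per byte of serialized throughput.
    public double liveRatio() {
        return dataSize == 0 ? 10.0 : (double) heapSize / dataSize; // 10.0: assumed conservative default
    }

    // Heap a memtable holding futureDataSize bytes of data is expected to occupy.
    public long estimatedHeap(long futureDataSize) {
        return (long) (futureDataSize * liveRatio());
    }
}
```

The point of the change is that both numerator and denominator are now measured over the same in-memory representation, so the ratio no longer drifts when serialization overhead differs from heap overhead.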


http://git-wip-us.apache.org/repos/asf/cassandra/blob/67dec69f/src/java/org/apache/cassandra/db/AbstractColumnContainer.java
--
diff --git a/src/java/org/apache/cassandra/db/AbstractColumnContainer.java 
b/src/java/org/apache/cassandra/db/AbstractColumnContainer.java
index c7922b1..c35c63c 100644
--- a/src/java/org/apache/cassandra/db/AbstractColumnContainer.java
+++ b/src/java/org/apache/cassandra/db/AbstractColumnContainer.java
@@ -84,9 +84,11 @@ public abstract class AbstractColumnContainer implements 
IColumnContainer, IIter
 columns.maybeResetDeletionTimes(gcBefore);
 }
 
-/**
- * We need to go through each column in the column container and resolve 
it before adding
- */
+public long addAllWithSizeDelta(AbstractColumnContainer cc, Allocator allocator, Function<IColumn, IColumn> transformation)
+{
+return columns.addAllWithSizeDelta(cc.columns, allocator, transformation);
+}
+
 public void addAll(AbstractColumnContainer cc, Allocator allocator, Function<IColumn, IColumn> transformation)
 {
 columns.addAll(cc.columns, allocator, transformation);

http://git-wip-us.apache.org/repos/asf/cassandra/blob/67dec69f/src/java/org/apache/cassandra/db/AbstractThreadUnsafeSortedColumns.java
--
diff --git 
a/src/java/org/apache/cassandra/db/AbstractThreadUnsafeSortedColumns.java 
b/src/java/org/apache/cassandra/db/AbstractThreadUnsafeSortedColumns.java
index b09b5ee..1360336 100644
--- a/src/java/org/apache/cassandra/db/AbstractThreadUnsafeSortedColumns.java
+++ b/src/java/org/apache/cassandra/db/AbstractThreadUnsafeSortedColumns.java
@@ -93,6 +93,12 @@ public abstract class AbstractThreadUnsafeSortedColumns 
implements ISortedColumn
 // having to care about the deletion infos
 protected abstract void addAllColumns(ISortedColumns columns, Allocator allocator, Function<IColumn, IColumn> transformation);
 
+public long addAllWithSizeDelta(ISortedColumns cm, Allocator allocator, Function<IColumn, IColumn> transformation)
+{
+// sizeDelta is only needed by memtable updates which should not be 
using thread-unsafe containers
+throw new UnsupportedOperationException();
+}
+
 public void addAll(ISortedColumns columns, Allocator allocator, Function<IColumn, IColumn> transformation)
 {
 addAllColumns(columns, allocator, transformation);

http://git-wip-us.apache.org/repos/asf/cassandra/blob/67dec69f/src/java/org/apache/cassandra/db/AtomicSortedColumns.java
--
diff --git a/src/java/org/apache/cassandra/db/AtomicSortedColumns.java 
b/src/java/org/apache/cassandra/db/AtomicSortedColumns.java
index 5fdc0f6..9cb44d2 100644
--- a/src/java/org/apache/cassandra/db/AtomicSortedColumns.java
+++ b/src/java/org/apache/cassandra/db/AtomicSortedColumns.java
@@ -154,6 +154,11 @@ public class AtomicSortedColumns implements ISortedColumns
 
 public void addAll(ISortedColumns cm, Allocator allocator, Function<IColumn, IColumn> transformation)
 {
+addAllWithSizeDelta(cm, allocator, transformation);
+}
+
+public long addAllWithSizeDelta(ISortedColumns cm, Allocator allocator, Function<IColumn, IColumn> transformation)
+{
 /*
  * This operation needs atomicity and isolation. To that end, we
  * add the new column to a copy of the map (a cheap O(1) snapTree
@@ -166,9 +171,12 @@ public class AtomicSortedColumns implements ISortedColumns
  * we bail early, avoiding unnecessary work if possible.
  */
 Holder current, modified;
+long sizeDelta;
+
 main_loop:
 do
 

[3/3] git commit: use data size ratio in liveRatio instead of live size : serialized throughput patch by jbellis; reviewed by slebresne for CASSANDRA-4399

2012-07-03 Thread jbellis
use data size ratio in liveRatio instead of live size : serialized throughput
patch by jbellis; reviewed by slebresne for CASSANDRA-4399


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/67dec69f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/67dec69f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/67dec69f

Branch: refs/heads/cassandra-1.1
Commit: 67dec69f53d2bfd3818fea4ede40e5d5a6b2356b
Parents: 8674784
Author: Jonathan Ellis jbel...@apache.org
Authored: Mon Jul 2 01:40:38 2012 -0500
Committer: Jonathan Ellis jbel...@apache.org
Committed: Tue Jul 3 11:44:46 2012 -0500

--
 .../cassandra/db/AbstractColumnContainer.java  |8 ++-
 .../db/AbstractThreadUnsafeSortedColumns.java  |6 +++
 .../apache/cassandra/db/AtomicSortedColumns.java   |   31 +-
 .../org/apache/cassandra/db/ISortedColumns.java|7 +++
 src/java/org/apache/cassandra/db/Memtable.java |   24 +--
 5 files changed, 57 insertions(+), 19 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/67dec69f/src/java/org/apache/cassandra/db/AbstractColumnContainer.java
--
diff --git a/src/java/org/apache/cassandra/db/AbstractColumnContainer.java 
b/src/java/org/apache/cassandra/db/AbstractColumnContainer.java
index c7922b1..c35c63c 100644
--- a/src/java/org/apache/cassandra/db/AbstractColumnContainer.java
+++ b/src/java/org/apache/cassandra/db/AbstractColumnContainer.java
@@ -84,9 +84,11 @@ public abstract class AbstractColumnContainer implements 
IColumnContainer, IIter
 columns.maybeResetDeletionTimes(gcBefore);
 }
 
-/**
- * We need to go through each column in the column container and resolve 
it before adding
- */
+public long addAllWithSizeDelta(AbstractColumnContainer cc, Allocator allocator, Function<IColumn, IColumn> transformation)
+{
+return columns.addAllWithSizeDelta(cc.columns, allocator, transformation);
+}
+
 public void addAll(AbstractColumnContainer cc, Allocator allocator, Function<IColumn, IColumn> transformation)
 {
 columns.addAll(cc.columns, allocator, transformation);

http://git-wip-us.apache.org/repos/asf/cassandra/blob/67dec69f/src/java/org/apache/cassandra/db/AbstractThreadUnsafeSortedColumns.java
--
diff --git 
a/src/java/org/apache/cassandra/db/AbstractThreadUnsafeSortedColumns.java 
b/src/java/org/apache/cassandra/db/AbstractThreadUnsafeSortedColumns.java
index b09b5ee..1360336 100644
--- a/src/java/org/apache/cassandra/db/AbstractThreadUnsafeSortedColumns.java
+++ b/src/java/org/apache/cassandra/db/AbstractThreadUnsafeSortedColumns.java
@@ -93,6 +93,12 @@ public abstract class AbstractThreadUnsafeSortedColumns 
implements ISortedColumn
 // having to care about the deletion infos
 protected abstract void addAllColumns(ISortedColumns columns, Allocator allocator, Function<IColumn, IColumn> transformation);
 
+public long addAllWithSizeDelta(ISortedColumns cm, Allocator allocator, Function<IColumn, IColumn> transformation)
+{
+// sizeDelta is only needed by memtable updates which should not be 
using thread-unsafe containers
+throw new UnsupportedOperationException();
+}
+
 public void addAll(ISortedColumns columns, Allocator allocator, Function<IColumn, IColumn> transformation)
 {
 addAllColumns(columns, allocator, transformation);

http://git-wip-us.apache.org/repos/asf/cassandra/blob/67dec69f/src/java/org/apache/cassandra/db/AtomicSortedColumns.java
--
diff --git a/src/java/org/apache/cassandra/db/AtomicSortedColumns.java 
b/src/java/org/apache/cassandra/db/AtomicSortedColumns.java
index 5fdc0f6..9cb44d2 100644
--- a/src/java/org/apache/cassandra/db/AtomicSortedColumns.java
+++ b/src/java/org/apache/cassandra/db/AtomicSortedColumns.java
@@ -154,6 +154,11 @@ public class AtomicSortedColumns implements ISortedColumns
 
 public void addAll(ISortedColumns cm, Allocator allocator, Function<IColumn, IColumn> transformation)
 {
+addAllWithSizeDelta(cm, allocator, transformation);
+}
+
+public long addAllWithSizeDelta(ISortedColumns cm, Allocator allocator, Function<IColumn, IColumn> transformation)
+{
 /*
  * This operation needs atomicity and isolation. To that end, we
  * add the new column to a copy of the map (a cheap O(1) snapTree
@@ -166,9 +171,12 @@ public class AtomicSortedColumns implements ISortedColumns
  * we bail early, avoiding unnecessary work if possible.
  */
 Holder current, modified;
+long sizeDelta;
+
 main_loop:
 

[4/5] git commit: update NTS calculateNaturalEndpoints to be O(N log N) patch by Sam Overton; reviewed by jbellis for CASSANDRA-3881

2012-07-03 Thread jbellis
update NTS calculateNaturalEndpoints to be O(N log N)
patch by Sam Overton; reviewed by jbellis for CASSANDRA-3881


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/9688a79d
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/9688a79d
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/9688a79d

Branch: refs/heads/trunk
Commit: 9688a79d0c315395772d15d92e051d00e18b966b
Parents: 893d1da
Author: Jonathan Ellis jbel...@apache.org
Authored: Tue Jul 3 11:58:23 2012 -0500
Committer: Jonathan Ellis jbel...@apache.org
Committed: Tue Jul 3 11:58:23 2012 -0500

--
 CHANGES.txt|1 +
 .../cassandra/locator/NetworkTopologyStrategy.java |  123 ++-
 .../locator/NetworkTopologyStrategyTest.java   |   62 +++-
 3 files changed, 140 insertions(+), 46 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/9688a79d/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 7f36e9f..4ce9884 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 1.2-dev
+ * update NTS calculateNaturalEndpoints to be O(N log N) (CASSANDRA-3881)
  * add UseCondCardMark XX jvm settings on jdk 1.7 (CASSANDRA-4366)
  * split up rpc timeout by operation type (CASSANDRA-2819)
  * rewrite key cache save/load to use only sequential i/o (CASSANDRA-3762)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/9688a79d/src/java/org/apache/cassandra/locator/NetworkTopologyStrategy.java
--
diff --git a/src/java/org/apache/cassandra/locator/NetworkTopologyStrategy.java 
b/src/java/org/apache/cassandra/locator/NetworkTopologyStrategy.java
index 7b2ec91..30629d8 100644
--- a/src/java/org/apache/cassandra/locator/NetworkTopologyStrategy.java
+++ b/src/java/org/apache/cassandra/locator/NetworkTopologyStrategy.java
@@ -27,8 +27,10 @@ import org.slf4j.LoggerFactory;
 
 import org.apache.cassandra.config.ConfigurationException;
 import org.apache.cassandra.dht.Token;
+import org.apache.cassandra.locator.TokenMetadata.Topology;
 import org.apache.cassandra.utils.FBUtilities;
-import org.apache.cassandra.utils.Pair;
+
+import com.google.common.collect.Multimap;
 
 /**
  * This Replication Strategy takes a property file that gives the intended
@@ -71,59 +73,96 @@ public class NetworkTopologyStrategy extends 
AbstractReplicationStrategy
  logger.debug("Configured datacenter replicas are {}", FBUtilities.toString(datacenters));
 }
 
+/**
+ * calculate endpoints in one pass through the tokens by tracking our 
progress in each DC, rack etc.
+ */
+@SuppressWarnings("serial")
 public List<InetAddress> calculateNaturalEndpoints(Token searchToken, TokenMetadata tokenMetadata)
 {
-List<InetAddress> endpoints = new ArrayList<InetAddress>(getReplicationFactor());
-
-for (Entry<String, Integer> dcEntry : datacenters.entrySet())
+Set<InetAddress> replicas = new HashSet<InetAddress>();
+// replicas we have found in each DC
+Map<String, Set<InetAddress>> dcReplicas = new HashMap<String, Set<InetAddress>>(datacenters.size())
+{{
+for (Map.Entry<String, Integer> dc : datacenters.entrySet())
+put(dc.getKey(), new HashSet<InetAddress>(dc.getValue()));
+}};
+Topology topology = tokenMetadata.getTopology();
+// all endpoints in each DC, so we can check when we have exhausted all the members of a DC
+Multimap<String, InetAddress> allEndpoints = topology.getDatacenterEndpoints();
+// all racks in a DC so we can check when we have exhausted all racks in a DC
+Map<String, Multimap<String, InetAddress>> racks = topology.getDatacenterRacks();
+assert !allEndpoints.isEmpty() && !racks.isEmpty() : "not aware of any cluster members";
+
+// tracks the racks we have already placed replicas in
+Map<String, Set<String>> seenRacks = new HashMap<String, Set<String>>(datacenters.size())
+{{
+for (Map.Entry<String, Integer> dc : datacenters.entrySet())
+put(dc.getKey(), new HashSet<String>());
+}};
+// tracks the endpoints that we skipped over while looking for unique racks
+// when we relax the rack uniqueness we can append this to the current result so we don't have to wind back the iterator
+Map<String, Set<InetAddress>> skippedDcEndpoints = new HashMap<String, Set<InetAddress>>(datacenters.size())
+{{
+for (Map.Entry<String, Integer> dc : datacenters.entrySet())
+put(dc.getKey(), new LinkedHashSet<InetAddress>());
+}};
Iterator<Token> tokenIter = TokenMetadata.ringIterator(tokenMetadata.sortedTokens(), 
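The patch's one-pass strategy, in brief: walk the ring once, taking an endpoint for its DC whenever the DC's quota is unfilled and its rack is new, stashing rack-duplicates to append only if a DC runs out of distinct racks. A simplified, self-contained sketch (toy `Node` record and string addresses; the real code relaxes rack uniqueness mid-pass rather than at the end):

```java
import java.util.*;

public class OnePassPlacement {
    public record Node(String dc, String rack, String addr) {}

    // Pick rf.get(dc) replicas per DC in one pass, preferring distinct racks.
    public static Set<String> place(List<Node> ring, Map<String, Integer> rf) {
        Set<String> replicas = new LinkedHashSet<>();
        Map<String, Set<String>> dcReplicas = new HashMap<>();
        Map<String, Set<String>> seenRacks = new HashMap<>();
        Map<String, Set<Node>> skipped = new HashMap<>();
        for (String dc : rf.keySet()) {
            dcReplicas.put(dc, new HashSet<>());
            seenRacks.put(dc, new HashSet<>());
            skipped.put(dc, new LinkedHashSet<>());
        }
        for (Node n : ring) {
            Integer want = rf.get(n.dc());
            if (want == null || dcReplicas.get(n.dc()).size() >= want)
                continue;                              // DC unknown or already satisfied
            if (seenRacks.get(n.dc()).add(n.rack())) {
                replicas.add(n.addr());                // new rack: take it
                dcReplicas.get(n.dc()).add(n.addr());
            } else {
                skipped.get(n.dc()).add(n);            // rack repeat: remember for later
            }
        }
        // relax rack uniqueness for DCs still short of their quota
        for (String dc : rf.keySet()) {
            for (Node n : skipped.get(dc)) {
                if (dcReplicas.get(dc).size() >= rf.get(dc))
                    break;
                replicas.add(n.addr());
                dcReplicas.get(dc).add(n.addr());
            }
        }
        return replicas;
    }
}
```

Because each endpoint is examined once and set operations are O(1), the walk is linear in ring size; the O(N log N) in the commit title comes from sorting the tokens before iteration.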

[1/5] git commit: Merge branch 'cassandra-1.1' into trunk

2012-07-03 Thread jbellis
Updated Branches:
  refs/heads/cassandra-1.1 67dec69f5 -> be969899c
  refs/heads/trunk 602e383d6 -> 45b057bcf


Merge branch 'cassandra-1.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/45b057bc
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/45b057bc
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/45b057bc

Branch: refs/heads/trunk
Commit: 45b057bcf816831732d62397d53450c7fefaf11f
Parents: 9688a79 be96989
Author: Jonathan Ellis jbel...@apache.org
Authored: Tue Jul 3 12:01:20 2012 -0500
Committer: Jonathan Ellis jbel...@apache.org
Committed: Tue Jul 3 12:01:20 2012 -0500

--
 CHANGES.txt|2 +
 .../org/apache/cassandra/db/ColumnFamilyStore.java |   28 +++
 2 files changed, 15 insertions(+), 15 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/45b057bc/CHANGES.txt
--
diff --cc CHANGES.txt
index 4ce9884,7cd12a5..266fe35
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,33 -1,6 +1,35 @@@
 +1.2-dev
 + * update NTS calculateNaturalEndpoints to be O(N log N) (CASSANDRA-3881)
 + * add UseCondCardMark XX jvm settings on jdk 1.7 (CASSANDRA-4366)
 + * split up rpc timeout by operation type (CASSANDRA-2819)
 + * rewrite key cache save/load to use only sequential i/o (CASSANDRA-3762)
 + * update MS protocol with a version handshake + broadcast address id
 +   (CASSANDRA-4311)
 + * multithreaded hint replay (CASSANDRA-4189)
 + * add inter-node message compression (CASSANDRA-3127)
 + * remove COPP (CASSANDRA-2479)
 + * Track tombstone expiration and compact when tombstone content is
 +   higher than a configurable threshold, default 20% (CASSANDRA-3442)
 + * update MurmurHash to version 3 (CASSANDRA-2975)
 + * (CLI) track elapsed time for `delete' operation (CASSANDRA-4060)
 + * (CLI) jline version is bumped to 1.0 to properly  support
 +   'delete' key function (CASSANDRA-4132)
 + * Save IndexSummary into new SSTable 'Summary' component (CASSANDRA-2392, 
4289)
 + * Add support for range tombstones (CASSANDRA-3708)
 + * Improve MessagingService efficiency (CASSANDRA-3617)
 + * Avoid ID conflicts from concurrent schema changes (CASSANDRA-3794)
 + * Set thrift HSHA server thread limit to unlimited by default 
(CASSANDRA-4277)
 + * Avoids double serialization of CF id in RowMutation messages
 +   (CASSANDRA-4293)
 + * stream compressed sstables directly with java nio (CASSANDRA-4297)
 + * Support multiple ranges in SliceQueryFilter (CASSANDRA-3885)
 + * Add column metadata to system column families (CASSANDRA-4018)
 + * (cql3) always use composite types by default (CASSANDRA-4329)
 +
 +
  1.1.3
+  * avoid using global partitioner to estimate ranges in index sstables
+(CASSANDRA-4403)
   * restore pre-CASSANDRA-3862 approach to removing expired tombstones
 from row cache during compaction (CASSANDRA-4364)
   * (stress) support for CQL prepared statements (CASSANDRA-3633)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/45b057bc/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--



[2/5] git commit: avoid using global partitioner to estimate ranges in index sstables patch by jbellis; reviewed by yukim for CASSANDRA-4403

2012-07-03 Thread jbellis
avoid using global partitioner to estimate ranges in index sstables
patch by jbellis; reviewed by yukim for CASSANDRA-4403


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/be969899
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/be969899
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/be969899

Branch: refs/heads/cassandra-1.1
Commit: be969899c954751f861b3861b3b709be56270ccd
Parents: 67dec69
Author: Jonathan Ellis jbel...@apache.org
Authored: Mon Jul 2 17:08:36 2012 -0500
Committer: Jonathan Ellis jbel...@apache.org
Committed: Tue Jul 3 12:01:10 2012 -0500

--
 CHANGES.txt|2 +
 .../org/apache/cassandra/db/ColumnFamilyStore.java |   28 +++
 2 files changed, 15 insertions(+), 15 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/be969899/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 72991f1..7cd12a5 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,6 @@
 1.1.3
+ * avoid using global partitioner to estimate ranges in index sstables
+   (CASSANDRA-4403)
  * restore pre-CASSANDRA-3862 approach to removing expired tombstones
from row cache during compaction (CASSANDRA-4364)
  * (stress) support for CQL prepared statements (CASSANDRA-3633)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/be969899/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--
diff --git a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java 
b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
index 93f022d..0b66020 100644
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
@@ -904,9 +904,10 @@ public class ColumnFamilyStore implements 
ColumnFamilyStoreMBean
 /**
  * Calculate expected file size of SSTable after compaction.
  *
- * If operation type is {@code CLEANUP}, then we calculate expected file 
size
- * with checking token range to be eliminated.
- * Other than that, we just add up all the files' size, which is the worst 
case file
+ * If operation type is {@code CLEANUP} and we're not dealing with an 
index sstable,
+ * then we calculate expected file size with checking token range to be 
eliminated.
+ *
+ * Otherwise, we just add up all the files' size, which is the worst case 
file
  * size for compaction of all the list of files given.
  *
  * @param sstables SSTables to calculate expected compacted file size
@@ -915,21 +916,18 @@ public class ColumnFamilyStore implements 
ColumnFamilyStoreMBean
  */
 public long getExpectedCompactedFileSize(Iterable<SSTableReader> sstables, OperationType operation)
 {
-long expectedFileSize = 0;
-if (operation == OperationType.CLEANUP)
+if (operation != OperationType.CLEANUP || isIndex())
 {
-Collection<Range<Token>> ranges = StorageService.instance.getLocalRanges(table.name);
-for (SSTableReader sstable : sstables)
-{
-List<Pair<Long, Long>> positions = sstable.getPositionsForRanges(ranges);
-for (Pair<Long, Long> position : positions)
-expectedFileSize += position.right - position.left;
-}
+return SSTable.getTotalBytes(sstables);
 }
-else
+
+long expectedFileSize = 0;
+Collection<Range<Token>> ranges = StorageService.instance.getLocalRanges(table.name);
+for (SSTableReader sstable : sstables)
 {
-for (SSTableReader sstable : sstables)
-expectedFileSize += sstable.onDiskLength();
+List<Pair<Long, Long>> positions = sstable.getPositionsForRanges(ranges);
+for (Pair<Long, Long> position : positions)
+expectedFileSize += position.right - position.left;
 }
 return expectedFileSize;
 }
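The refactor above inverts the condition into an early return: CLEANUP of a non-index sstable counts only the bytes covered by locally owned token ranges, and every other case sums whole file lengths as the worst case. A sketch with stand-in types (Cassandra uses `SSTableReader` and `Pair<Long, Long>`; `Span` and `Table` here are ours):

```java
import java.util.List;

public class CompactedSizeEstimate {
    // (start, end) byte positions within an sstable that a local range covers
    public record Span(long start, long end) {}
    public record Table(long onDiskLength, List<Span> localSpans) {}

    public static long expectedCompactedSize(List<Table> tables, boolean cleanup, boolean isIndex) {
        if (!cleanup || isIndex) {
            // worst case: assume every byte of every file survives compaction
            long total = 0;
            for (Table t : tables)
                total += t.onDiskLength();
            return total;
        }
        // CLEANUP on a non-index table: only locally owned spans survive
        long expected = 0;
        for (Table t : tables)
            for (Span s : t.localSpans())
                expected += s.end() - s.start();
        return expected;
    }
}
```

The index-sstable exclusion is the point of CASSANDRA-4403: index sstables are keyed by the index's own partitioner, so intersecting them with the global partitioner's local ranges produced nonsense estimates.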



[3/5] git commit: avoid using global partitioner to estimate ranges in index sstables patch by jbellis; reviewed by yukim for CASSANDRA-4403

2012-07-03 Thread jbellis
avoid using global partitioner to estimate ranges in index sstables
patch by jbellis; reviewed by yukim for CASSANDRA-4403


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/be969899
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/be969899
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/be969899

Branch: refs/heads/trunk
Commit: be969899c954751f861b3861b3b709be56270ccd
Parents: 67dec69
Author: Jonathan Ellis jbel...@apache.org
Authored: Mon Jul 2 17:08:36 2012 -0500
Committer: Jonathan Ellis jbel...@apache.org
Committed: Tue Jul 3 12:01:10 2012 -0500

--
 CHANGES.txt|2 +
 .../org/apache/cassandra/db/ColumnFamilyStore.java |   28 +++
 2 files changed, 15 insertions(+), 15 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/be969899/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 72991f1..7cd12a5 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,6 @@
 1.1.3
+ * avoid using global partitioner to estimate ranges in index sstables
+   (CASSANDRA-4403)
  * restore pre-CASSANDRA-3862 approach to removing expired tombstones
from row cache during compaction (CASSANDRA-4364)
  * (stress) support for CQL prepared statements (CASSANDRA-3633)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/be969899/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--
diff --git a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java 
b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
index 93f022d..0b66020 100644
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
@@ -904,9 +904,10 @@ public class ColumnFamilyStore implements 
ColumnFamilyStoreMBean
 /**
  * Calculate expected file size of SSTable after compaction.
  *
- * If operation type is {@code CLEANUP}, then we calculate expected file 
size
- * with checking token range to be eliminated.
- * Other than that, we just add up all the files' size, which is the worst 
case file
+ * If operation type is {@code CLEANUP} and we're not dealing with an 
index sstable,
+ * then we calculate expected file size with checking token range to be 
eliminated.
+ *
+ * Otherwise, we just add up all the files' size, which is the worst case 
file
  * size for compaction of all the list of files given.
  *
  * @param sstables SSTables to calculate expected compacted file size
@@ -915,21 +916,18 @@ public class ColumnFamilyStore implements 
ColumnFamilyStoreMBean
  */
 public long getExpectedCompactedFileSize(Iterable<SSTableReader> sstables, OperationType operation)
 {
-long expectedFileSize = 0;
-if (operation == OperationType.CLEANUP)
+if (operation != OperationType.CLEANUP || isIndex())
 {
-Collection<Range<Token>> ranges = StorageService.instance.getLocalRanges(table.name);
-for (SSTableReader sstable : sstables)
-{
-List<Pair<Long, Long>> positions = sstable.getPositionsForRanges(ranges);
-for (Pair<Long, Long> position : positions)
-expectedFileSize += position.right - position.left;
-}
+return SSTable.getTotalBytes(sstables);
 }
-else
+
+long expectedFileSize = 0;
+Collection<Range<Token>> ranges = StorageService.instance.getLocalRanges(table.name);
+for (SSTableReader sstable : sstables)
 {
-for (SSTableReader sstable : sstables)
-expectedFileSize += sstable.onDiskLength();
+List<Pair<Long, Long>> positions = sstable.getPositionsForRanges(ranges);
+for (Pair<Long, Long> position : positions)
+expectedFileSize += position.right - position.left;
 }
 return expectedFileSize;
 }



[5/5] git commit: add Topology to TokenMetadata and clean up thread safety design patch by Sam Overton; reviewed by jbellis for CASSANDRA-3881

2012-07-03 Thread jbellis
add Topology to TokenMetadata and clean up thread safety design
patch by Sam Overton; reviewed by jbellis for CASSANDRA-3881


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/893d1da9
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/893d1da9
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/893d1da9

Branch: refs/heads/trunk
Commit: 893d1da990b4f31462ad241dc0c4b6a91cf3dbee
Parents: 602e383
Author: Jonathan Ellis jbel...@apache.org
Authored: Tue Jul 3 11:55:54 2012 -0500
Committer: Jonathan Ellis jbel...@apache.org
Committed: Tue Jul 3 11:55:54 2012 -0500

--
 .../org/apache/cassandra/dht/RangeStreamer.java|2 +-
 .../locator/AbstractReplicationStrategy.java   |2 +-
 .../apache/cassandra/locator/TokenMetadata.java|  175 ---
 .../apache/cassandra/service/StorageService.java   |   31 ++--
 .../service/AntiEntropyServiceTestAbstract.java|2 +-
 .../cassandra/service/LeaveAndBootstrapTest.java   |2 +-
 .../org/apache/cassandra/service/MoveTest.java |2 +-
 7 files changed, 164 insertions(+), 52 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/893d1da9/src/java/org/apache/cassandra/dht/RangeStreamer.java
--
diff --git a/src/java/org/apache/cassandra/dht/RangeStreamer.java 
b/src/java/org/apache/cassandra/dht/RangeStreamer.java
index 33777f0..ce82319 100644
--- a/src/java/org/apache/cassandra/dht/RangeStreamer.java
+++ b/src/java/org/apache/cassandra/dht/RangeStreamer.java
@@ -149,7 +149,7 @@ public class RangeStreamer implements 
IEndpointStateChangeSubscriber, IFailureDe
 private Multimap<Range<Token>, InetAddress> getAllRangesWithSourcesFor(String table, Collection<Range<Token>> desiredRanges)
 {
 AbstractReplicationStrategy strat = 
Table.open(table).getReplicationStrategy();
-Multimap<Range<Token>, InetAddress> rangeAddresses = strat.getRangeAddresses(metadata);
+Multimap<Range<Token>, InetAddress> rangeAddresses = strat.getRangeAddresses(metadata.cloneOnlyTokenMap());
 
 Multimap<Range<Token>, InetAddress> rangeSources = ArrayListMultimap.create();
 for (Range<Token> desiredRange : desiredRanges)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/893d1da9/src/java/org/apache/cassandra/locator/AbstractReplicationStrategy.java
--
diff --git 
a/src/java/org/apache/cassandra/locator/AbstractReplicationStrategy.java 
b/src/java/org/apache/cassandra/locator/AbstractReplicationStrategy.java
index b654fe2..7fa431a 100644
--- a/src/java/org/apache/cassandra/locator/AbstractReplicationStrategy.java
+++ b/src/java/org/apache/cassandra/locator/AbstractReplicationStrategy.java
@@ -176,7 +176,7 @@ public abstract class AbstractReplicationStrategy
 
 public Multimap<InetAddress, Range<Token>> getAddressRanges()
 {
-return getAddressRanges(tokenMetadata);
+return getAddressRanges(tokenMetadata.cloneOnlyTokenMap());
 }
 
 public Collection<Range<Token>> getPendingAddressRanges(TokenMetadata metadata, Token pendingToken, InetAddress pendingAddress)
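The `cloneOnlyTokenMap()` calls introduced in this commit follow the snapshot-before-iterate pattern: long-running computations over shared ring state take an immutable copy under the read lock rather than iterating the live structures. A minimal sketch of that pattern, with simplified types (Cassandra's `TokenMetadata` holds richer bidirectional maps than this):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class SnapshotTokenMap {
    private final Map<Long, String> tokenToEndpoint = new HashMap<>();
    private final ReadWriteLock lock = new ReentrantReadWriteLock();

    // Membership changes take the write lock briefly.
    public void updateToken(long token, String endpoint) {
        lock.writeLock().lock();
        try { tokenToEndpoint.put(token, endpoint); }
        finally { lock.writeLock().unlock(); }
    }

    // Callers that iterate (e.g. range/address calculations) work on an
    // immutable snapshot, so ring changes can never race with them.
    public Map<Long, String> cloneOnlyTokenMap() {
        lock.readLock().lock();
        try { return Map.copyOf(tokenToEndpoint); }
        finally { lock.readLock().unlock(); }
    }
}
```

The trade-off is a copy per computation in exchange for holding the lock only for the duration of the copy, never for the whole range calculation.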

http://git-wip-us.apache.org/repos/asf/cassandra/blob/893d1da9/src/java/org/apache/cassandra/locator/TokenMetadata.java
--
diff --git a/src/java/org/apache/cassandra/locator/TokenMetadata.java 
b/src/java/org/apache/cassandra/locator/TokenMetadata.java
index b78a748..3340b2b 100644
--- a/src/java/org/apache/cassandra/locator/TokenMetadata.java
+++ b/src/java/org/apache/cassandra/locator/TokenMetadata.java
@@ -26,11 +26,13 @@ import java.util.concurrent.locks.ReadWriteLock;
 import java.util.concurrent.locks.ReentrantReadWriteLock;
 
 import com.google.common.collect.*;
+
 import org.apache.cassandra.utils.Pair;
 import org.apache.commons.lang.StringUtils;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
+import org.apache.cassandra.config.DatabaseDescriptor;
 import org.apache.cassandra.dht.Range;
 import org.apache.cassandra.dht.Token;
 import org.apache.cassandra.gms.FailureDetector;
@@ -70,7 +72,7 @@ public class TokenMetadata
 // Finally, note that recording the tokens of joining nodes in 
bootstrapTokens also
 // means we can detect and reject the addition of multiple nodes at the 
same token
 // before one becomes part of the ring.
-private final BiMap<Token, InetAddress> bootstrapTokens = Maps.synchronizedBiMap(HashBiMap.<Token, InetAddress>create());
+private final BiMap<Token, InetAddress> bootstrapTokens = HashBiMap.<Token, InetAddress>create();
 // (don't need to record Token here since it's still part of 
tokenToEndpointMap until it's done leaving)
 private final 

git commit: implement addAllWithSizeDelta for ThreadSafeSortedColumns (used in Memtable for supercolumns); see #4399

2012-07-03 Thread jbellis
Updated Branches:
  refs/heads/cassandra-1.1 be969899c -> e0f4c7ccc


implement addAllWithSizeDelta for ThreadSafeSortedColumns (used in Memtable for 
supercolumns); see #4399


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e0f4c7cc
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e0f4c7cc
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e0f4c7cc

Branch: refs/heads/cassandra-1.1
Commit: e0f4c7cccff023140b33ae2223fbb2f36e20265f
Parents: be96989
Author: Jonathan Ellis jbel...@apache.org
Authored: Tue Jul 3 12:29:24 2012 -0500
Committer: Jonathan Ellis jbel...@apache.org
Committed: Tue Jul 3 12:29:24 2012 -0500

--
 .../db/AbstractThreadUnsafeSortedColumns.java  |   10 +
 .../cassandra/db/ArrayBackedSortedColumns.java |3 +-
 .../apache/cassandra/db/AtomicSortedColumns.java   |   19 ++---
 .../cassandra/db/ThreadSafeSortedColumns.java  |   30 ---
 .../cassandra/db/TreeMapBackedSortedColumns.java   |3 +-
 5 files changed, 33 insertions(+), 32 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/e0f4c7cc/src/java/org/apache/cassandra/db/AbstractThreadUnsafeSortedColumns.java
--
diff --git 
a/src/java/org/apache/cassandra/db/AbstractThreadUnsafeSortedColumns.java 
b/src/java/org/apache/cassandra/db/AbstractThreadUnsafeSortedColumns.java
index 1360336..b50dda5 100644
--- a/src/java/org/apache/cassandra/db/AbstractThreadUnsafeSortedColumns.java
+++ b/src/java/org/apache/cassandra/db/AbstractThreadUnsafeSortedColumns.java
@@ -89,21 +89,13 @@ public abstract class AbstractThreadUnsafeSortedColumns 
implements ISortedColumn
 }
 }
 
-// Implementations should implement this rather than addAll to avoid
-// having to care about the deletion infos
-protected abstract void addAllColumns(ISortedColumns columns, Allocator allocator, Function<IColumn, IColumn> transformation);
 
 public long addAllWithSizeDelta(ISortedColumns cm, Allocator allocator, Function<IColumn, IColumn> transformation)
 {
 // sizeDelta is only needed by memtable updates which should not be 
using thread-unsafe containers
 throw new UnsupportedOperationException();
 }
 
-public void addAll(ISortedColumns columns, Allocator allocator, Function<IColumn, IColumn> transformation)
-{
-addAllColumns(columns, allocator, transformation);
-delete(columns.getDeletionInfo());
-}
+public abstract void addAll(ISortedColumns columns, Allocator allocator, Function<IColumn, IColumn> transformation);
 
 public boolean isEmpty()
 {

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e0f4c7cc/src/java/org/apache/cassandra/db/ArrayBackedSortedColumns.java
--
diff --git a/src/java/org/apache/cassandra/db/ArrayBackedSortedColumns.java 
b/src/java/org/apache/cassandra/db/ArrayBackedSortedColumns.java
index 246b133..1ce3aac 100644
--- a/src/java/org/apache/cassandra/db/ArrayBackedSortedColumns.java
+++ b/src/java/org/apache/cassandra/db/ArrayBackedSortedColumns.java
@@ -201,8 +201,9 @@ public class ArrayBackedSortedColumns extends 
AbstractThreadUnsafeSortedColumns
 return -mid - (result < 0 ? 1 : 2);
 }
 
-protected void addAllColumns(ISortedColumns cm, Allocator allocator, Function<IColumn, IColumn> transformation)
+public void addAll(ISortedColumns cm, Allocator allocator, Function<IColumn, IColumn> transformation)
 {
+delete(cm.getDeletionInfo());
 if (cm.isEmpty())
 return;
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e0f4c7cc/src/java/org/apache/cassandra/db/AtomicSortedColumns.java
--
diff --git a/src/java/org/apache/cassandra/db/AtomicSortedColumns.java 
b/src/java/org/apache/cassandra/db/AtomicSortedColumns.java
index 9cb44d2..d421d56 100644
--- a/src/java/org/apache/cassandra/db/AtomicSortedColumns.java
+++ b/src/java/org/apache/cassandra/db/AtomicSortedColumns.java
@@ -342,41 +342,30 @@ public class AtomicSortedColumns implements ISortedColumns
 long addColumn(IColumn column, Allocator allocator)
 {
 ByteBuffer name = column.name();
-IColumn oldColumn;
-long sizeDelta = 0;
 while (true)
 {
-oldColumn = map.putIfAbsent(name, column);
+IColumn oldColumn = map.putIfAbsent(name, column);
 if (oldColumn == null)
-{
-sizeDelta += column.serializedSize();
-break;
-}
+return column.serializedSize();
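The simplified addColumn above leans on ConcurrentMap.putIfAbsent returning the previous mapping. A minimal standalone sketch of that lock-free insert-or-reconcile retry pattern, with illustrative class, method, and "reconcile" logic that are not Cassandra's:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class PutIfAbsentLoop {
    static final ConcurrentMap<String, String> map = new ConcurrentHashMap<>();

    // Insert, or reconcile with an existing value, without locking:
    // retry until either the insert wins or the conditional replace succeeds.
    public static String addOrReconcile(String name, String value) {
        while (true) {
            String old = map.putIfAbsent(name, value);
            if (old == null)
                return value;                 // insert won the race
            // toy "reconcile": keep the lexicographically larger value
            String merged = old.compareTo(value) >= 0 ? old : value;
            if (map.replace(name, old, merged))
                return merged;                // replace succeeded against 'old'
            // otherwise another thread changed the entry; loop and retry
        }
    }

    public static void main(String[] args) {
        addOrReconcile("col", "a");
        addOrReconcile("col", "b");
        System.out.println(map.get("col")); // prints "b"
    }
}
```

The point of the loop is that both putIfAbsent and replace are atomic, so concurrent writers never lose an update even though no lock is held.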
 
 

git commit: use proper partitioner for Range; patch by yukim, reviewed by jbellis for CASSANDRA-4404

2012-07-03 Thread yukim
Updated Branches:
  refs/heads/trunk 45b057bcf - d2b60f289


use proper partitioner for Range; patch by yukim, reviewed by jbellis for 
CASSANDRA-4404


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/d2b60f28
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/d2b60f28
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/d2b60f28

Branch: refs/heads/trunk
Commit: d2b60f28935466f6e37fc9d64a44c5c81bc14fb4
Parents: 45b057b
Author: Yuki Morishita yu...@apache.org
Authored: Tue Jul 3 12:44:14 2012 -0500
Committer: Yuki Morishita yu...@apache.org
Committed: Tue Jul 3 12:44:14 2012 -0500

--
 .../compaction/SizeTieredCompactionStrategy.java   |2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/d2b60f28/src/java/org/apache/cassandra/db/compaction/SizeTieredCompactionStrategy.java
--
diff --git 
a/src/java/org/apache/cassandra/db/compaction/SizeTieredCompactionStrategy.java 
b/src/java/org/apache/cassandra/db/compaction/SizeTieredCompactionStrategy.java
index 9c07a93..67d2e77 100644
--- 
a/src/java/org/apache/cassandra/db/compaction/SizeTieredCompactionStrategy.java
+++ 
b/src/java/org/apache/cassandra/db/compaction/SizeTieredCompactionStrategy.java
@@ -107,7 +107,7 @@ public class SizeTieredCompactionStrategy extends 
AbstractCompactionStrategy
 long keys = table.estimatedKeys();
 Set<Range<Token>> ranges = new HashSet<Range<Token>>();
 for (SSTableReader overlap : overlaps)
-ranges.add(new Range<Token>(overlap.first.token, overlap.last.token));
+ranges.add(new Range<Token>(overlap.first.token, overlap.last.token, overlap.partitioner));
 long remainingKeys = keys - 
table.estimatedKeysForRanges(ranges);
 // next, calculate what percentage of columns we have 
within those keys
 double remainingKeysRatio = ((double) remainingKeys) / 
keys;
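The one-line fix matters because a range's containment and wrap-around logic depends on the token ordering of the partitioner it was built with; building it against the global partitioner while the tokens come from an index sstable (which orders tokens differently) yields wrong comparisons. A toy sketch of the idea, using an invented Range class and comparators standing in for partitioners (not Cassandra's API):

```java
import java.util.Comparator;

public class RangeSketch {
    // A minimal token range whose semantics depend on the comparator
    // ("partitioner") it was constructed with.
    public static class Range<T> {
        final T left, right;
        final Comparator<T> partitioner;

        public Range(T left, T right, Comparator<T> partitioner) {
            this.left = left;
            this.right = right;
            this.partitioner = partitioner;
        }

        // (left, right] containment, ignoring wrap-around for brevity
        public boolean contains(T t) {
            return partitioner.compare(left, t) < 0
                && partitioner.compare(t, right) <= 0;
        }
    }

    public static void main(String[] args) {
        Comparator<String> global = Comparator.naturalOrder();
        Comparator<String> local = global.reversed(); // stand-in for a differently-ordered partitioner

        System.out.println(new Range<>("a", "m", global).contains("c")); // true
        System.out.println(new Range<>("a", "m", local).contains("c"));  // false: wrong ordering, wrong answer
    }
}
```

Carrying the sstable's own partitioner into the Range, as the patch does, keeps every comparison consistent with the ordering the tokens were generated under.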



[jira] [Updated] (CASSANDRA-4406) Update stress for CQL3

2012-07-03 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4406?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-4406:
--

Reviewer: xedin

 Update stress for CQL3
 --

 Key: CASSANDRA-4406
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4406
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Affects Versions: 1.1.0
Reporter: Sylvain Lebresne
Assignee: David Alves
  Labels: stress
 Fix For: 1.2


 Stress does not support CQL3. We should add support for it so that:
 # we can benchmark CQL3
 # we can benchmark CASSANDRA-2478

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Assigned] (CASSANDRA-4406) Update stress for CQL3

2012-07-03 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4406?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis reassigned CASSANDRA-4406:
-

Assignee: David Alves

 Update stress for CQL3
 --

 Key: CASSANDRA-4406
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4406
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Affects Versions: 1.1.0
Reporter: Sylvain Lebresne
Assignee: David Alves
  Labels: stress
 Fix For: 1.2


 Stress does not support CQL3. We should add support for it so that:
 # we can benchmark CQL3
 # we can benchmark CASSANDRA-2478





[jira] [Resolved] (CASSANDRA-3881) reduce computational complexity of processing topology changes

2012-07-03 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-3881.
---

   Resolution: Fixed
Fix Version/s: 1.2
 Reviewer: jbellis  (was: scode)

thanks, committed!

 reduce computational complexity of processing topology changes
 --

 Key: CASSANDRA-3881
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3881
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Peter Schuller
Assignee: Sam Overton
  Labels: vnodes
 Fix For: 1.2


 This constitutes follow-up work from CASSANDRA-3831 where a partial 
 improvement was committed, but the fundamental issue was not fixed. The 
 maximum practical cluster size was significantly improved, but further work 
 is expected to be necessary as cluster sizes grow.
 _Edit0: Appended patch information._
 h3. Patches
 ||Compare||Raw diff||Description||
 |[00_snitch_topology|https://github.com/acunu/cassandra/compare/refs/top-bases/p/3881/00_snitch_topology...p/3881/00_snitch_topology]|[00_snitch_topology.patch|https://github.com/acunu/cassandra/compare/refs/top-bases/p/3881/00_snitch_topology...p/3881/00_snitch_topology.diff]|Adds
  some functionality to TokenMetadata to track which endpoints and racks exist 
 in a DC.|
 |[01_calc_natural_endpoints|https://github.com/acunu/cassandra/compare/refs/top-bases/p/3881/01_calc_natural_endpoints...p/3881/01_calc_natural_endpoints]|[01_calc_natural_endpoints.patch|https://github.com/acunu/cassandra/compare/refs/top-bases/p/3881/01_calc_natural_endpoints...p/3881/01_calc_natural_endpoints.diff]|Rewritten
  O(logN) implementation of calculateNaturalEndpoints using the topology 
 information from the tokenMetadata.|
 
 _Note: These are branches managed with TopGit. If you are applying the patch 
 output manually, you will either need to filter the TopGit metadata files 
 (i.e. {{wget -O - url | filterdiff -x*.topdeps -x*.topmsg | patch -p1}}), 
 or remove them afterward ({{rm .topmsg .topdeps}})._





[jira] [Commented] (CASSANDRA-4121) TokenMetadata supports multiple tokens per host

2012-07-03 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13405967#comment-13405967
 ] 

Jonathan Ellis commented on CASSANDRA-4121:
---

Time to look at this with CASSANDRA-3881 committed?

 TokenMetadata supports multiple tokens per host
 ---

 Key: CASSANDRA-4121
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4121
 Project: Cassandra
  Issue Type: Sub-task
  Components: Core
Reporter: Sam Overton
Assignee: Sam Overton
  Labels: vnodes

 _Edit0: Append patch information._
 h3. Patches
 ||Compare||Raw diff||Description||
 |[01_support_multiple_tokens_per_host|https://github.com/acunu/cassandra/compare/top-bases/p/4121/01_support_multiple_tokens_per_host...p/4121/01_support_multiple_tokens_per_host]|[01_support_multiple_tokens_per_host.patch|https://github.com/acunu/cassandra/compare/top-bases/p/4121/01_support_multiple_tokens_per_host...p/4121/01_support_multiple_tokens_per_host.diff]|Support
  associating more than one token per node|
 
 _Note: These are branches managed with TopGit. If you are applying the patch 
 output manually, you will either need to filter the TopGit metadata files 
 (i.e. {{wget -O - url | filterdiff -x*.topdeps -x*.topmsg | patch -p1}}), 
 or remove them afterward ({{rm .topmsg .topdeps}})._





[jira] [Updated] (CASSANDRA-4121) TokenMetadata supports multiple tokens per host

2012-07-03 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-4121:
--

 Reviewer: jbellis
  Description: 
_Edit0: Append patch information._

h3. Patches
||Compare||Raw diff||Description||
|[01_support_multiple_tokens_per_host|https://github.com/acunu/cassandra/compare/top-bases/p/4121/01_support_multiple_tokens_per_host...p/4121/01_support_multiple_tokens_per_host]|[01_support_multiple_tokens_per_host.patch|https://github.com/acunu/cassandra/compare/top-bases/p/4121/01_support_multiple_tokens_per_host...p/4121/01_support_multiple_tokens_per_host.diff]|Support
 associating more than one token per node|



_Note: These are branches managed with TopGit. If you are applying the patch 
output manually, you will either need to filter the TopGit metadata files (i.e. 
{{wget -O - url | filterdiff -x*.topdeps -x*.topmsg | patch -p1}}), or remove 
them afterward ({{rm .topmsg .topdeps}})._

  was:

_Edit0: Append patch information._

h3. Patches
||Compare||Raw diff||Description||
|[01_support_multiple_tokens_per_host|https://github.com/acunu/cassandra/compare/top-bases/p/4121/01_support_multiple_tokens_per_host...p/4121/01_support_multiple_tokens_per_host]|[01_support_multiple_tokens_per_host.patch|https://github.com/acunu/cassandra/compare/top-bases/p/4121/01_support_multiple_tokens_per_host...p/4121/01_support_multiple_tokens_per_host.diff]|Support
 associating more than one token per node|



_Note: These are branches managed with TopGit. If you are applying the patch 
output manually, you will either need to filter the TopGit metadata files (i.e. 
{{wget -O - url | filterdiff -x*.topdeps -x*.topmsg | patch -p1}}), or remove 
them afterward ({{rm .topmsg .topdeps}})._

Fix Version/s: 1.2

 TokenMetadata supports multiple tokens per host
 ---

 Key: CASSANDRA-4121
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4121
 Project: Cassandra
  Issue Type: Sub-task
  Components: Core
Reporter: Sam Overton
Assignee: Sam Overton
  Labels: vnodes
 Fix For: 1.2


 _Edit0: Append patch information._
 h3. Patches
 ||Compare||Raw diff||Description||
 |[01_support_multiple_tokens_per_host|https://github.com/acunu/cassandra/compare/top-bases/p/4121/01_support_multiple_tokens_per_host...p/4121/01_support_multiple_tokens_per_host]|[01_support_multiple_tokens_per_host.patch|https://github.com/acunu/cassandra/compare/top-bases/p/4121/01_support_multiple_tokens_per_host...p/4121/01_support_multiple_tokens_per_host.diff]|Support
  associating more than one token per node|
 
 _Note: These are branches managed with TopGit. If you are applying the patch 
 output manually, you will either need to filter the TopGit metadata files 
 (i.e. {{wget -O - url | filterdiff -x*.topdeps -x*.topmsg | patch -p1}}), 
 or remove them afterward ({{rm .topmsg .topdeps}})._





[jira] [Updated] (CASSANDRA-4125) Update nodetool for vnodes

2012-07-03 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4125?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-4125:
--

Reviewer: brandon.williams

 Update nodetool for vnodes
 --

 Key: CASSANDRA-4125
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4125
 Project: Cassandra
  Issue Type: Sub-task
  Components: Core
Reporter: Sam Overton
Assignee: Eric Evans

 The proposed changes are intended to preserve backwards compatibility:
 || op || behaviour || deprecated warning? ||
 | join | Join the ring, use with {{-t}} to join at a specific token, or to add a token to an existing host | |
 | ring | prints the first token for each node, add {{-a}} to print all tokens | |
 | move <new token> | if the node only has 1 token then move it. Otherwise die with an error. | *deprecated* |
 | removetoken status/force/<token> | removes the node who owns <token> from the ring. use {{-t}} option to only remove the token from the node instead of the whole node. | |
 | describering [keyspace] | show all ranges and their endpoints | |
 | getendpoints <keyspace> <cf> <key> | Print the endpoints that own the key and also their list of tokens | |
 _Edit0: Appended patch information._
 h3. Patches
 ||Compare||Raw diff||Description||
 |[01_admin_tools|https://github.com/acunu/cassandra/compare/top-bases/p/4125/01_admin_tools...p/4125/01_admin_tools]|[01_admin_tools.patch|https://github.com/acunu/cassandra/compare/top-bases/p/4125/01_admin_tools...p/4125/01_admin_tools.diff]|Updated
  nodetool|
 
 _Note: These are branches managed with TopGit. If you are applying the patch 
 output manually, you will either need to filter the TopGit metadata files 
 (i.e. {{wget -O - url | filterdiff -x*.topdeps -x*.topmsg | patch -p1}}), 
 or remove them afterward ({{rm .topmsg .topdeps}})._





[jira] [Resolved] (CASSANDRA-4404) tombstone estimation needs to avoid using global partitioner against index sstables

2012-07-03 Thread Yuki Morishita (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuki Morishita resolved CASSANDRA-4404.
---

Resolution: Fixed

Patch is available at 
https://github.com/yukim/cassandra/commit/1644a5b701b054b646e049ef5cf725b2b2670709.diff
Reviewed and committed while JIRA is down.

 tombstone estimation needs to avoid using global partitioner against index 
 sstables
 ---

 Key: CASSANDRA-4404
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4404
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2
Reporter: Jonathan Ellis
Assignee: Yuki Morishita
  Labels: compaction
 Fix For: 1.2








[jira] [Commented] (CASSANDRA-2698) Instrument repair to be able to assess its efficiency (precision)

2012-07-03 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-2698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13405990#comment-13405990
 ] 

Jonathan Ellis commented on CASSANDRA-2698:
---

Hi Jason, you've come to the right place.  Fire away.

 Instrument repair to be able to assess its efficiency (precision)
 --

 Key: CASSANDRA-2698
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2698
 Project: Cassandra
  Issue Type: Improvement
Reporter: Sylvain Lebresne
Priority: Minor
  Labels: lhf

 Some reports indicate that repair sometimes transfers huge amounts of data. 
 One hypothesis is that the merkle tree precision may deteriorate too much at 
 some data size. To check this hypothesis, it would be reasonable to gather 
 statistics during the merkle tree building on how many rows each merkle tree 
 range accounts for (and the size that this represents). It is probably an 
 interesting statistic to have anyway.
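The statistic Sylvain suggests could be as simple as a per-range tally recorded while the tree is built. A hypothetical sketch, where the RangeStats name, the integer range id, and the API are all invented for illustration:

```java
import java.util.HashMap;
import java.util.Map;

public class RangeStats {
    // rows and bytes attributed to each merkle tree range, keyed by range id:
    // stats[0] = row count, stats[1] = total serialized bytes
    private final Map<Integer, long[]> perRange = new HashMap<>();

    // Called once per row while hashing it into a given tree range.
    public void record(int rangeId, long rowSizeInBytes) {
        long[] stats = perRange.computeIfAbsent(rangeId, k -> new long[2]);
        stats[0] += 1;
        stats[1] += rowSizeInBytes;
    }

    public long rowCount(int rangeId) {
        long[] s = perRange.get(rangeId);
        return s == null ? 0 : s[0];
    }

    public long totalBytes(int rangeId) {
        long[] s = perRange.get(rangeId);
        return s == null ? 0 : s[1];
    }

    public static void main(String[] args) {
        RangeStats stats = new RangeStats();
        stats.record(1, 100);
        stats.record(1, 50);
        System.out.println(stats.rowCount(1) + " rows, " + stats.totalBytes(1) + " bytes");
    }
}
```

A skewed distribution of rowCount across ranges would directly show the precision loss the ticket hypothesizes: ranges covering many rows hash too coarsely, forcing repair to stream more data than differs.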





Git Push Summary

2012-07-03 Thread eevans
Updated Tags:  refs/tags/0.8.10-tentative [created] 038b8f212
  refs/tags/1.0.10-tentative [created] b2ca7f821
  refs/tags/1.0.7-tentative [created] 6a1ed6205
  refs/tags/1.0.8-tentative [created] fe6980eb7
  refs/tags/1.0.9-tentative [created] 3fd0fb6aa
  refs/tags/1.1.0-beta1-tentative [created] 271630d58
  refs/tags/1.1.0-beta2-tentative [created] 643d18af2
  refs/tags/1.1.0-tentative [created] c67153282
  refs/tags/1.1.2-tentative [created] b94d8d40f
  refs/tags/BeforeRebase20Jan [created] eba6c6fd5
  refs/tags/trunk/3881 [created] 602e383d6
  refs/tags/trunk/4120 [created] 087f902d6


[jira] [Commented] (CASSANDRA-2181) sstable2json should return better error message if the usage is wrong

2012-07-03 Thread David Alves (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-2181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13406013#comment-13406013
 ] 

David Alves commented on CASSANDRA-2181:


Shotaro, any news on this?

 sstable2json should return better error message if the usage is wrong
 -

 Key: CASSANDRA-2181
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2181
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
 Environment: linux
Reporter: Shotaro Kamio
Assignee: David Alves
Priority: Minor
 Fix For: 1.2

 Attachments: 2181.patch


 These errors are not user friendly.
 (Cassandra 0.7.2)
 $ bin/sstable2json PATH_TO/Order-f-7-Data.db -k aaa -x 0
  WARN 21:55:34,383 Schema definitions were defined both locally and in 
 cassandra.yaml. Definitions in cassandra.yaml were ignored.
 {
 aaa: {Exception in thread main java.lang.NullPointerException
   at 
 org.apache.cassandra.db.columniterator.SSTableSliceIterator.hasNext(SSTableSliceIterator.java:108)
   at 
 org.apache.cassandra.tools.SSTableExport.serializeRow(SSTableExport.java:178)
   at 
 org.apache.cassandra.tools.SSTableExport.export(SSTableExport.java:310)
   at org.apache.cassandra.tools.SSTableExport.main(SSTableExport.java:444)
 $ bin/sstable2json PATH_TO/Order-f-7-Data.db -k aaa 
  WARN 21:55:49,603 Schema definitions were defined both locally and in 
 cassandra.yaml. Definitions in cassandra.yaml were ignored.
 Exception in thread main java.lang.NullPointerException
   at 
 org.apache.cassandra.tools.SSTableExport.export(SSTableExport.java:284)
   at org.apache.cassandra.tools.SSTableExport.main(SSTableExport.java:444)





Git Push Summary

2012-07-03 Thread eevans
Updated Tags:  refs/tags/trunk/3881 [deleted] 602e383d6


[jira] [Created] (CASSANDRA-4407) timeout exception in client side and AssertionError in cassandra log

2012-07-03 Thread Nare Gasparyan (JIRA)
Nare Gasparyan created CASSANDRA-4407:
-

 Summary: timeout exception in client side and AssertionError in cassandra log
 Key: CASSANDRA-4407
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4407
 Project: Cassandra
  Issue Type: Bug
  Components: API, Core
Affects Versions: 1.0.10, 1.0.8
 Environment: Debian linux
Reporter: Nare Gasparyan
Priority: Critical


Hello.

I moved my Cassandra from one server to another. Cassandra's version on the 
first server was 1.0.8; on the new one it is 1.0.10. I also moved the data by 
copying all files under /var/lib/cassandra/data/mykeyspace, and now it seems 
that some data was corrupted during the migration, as I get the following 
exceptions when getting data for some keys.
In cassandra client I get

me.prettyprint.hector.api.exceptions.HTimedOutException: TimedOutException()
at 
me.prettyprint.cassandra.service.ExceptionsTranslatorImpl.translate(ExceptionsTranslatorImpl.java:42)
at 
me.prettyprint.cassandra.service.KeyspaceServiceImpl$11.execute(KeyspaceServiceImpl.java:432)
at 
me.prettyprint.cassandra.service.KeyspaceServiceImpl$11.execute(KeyspaceServiceImpl.java:416)
at 
me.prettyprint.cassandra.service.Operation.executeAndSetResult(Operation.java:103)

In cassandra system.log I get

ERROR [ReadStage:492] 2012-07-02 21:50:50,280 AbstractCassandraDaemon.java 
(line 139) Fatal exception in thread Thread[ReadStage:492,5,main]
java.lang.AssertionError: 820255506 vs 518814720
at 
org.apache.cassandra.io.util.MmappedSegmentedFile.floor(MmappedSegmentedFile.java:65)
at 
org.apache.cassandra.io.util.MmappedSegmentedFile.getSegment(MmappedSegmentedFile.java:80)
at 
org.apache.cassandra.io.sstable.SSTableReader.getFileDataInput(SSTableReader.java:830)

Is this really a data corruption problem, and how can I fix it?
I have only one node, so nodetool repair won't help.

Thanks.





Git Push Summary

2012-07-03 Thread eevans
Updated Tags:  refs/tags/trunk/4120 [deleted] 087f902d6


[jira] [Updated] (CASSANDRA-4408) Don't log mx4j stuff at info

2012-07-03 Thread Nick Bailey (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Bailey updated CASSANDRA-4408:
---

Attachment: 0001-Log-mx4j-info-at-debug.patch

 Don't log mx4j stuff at info
 

 Key: CASSANDRA-4408
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4408
 Project: Cassandra
  Issue Type: Improvement
Reporter: Nick Bailey
Assignee: Nick Bailey
 Fix For: 1.2

 Attachments: 0001-Log-mx4j-info-at-debug.patch


 MX4J is an optional dependency. AFAIK not many people use it, but I often see 
 people confused by the log message we output saying "Will not load mx4j...". 
 Just going to change the log message to debug.





[jira] [Created] (CASSANDRA-4408) Don't log mx4j stuff at info

2012-07-03 Thread Nick Bailey (JIRA)
Nick Bailey created CASSANDRA-4408:
--

 Summary: Don't log mx4j stuff at info
 Key: CASSANDRA-4408
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4408
 Project: Cassandra
  Issue Type: Improvement
Reporter: Nick Bailey
Assignee: Nick Bailey
 Fix For: 1.2
 Attachments: 0001-Log-mx4j-info-at-debug.patch

MX4J is an optional dependency. AFAIK not many people use it, but I often see 
people confused by the log message we output saying "Will not load mx4j...".

Just going to change the log message to debug.





[jira] [Commented] (CASSANDRA-4121) TokenMetadata supports multiple tokens per host

2012-07-03 Thread Eric Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13406090#comment-13406090
 ] 

Eric Evans commented on CASSANDRA-4121:
---

bq. If we do go with these wrappers I'd prefer to keep it as thin as possible – 
no internal synchronization, no copies on inverse(). (I think the existing 
synchronized wrapper around the bootstrapTokens BiMap saves us exactly one use 
of the explicit lock, in pendingRangeChanges. I'm fine with giving that up.)

I think the code reads better with them than without, but I can understand the 
argument for keeping them thin.  Sam?

bq. Would it simplify things to make calculateNaturalEndpoints return Set? 
getNaturalEndpoints can still copy into an ArrayList for callers that want to 
sort.

I'm not sure it will simplify the existing code per se, but it's probably more 
correct.

bq. forceTableRepairPrimaryRange could use FBUtilities.waitOnFutures

I updated the patch to make it use FBUtilities.waitOnFuture (singular); the mix 
of unparameterized and wildcard generic Futures is kind of a mess here.

{quote}
* @Override on SBMVM is redundant
* some redundant type information on .create calls
{quote}

Done.

 TokenMetadata supports multiple tokens per host
 ---

 Key: CASSANDRA-4121
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4121
 Project: Cassandra
  Issue Type: Sub-task
  Components: Core
Reporter: Sam Overton
Assignee: Sam Overton
  Labels: vnodes
 Fix For: 1.2


 _Edit0: Append patch information._
 h3. Patches
 ||Compare||Raw diff||Description||
 |[01_support_multiple_tokens_per_host|https://github.com/acunu/cassandra/compare/top-bases/p/4121/01_support_multiple_tokens_per_host...p/4121/01_support_multiple_tokens_per_host]|[01_support_multiple_tokens_per_host.patch|https://github.com/acunu/cassandra/compare/top-bases/p/4121/01_support_multiple_tokens_per_host...p/4121/01_support_multiple_tokens_per_host.diff]|Support
  associating more than one token per node|
 
 _Note: These are branches managed with TopGit. If you are applying the patch 
 output manually, you will either need to filter the TopGit metadata files 
 (i.e. {{wget -O - url | filterdiff -x*.topdeps -x*.topmsg | patch -p1}}), 
 or remove them afterward ({{rm .topmsg .topdeps}})._





[jira] [Commented] (CASSANDRA-3564) flush before shutdown so restart is faster

2012-07-03 Thread David Alves (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-3564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13406093#comment-13406093
 ] 

David Alves commented on CASSANDRA-3564:


I'm trying to wrap this up...

I changed nodecmd to handle the expected exception nicely (the EOF one, because 
thrift was closed on the other side). The cassandra shell script does not 
deal with PID reading/killing. So I was wondering: should we include this 
call-nodetool, read/check-PID, and kill functionality in cassandra, or should 
we deal with it within nodetool (maybe make the flushAndExit function return 
the PID to nodetool and have that kill the process if required)?



 flush before shutdown so restart is faster
 --

 Key: CASSANDRA-3564
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3564
 Project: Cassandra
  Issue Type: New Feature
  Components: Packaging
Reporter: Jonathan Ellis
Assignee: David Alves
Priority: Minor
 Fix For: 1.2

 Attachments: 3564.patch


 Cassandra handles flush in its shutdown hook for durable_writes=false CFs 
 (otherwise we're *guaranteed* to lose data) but leaves it up to the operator 
 otherwise.  I'd rather leave it that way to offer these semantics:
 - cassandra stop = shutdown nicely [explicit flush, then kill -int]
 - kill -INT = shutdown faster but don't lose any updates [current behavior]
 - kill -KILL = lose most recent writes unless durable_writes=true and batch 
 commits are on [also current behavior]
 But if it's not reasonable to use nodetool from the init script then I guess 
 we can just make the shutdown hook flush everything.





[jira] [Comment Edited] (CASSANDRA-3564) flush before shutdown so restart is faster

2012-07-03 Thread David Alves (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-3564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13406093#comment-13406093
 ] 

David Alves edited comment on CASSANDRA-3564 at 7/3/12 9:58 PM:


I'm trying to wrap this up...

I changed nodecmd to handle the expected exception nicely (the EOF one, because 
thrift was closed on the other side). The cassandra shell script does not 
deal with PID reading/killing. So I was wondering: should we include this 
functionality in cassandra, i.e., call nodetool, read/check the PID, wait, and 
kill, or should we deal with it within nodetool (maybe make the flushAndExit 
function return the PID to nodetool and have that kill the process if required)?



  was (Author: dr-alves):
I'm trying to wrap this up...

I changed nodecmd to handle the expected exception nicely (the EOF one because 
thrift was closed on the other side). the cassandra shell script does not 
deal with PID reading/killing. So I was wondering should we include this call 
nodetool, read/check pid and kill funcionality in  cassandra or should we 
deal with it within nodetool (maybe make the flushAndExit function return the 
PID to nodetool and have that kill the process if required).


  
 flush before shutdown so restart is faster
 --

 Key: CASSANDRA-3564
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3564
 Project: Cassandra
  Issue Type: New Feature
  Components: Packaging
Reporter: Jonathan Ellis
Assignee: David Alves
Priority: Minor
 Fix For: 1.2

 Attachments: 3564.patch


 Cassandra handles flush in its shutdown hook for durable_writes=false CFs 
 (otherwise we're *guaranteed* to lose data) but leaves it up to the operator 
 otherwise.  I'd rather leave it that way to offer these semantics:
 - cassandra stop = shutdown nicely [explicit flush, then kill -int]
 - kill -INT = shutdown faster but don't lose any updates [current behavior]
 - kill -KILL = lose most recent writes unless durable_writes=true and batch 
 commits are on [also current behavior]
 But if it's not reasonable to use nodetool from the init script then I guess 
 we can just make the shutdown hook flush everything.





[jira] [Commented] (CASSANDRA-3564) flush before shutdown so restart is faster

2012-07-03 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-3564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13406098#comment-13406098
 ] 

Brandon Williams commented on CASSANDRA-3564:
-

The debian/init script should handle it (and has an is_running function that 
you can use to get the pid).





Git Push Summary

2012-07-03 Thread eevans
Updated Tags:  refs/tags/0.8.10-tentative [deleted] 038b8f212


Git Push Summary

2012-07-03 Thread eevans
Updated Tags:  refs/tags/1.0.10-tentative [deleted] b2ca7f821


Git Push Summary

2012-07-03 Thread eevans
Updated Tags:  refs/tags/1.0.7-tentative [deleted] 6a1ed6205


Git Push Summary

2012-07-03 Thread eevans
Updated Tags:  refs/tags/1.0.9-tentative [deleted] 3fd0fb6aa


Git Push Summary

2012-07-03 Thread eevans
Updated Tags:  refs/tags/1.1.0-beta2-tentative [deleted] 643d18af2


Git Push Summary

2012-07-03 Thread eevans
Updated Tags:  refs/tags/1.1.0-tentative [deleted] c67153282


Git Push Summary

2012-07-03 Thread eevans
Updated Tags:  refs/tags/BeforeRebase20Jan [deleted] eba6c6fd5


[jira] [Updated] (CASSANDRA-4408) Don't log mx4j stuff at info

2012-07-03 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-4408:
--

Priority: Trivial  (was: Major)

 Don't log mx4j stuff at info
 

 Key: CASSANDRA-4408
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4408
 Project: Cassandra
  Issue Type: Improvement
Reporter: Nick Bailey
Assignee: Nick Bailey
Priority: Trivial
 Fix For: 1.1.3

 Attachments: 0001-Log-mx4j-info-at-debug.patch


 MX4J is an optional dependency. AFAIK not many people use it, but I often see 
 people confused at the log message we output saying Will not load mx4j...
 Just going to change the log message to debug.





[jira] [Commented] (CASSANDRA-4407) timeout exception on client side and AssertionError in cassandra log

2012-07-03 Thread Nare Gasparyan (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13406117#comment-13406117
 ] 

Nare Gasparyan commented on CASSANDRA-4407:
---

Oh, sorry. Please tell me where exactly to write.

 timeout exception on client side and AssertionError in cassandra log
 ---

 Key: CASSANDRA-4407
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4407
 Project: Cassandra
  Issue Type: Bug
  Components: API, Core
Affects Versions: 1.0.8, 1.0.10
 Environment: Debian linux
Reporter: Nare Gasparyan
Priority: Critical

 Hello.
 I moved my cassandra from one server to another. Cassandra's version on the 
 first server was 1.0.8; on the new one it is 1.0.10. I also moved the data by 
 copying all files under /var/lib/cassandra/data/mykeyspace.
 Now it seems to me that some data has been corrupted during the migration, as 
 I get the following exceptions when getting data for some keys.
 In the cassandra client I get
 me.prettyprint.hector.api.exceptions.HTimedOutException: TimedOutException()
 at 
 me.prettyprint.cassandra.service.ExceptionsTranslatorImpl.translate(ExceptionsTranslatorImpl.java:42)
 at 
 me.prettyprint.cassandra.service.KeyspaceServiceImpl$11.execute(KeyspaceServiceImpl.java:432)
 at 
 me.prettyprint.cassandra.service.KeyspaceServiceImpl$11.execute(KeyspaceServiceImpl.java:416)
 at 
 me.prettyprint.cassandra.service.Operation.executeAndSetResult(Operation.java:103)
 In cassandra system.log I get
 ERROR [ReadStage:492] 2012-07-02 21:50:50,280 AbstractCassandraDaemon.java 
 (line 139) Fatal exception in thread Thread[ReadStage:492,5,main]
 java.lang.AssertionError: 820255506 vs 518814720
 at 
 org.apache.cassandra.io.util.MmappedSegmentedFile.floor(MmappedSegmentedFile.java:65)
 at 
 org.apache.cassandra.io.util.MmappedSegmentedFile.getSegment(MmappedSegmentedFile.java:80)
 at 
 org.apache.cassandra.io.sstable.SSTableReader.getFileDataInput(SSTableReader.java:830)
 Is this really a data corruption problem, and how can I fix it?
 I have only one node, so nodetool repair won't help.
 Thanks.





[1/2] git commit: log mx4j absence at debug

2012-07-03 Thread jbellis
Updated Branches:
  refs/heads/trunk d2b60f289 -> bbfab669f


log mx4j absence at debug


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/bbfab669
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/bbfab669
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/bbfab669

Branch: refs/heads/trunk
Commit: bbfab669f68a0be312c7200ba4db2025f537e46b
Parents: 587ed05
Author: Jonathan Ellis jbel...@apache.org
Authored: Tue Jul 3 17:14:13 2012 -0500
Committer: Jonathan Ellis jbel...@apache.org
Committed: Tue Jul 3 17:23:43 2012 -0500

--
 src/java/org/apache/cassandra/utils/Mx4jTool.java |2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/bbfab669/src/java/org/apache/cassandra/utils/Mx4jTool.java
--
diff --git a/src/java/org/apache/cassandra/utils/Mx4jTool.java 
b/src/java/org/apache/cassandra/utils/Mx4jTool.java
index 904dc94..e8fdb29 100644
--- a/src/java/org/apache/cassandra/utils/Mx4jTool.java
+++ b/src/java/org/apache/cassandra/utils/Mx4jTool.java
@@ -65,7 +65,7 @@ public class Mx4jTool
 }
 catch (ClassNotFoundException e)
 {
-logger.info("Will not load MX4J, mx4j-tools.jar is not in the classpath");
+logger.debug("Will not load MX4J, mx4j-tools.jar is not in the classpath");
 }
 catch(Exception e)
 {



[2/2] git commit: implement addAllWithSizeDelta for ThreadSafeSortedColumns (used in Memtable for supercolumns); see #4399

2012-07-03 Thread jbellis
implement addAllWithSizeDelta for ThreadSafeSortedColumns (used in Memtable for 
supercolumns); see #4399


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/587ed053
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/587ed053
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/587ed053

Branch: refs/heads/trunk
Commit: 587ed053e9159dd3660283df2aa5a89e7f2464d8
Parents: d2b60f2
Author: Jonathan Ellis jbel...@apache.org
Authored: Tue Jul 3 12:29:24 2012 -0500
Committer: Jonathan Ellis jbel...@apache.org
Committed: Tue Jul 3 17:23:42 2012 -0500

--
 .../db/AbstractThreadUnsafeSortedColumns.java  |   10 +
 .../cassandra/db/ArrayBackedSortedColumns.java |3 +-
 .../apache/cassandra/db/AtomicSortedColumns.java   |   19 ++---
 .../cassandra/db/ThreadSafeSortedColumns.java  |   30 ---
 .../cassandra/db/TreeMapBackedSortedColumns.java   |3 +-
 5 files changed, 33 insertions(+), 32 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/587ed053/src/java/org/apache/cassandra/db/AbstractThreadUnsafeSortedColumns.java
--
diff --git 
a/src/java/org/apache/cassandra/db/AbstractThreadUnsafeSortedColumns.java 
b/src/java/org/apache/cassandra/db/AbstractThreadUnsafeSortedColumns.java
index 90fa9b4..0e71cb3 100644
--- a/src/java/org/apache/cassandra/db/AbstractThreadUnsafeSortedColumns.java
+++ b/src/java/org/apache/cassandra/db/AbstractThreadUnsafeSortedColumns.java
@@ -90,21 +90,13 @@ public abstract class AbstractThreadUnsafeSortedColumns 
implements ISortedColumn
 }
 }
 
-// Implementations should implement this rather than addAll to avoid
-// having to care about the deletion infos
-protected abstract void addAllColumns(ISortedColumns columns, Allocator allocator, Function<IColumn, IColumn> transformation);
-
 public long addAllWithSizeDelta(ISortedColumns cm, Allocator allocator, Function<IColumn, IColumn> transformation)
 {
 // sizeDelta is only needed by memtable updates which should not be 
using thread-unsafe containers
 throw new UnsupportedOperationException();
 }
 
-public void addAll(ISortedColumns columns, Allocator allocator, Function<IColumn, IColumn> transformation)
-{
-addAllColumns(columns, allocator, transformation);
-delete(columns.getDeletionInfo());
-}
+public abstract void addAll(ISortedColumns columns, Allocator allocator, Function<IColumn, IColumn> transformation);
 
 public boolean isEmpty()
 {

http://git-wip-us.apache.org/repos/asf/cassandra/blob/587ed053/src/java/org/apache/cassandra/db/ArrayBackedSortedColumns.java
--
diff --git a/src/java/org/apache/cassandra/db/ArrayBackedSortedColumns.java 
b/src/java/org/apache/cassandra/db/ArrayBackedSortedColumns.java
index 4dc1e3e..d76b443 100644
--- a/src/java/org/apache/cassandra/db/ArrayBackedSortedColumns.java
+++ b/src/java/org/apache/cassandra/db/ArrayBackedSortedColumns.java
@@ -207,8 +207,9 @@ public class ArrayBackedSortedColumns extends 
AbstractThreadUnsafeSortedColumns
 return -mid - (result < 0 ? 1 : 2);
 }
 
-protected void addAllColumns(ISortedColumns cm, Allocator allocator, Function<IColumn, IColumn> transformation)
+public void addAll(ISortedColumns cm, Allocator allocator, Function<IColumn, IColumn> transformation)
 {
+delete(cm.getDeletionInfo());
 if (cm.isEmpty())
 return;
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/587ed053/src/java/org/apache/cassandra/db/AtomicSortedColumns.java
--
diff --git a/src/java/org/apache/cassandra/db/AtomicSortedColumns.java 
b/src/java/org/apache/cassandra/db/AtomicSortedColumns.java
index c8f5f3a..f1629ab 100644
--- a/src/java/org/apache/cassandra/db/AtomicSortedColumns.java
+++ b/src/java/org/apache/cassandra/db/AtomicSortedColumns.java
@@ -338,41 +338,30 @@ public class AtomicSortedColumns implements ISortedColumns
 long addColumn(IColumn column, Allocator allocator)
 {
 ByteBuffer name = column.name();
-IColumn oldColumn;
-long sizeDelta = 0;
 while (true)
 {
-oldColumn = map.putIfAbsent(name, column);
+IColumn oldColumn = map.putIfAbsent(name, column);
 if (oldColumn == null)
-{
-sizeDelta += column.dataSize();
-break;
-}
+return column.dataSize();
 
 if (oldColumn instanceof SuperColumn)
 {
 assert 

[jira] [Commented] (CASSANDRA-4407) timeout exception on client side and AssertionError in cassandra log

2012-07-03 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13406121#comment-13406121
 ] 

Jonathan Ellis commented on CASSANDRA-4407:
---

look under Mailing lists on http://cassandra.apache.org/





[jira] [Commented] (CASSANDRA-4407) timeout exception on client side and AssertionError in cassandra log

2012-07-03 Thread Nare Gasparyan (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13406125#comment-13406125
 ] 

Nare Gasparyan commented on CASSANDRA-4407:
---

thanks





[jira] [Updated] (CASSANDRA-4292) Per-disk I/O queues

2012-07-03 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-4292:
--

 Reviewer: jbellis
 Priority: Major  (was: Minor)
Fix Version/s: 1.2
 Assignee: Yuki Morishita

 Per-disk I/O queues
 ---

 Key: CASSANDRA-4292
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4292
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Jonathan Ellis
Assignee: Yuki Morishita
 Fix For: 1.2


 As noted in CASSANDRA-809, we have a certain amount of flush (and compaction) 
 threads, which mix and match disk volumes indiscriminately.  It may be worth 
 creating a tight thread - disk affinity, to prevent unnecessary conflict at 
 that level.
 OTOH as SSDs become more prevalent this becomes a non-issue.  Unclear how 
 much pain this actually causes in practice in the meantime.





[jira] [Commented] (CASSANDRA-4340) Cassandra upgrade to 1.1.1 resulted in slow query issue

2012-07-03 Thread Pavel Yaskevich (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13406206#comment-13406206
 ] 

Pavel Yaskevich commented on CASSANDRA-4340:


Ivan, is everything all right with the CF after the rebuild and a work day's traffic?

 Cassandra upgrade to 1.1.1 resulted in slow query issue
 ---

 Key: CASSANDRA-4340
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4340
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.1.1
 Environment: Ubuntu Linux, Java 7, Hector 1.0-1
Reporter: Ivan Ganza
Assignee: Pavel Yaskevich
 Fix For: 1.1.3

 Attachments: CassandraIssue.java


 We have recently introduced Cassandra at the Globe and Mail here in Toronto, 
 Canada.  We are processing and storing the North American stock-market feed.  
 We have found it to work very quickly and things have been looking very good.
 Recently we upgraded to version 1.1.1 and then we have noticed some issues 
 occurring.
 I will try to describe it for you here.  Basically one operation that we very 
 often perform and is very critical is the ability to 'get the latest quote'.  
 This would return to you the latest Quote adjusted against exchange delay 
 rules.  With Cassandra version 1.0.3 we could get a Quote in around 2ms.  
 After the update we are looking at times of at least 2-3 seconds.
 The way we query the quote is using a REVERSED SuperSliceQuery  with 
 start=now, end=00:00:00.000 (beginning of day) LIMITED to 1.
 Our investigation leads us to suspect that, since the upgrade, Cassandra seems 
 to be reading the sstable from disk even when we request a small range of data 
 only 5 seconds back.  If you look at the output below you can see that the 
 query does NOT get slower as the lookback increases from 5 sec, 60 sec, 15 
 min, 60 min, and 24 hours.
 We also noticed that the query was very fast for the first five minutes of 
 trading, apparently until the first sstable was flushed to disk.  After that 
 we go into query times of 1-2 seconds or so.
 Query time[lookback=5]:[1711ms]
 Query time[lookback=60]:[1592ms]
 Query time[lookback=900]:[1520ms]
 Query time[lookback=3600]:[1294ms]
 Query time[lookback=86400]:[1391ms]
 We would really appreciate input or help on this.
 Cassandra version: 1.1.1
 Hector version: 1.0-1
 ---
 public void testCassandraIssue() {
 try {
   int[] seconds = new int[]{ 5, 60, 60 * 15, 60 * 60, 60 * 60 * 24 };
   for (int sec : seconds) {
 DateTime start = new DateTime();
 SuperSliceQuery<String, String, String, String> superSliceQuery =
 HFactory.createSuperSliceQuery(keyspaceOperator,
 StringSerializer.get(), StringSerializer.get(),
 StringSerializer.get(), StringSerializer.get());
 superSliceQuery.setKey("101390" + "." + testFormatter.print(start));
 superSliceQuery.setColumnFamily("Quotes");
 superSliceQuery.setRange(superKeyFormatter.print(start),
 superKeyFormatter.print(start.minusSeconds(sec)),
 true,
 1);
 long theStart = System.currentTimeMillis();
 QueryResult<SuperSlice<String, String, String>> result =
 superSliceQuery.execute();
 long end = System.currentTimeMillis();
 System.out.println("Query time[lookback=" + sec + "]:["
 + (end - theStart) + "ms]");
   }
 } catch (Exception e) {
   e.printStackTrace();
   fail(e.getMessage());
 }
 }
 ---
 create column family Quotes
 with column_type = Super
 and  comparator = BytesType
 and subcomparator = BytesType
 and keys_cached = 7000
 and rows_cached = 0
 and row_cache_save_period = 0
 and key_cache_save_period = 3600
 and memtable_throughput = 255
 and memtable_operations = 0.29
 AND compression_options={sstable_compression:SnappyCompressor, 
 chunk_length_kb:64};





[jira] [Commented] (CASSANDRA-4179) Add more general support for composites (to row key, column value)

2012-07-03 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13406216#comment-13406216
 ] 

Jonathan Ellis commented on CASSANDRA-4179:
---

Composite in the key seems a lot more useful to me moving forward...  I'd lean 
towards saying declare your value a blob and destructure it client-side for 
values -- which is what upgraders will be doing already, so it's not a big deal 
in that respect.
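The blob-and-destructure approach suggested above can be sketched client-side. This is an illustration only: the length-prefixed layout below is an assumption for the example, not the exact on-disk CompositeType encoding, and BlobComposite is a hypothetical helper, not part of any Cassandra client.

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

// Sketch: pack two logical values into one blob column value and
// unpack them again in the application.
public class BlobComposite
{
    static ByteBuffer pack(String a, String b)
    {
        byte[] ab = a.getBytes(StandardCharsets.UTF_8);
        byte[] bb = b.getBytes(StandardCharsets.UTF_8);
        ByteBuffer out = ByteBuffer.allocate(4 + ab.length + 4 + bb.length);
        out.putInt(ab.length).put(ab).putInt(bb.length).put(bb);
        out.flip();
        return out;
    }

    static String[] unpack(ByteBuffer in)
    {
        byte[] a = new byte[in.getInt()];
        in.get(a);
        byte[] b = new byte[in.getInt()];
        in.get(b);
        return new String[]{ new String(a, StandardCharsets.UTF_8),
                             new String(b, StandardCharsets.UTF_8) };
    }

    public static void main(String[] args)
    {
        String[] parts = unpack(pack("value1", "value2"));
        System.out.println(parts[0] + "," + parts[1]);
    }
}
```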

 Add more general support for composites (to row key, column value)
 --

 Key: CASSANDRA-4179
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4179
 Project: Cassandra
  Issue Type: Sub-task
  Components: API
Reporter: Sylvain Lebresne
Priority: Minor

 Currently CQL3 has a nice syntax for using composites in the column name 
 (it's more than that in fact, it creates a whole new abstraction, but let's 
 say I'm talking implementation here). There are however 2 other places where 
 composites could be used (again implementation wise): the row key and the 
 column value. This ticket proposes to explore which of those make sense for 
 CQL3 and how.
 For the row key, I really think that CQL support makes sense. It's very 
 common (and useful) to want to stuff composite information in a row key. 
 Sharding a time series (CASSANDRA-4176) is probably the best example but there 
 are others.
 For the column value it is less clear. CQL3 makes it very transparent and 
 convenient to store multiple related values into multiple columns, so maybe 
 composites in a column value are much less needed. I do still see two cases 
 for which it could be handy:
 # to save some disk/memory space, if you do know it makes no sense to 
 insert/read the two values separately.
 # if you want to enforce that two values should not be inserted separately, 
 i.e. to enforce a form of constraint to avoid programmatic errors.
 Those are not widely useful things, but my reasoning is that if whatever 
 syntax we come up with for grouping the row key in a composite trivially 
 extends to column values, why not support it.
 As for syntax I have 3 suggestions (that are just that, suggestions):
 # If we only care about allowing grouping for row keys:
 {noformat}
 CREATE TABLE timeline (
 name text,
 month int,
 ts timestamp,
 value text,
 PRIMARY KEY ((name, month), ts)
 )
 {noformat}
 # A syntax that could work for both grouping in row key and column value:
 {noformat}
 CREATE TABLE timeline (
 name text,
 month int,
 ts timestamp,
 value1 text,
 value2 text,
 GROUP (name, month) as key,
 GROUP (value1, value2),
 PRIMARY KEY (key, ts)
 )
 {noformat}
 # An alternative to the preceding one:
 {noformat}
 CREATE TABLE timeline (
 name text,
 month int,
 ts timestamp,
 value1 text,
 value2 text,
 GROUP (name, month) as key,
 GROUP (value1, value2),
 PRIMARY KEY (key, ts)
 ) WITH GROUP (name, month) AS key
AND GROUP (value1, value2)
 {noformat}





[jira] [Commented] (CASSANDRA-4285) Atomic, eventually-consistent batches

2012-07-03 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13406221#comment-13406221
 ] 

Jonathan Ellis commented on CASSANDRA-4285:
---

{code}
CREATE TABLE batchlog (
  coordinator inet,
  shard   int,
  id  uuid,
  datablob,
  PRIMARY KEY ((coordinator, shard))
);
{code}

(Using CASSANDRA-4179 syntax for composite-partition-key.)  As discussed in 
CASSANDRA-1337, this is going to be a very tombstone-heavy CF since the 
workload looks like

# insert batchlog entry
# replicate batch
# remove batchlog entry

So we're going to want to shard each coordinator's entries to avoid the 
problems attendant to Very Wide Rows.  Unlike most such workloads, we don't 
actually need to time-order our entries; since batches are idempotent, replay 
order won't matter.  Thus, we can just pick a random shard id (in a known 
range, say 0 to 63) to use for each entry, and on replay we will read from 
each shard.

Other notes:
- I think we can cheat in the replication strategy by knowing that part of the 
partition key is the coordinator address, to avoid replicating to itself
- default RF will be 1; operators can increase if desired
- operators can also disable [local] commitlog on the batchlog CF, if desired
- gcgs can be safely set to zero in all cases; worst that happens is we replay 
a write a second time which is not a problem
- Currently we always write tombstones to sstables in Memtable flush.  Should 
add a check for gcgs=0 to do an extra removeDeleted pass, which would make the 
actual sstable contents for batchlog almost nothing (since the normal, 
everything-is-working case will be that it gets deleted out while still in the 
memtable).
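The shard-id scheme above can be sketched as follows. SHARDS and the method names are illustrative stand-ins, not Cassandra code; the point is just that writes pick a random shard while replay scans every shard, since ordering doesn't matter for idempotent batches.

```java
import java.util.concurrent.ThreadLocalRandom;

// Sketch: each batchlog entry gets a random shard in a fixed range so
// one coordinator's entries spread over 64 partitions instead of one
// very wide row.
public class BatchlogSharding
{
    static final int SHARDS = 64;

    static int randomShard()
    {
        return ThreadLocalRandom.current().nextInt(SHARDS);
    }

    public static void main(String[] args)
    {
        int shard = randomShard();
        System.out.println(shard >= 0 && shard < SHARDS);

        // Replay doesn't need ordering (batches are idempotent), so it
        // just visits every shard for this coordinator.
        int visited = 0;
        for (int s = 0; s < SHARDS; s++)
            visited++; // real code would read partition (coordinator, s)
        System.out.println(visited);
    }
}
```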

 Atomic, eventually-consistent batches
 -

 Key: CASSANDRA-4285
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4285
 Project: Cassandra
  Issue Type: New Feature
  Components: API, Core
Reporter: Jonathan Ellis

 I discussed this in the context of triggers (CASSANDRA-1311) but it's useful 
 as a standalone feature as well.





[jira] [Commented] (CASSANDRA-4176) Support for sharding wide rows in CQL 3.0

2012-07-03 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13406222#comment-13406222
 ] 

Jonathan Ellis commented on CASSANDRA-4176:
---

CASSANDRA-4285 is a use case for this.  I suggest implementing that the hard 
way first and then seeing what re-usable patterns we can extract.

 Support for sharding wide rows in CQL 3.0
 -

 Key: CASSANDRA-4176
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4176
 Project: Cassandra
  Issue Type: Sub-task
  Components: API
Reporter: Nick Bailey
 Fix For: 1.2


 CQL 3.0 currently has support for defining wide rows by declaring a composite 
 primary key. For example:
 {noformat}
 CREATE TABLE timeline (
 user_id varchar,
 tweet_id uuid,
 author varchar,
 body varchar,
 PRIMARY KEY (user_id, tweet_id)
 );
 {noformat}
 It would also be useful to manage sharding a wide row through the cql schema. 
 This would require being able to split up the actual row key in the schema 
 definition. In the above example you might want to make the row key a 
 combination of user_id and day_of_tweet, in order to shard timelines by day. 
 This might look something like:
 {noformat}
 CREATE TABLE timeline (
 user_id varchar,
 day_of_tweet date,
 tweet_id uuid,
 author varchar,
 body varchar,
 PRIMARY KEY (user_id REQUIRED, day_of_tweet REQUIRED, tweet_id)
 );
 {noformat}
 That's probably a terrible attempt at how to structure that in CQL, but I 
 think I've gotten the point across. I tagged this for cql 3.0, but I'm 
 honestly not sure how much work it might be. As far as I know, built-in 
 support for composite keys is limited.
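Until the schema can declare a sharded row key like the one sketched above, applications typically build the day-sharded key themselves. A minimal client-side sketch, assuming a simple "user:day" string key format (the format and TimelineShardKey are hypothetical, chosen for illustration):

```java
import java.time.LocalDate;

// Sketch: compose a day-sharded timeline row key client-side, so each
// (user, day) pair lands in its own row.
public class TimelineShardKey
{
    static String rowKey(String userId, LocalDate dayOfTweet)
    {
        return userId + ":" + dayOfTweet; // ISO yyyy-MM-dd day component
    }

    public static void main(String[] args)
    {
        System.out.println(rowKey("jdoe", LocalDate.of(2012, 7, 3)));
    }
}
```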





[jira] [Commented] (CASSANDRA-4179) Add more general support for composites (to row key, column value)

2012-07-03 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13406223#comment-13406223
 ] 

Jonathan Ellis commented on CASSANDRA-4179:
---

I'd like to use this for CASSANDRA-4285, and I'm fine with the tuple syntax 
for partition key compositing.  Is this reasonable to do in the next week or 
two?

 Add more general support for composites (to row key, column value)
 --

 Key: CASSANDRA-4179
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4179
 Project: Cassandra
  Issue Type: Sub-task
  Components: API
Reporter: Sylvain Lebresne
Priority: Minor

 Currently CQL3 have a nice syntax for using composites in the column name 
 (it's more than that in fact, it creates a whole new abstraction but let's 
 say I'm talking implementation here). There is however 2 other place where 
 composites could be used (again implementation wise): the row key and the 
 column value. This ticket proposes to explore which of those make sense for 
 CQL3 and how.
 For the row key, I really think that CQL support makes sense. It's very 
 common (and useful) to want to stuff composite information in a row key. 
 Sharding a time serie (CASSANDRA-4176) is probably the best example but there 
 is other.
 For the column value it is less clear. CQL3 makes it very transparent and 
 convenient to store multiple related values into multiple columns so maybe 
 composites in a column value is much less needed. I do still see two cases 
 for which it could be handy:
 # to save some disk/memory space, if you do know it makes no sense to 
 insert/read two value separatly.
 # if you want to enforce that two values should not be inserted separatly. 
 I.e. to enforce a form of constraint to avoid programatic error.
 Those are not widely useful things, but my reasoning is that if whatever 
 syntax we come up for grouping row key in a composite trivially extends to 
 column values, why not support it.
 As for syntax I have 3 suggestions (that are just that, suggestions):
 # If we only care about allowing grouping for row keys:
 {noformat}
 CREATE TABLE timeline (
 name text,
 month int,
 ts timestamp,
 value text,
 PRIMARY KEY ((name, month), ts)
 )
 {noformat}
 # A syntax that could work for both grouping in row key and colum value:
 {noformat}
 CREATE TABLE timeline (
 name text,
 month int,
 ts timestamp,
 value1 text,
 value2 text,
 GROUP (name, month) as key,
 GROUP (value1, value2),
 PRIMARY KEY (key, ts)
 )
 {noformat}
 # An alternative to the preceding one:
 {noformat}
 CREATE TABLE timeline (
 name text,
 month int,
 ts timestamp,
 value1 text,
 value2 text,
 PRIMARY KEY (key, ts)
 ) WITH GROUP (name, month) AS key
AND GROUP (value1, value2)
 {noformat}
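 Whatever syntax wins, the semantics of suggestion 1 can be sketched as follows 
 (a hedged Python illustration, not parser code): when the first element of the 
 PRIMARY KEY is a group, it becomes the composite partition key, and the 
 remaining columns cluster within it.

```python
def split_primary_key(primary_key):
    # For PRIMARY KEY ((name, month), ts): the leading tuple is the
    # composite partition key; everything after it is a clustering column.
    first, *rest = primary_key
    partition = tuple(first) if isinstance(first, tuple) else (first,)
    return partition, rest
```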

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Comment Edited] (CASSANDRA-4285) Atomic, eventually-consistent batches

2012-07-03 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13406221#comment-13406221
 ] 

Jonathan Ellis edited comment on CASSANDRA-4285 at 7/4/12 1:12 AM:
---

Here's the data model I'm leaning towards:

{code}
CREATE TABLE batchlog (
  coordinator inet,
  shard       int,
  id          uuid,
  data        blob,
  PRIMARY KEY ((coordinator, shard))
);
{code}

(Using CASSANDRA-4179 syntax for composite-partition-key.)  As discussed in 
CASSANDRA-1311, this is going to be a very tombstone-heavy CF since the 
workload looks like

# insert batchlog entry
# replicate batch
# remove batchlog entry

So we're going to want to shard each coordinator's entries to avoid the 
problems attendant to Very Wide Rows.  Unlike most such workloads, we don't 
actually need to time-order our entries; since batches are idempotent, replay 
order won't matter.  Thus, we can just pick a random shard id (in a known 
range, say 0 to 63) for each entry, and on replay we will read from 
each shard.

Other notes:
- I think we can cheat in the replication strategy by knowing that part of the 
partition key is the coordinator address, to avoid replicating to itself
- default RF will be 1; operators can increase if desired
- operators can also disable [local] commitlog on the batchlog CF, if desired
- gcgs can be safely set to zero in all cases; the worst that happens is we 
replay a write a second time, which is not a problem
- Currently we always write tombstones to sstables on Memtable flush.  We 
should add a check for gcgs=0 to do an extra removeDeleted pass, which would 
make the actual sstable contents for the batchlog almost nothing (since in the 
normal, everything-is-working case entries get deleted while still in the 
memtable).

 Atomic, eventually-consistent batches
 -

 Key: CASSANDRA-4285
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4285
 Project: Cassandra
  Issue Type: New Feature
  Components: API, Core
Reporter: Jonathan Ellis

 I discussed this in the context of triggers (CASSANDRA-1311) but it's useful 
 as a standalone feature as well.





[jira] [Assigned] (CASSANDRA-4285) Atomic, eventually-consistent batches

2012-07-03 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis reassigned CASSANDRA-4285:
-

Assignee: Jonathan Ellis

 Atomic, eventually-consistent batches
 -

 Key: CASSANDRA-4285
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4285
 Project: Cassandra
  Issue Type: New Feature
  Components: API, Core
Reporter: Jonathan Ellis
Assignee: Jonathan Ellis

 I discussed this in the context of triggers (CASSANDRA-1311) but it's useful 
 as a standalone feature as well.





[jira] [Comment Edited] (CASSANDRA-4285) Atomic, eventually-consistent batches

2012-07-03 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13406221#comment-13406221
 ] 

Jonathan Ellis edited comment on CASSANDRA-4285 at 7/4/12 1:23 AM:
---

Here's the data model I'm leaning towards:

{code}
CREATE TABLE batchlog (
  coordinator inet,
  shard       int,
  id          uuid,
  data        blob,
  PRIMARY KEY ((coordinator, shard))
);
{code}

(Using CASSANDRA-4179 syntax for composite-partition-key.)  As discussed in 
CASSANDRA-1311, this is going to be a very tombstone-heavy CF since the 
workload looks like

# insert batchlog entry
# replicate batch
# remove batchlog entry

So we're going to want to shard each coordinator's entries to avoid the 
problems attendant to Very Wide Rows.  Unlike most such workloads, we don't 
actually need to time-order our entries; since batches are idempotent, replay 
order won't matter.  Thus, we can just pick a random shard id (in a known 
range, say 0 to 63) for each entry, and on replay we will read from 
each shard.

Other notes:
- I think we can cheat in the replication strategy by knowing that part of the 
partition key is the coordinator address, to avoid replicating to itself
- default RF will be 1; operators can increase if desired
- operators can also disable [local] commitlog on the batchlog CF, if desired
- gcgs can be safely set to zero in all cases; the worst that happens is we 
replay a write a second time, which is not a problem
- Currently we always write tombstones to sstables on Memtable flush.  We 
should add a check for gcgs=0 to do an extra removeDeleted pass, which would 
make the actual sstable contents for the batchlog almost nothing (since in the 
normal, everything-is-working case entries get deleted while still in the 
memtable).
- I think we do want to use inetaddr instead of node uuid as the coordinator 
id here -- this gives a replacement node (with the same IP) automatic 
ownership of a dead node's batchlog.

 Atomic, eventually-consistent batches
 -

 Key: CASSANDRA-4285
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4285
 Project: Cassandra
  Issue Type: New Feature
  Components: API, Core
Reporter: Jonathan Ellis
Assignee: Jonathan Ellis

 I discussed this in the context of triggers (CASSANDRA-1311) but it's useful 
 as a standalone feature as well.





[jira] [Updated] (CASSANDRA-2116) Separate out filesystem errors from generic IOErrors

2012-07-03 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-2116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-2116:
--

   Reviewer: yukim
Component/s: Core
   Priority: Major  (was: Minor)
   Assignee: Aleksey Yeschenko

 Separate out filesystem errors from generic IOErrors
 

 Key: CASSANDRA-2116
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2116
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Chris Goffinet
Assignee: Aleksey Yeschenko
 Fix For: 1.2

 Attachments: 
 0001-Separate-out-filesystem-errors-from-generic-IOErrors.patch


 We throw IOErrors everywhere in the codebase today. We should separate out 
 specific filesystem errors (reading, writing) into FSReadError and 
 FSWriteError. This makes it possible in the next ticket to allow certain 
 failure modes (kill the server if reads or writes to disk fail).
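 A rough sketch of the split, in Python for brevity (the real patch is Java; 
 the class names follow the description above, while the {{classify}} policy 
 hook is hypothetical):

```python
class FSError(IOError):
    """Filesystem-level failure, distinct from a generic I/O error."""
    def __init__(self, path, cause):
        super().__init__(f"{cause} on {path}")
        self.path = path
        self.cause = cause

class FSReadError(FSError):
    pass

class FSWriteError(FSError):
    pass

def classify(err) -> str:
    # With distinct types, a follow-up ticket can choose failure modes
    # per kind instead of treating every IOError the same way.
    if isinstance(err, FSReadError):
        return "read"
    if isinstance(err, FSWriteError):
        return "write"
    return "generic"
```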





[jira] [Updated] (CASSANDRA-2118) Provide failure modes if issues with the underlying filesystem of a node

2012-07-03 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-2118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-2118:
--

 Reviewer: yukim
  Component/s: Core
Affects Version/s: (was: 0.8 beta 1)
Fix Version/s: 1.2
 Assignee: Aleksey Yeschenko  (was: Chris Goffinet)

 Provide failure modes if issues with the underlying filesystem of a node
 

 Key: CASSANDRA-2118
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2118
 Project: Cassandra
  Issue Type: Sub-task
  Components: Core
Reporter: Chris Goffinet
Assignee: Aleksey Yeschenko
 Fix For: 1.2

 Attachments: 
 0001-Provide-failure-modes-if-issues-with-the-underlying-.patch, 
 0001-Provide-failure-modes-if-issues-with-the-underlying-v2.patch, 
 0001-Provide-failure-modes-if-issues-with-the-underlying-v3.patch


 CASSANDRA-2116 introduces the ability to detect FS errors. Let's provide a 
 mode in cassandra.yaml so operators can decide what to do in the event of a 
 failure:
 1) standard - continue on all errors (default)
 2) read - stop the gossip/rpc server only if reads fail from the drive; 
 writes may fail without killing the gossip/rpc server
 3) readwrite - stop the gossip/rpc server on any read or write error.
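 The three modes reduce to a small decision table; a hedged Python sketch 
 (mode names come from the description above, the function name is 
 hypothetical):

```python
def should_stop_server(mode: str, error_kind: str) -> bool:
    # mode comes from cassandra.yaml per the description:
    #   standard  - continue on all errors (default)
    #   read      - stop gossip/rpc only on read failures
    #   readwrite - stop gossip/rpc on any read or write failure
    if mode == "standard":
        return False
    if mode == "read":
        return error_kind == "read"
    if mode == "readwrite":
        return error_kind in ("read", "write")
    raise ValueError(f"unknown failure mode: {mode}")
```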





[jira] [Commented] (CASSANDRA-4324) Implement Lucene FST in for key index

2012-07-03 Thread Yuki Morishita (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13406240#comment-13406240
 ] 

Yuki Morishita commented on CASSANDRA-4324:
---

Jason,

Thanks for the patch.
The current IndexSummary has a list of DecoratedKeys and a list of positions, 
but searches are also done against KeyBounds. Both DecoratedKey and KeyBound 
are subclasses of RowPosition and are compared using their Tokens, so I think 
you have to construct the FST against Tokens.
For the implementation, it would be better to keep all Lucene FST-related 
classes inside IndexSummary and not expose them directly to SSTableReader etc.

Also, can you provide a micro-benchmark (memory, CPU time, ...) of IndexSummary 
comparing the current implementation against the FST?
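Something as simple as the following shape would do for the CPU side (a Python 
stand-in; the real comparison would be in Java against IndexSummary and the 
Lucene FST):

```python
import bisect
import time

def bench_sorted_lookup(keys, probes):
    # Time binary-search lookups over a sorted key list, roughly the
    # access pattern of the current IndexSummary; run the same probes
    # against the FST variant and compare wall time and heap footprint.
    keys = sorted(keys)
    start = time.perf_counter()
    hits = 0
    for p in probes:
        i = bisect.bisect_left(keys, p)
        if i < len(keys) and keys[i] == p:
            hits += 1
    return hits, time.perf_counter() - start
```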

 Implement Lucene FST in for key index
 -

 Key: CASSANDRA-4324
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4324
 Project: Cassandra
  Issue Type: Improvement
Reporter: Jason Rutherglen
Assignee: Jason Rutherglen
Priority: Minor
 Fix For: 1.2

 Attachments: CASSANDRA-4324.patch


 The Lucene FST data structure offers a compact and fast system for indexing 
 Cassandra keys.  More keys may be loaded, which in turn should make seeks faster.
 * Update the IndexSummary class to make use of the Lucene FST, overriding the 
 serialization mechanism.
 * Alter SSTableReader to make use of the FST seek mechanism





[jira] [Commented] (CASSANDRA-4321) stackoverflow building interval tree & possible sstable corruptions

2012-07-03 Thread Anton Winter (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13406254#comment-13406254
 ] 

Anton Winter commented on CASSANDRA-4321:
-

I have repeatedly run sstablescrub across all my nodes and the exceptions do 
not occur as frequently now; however, the integrity check still throws 
exceptions.  compactionstats shows a large number of pending tasks but no 
progression after this error.

Should this ticket be reopened or a new one raised?

{code}
ERROR [CompactionExecutor:912] 2012-07-04 01:07:16,470 
AbstractCassandraDaemon.java (line 134) Exception in thread 
Thread[CompactionExecutor:912,1,main]
java.lang.AssertionError
at 
org.apache.cassandra.db.compaction.LeveledManifest.promote(LeveledManifest.java:214)
at 
org.apache.cassandra.db.compaction.LeveledCompactionStrategy.handleNotification(LeveledCompactionStrategy.java:158)
at 
org.apache.cassandra.db.DataTracker.notifySSTablesChanged(DataTracker.java:531)
at 
org.apache.cassandra.db.DataTracker.replaceCompactedSSTables(DataTracker.java:254)
at 
org.apache.cassandra.db.ColumnFamilyStore.replaceCompactedSSTables(ColumnFamilyStore.java:978)
at 
org.apache.cassandra.db.compaction.CompactionTask.execute(CompactionTask.java:200)
at 
org.apache.cassandra.db.compaction.LeveledCompactionTask.execute(LeveledCompactionTask.java:50)
at 
org.apache.cassandra.db.compaction.CompactionManager$1.runMayThrow(CompactionManager.java:150)
at 
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:30)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
at java.util.concurrent.FutureTask.run(FutureTask.java:166)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
at java.lang.Thread.run(Thread.java:636)
{code}


 stackoverflow building interval tree & possible sstable corruptions
 ---

 Key: CASSANDRA-4321
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4321
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.1.1
Reporter: Anton Winter
Assignee: Sylvain Lebresne
 Fix For: 1.1.2

 Attachments: 0001-Fix-overlapping-computation-v7.txt, 
 0002-Scrub-detects-and-repair-outOfOrder-rows-v7.txt, 
 0003-Create-standalone-scrub-v7.txt, 
 0004-Add-manifest-integrity-check-v7.txt, cleanup.txt, 
 ooyala-hastur-stacktrace.txt


 After upgrading to 1.1.1 (from 1.1.0) I have started experiencing 
 StackOverflowErrors resulting in a compaction backlog and failure to restart. 
 The ring currently consists of 6 DCs and 22 nodes using LCS & compression.  
 This issue was first noted on 2 nodes in one DC and then appears to have 
 spread to various other nodes in the other DCs.  
 When the first occurrence of this was found I restarted the instance but it 
 failed to start so I cleared its data and treated it as a replacement node 
 for the token it was previously responsible for.  This node successfully 
 streamed all the relevant data back but failed again a number of hours later 
 with the same StackOverflowError and again was unable to restart. 
 The initial stack overflow error on a running instance looks like this:
 ERROR [CompactionExecutor:314] 2012-06-07 09:59:43,017 
 AbstractCassandraDaemon.java (line 134) Exception in thread 
 Thread[CompactionExecutor:314,1,main]
 java.lang.StackOverflowError
 at java.util.Arrays.mergeSort(Arrays.java:1157)
 at java.util.Arrays.sort(Arrays.java:1092)
 at java.util.Collections.sort(Collections.java:134)
 at 
 org.apache.cassandra.utils.IntervalTree.IntervalNode.findMinMedianMax(IntervalNode.java:114)
 at 
 org.apache.cassandra.utils.IntervalTree.IntervalNode.init(IntervalNode.java:49)
 at 
 org.apache.cassandra.utils.IntervalTree.IntervalNode.init(IntervalNode.java:62)
 at 
 org.apache.cassandra.utils.IntervalTree.IntervalNode.init(IntervalNode.java:62)
 at 
 org.apache.cassandra.utils.IntervalTree.IntervalNode.init(IntervalNode.java:62)
 [snip - this repeats until stack overflow.  Compactions stop from this point 
 onwards]
 I restarted this failing instance with DEBUG logging enabled and it throws 
 the following exception part way through startup:
 ERROR 11:37:51,046 Exception in thread Thread[OptionalTasks:1,5,main]
 java.lang.StackOverflowError
 at 
 org.slf4j.helpers.MessageFormatter.safeObjectAppend(MessageFormatter.java:307)
 at 
 

[jira] [Comment Edited] (CASSANDRA-4321) stackoverflow building interval tree & possible sstable corruptions

2012-07-03 Thread Anton Winter (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13406254#comment-13406254
 ] 

Anton Winter edited comment on CASSANDRA-4321 at 7/4/12 3:28 AM:
-

I have repeatedly run sstablescrub across all my nodes and the exceptions do 
not occur as frequently now, however, the integrity check still throws 
exceptions and compactionstats shows a large number of pending tasks but no 
progression afterwards.

Should this ticket be reopened or a new one raised?

{code}
ERROR [CompactionExecutor:912] 2012-07-04 01:07:16,470 
AbstractCassandraDaemon.java (line 134) Exception in thread 
Thread[CompactionExecutor:912,1,main]
java.lang.AssertionError
at 
org.apache.cassandra.db.compaction.LeveledManifest.promote(LeveledManifest.java:214)
at 
org.apache.cassandra.db.compaction.LeveledCompactionStrategy.handleNotification(LeveledCompactionStrategy.java:158)
at 
org.apache.cassandra.db.DataTracker.notifySSTablesChanged(DataTracker.java:531)
at 
org.apache.cassandra.db.DataTracker.replaceCompactedSSTables(DataTracker.java:254)
at 
org.apache.cassandra.db.ColumnFamilyStore.replaceCompactedSSTables(ColumnFamilyStore.java:978)
at 
org.apache.cassandra.db.compaction.CompactionTask.execute(CompactionTask.java:200)
at 
org.apache.cassandra.db.compaction.LeveledCompactionTask.execute(LeveledCompactionTask.java:50)
at 
org.apache.cassandra.db.compaction.CompactionManager$1.runMayThrow(CompactionManager.java:150)
at 
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:30)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
at java.util.concurrent.FutureTask.run(FutureTask.java:166)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
at java.lang.Thread.run(Thread.java:636)
{code}


 stackoverflow building interval tree & possible sstable corruptions
 ---

 Key: CASSANDRA-4321
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4321
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.1.1
Reporter: Anton Winter
Assignee: Sylvain Lebresne
 Fix For: 1.1.2

 Attachments: 0001-Fix-overlapping-computation-v7.txt, 
 0002-Scrub-detects-and-repair-outOfOrder-rows-v7.txt, 
 0003-Create-standalone-scrub-v7.txt, 
 0004-Add-manifest-integrity-check-v7.txt, cleanup.txt, 
 ooyala-hastur-stacktrace.txt


 After upgrading to 1.1.1 (from 1.1.0) I have started experiencing 
 StackOverflowErrors resulting in a compaction backlog and failure to restart. 
 

[jira] [Commented] (CASSANDRA-4397) Schema changes not working

2012-07-03 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13406279#comment-13406279
 ] 

Stefan Häck commented on CASSANDRA-4397:


Updating to 1.1.2 made the problem worse: all indexes were gone, and rebuilding 
them didn't work.
After a cluster-wide reset (deleting the data/commitlog/cache folders), 
Cassandra seems to work normally again.

 Schema changes not working 
 ---

 Key: CASSANDRA-4397
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4397
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.1.1
 Environment: [cqlsh 2.2.0 | Cassandra 1.1.1 | CQL spec 2.0.0 | Thrift 
 protocol 19.32.0]
Reporter: Stefan Häck
 Attachments: output.txt, system.txt


 No schema change is possible in several keyspaces. 
 No error message appears, everything seems to be OK, but the schema change 
 isn't applied. 
 I've already tried nodetool repair, nodetool cleanup, and nodetool repair_index.
 Both nodes are connected to a local NTP server. 
 Here's the cqlsh output:
 {code:none}
 cqlsh:bite_personalmanager DESCRIBE COLUMNFAMILY rm_cover_pages ;
 CREATE TABLE rm_cover_pages (
   KEY text PRIMARY KEY,
   css text,
   description text,
   language text,
   html text,
   company_id varint,
   title text
 ) WITH
   comment='Different cover pages for each company.' AND
   comparator=text AND
   read_repair_chance=1.00 AND
   gc_grace_seconds=864000 AND
   default_validation=text AND
   min_compaction_threshold=4 AND
   max_compaction_threshold=32 AND
   replicate_on_write='true' AND
   compaction_strategy_class='SizeTieredCompactionStrategy' AND
   compression_parameters:sstable_compression='SnappyCompressor';
 CREATE INDEX rm_cover_pages_language ON rm_cover_pages (language);
 CREATE INDEX rm_cover_pages_company_id ON rm_cover_pages (company_id);
 cqlsh:bite_personalmanager DROP COLUMNFAMILY rm_cover_pages ;
 cqlsh:bite_personalmanager DESCRIBE COLUMNFAMILY rm_cover_pages ;
 CREATE TABLE rm_cover_pages (
   KEY text PRIMARY KEY,
   css text,
   description text,
   language text,
   html text,
   company_id varint,
   title text
 ) WITH
   comment='Different cover pages for each company.' AND
   comparator=text AND
   read_repair_chance=1.00 AND
   gc_grace_seconds=864000 AND
   default_validation=text AND
   min_compaction_threshold=4 AND
   max_compaction_threshold=32 AND
   replicate_on_write='true' AND
   compaction_strategy_class='SizeTieredCompactionStrategy' AND
   compression_parameters:sstable_compression='SnappyCompressor';
 CREATE INDEX rm_cover_pages_language ON rm_cover_pages (language);
 CREATE INDEX rm_cover_pages_company_id ON rm_cover_pages (company_id);
 cqlsh:bite_personalmanager 
 {code}
 In the attachments are the system.log and the output.log.
 The cassandra-cli is also not working.





[jira] [Resolved] (CASSANDRA-4397) Schema changes not working

2012-07-03 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Häck resolved CASSANDRA-4397.


Resolution: Unresolved

 Schema changes not working 
 ---

 Key: CASSANDRA-4397
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4397
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.1.1
 Environment: [cqlsh 2.2.0 | Cassandra 1.1.1 | CQL spec 2.0.0 | Thrift 
 protocol 19.32.0]
Reporter: Stefan Häck
 Attachments: output.txt, system.txt


 No schema change is possible in several keyspaces. 
 No error message appears, everything seems to be OK, but the schema change 
 isn't applied. 
 I've already tried nodetool repair, nodetool cleanup, and nodetool repair_index.
 Both nodes are connected to a local NTP server. 
 Here's the cqlsh output:
 {code:none}
 cqlsh:bite_personalmanager DESCRIBE COLUMNFAMILY rm_cover_pages ;
 CREATE TABLE rm_cover_pages (
   KEY text PRIMARY KEY,
   css text,
   description text,
   language text,
   html text,
   company_id varint,
   title text
 ) WITH
   comment='Different cover pages for each company.' AND
   comparator=text AND
   read_repair_chance=1.00 AND
   gc_grace_seconds=864000 AND
   default_validation=text AND
   min_compaction_threshold=4 AND
   max_compaction_threshold=32 AND
   replicate_on_write='true' AND
   compaction_strategy_class='SizeTieredCompactionStrategy' AND
   compression_parameters:sstable_compression='SnappyCompressor';
 CREATE INDEX rm_cover_pages_language ON rm_cover_pages (language);
 CREATE INDEX rm_cover_pages_company_id ON rm_cover_pages (company_id);
 cqlsh:bite_personalmanager DROP COLUMNFAMILY rm_cover_pages ;
 cqlsh:bite_personalmanager DESCRIBE COLUMNFAMILY rm_cover_pages ;
 CREATE TABLE rm_cover_pages (
   KEY text PRIMARY KEY,
   css text,
   description text,
   language text,
   html text,
   company_id varint,
   title text
 ) WITH
   comment='Different cover pages for each company.' AND
   comparator=text AND
   read_repair_chance=1.00 AND
   gc_grace_seconds=864000 AND
   default_validation=text AND
   min_compaction_threshold=4 AND
   max_compaction_threshold=32 AND
   replicate_on_write='true' AND
   compaction_strategy_class='SizeTieredCompactionStrategy' AND
   compression_parameters:sstable_compression='SnappyCompressor';
 CREATE INDEX rm_cover_pages_language ON rm_cover_pages (language);
 CREATE INDEX rm_cover_pages_company_id ON rm_cover_pages (company_id);
 cqlsh:bite_personalmanager 
 {code}
 In the attachments are the system.log and the output.log.
 The cassandra-cli is also not working.
