[jira] [Commented] (CASSANDRA-5521) move IndexSummary off heap
[ https://issues.apache.org/jira/browse/CASSANDRA-5521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13646410#comment-13646410 ]

Pavel Yaskevich commented on CASSANDRA-5521:

I see v2 does a byte[] allocation on every getKey(int) call, which would happen very frequently because of the binary search performed on every index lookup. So I don't think there is any real benefit in terms of GC friendliness from moving off-heap in this case, as we have to copy the data over multiple times anyway. As an alternative to Unsafe we can try a hybrid approach - detect whether JNA is present and, if so, put summaries off-heap (using JNA's Memory) combined with Pointer.getByteBuffer(), which doesn't copy any data around but instead creates a direct ByteBuffer; otherwise keep IndexSummary on-heap but split the byte[][] and long[] into pages so we don't have to allocate contiguous space for big SSTables, which would be much more GC friendly.

move IndexSummary off heap
--
Key: CASSANDRA-5521
URL: https://issues.apache.org/jira/browse/CASSANDRA-5521
Project: Cassandra
Issue Type: Bug
Components: Core
Reporter: Jonathan Ellis
Assignee: Vijay
Fix For: 2.0

IndexSummary can still use a lot of heap for narrow-row sstables. (It can also contribute to memory fragmentation because of the large arrays it creates.)

--
This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators. For more information on JIRA, see: http://www.atlassian.com/software/jira
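To make the trade-off concrete, here is a toy sketch (not Cassandra's actual IndexSummary, all names invented) of the allocation-free layout being discussed: summary keys packed into one direct (off-heap) ByteBuffer plus an offset table, so the binary search compares bytes in place instead of allocating a byte[] per getKey(int) call.

```java
import java.nio.ByteBuffer;

// Toy sketch: keys stored back-to-back in a direct ByteBuffer with an int
// offset table; binary search reads bytes in place, so no per-lookup byte[].
class OffHeapSummarySketch {
    private final ByteBuffer keys;  // concatenated key bytes, allocated off-heap
    private final int[] offsets;    // offsets[i] = start of key i; last entry = end

    OffHeapSummarySketch(byte[][] sortedKeys) {
        int total = 0;
        for (byte[] k : sortedKeys)
            total += k.length;
        keys = ByteBuffer.allocateDirect(total);
        offsets = new int[sortedKeys.length + 1];
        for (int i = 0; i < sortedKeys.length; i++) {
            offsets[i] = keys.position();
            keys.put(sortedKeys[i]);
        }
        offsets[sortedKeys.length] = keys.position();
    }

    // Unsigned lexicographic compare of `search` against key i, read in place
    // with absolute gets (no copy, no position change).
    private int compare(byte[] search, int i) {
        int start = offsets[i], len = offsets[i + 1] - start;
        for (int j = 0; j < search.length && j < len; j++) {
            int d = (search[j] & 0xff) - (keys.get(start + j) & 0xff);
            if (d != 0)
                return d;
        }
        return search.length - len;
    }

    // Standard binary search: index of the key, or -(insertion point) - 1.
    int binarySearch(byte[] search) {
        int low = 0, high = offsets.length - 2;
        while (low <= high) {
            int mid = (low + high) >>> 1;
            int cmp = compare(search, mid);
            if (cmp > 0) low = mid + 1;
            else if (cmp < 0) high = mid - 1;
            else return mid;
        }
        return -(low + 1);
    }

    public static void main(String[] args) {
        OffHeapSummarySketch s = new OffHeapSummarySketch(
                new byte[][] { "apple".getBytes(), "mango".getBytes(), "pear".getBytes() });
        System.out.println(s.binarySearch("mango".getBytes())); // 1
        System.out.println(s.binarySearch("kiwi".getBytes()));  // -2 (would insert at 1)
    }
}
```

The same in-place comparison would work against a direct ByteBuffer obtained from JNA's Pointer.getByteBuffer(), which is the hybrid Pavel suggests.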
[jira] [Updated] (CASSANDRA-5513) java.lang.ClassCastException: org.apache.cassandra.db.DeletedColumn cannot be cast to java.math.BigInteger
[ https://issues.apache.org/jira/browse/CASSANDRA-5513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Brandon Williams updated CASSANDRA-5513:
Labels: (was: exception)

java.lang.ClassCastException: org.apache.cassandra.db.DeletedColumn cannot be cast to java.math.BigInteger
--
Key: CASSANDRA-5513
URL: https://issues.apache.org/jira/browse/CASSANDRA-5513
Project: Cassandra
Issue Type: Bug
Components: Core
Affects Versions: 1.2.2
Environment: Linux XYZ 3.5.0-27-generic #46~precise1-Ubuntu SMP Tue Mar 26 19:33:21 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
Reporter: Jouni Kontinen

ERROR 18:30:16,044 Exception in thread Thread[ReplicateOnWriteStage:24,5,main]
java.lang.RuntimeException: java.lang.NullPointerException
    at org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1582)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown Source)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
    at java.lang.Thread.run(Unknown Source)
Caused by: java.lang.NullPointerException
    at org.apache.cassandra.dht.BigIntegerToken.compareTo(BigIntegerToken.java:38)
    at org.apache.cassandra.db.DecoratedKey.compareTo(DecoratedKey.java:85)
    at org.apache.cassandra.db.DecoratedKey.compareTo(DecoratedKey.java:36)
    at java.util.Collections.indexedBinarySearch(Unknown Source)
    at java.util.Collections.binarySearch(Unknown Source)
    at org.apache.cassandra.io.sstable.SSTableReader.getIndexScanPosition(SSTableReader.java:482)
    at org.apache.cassandra.io.sstable.SSTableReader.getPosition(SSTableReader.java:755)
    at org.apache.cassandra.io.sstable.SSTableReader.getPosition(SSTableReader.java:717)
    at org.apache.cassandra.db.columniterator.SSTableSliceIterator.<init>(SSTableSliceIterator.java:43)
    at org.apache.cassandra.db.filter.SliceQueryFilter.getSSTableColumnIterator(SliceQueryFilter.java:101)
    at org.apache.cassandra.db.filter.QueryFilter.getSSTableColumnIterator(QueryFilter.java:68)
    at org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:275)
    at org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:65)
    at org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1363)
    at org.apache.cassandra.db.ColumnFamilyStore.getThroughCache(ColumnFamilyStore.java:1176)
    at org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1209)
    at org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1132)
    at org.apache.cassandra.db.Table.getRow(Table.java:355)
    at org.apache.cassandra.db.SliceByNamesReadCommand.getRow(SliceByNamesReadCommand.java:64)
    at org.apache.cassandra.db.CounterMutation.makeReplicationMutation(CounterMutation.java:90)
    at org.apache.cassandra.service.StorageProxy$7$1.runMayThrow(StorageProxy.java:796)
    at org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1578)
    ... 3 more
ERROR 18:30:16,044 Exception in thread Thread[ReadStage:77,5,main]
java.lang.RuntimeException: java.lang.NullPointerException
    at org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1582)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown Source)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
    at java.lang.Thread.run(Unknown Source)
Caused by: java.lang.NullPointerException
    at org.apache.cassandra.dht.BigIntegerToken.compareTo(BigIntegerToken.java:38)
    at org.apache.cassandra.db.DecoratedKey.compareTo(DecoratedKey.java:85)
    at org.apache.cassandra.db.DecoratedKey.compareTo(DecoratedKey.java:36)
    at java.util.Collections.indexedBinarySearch(Unknown Source)
    at java.util.Collections.binarySearch(Unknown Source)
    at org.apache.cassandra.io.sstable.SSTableReader.getIndexScanPosition(SSTableReader.java:482)
    at org.apache.cassandra.io.sstable.SSTableReader.getPosition(SSTableReader.java:755)
    at org.apache.cassandra.io.sstable.SSTableReader.getPosition(SSTableReader.java:717)
    at org.apache.cassandra.db.columniterator.SSTableSliceIterator.<init>(SSTableSliceIterator.java:43)
    at org.apache.cassandra.db.filter.SliceQueryFilter.getSSTableColumnIterator(SliceQueryFilter.java:101)
    at org.apache.cassandra.db.filter.QueryFilter.getSSTableColumnIterator(QueryFilter.java:68)
    at
[jira] [Commented] (CASSANDRA-5529) ColumnFamilyRecordReader fails for large datasets
[ https://issues.apache.org/jira/browse/CASSANDRA-5529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13646434#comment-13646434 ]

Rob Timpe commented on CASSANDRA-5529:
--
Thanks for the patch and the quick turnaround. Verified on the 1.1 branch that it fixes my problem. I'm not really familiar with this API, hence my notes about TBinaryProtocol. Your solution makes way more sense :)

ColumnFamilyRecordReader fails for large datasets
-
Key: CASSANDRA-5529
URL: https://issues.apache.org/jira/browse/CASSANDRA-5529
Project: Cassandra
Issue Type: Bug
Components: API, Hadoop
Affects Versions: 0.6
Reporter: Rob Timpe
Assignee: Jonathan Ellis
Fix For: 1.1.12, 1.2.5
Attachments: 5529-1.1.txt, 5529.txt

When running mapreduce jobs that read directly from cassandra, the job will sometimes fail with an exception like this:

java.lang.RuntimeException: com.rockmelt.org.apache.thrift.TException: Message length exceeded: 40
    at org.apache.cassandra.hadoop.ColumnFamilyRecordReader$StaticRowIterator.maybeInit(ColumnFamilyRecordReader.java:400)
    at org.apache.cassandra.hadoop.ColumnFamilyRecordReader$StaticRowIterator.computeNext(ColumnFamilyRecordReader.java:406)
    at org.apache.cassandra.hadoop.ColumnFamilyRecordReader$StaticRowIterator.computeNext(ColumnFamilyRecordReader.java:329)
    at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
    at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
    at org.apache.cassandra.hadoop.ColumnFamilyRecordReader.getProgress(ColumnFamilyRecordReader.java:109)
    at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.getProgress(MapTask.java:522)
    at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:547)
    at org.apache.hadoop.mapreduce.MapContext.nextKeyValue(MapContext.java:67)
    at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:143)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:771)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:375)
    at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1132)
    at org.apache.hadoop.mapred.Child.main(Child.java:249)
Caused by: com.rockmelt.org.apache.thrift.TException: Message length exceeded: 40
    at com.rockmelt.org.apache.thrift.protocol.TBinaryProtocol.checkReadLength(TBinaryProtocol.java:393)
    at com.rockmelt.org.apache.thrift.protocol.TBinaryProtocol.readBinary(TBinaryProtocol.java:363)
    at org.apache.cassandra.thrift.Column.read(Column.java:528)
    at org.apache.cassandra.thrift.ColumnOrSuperColumn.read(ColumnOrSuperColumn.java:507)
    at org.apache.cassandra.thrift.KeySlice.read(KeySlice.java:408)
    at org.apache.cassandra.thrift.Cassandra$get_range_slices_result.read(Cassandra.java:12422)
    at com.rockmelt.org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78)
    at org.apache.cassandra.thrift.Cassandra$Client.recv_get_range_slices(Cassandra.java:696)
    at org.apache.cassandra.thrift.Cassandra$Client.get_range_slices(Cassandra.java:680)
    at org.apache.cassandra.hadoop.ColumnFamilyRecordReader$StaticRowIterator.maybeInit(ColumnFamilyRecordReader.java:362)
    ... 16 more

In ColumnFamilyRecordReader#initialize, a TBinaryProtocol is created as follows:

TTransport transport = ConfigHelper.getInputTransportFactory(conf).openTransport(socket, conf);
TBinaryProtocol binaryProtocol = new TBinaryProtocol(transport, ConfigHelper.getThriftMaxMessageLength(conf));
client = new Cassandra.Client(binaryProtocol);

But each time a call to cassandra is made, checkReadLength(int length) is called in TBinaryProtocol, which includes this:

readLength_ -= length;
if (readLength_ < 0) {
    throw new TException("Message length exceeded: " + length);
}

The result is that readLength_ is decreased on each read until it goes negative and an exception is thrown. This will only happen if you're reading a lot of data and your split size is large (which is maybe why people haven't noticed it earlier). This happens regardless of whether you use wide row support. I'm not sure what the right fix is. It seems like you could either reset the length of the TBinaryProtocol after each call or just use a new TBinaryProtocol each time.
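The failure mode described above can be reproduced with a minimal mock of the decrement logic (this is not the real Thrift class, just an illustration): the read budget is shared across all calls on the connection and never reset, so a long-lived protocol eventually fails no matter how small each individual message is.

```java
// Minimal mock of TBinaryProtocol's read-length check, to show why a
// long-lived protocol eventually throws: the budget spans the whole
// connection lifetime, not a single message.
class ReadLengthCheck {
    private int readLength_;

    ReadLengthCheck(int maxMessageLength) {
        readLength_ = maxMessageLength;
    }

    // Mirrors the quoted snippet: subtract first, then test the remainder.
    void checkReadLength(int length) {
        readLength_ -= length;
        if (readLength_ < 0)
            throw new IllegalStateException("Message length exceeded: " + length);
    }

    public static void main(String[] args) {
        ReadLengthCheck check = new ReadLengthCheck(100);
        check.checkReadLength(40); // 1st call: budget 100 -> 60, fine
        check.checkReadLength(40); // 2nd call: 60 -> 20, still fine
        try {
            check.checkReadLength(40); // 3rd call: 20 -> -20, throws
        } catch (IllegalStateException e) {
            System.out.println("failed on third read: " + e.getMessage());
        }
    }
}
```

Either fix suggested in the ticket - resetting the budget before each RPC, or creating a fresh TBinaryProtocol per call - avoids the cumulative decrement.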
[jira] [Commented] (CASSANDRA-5484) Support custom secondary indexes in CQL
[ https://issues.apache.org/jira/browse/CASSANDRA-5484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13646445#comment-13646445 ]

Sylvain Lebresne commented on CASSANDRA-5484:
--
bq. I'd like options to be a map (and replace 'class_name' key with 'class' internally, also for consistency)

Makes sense to me.

bq. just treating CREATE INDEX with non-null options as custom, implicitly

That would work now, but it slightly frightens me for the future because:
* what if we add some other type of non-custom index, like, say, bitmap indexes?
* what if we want to add options for non-custom indexes? (While it's nice to avoid options when we can, it's not hard to imagine that future improvements to the 2ndary index code might require tweaking knobs, for instance.)

bq. every time we add a keyword to CQL, even an unreserved one, a kitten dies somewhere

I agree we should avoid new keywords when possible. That being said, when we add unreserved ones I think there is no real downside for clients. Yes, it adds some marginal delta to the parser and it's definitely sad for the kitten, but I'm not sold that it's worth taking the risk of being blocked if we want to add options to non-custom indexes later, just to avoid adding an unreserved keyword now.

Support custom secondary indexes in CQL
---
Key: CASSANDRA-5484
URL: https://issues.apache.org/jira/browse/CASSANDRA-5484
Project: Cassandra
Issue Type: Bug
Components: Core
Affects Versions: 1.2.4
Reporter: Benjamin Coverston
Assignee: Aleksey Yeschenko

Through thrift users can add custom secondary indexes to the column metadata. The following syntax is used in PL/SQL, and I think we could use something similar:

CREATE INDEX NAME ON TABLE (COLUMN) [INDEXTYPE IS (TYPENAME) [PARAMETERS (PARAM[, PARAM])]]
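For concreteness, the two alternatives being debated could look something like the following CQL sketch (index name, table, column, and class name are all invented for illustration; neither form is committed syntax):

```sql
-- (a) implicit: a CREATE INDEX carrying options is silently treated as custom
CREATE INDEX myindex ON users (email)
    WITH options = { 'class' : 'com.example.MyIndexClass' };

-- (b) explicit unreserved keyword, which keeps the door open for later adding
--     options to non-custom (or future bitmap-style) indexes
CREATE CUSTOM INDEX myindex ON users (email)
    WITH options = { 'class' : 'com.example.MyIndexClass' };
```

Sylvain's argument above is essentially that (b) costs one unreserved keyword now but avoids ambiguity if non-custom indexes ever grow options of their own.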
[jira] [Commented] (CASSANDRA-5489) Fix 2.0 key and column aliases serialization and cqlsh DESC SCHEMA
[ https://issues.apache.org/jira/browse/CASSANDRA-5489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13646451#comment-13646451 ]

Sylvain Lebresne commented on CASSANDRA-5489:
--
bq. unless you are suggesting to serialize key/column aliases lists as '[]' in cases where there have been no renames and not '[null, null, null]' as it is now in trunk

Yes, that is what I'm suggesting (basically we'll be in a state where either all or none of your aliases are set, so we'd just keep an empty list for the 'none' case). But that will require a bit of tweaking on trunk, so my plan was to first commit the simple patch to 1.2 asap if we're good with that (so it gets released with 1.2.5, for instance) and merge it to trunk. Then I'll update trunk using the new all-or-nothing assumption (making sure we don't serialize nulls in particular). And I'll fix ALTER for the metadata-less case while I'm at it.

bq. Also, we still need to apply the cqlsh part of the v1 patch

Absolutely. Again, the patch for trunk I'm suggesting is not written yet. I just wanted to get the fix for 1.2 out first, as that's somewhat more urgent.

bq. it can actually go to 1.2, with an added benefit that 1.2 cqlsh will be able to properly describe 2.0 schema

That's a good idea. Though on that part, shouldn't we handle the case where the schema doesn't have a 'type' column (since 'type' is only there in trunk)?

Fix 2.0 key and column aliases serialization and cqlsh DESC SCHEMA
--
Key: CASSANDRA-5489
URL: https://issues.apache.org/jira/browse/CASSANDRA-5489
Project: Cassandra
Issue Type: Bug
Components: Core, Tools
Affects Versions: 2.0
Reporter: Aleksey Yeschenko
Assignee: Sylvain Lebresne
Priority: Minor
Fix For: 2.0
Attachments: 5489-1.2.txt, 5489.txt

CASSANDRA-5125 made a slight change to how key_aliases and column_aliases are serialized in schema. Prior to that we never kept nulls in the json pseudo-lists. This does break cqlsh and probably breaks 1.2 nodes receiving such migrations as well. The patch reverts this behavior and also slightly modifies cqlsh itself to ignore non-regular columns from the system.schema_columns table. This patch breaks nothing, since 2.0 already handles 1.2 non-null padded alias lists.
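The all-or-nothing serialization being proposed can be illustrated with a hypothetical helper (names invented, not Cassandra code): when no aliases are set, emit '[]' rather than a null-padded list.

```java
// Hypothetical illustration of the proposed rule: serialize the alias
// pseudo-list as "[]" when nothing is set, never as "[null, null, null]".
class AliasSerializer {
    static String toJson(String[] aliases) {
        boolean anySet = false;
        for (String a : aliases)
            if (a != null)
                anySet = true;
        if (!anySet)
            return "[]"; // the 'none' case: empty list, no serialized nulls
        StringBuilder sb = new StringBuilder("[");
        for (int i = 0; i < aliases.length; i++) {
            if (i > 0)
                sb.append(",");
            // all-or-nothing assumption: if any alias is set, all are non-null
            sb.append('"').append(aliases[i]).append('"');
        }
        return sb.append("]").toString();
    }

    public static void main(String[] args) {
        System.out.println(toJson(new String[] { null, null }));     // []
        System.out.println(toJson(new String[] { "key1", "key2" })); // ["key1","key2"]
    }
}
```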
[jira] [Commented] (CASSANDRA-5520) Query tracing session info inconsistent with events info
[ https://issues.apache.org/jira/browse/CASSANDRA-5520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13646456#comment-13646456 ]

Sylvain Lebresne commented on CASSANDRA-5520:
--
To be fair to Ilya, maybe we could add a line in the trace log saying that we've timed out, how many responses we were waiting for, and which nodes answered. It could also be useful to record with the session info the result of the operation (i.e. success or exception), so it's easy to filter out the ones that have thrown (without having to compare the total duration to the rpc timeout, which can be painful if you've changed the rpc timeout a few times; besides, our timeouts aren't very precise, so I think it's possible (though very unlikely) for a query to have a total duration slightly greater than the rpc timeout without having timed out). These are improvements rather than bugs, but still likely nice to have.

Query tracing session info inconsistent with events info
Key: CASSANDRA-5520
URL: https://issues.apache.org/jira/browse/CASSANDRA-5520
Project: Cassandra
Issue Type: Bug
Affects Versions: 1.2.4
Environment: Linux
Reporter: Ilya Kirnos

Session info for a trace is showing that a query took 10 seconds (it timed out).
cqlsh:system_traces> select session_id, duration, request from sessions where session_id = c7e36a30-af3a-11e2-9ec9-772ec39805fe;

session_id | duration | request
------------------------------------------------------------
c7e36a30-af3a-11e2-9ec9-772ec39805fe | 1230 | multiget_slice

However, the event-level breakdown shows no such large duration:

cqlsh:system_traces> select * from events where session_id = c7e36a30-af3a-11e2-9ec9-772ec39805fe;

session_id | event_id | activity | source | source_elapsed | thread
------------------------------------------------------------
c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e36a30-af3a-11e2-9480-e9d811e0fc18 | Message received from /xxx.xxx.90.147 | xxx.xxx.4.16 | 19 | Thread-57
c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e36a31-af3a-11e2-9ec9-772ec39805fe | Sending message to /xxx.xxx.153.16 | xxx.xxx.90.147 | 246 | WRITE-/xxx.xxx.4.16
c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e39140-af3a-11e2-9480-e9d811e0fc18 | Message received from /xxx.xxx.90.147 | xxx.xxx.4.16 | 259 | Thread-57
c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e39140-af3a-11e2-9ec9-772ec39805fe | Sending message to /10.248.106.37 | xxx.xxx.90.147 | 253 | WRITE-/xxx.xxx.79.52
c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e39140-af3a-11e2-b8dc-a7032a583115 | Message received from /xxx.xxx.90.147 | xxx.xxx.213.136 | 25 | Thread-94
c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e39141-af3a-11e2-9480-e9d811e0fc18 | Executing single-partition query on CardHash | xxx.xxx.4.16 | 421 | ReadStage:5329
c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e39141-af3a-11e2-9ec9-772ec39805fe | Sending message to /xxx.xxx.151.214 | xxx.xxx.90.147 | 310 | WRITE-/xxx.xxx.213.136
c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e39141-af3a-11e2-b8dc-a7032a583115 | Message received from /xxx.xxx.90.147 | xxx.xxx.213.136 | 106 | Thread-94
c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e39142-af3a-11e2-9480-e9d811e0fc18 | Acquiring sstable references | xxx.xxx.4.16 | 444 | ReadStage:5329
c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e39142-af3a-11e2-9ec9-772ec39805fe | Sending message to /10.248.106.37 | xxx.xxx.90.147 | 352 | WRITE-/xxx.xxx.79.52
c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e39142-af3a-11e2-b8dc-a7032a583115 | Executing single-partition query on CardHash | xxx.xxx.213.136 | 144 | ReadStage:11
c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e39143-af3a-11e2-9480-e9d811e0fc18 | Merging memtable contents | xxx.xxx.4.16 | 472 | ReadStage:5329
c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e39143-af3a-11e2-9ec9-772ec39805fe | Sending message to /10.248.95.237 | xxx.xxx.90.147 | 362 | WRITE-/xxx.xxx.201.218
c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e39143-af3a-11e2-b8dc-a7032a583115 | Acquiring sstable references | xxx.xxx.213.136 | 164 | ReadStage:11
c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e39144-af3a-11e2-9480-e9d811e0fc18 | Merging data from memtables and 0 sstables | xxx.xxx.4.16 | 510 | ReadStage:5329
c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e39144-af3a-11e2-9ec9-772ec39805fe | Sending message to /xxx.xxx.151.214 | xxx.xxx.90.147 | 376 | WRITE-/xxx.xxx.213.136
c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e39144-af3a-11e2-b8dc-a7032a583115 | Merging memtable contents | xxx.xxx.213.136 | 195 | ReadStage:11
c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e39145-af3a-11e2-9480-e9d811e0fc18 | Read 0 live cells and 0 tombstoned | xxx.xxx.4.16 | 530 |
[jira] [Commented] (CASSANDRA-5527) Deletion by Secondary Key
[ https://issues.apache.org/jira/browse/CASSANDRA-5527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13646477#comment-13646477 ]

Sylvain Lebresne commented on CASSANDRA-5527:
--
I agree with Jonathan; I don't see how we could make that efficient without having to read all the secondary key tombstones each time you read the row, which doesn't sound fun. But as an aside, I'll note that another option for this is to use a secondary index. I know it's not read-free, but provided you include the partition key in the query, it will not be horribly inefficient either. And you'd exchange slightly slower writes for no hit whatsoever on reads, which I suspect is a better trade-off more often than not for this kind of operation.

Deletion by Secondary Key
-
Key: CASSANDRA-5527
URL: https://issues.apache.org/jira/browse/CASSANDRA-5527
Project: Cassandra
Issue Type: Improvement
Reporter: Rick Branson

Given Cassandra's popularity as a time ordered list store, the inability to do deletes by anything other than the primary key, without re-implementing tombstones in the application, is a bit of an Achilles' heel for many use cases. It's a data modeling problem that seems to come up quite often, and given that we now have the CQL3 abstraction layer sitting on top of the storage engine, I think there's an opportunity to take this burden off of the application layer. I've spent several weeks thinking about this problem within the context of Cassandra, and I think I've come up with a reasonable proposal. It would involve addition of a secondary key facility to CQL3 tables:

CREATE TABLE timeline (
    timeline_id uuid,
    entry_id timeuuid,
    entry_key blob,
    entry_payload blob,
    PRIMARY KEY (timeline_id, entry_id),
    KEY (timeline_id, entry_key)
);

Secondary keys would be required to share the same partition key with the primary key. They would be included to support deletion-by-secondary-key operations:

DELETE FROM timeline WHERE timeline_id = X AND entry_key = Y;

Underneath, the storage engine row would contain additional secondary key tombstones. Secondary key deletion would be read-free, requiring a single tombstone write. The cost of reads would necessarily go up: queries would need to perform an additional step to find any matching secondary key tombstones and run the regular convergence process. The secondary key tombstones would be cleaned up by the regular tombstone GC process. While I didn't want to complicate this idea too much, it might also be worth having a discussion around supporting secondary key queries, or at least making the schema compatible with potential future support (maybe rename KEY to DELETABLE KEY or something).
[jira] [Commented] (CASSANDRA-5304) Support 2ndary indexed columns in UPDATE and DELETE
[ https://issues.apache.org/jira/browse/CASSANDRA-5304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13646478#comment-13646478 ]

Sylvain Lebresne commented on CASSANDRA-5304:
--
bq. Bulk-modification is really best left to things like Hadoop

As a side note, that kind of modification-by-2ndary-index could also be used for updating/deleting selected CQL3 rows within one partition (within a wide row), in which case Hadoop is definitely overkill (and likely way more inefficient), and at least some have suggested that this is common-ish (CASSANDRA-5527). Not to suggest that we necessarily re-open this - I still somewhat stand by "maybe it's better to let users do it client-side and be aware of what that involves" - but I just wanted to add some color.

Support 2ndary indexed columns in UPDATE and DELETE
---
Key: CASSANDRA-5304
URL: https://issues.apache.org/jira/browse/CASSANDRA-5304
Project: Cassandra
Issue Type: Wish
Components: Core
Affects Versions: 1.2.2
Reporter: Joachim Haagen Skeie
Priority: Minor

I have a Column Family with the following index:

CREATE INDEX live_stat_is_calculated ON live_statistics (iscalculated);

Then I would like to delete records based on this index via a CQL3 query:

delete from live_statistics where iscalculated = true;

But Cassandra returns the following error: PRIMARY KEY part iscalculated found in SET part
[jira] [Commented] (CASSANDRA-5520) Query tracing session info inconsistent with events info
[ https://issues.apache.org/jira/browse/CASSANDRA-5520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13646506#comment-13646506 ]

Brandon Williams commented on CASSANDRA-5520:
--
+1 on all of that.

Query tracing session info inconsistent with events info
Key: CASSANDRA-5520
URL: https://issues.apache.org/jira/browse/CASSANDRA-5520
Project: Cassandra
Issue Type: Bug
Affects Versions: 1.2.4
Environment: Linux
Reporter: Ilya Kirnos

Session info for a trace is showing that a query took 10 seconds (it timed out).
[jira] [Reopened] (CASSANDRA-5520) Query tracing session info inconsistent with events info
[ https://issues.apache.org/jira/browse/CASSANDRA-5520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Brandon Williams reopened CASSANDRA-5520:
Assignee: Aleksey Yeschenko

Query tracing session info inconsistent with events info
Key: CASSANDRA-5520
URL: https://issues.apache.org/jira/browse/CASSANDRA-5520
Project: Cassandra
Issue Type: Bug
Affects Versions: 1.2.4
Environment: Linux
Reporter: Ilya Kirnos
Assignee: Aleksey Yeschenko

Session info for a trace is showing that a query took 10 seconds (it timed out).
[jira] [Updated] (CASSANDRA-5530) Switch from THSHAServer to TThreadedSelectorServer
[ https://issues.apache.org/jira/browse/CASSANDRA-5530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brandon Williams updated CASSANDRA-5530: Tester: enigmacurry

Switch from THSHAServer to TThreadedSelectorServer
--
Key: CASSANDRA-5530
URL: https://issues.apache.org/jira/browse/CASSANDRA-5530
Project: Cassandra
Issue Type: Improvement
Components: Core
Reporter: T Jake Luciani
Assignee: T Jake Luciani
Fix For: 2.0
Attachments: 5530.txt

TThreadedSelectorServer is new in Thrift 0.9. It builds on HSHA by allowing for a set of threads for IO and a set for request processing... I've attached the performance numbers below. It's a lot closer to TThreadedServer.

ThreadedServer (Default)
{code}
Write
Averages from the middle 80% of values:
interval_op_rate : 14811
interval_key_rate : 14811
latency median: 1.7
latency 95th percentile : 5.3
latency 99.9th percentile : 142.6
Total operation time : 00:01:16
END
Read
Averages from the middle 80% of values:
interval_op_rate : 16898
interval_key_rate : 16898
latency median: 2.2
latency 95th percentile : 8.5
latency 99.9th percentile : 165.7
Total operation time : 00:01:05
END
{code}

HSHA (CURRENT)
{code}
Write
Averages from the middle 80% of values:
interval_op_rate : 8939
interval_key_rate : 8939
latency median: 5.0
latency 95th percentile : 10.1
latency 99.9th percentile : 105.4
Total operation time : 00:01:56
END
Read
Averages from the middle 80% of values:
interval_op_rate : 9608
interval_key_rate : 9608
latency median: 5.1
latency 95th percentile : 7.7
latency 99.9th percentile : 51.6
Total operation time : 00:01:49
END
{code}

TThreadedSelectorServer (NEW)
{code}
Write
Averages from the middle 80% of values:
interval_op_rate : 11640
interval_key_rate : 11640
latency median: 3.1
latency 95th percentile : 10.6
latency 99.9th percentile : 135.9
Total operation time : 00:01:30
END
Read
Averages from the middle 80% of values:
interval_op_rate : 15247
interval_key_rate : 15247
latency median: 2.8
latency 95th percentile : 7.1
latency 99.9th percentile : 40.3
Total operation time : 00:01:06
END
{code}
-- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
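A quick sanity check of the attached interval_op_rate figures (the numbers below are taken verbatim from the stress output above; the class name is illustrative):

```java
public class ThroughputDelta {
    public static void main(String[] args) {
        // interval_op_rate figures from the attached stress runs
        int hshaWrite = 8939, selectorWrite = 11640, threadedWrite = 14811;
        int hshaRead = 9608, selectorRead = 15247, threadedRead = 16898;

        // TThreadedSelectorServer vs. the current HSHA server
        System.out.println("write gain over HSHA: "
                + Math.round(100.0 * (selectorWrite - hshaWrite) / hshaWrite) + "%");
        System.out.println("read gain over HSHA: "
                + Math.round(100.0 * (selectorRead - hshaRead) / hshaRead) + "%");

        // how much of the TThreadedServer rates the new server reaches
        System.out.println("write rate vs ThreadedServer: "
                + Math.round(100.0 * selectorWrite / threadedWrite) + "%");
        System.out.println("read rate vs ThreadedServer: "
                + Math.round(100.0 * selectorRead / threadedRead) + "%");
    }
}
```

This works out to roughly a 30% write and 59% read gain over HSHA, with reads reaching about 90% of the TThreadedServer rate, which is what "a lot closer" amounts to here.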
[jira] [Commented] (CASSANDRA-5530) Switch from THSHAServer to TThreadedSelectorServer
[ https://issues.apache.org/jira/browse/CASSANDRA-5530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13646509#comment-13646509 ] Brandon Williams commented on CASSANDRA-5530: - Go ahead and set this to testing after commit, because it's a little strange to me that reads were faster than writes in all the tests.
[jira] [Commented] (CASSANDRA-5432) Repair Freeze/Gossip Invisibility Issues 1.2.4
[ https://issues.apache.org/jira/browse/CASSANDRA-5432?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13646519#comment-13646519 ] Brandon Williams commented on CASSANDRA-5432: - I never thought CASSANDRA-5171 was a really big gain anyway, but it looked innocuous enough at the time. +1 on reverting it. Repair Freeze/Gossip Invisibility Issues 1.2.4 -- Key: CASSANDRA-5432 URL: https://issues.apache.org/jira/browse/CASSANDRA-5432 Project: Cassandra Issue Type: Bug Components: Core Affects Versions: 1.2.4 Environment: Ubuntu 10.04.1 LTS C* 1.2.3 Sun Java 6 u43 JNA Enabled Not using VNodes Reporter: Arya Goudarzi Assignee: Vijay Priority: Critical Attachments: 0001-CASSANDRA-5432.patch Read comment 6. This description summarizes the repair issue only, but I believe there is a bigger problem going on with networking as described on that comment. Since I have upgraded our sandbox cluster, I am unable to run repair on any node and I am reaching our gc_grace seconds this weekend. Please help. So far, I have tried the following suggestions: - nodetool scrub - offline scrub - running repair on each CF separately. Didn't matter. All got stuck the same way. The repair command just gets stuck and the machine is idling. 
Only the following logs are printed for repair job:
INFO [Thread-42214] 2013-04-05 23:30:27,785 StorageService.java (line 2379) Starting repair command #4, repairing 1 ranges for keyspace cardspring_production
INFO [AntiEntropySessions:7] 2013-04-05 23:30:27,789 AntiEntropyService.java (line 652) [repair #cc5a9aa0-9e48-11e2-98ba-11bde7670242] new session: will sync /X.X.X.190, /X.X.X.43, /X.X.X.56 on range (1808575600,42535295865117307932921825930779602032] for keyspace_production.[comma separated list of CFs]
INFO [AntiEntropySessions:7] 2013-04-05 23:30:27,790 AntiEntropyService.java (line 858) [repair #cc5a9aa0-9e48-11e2-98ba-11bde7670242] requesting merkle trees for BusinessConnectionIndicesEntries (to [/X.X.X.43, /X.X.X.56, /X.X.X.190])
INFO [AntiEntropyStage:1] 2013-04-05 23:30:28,086 AntiEntropyService.java (line 214) [repair #cc5a9aa0-9e48-11e2-98ba-11bde7670242] Received merkle tree for ColumnFamilyName from /X.X.X.43
INFO [AntiEntropyStage:1] 2013-04-05 23:30:28,147 AntiEntropyService.java (line 214) [repair #cc5a9aa0-9e48-11e2-98ba-11bde7670242] Received merkle tree for ColumnFamilyName from /X.X.X.56
Please advise.
[jira] [Commented] (CASSANDRA-5518) Clean out token range bisection on bootstrap
[ https://issues.apache.org/jira/browse/CASSANDRA-5518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13646523#comment-13646523 ] Brandon Williams commented on CASSANDRA-5518: - Is the "this is broken" comment in Gossiper because there could be a token collision? I typically don't like comments like this in there without knowing why ;) Clean out token range bisection on bootstrap Key: CASSANDRA-5518 URL: https://issues.apache.org/jira/browse/CASSANDRA-5518 Project: Cassandra Issue Type: Task Components: Core Reporter: Jonathan Ellis Assignee: Jonathan Ellis Priority: Minor Fix For: 2.0 Bootstrapping a node by bisecting an existing node's range has never been very useful, and with vnodes it's thoroughly obsolete.
[jira] [Commented] (CASSANDRA-5508) Expose whether jna is enabled and memory is locked via JMX
[ https://issues.apache.org/jira/browse/CASSANDRA-5508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13646526#comment-13646526 ] Brandon Williams commented on CASSANDRA-5508: - We can probably keep a boolean or two somewhere for this, but it's not clear to me where a good place is since almost nothing is initialized at that point in time. Expose whether jna is enabled and memory is locked via JMX -- Key: CASSANDRA-5508 URL: https://issues.apache.org/jira/browse/CASSANDRA-5508 Project: Cassandra Issue Type: Wish Reporter: Jeremy Hanna Priority: Trivial This may not be possible, but it would be very useful. Currently the only definitive way to determine whether JNA is enabled and that it's able to lock the memory it needs is to look at the startup log. It would be great if there was a way to store whether it is enabled so that jmx (or nodetool) could easily tell if JNA was enabled and whether it was able to lock the memory.
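A minimal sketch of what exposing those two booleans could look like, using only the JDK's platform MBeanServer. The `NativeAccessMXBean` interface, its attribute names, and the `org.apache.cassandra.db:type=NativeAccess` ObjectName are illustrative assumptions, not the eventual implementation:

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class NativeAccessDemo {
    // Hypothetical management interface; names are illustrative only.
    public interface NativeAccessMXBean {
        boolean isJnaAvailable();
        boolean isMemoryLocked();
    }

    // Immutable holder: both flags are determined once during startup, so
    // capturing them in a tiny object sidesteps the "almost nothing is
    // initialized at that point" problem Brandon mentions.
    public static class NativeAccess implements NativeAccessMXBean {
        private final boolean jnaAvailable, memoryLocked;
        public NativeAccess(boolean jnaAvailable, boolean memoryLocked) {
            this.jnaAvailable = jnaAvailable;
            this.memoryLocked = memoryLocked;
        }
        public boolean isJnaAvailable() { return jnaAvailable; }
        public boolean isMemoryLocked() { return memoryLocked; }
    }

    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        ObjectName name = new ObjectName("org.apache.cassandra.db:type=NativeAccess");
        server.registerMBean(new NativeAccess(true, false), name);
        // jconsole or a nodetool subcommand would read the same attributes remotely
        System.out.println(server.getAttribute(name, "JnaAvailable"));
        System.out.println(server.getAttribute(name, "MemoryLocked"));
    }
}
```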
[jira] [Commented] (CASSANDRA-5488) CassandraStorage throws NullPointerException (NPE) when widerows is set to 'true'
[ https://issues.apache.org/jira/browse/CASSANDRA-5488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13646531#comment-13646531 ] Brandon Williams commented on CASSANDRA-5488: - Can you add a test to examples/pig/test/test_storage.pig that demonstrates the problem?

CassandraStorage throws NullPointerException (NPE) when widerows is set to 'true'
-
Key: CASSANDRA-5488
URL: https://issues.apache.org/jira/browse/CASSANDRA-5488
Project: Cassandra
Issue Type: Bug
Components: Hadoop
Affects Versions: 1.2.4
Environment: Ubuntu 12.04.1 x64, Cassandra 1.2.4
Reporter: Sheetal Gosrani
Priority: Minor
Labels: cassandra, hadoop, pig
Fix For: 1.2.5
Attachments: 5488.txt

CassandraStorage throws NPE when widerows is set to 'true'. 2 problems in getNextWide:
1. Creation of tuple without specifying size
2. Calling addKeyToTuple on lastKey instead of key

java.lang.NullPointerException
at org.apache.cassandra.utils.ByteBufferUtil.string(ByteBufferUtil.java:167)
at org.apache.cassandra.utils.ByteBufferUtil.string(ByteBufferUtil.java:124)
at org.apache.cassandra.cql.jdbc.JdbcUTF8.getString(JdbcUTF8.java:73)
at org.apache.cassandra.cql.jdbc.JdbcUTF8.compose(JdbcUTF8.java:93)
at org.apache.cassandra.db.marshal.UTF8Type.compose(UTF8Type.java:34)
at org.apache.cassandra.db.marshal.UTF8Type.compose(UTF8Type.java:26)
at org.apache.cassandra.hadoop.pig.CassandraStorage.addKeyToTuple(CassandraStorage.java:313)
at org.apache.cassandra.hadoop.pig.CassandraStorage.getNextWide(CassandraStorage.java:196)
at org.apache.cassandra.hadoop.pig.CassandraStorage.getNext(CassandraStorage.java:224)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigRecordReader.nextKeyValue(PigRecordReader.java:194)
at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:532)
at org.apache.hadoop.mapreduce.MapContext.nextKeyValue(MapContext.java:67)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:143)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:370)
at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
at org.apache.hadoop.mapred.Child.main(Child.java:249)
2013-04-16 12:28:03,671 INFO org.apache.hadoop.mapred.Task: Runnning cleanup for the task
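The two reported bugs can be reproduced in miniature without the Pig runtime. This pure-JDK analogue (class and method names are illustrative, not the CassandraStorage code) shows why each one blows up:

```java
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class WideRowBugSketch {
    // Stand-in for addKeyToTuple(): decoding a null key fails just like
    // ByteBufferUtil.string(null) in the stack trace above.
    static String decodeKey(byte[] key) {
        return new String(key, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        // Bug 1 analogue: a tuple created without a size has no slots to set.
        List<Object> tuple = new ArrayList<>();
        try {
            tuple.set(0, "row-key");
        } catch (IndexOutOfBoundsException e) {
            System.out.println("unsized tuple: " + e.getClass().getSimpleName());
        }

        // Correct: allocate the slots up front (newTuple(size) in Pig terms).
        List<Object> sized = new ArrayList<>(Arrays.asList(new Object[2]));
        sized.set(0, "row-key"); // now fine

        // Bug 2 analogue: on the first row, lastKey is still null; the
        // current key is what should have been passed.
        byte[] lastKey = null;
        byte[] key = "row-key".getBytes(StandardCharsets.UTF_8);
        try {
            decodeKey(lastKey);
        } catch (NullPointerException e) {
            System.out.println("lastKey is null: " + e.getClass().getSimpleName());
        }
        System.out.println("decoded current key: " + decodeKey(key));
    }
}
```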
[jira] [Commented] (CASSANDRA-5489) Fix 2.0 key and column aliases serialization and cqlsh DESC SCHEMA
[ https://issues.apache.org/jira/browse/CASSANDRA-5489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13646533#comment-13646533 ] Aleksey Yeschenko commented on CASSANDRA-5489: -- Agreed then. {quote} Though on that part, shouldn't we handle the case where the schema don't have a 'type' column (since 'type' is only there in trunk)? {quote} Ahead of you here - v1 patch handles that already (in c.get('type', 'regular'), 'regular' is the default value to use if the 'type' attribute is absent). Fix 2.0 key and column aliases serialization and cqlsh DESC SCHEMA -- Key: CASSANDRA-5489 URL: https://issues.apache.org/jira/browse/CASSANDRA-5489 Project: Cassandra Issue Type: Bug Components: Core, Tools Affects Versions: 2.0 Reporter: Aleksey Yeschenko Assignee: Sylvain Lebresne Priority: Minor Fix For: 2.0 Attachments: 5489-1.2.txt, 5489.txt CASSANDRA-5125 made a slight change to how key_aliases and column_aliases are serialized in schema. Prior to that we never kept nulls in the json pseudo-lists. This does break cqlsh and probably breaks 1.2 nodes receiving such migrations as well. The patch reverts this behavior and also slightly modifies cqlsh itself to ignore non-regular columns from system.schema_columns table. This patch breaks nothing, since 2.0 already handles 1.2 non-null padded alias lists.
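For readers more at home in Java than in cqlsh's Python, the `c.get('type', 'regular')` idiom behaves like `Map.getOrDefault`; the schema row below is an illustrative stand-in, not real system table content:

```java
import java.util.HashMap;
import java.util.Map;

public class SchemaColumnDefault {
    public static void main(String[] args) {
        // Illustrative schema_columns row from a pre-trunk node: no 'type' column.
        Map<String, String> c = new HashMap<>();
        c.put("column_name", "body");
        System.out.println(c.getOrDefault("type", "regular")); // prints regular

        // A trunk node does carry the attribute, so the default is ignored.
        c.put("type", "partition_key");
        System.out.println(c.getOrDefault("type", "regular")); // prints partition_key
    }
}
```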
[jira] [Commented] (CASSANDRA-5484) Support custom secondary indexes in CQL
[ https://issues.apache.org/jira/browse/CASSANDRA-5484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13646535#comment-13646535 ] Aleksey Yeschenko commented on CASSANDRA-5484: -- Makes sense. Support custom secondary indexes in CQL --- Key: CASSANDRA-5484 URL: https://issues.apache.org/jira/browse/CASSANDRA-5484 Project: Cassandra Issue Type: Bug Components: Core Affects Versions: 1.2.4 Reporter: Benjamin Coverston Assignee: Aleksey Yeschenko Through thrift users can add custom secondary indexes to the column metadata. The following syntax is used in PLSQL, and I think we could use something similar. CREATE INDEX NAME ON TABLE (COLUMN) [INDEXTYPE IS (TYPENAME) [PARAMETERS (PARAM[, PARAM])]]
[jira] [Commented] (CASSANDRA-5489) Fix 2.0 key and column aliases serialization and cqlsh DESC SCHEMA
[ https://issues.apache.org/jira/browse/CASSANDRA-5489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13646540#comment-13646540 ] Sylvain Lebresne commented on CASSANDRA-5489: - bq. v1 patch handles that already Oh, that's a python skill fail for me. So are we good if I commit the 1.2 patch for now with the cqlsh part of your patch (so it gets into 1.2.5 in particular)? Then I'll work on fixing the trunk parts.
[jira] [Commented] (CASSANDRA-5489) Fix 2.0 key and column aliases serialization and cqlsh DESC SCHEMA
[ https://issues.apache.org/jira/browse/CASSANDRA-5489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13646541#comment-13646541 ] Aleksey Yeschenko commented on CASSANDRA-5489: -- Yep.
[jira] [Commented] (CASSANDRA-5530) Switch from THSHAServer to TThreadedSelectorServer
[ https://issues.apache.org/jira/browse/CASSANDRA-5530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13646588#comment-13646588 ] Jonathan Ellis commented on CASSANDRA-5530: --- 2.0 is just that awesome. :)
[jira] [Commented] (CASSANDRA-5520) Query tracing session info inconsistent with events info
[ https://issues.apache.org/jira/browse/CASSANDRA-5520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13646587#comment-13646587 ] Jonathan Ellis commented on CASSANDRA-5520: --- As I said on IRC, I think the fix is as simple as changing {{logger.debug(... timed out)}} to trace calls. (More precisely, we should trace it if tracing is enabled, otherwise logger.debug it, for symmetry with our open-trace-session code.)

Query tracing session info inconsistent with events info
Key: CASSANDRA-5520
URL: https://issues.apache.org/jira/browse/CASSANDRA-5520
Project: Cassandra
Issue Type: Bug
Affects Versions: 1.2.4
Environment: Linux
Reporter: Ilya Kirnos
Assignee: Aleksey Yeschenko

Session info for a trace is showing that a query took 10 seconds (it timed out).

cqlsh:system_traces> select session_id, duration, request from sessions where session_id = c7e36a30-af3a-11e2-9ec9-772ec39805fe;

session_id | duration | request
c7e36a30-af3a-11e2-9ec9-772ec39805fe | 1230 | multiget_slice

However, the event-level breakdown shows no such large duration:

cqlsh:system_traces> select * from events where session_id = c7e36a30-af3a-11e2-9ec9-772ec39805fe;

session_id | event_id | activity | source | source_elapsed | thread
-+---
c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e36a30-af3a-11e2-9480-e9d811e0fc18 | Message received from /xxx.xxx.90.147 | xxx.xxx.4.16 | 19 | Thread-57
c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e36a31-af3a-11e2-9ec9-772ec39805fe | Sending message to /xxx.xxx.153.16 | xxx.xxx.90.147 | 246 | WRITE-/xxx.xxx.4.16
c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e39140-af3a-11e2-9480-e9d811e0fc18 | Message received from /xxx.xxx.90.147 | xxx.xxx.4.16 | 259 | Thread-57
c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e39140-af3a-11e2-9ec9-772ec39805fe | Sending message to /10.248.106.37 | xxx.xxx.90.147 | 253 | WRITE-/xxx.xxx.79.52
c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e39140-af3a-11e2-b8dc-a7032a583115 | Message received from /xxx.xxx.90.147 | xxx.xxx.213.136 | 25 | Thread-94
c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e39141-af3a-11e2-9480-e9d811e0fc18 | Executing single-partition query on CardHash | xxx.xxx.4.16 | 421 | ReadStage:5329
c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e39141-af3a-11e2-9ec9-772ec39805fe | Sending message to /xxx.xxx.151.214 | xxx.xxx.90.147 | 310 | WRITE-/xxx.xxx.213.136
c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e39141-af3a-11e2-b8dc-a7032a583115 | Message received from /xxx.xxx.90.147 | xxx.xxx.213.136 | 106 | Thread-94
c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e39142-af3a-11e2-9480-e9d811e0fc18 | Acquiring sstable references | xxx.xxx.4.16 | 444 | ReadStage:5329
c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e39142-af3a-11e2-9ec9-772ec39805fe | Sending message to /10.248.106.37 | xxx.xxx.90.147 | 352 | WRITE-/xxx.xxx.79.52
c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e39142-af3a-11e2-b8dc-a7032a583115 | Executing single-partition query on CardHash | xxx.xxx.213.136 | 144 | ReadStage:11
c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e39143-af3a-11e2-9480-e9d811e0fc18 | Merging memtable contents | xxx.xxx.4.16 | 472 | ReadStage:5329
c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e39143-af3a-11e2-9ec9-772ec39805fe | Sending message to /10.248.95.237 | xxx.xxx.90.147 | 362 | WRITE-/xxx.xxx.201.218
c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e39143-af3a-11e2-b8dc-a7032a583115 | Acquiring sstable references | xxx.xxx.213.136 | 164 | ReadStage:11
c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e39144-af3a-11e2-9480-e9d811e0fc18 | Merging data from memtables and 0 sstables | xxx.xxx.4.16 | 510 | ReadStage:5329
c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e39144-af3a-11e2-9ec9-772ec39805fe | Sending message to /xxx.xxx.151.214 | xxx.xxx.90.147 | 376 | WRITE-/xxx.xxx.213.136
c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e39144-af3a-11e2-b8dc-a7032a583115 | Merging memtable contents | xxx.xxx.213.136 | 195 | ReadStage:11
c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e39145-af3a-11e2-9480-e9d811e0fc18 | Read 0 live cells and 0 tombstoned | xxx.xxx.4.16 | 530 | ReadStage:5329
c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e39145-af3a-11e2-9ec9-772ec39805fe | Sending message to /10.248.95.237 | xxx.xxx.90.147 | 401 | WRITE-/xxx.xxx.201.218
c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e39145-af3a-11e2-b8dc-a7032a583115 | Executing single-partition query on CardHash | xxx.xxx.213.136 | 202 | ReadStage:41
c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e39146-af3a-11e2-9480-e9d811e0fc18 | Enqueuing
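The trace-if-enabled-else-debug symmetry described in the comment above can be sketched in plain Java. Cassandra's real tracing API is not used here; the session flag and event sink are simulated stand-ins, and the method names are illustrative:

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class TraceOrDebugSketch {
    static final Logger logger = Logger.getLogger("CassandraServer");

    // Stand-ins for a tracing session; simulated for the sketch.
    static boolean tracingEnabled = false;
    static final StringBuilder traceEvents = new StringBuilder();

    static void trace(String message) {
        // In Cassandra this would become a row in system_traces.events.
        traceEvents.append(message).append('\n');
    }

    // On timeout: record a trace event when a session is active, fall back
    // to debug logging otherwise, mirroring how the session is opened.
    static void onTimeout(String operation) {
        if (tracingEnabled)
            trace("Timed out: " + operation);
        else
            logger.log(Level.FINE, "Timed out: {0}", operation);
    }

    public static void main(String[] args) {
        onTimeout("multiget_slice"); // no session: debug log only
        tracingEnabled = true;
        onTimeout("multiget_slice"); // session active: recorded as an event
        System.out.print(traceEvents); // prints: Timed out: multiget_slice
    }
}
```

With this shape, a timed-out multiget_slice would show up in the event breakdown instead of silently accounting for the large session duration.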
[jira] [Comment Edited] (CASSANDRA-5520) Query tracing session info inconsistent with events info
[ https://issues.apache.org/jira/browse/CASSANDRA-5520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13646587#comment-13646587 ] Jonathan Ellis edited comment on CASSANDRA-5520 at 5/1/13 1:54 PM: --- As I said on IRC, I think the fix is as simple as changing {{logger.debug(... timed out)}} to trace calls in CassandraServer. (More precisely, we should trace it if tracing is enabled, otherwise logger.debug it, for symmetry with our open-trace-session code.) If we're going to gold-plate it, including the result in the trace (but not the debug call, we only want to construct it if requested, because creating Strings is expensive) is a good idea too. :) was (Author: jbellis): As I said on IRC, I think the fix is as simple as changing {{logger.debug(... timed out)}} to trace calls. (More precisely, we should trace it if tracing is enabled, otherwise logger.debug it, for symmetry with our open-trace-session code.) Query tracing session info inconsistent with events info Key: CASSANDRA-5520 URL: https://issues.apache.org/jira/browse/CASSANDRA-5520 Project: Cassandra Issue Type: Bug Affects Versions: 1.2.4 Environment: Linux Reporter: Ilya Kirnos Assignee: Aleksey Yeschenko Session info for a trace is showing that a query took 10 seconds (it timed out). 
[jira] [Commented] (CASSANDRA-5521) move IndexSummary off heap
[ https://issues.apache.org/jira/browse/CASSANDRA-5521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13646583#comment-13646583 ] Jonathan Ellis commented on CASSANDRA-5521: --- As Pavel notes, the most important use of getKey is the one in binarySearch. But here we only care about the comparison, we don't actually need the artifact of a ByteBuffer. So why not compare directly without creating a buffer first? No buffer at all is even cheaper than a native buffer. (This would also mean that we only need to look at as many bytes as it takes before the first difference is found.)
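A sketch of what "compare directly without creating a buffer" can look like: an unsigned, byte-at-a-time comparison over raw key bytes that stops at the first difference. For the off-heap case the reads would come from native memory rather than a byte[], which is assumed away here; the class name is illustrative:

```java
public class RawKeyCompare {
    // Unsigned lexicographic comparison over raw bytes; no ByteBuffer is
    // materialized, and we stop at the first differing byte.
    static int compare(byte[] a, byte[] b) {
        int n = Math.min(a.length, b.length);
        for (int i = 0; i < n; i++) {
            int diff = (a[i] & 0xFF) - (b[i] & 0xFF);
            if (diff != 0)
                return diff;
        }
        return a.length - b.length; // shared prefix: shorter key sorts first
    }

    public static void main(String[] args) {
        System.out.println(compare(new byte[]{0x01, 0x02}, new byte[]{0x01, 0x03}) < 0); // true
        System.out.println(compare(new byte[]{(byte) 0xFF}, new byte[]{0x01}) > 0);      // true: unsigned
        System.out.println(compare(new byte[]{0x01}, new byte[]{0x01, 0x00}) < 0);       // true: prefix
    }
}
```

A binary search over the summary can call this per probe without allocating anything, which is the GC-friendliness Pavel's objection was about.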
[jira] [Comment Edited] (CASSANDRA-5513) java.lang.ClassCastException: org.apache.cassandra.db.DeletedColumn cannot be cast to java.math.BigInteger
[ https://issues.apache.org/jira/browse/CASSANDRA-5513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13646586#comment-13646586 ] Jonathan Ellis edited comment on CASSANDRA-5513 at 5/1/13 1:50 PM: --- # Can you provide instructions to reproduce the ClassCastException? # The JVM crash is its way of telling you, quit using openjdk. If you are already using the latest version of Oracle Java6, add -XX:-UseCompressedOops to the JVM options in cassandra-env.sh. was (Author: jbellis): # Can you provide instructions to reproduce the ClassCastException? # The JVM crash is its way of telling you, quit using openjdk. If you are already using the Oracle JDK, add -XX:-UseCompressedOops to the JVM options in cassandra-env.sh. java.lang.ClassCastException: org.apache.cassandra.db.DeletedColumn cannot be cast to java.math.BigInteger -- Key: CASSANDRA-5513 URL: https://issues.apache.org/jira/browse/CASSANDRA-5513 Project: Cassandra Issue Type: Bug Components: Core Affects Versions: 1.2.2 Environment: Linux XYZ 3.5.0-27-generic #46~precise1-Ubuntu SMP Tue Mar 26 19:33:21 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux Reporter: Jouni Kontinen ERROR 18:30:16,044 Exception in thread Thread[ReplicateOnWriteStage:24,5,main] java.lang.RuntimeException: java.lang.NullPointerException at org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1582) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown Source) at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) at java.lang.Thread.run(Unknown Source) Caused by: java.lang.NullPointerException at org.apache.cassandra.dht.BigIntegerToken.compareTo(BigIntegerToken.java:38) at org.apache.cassandra.db.DecoratedKey.compareTo(DecoratedKey.java:85) at org.apache.cassandra.db.DecoratedKey.compareTo(DecoratedKey.java:36) at java.util.Collections.indexedBinarySearch(Unknown Source) at java.util.Collections.binarySearch(Unknown Source) at 
org.apache.cassandra.io.sstable.SSTableReader.getIndexScanPosition(SSTableReader.java:482) at org.apache.cassandra.io.sstable.SSTableReader.getPosition(SSTableReader.java:755) at org.apache.cassandra.io.sstable.SSTableReader.getPosition(SSTableReader.java:717) at org.apache.cassandra.db.columniterator.SSTableSliceIterator.init(SSTableSliceIterator.java:43) at org.apache.cassandra.db.filter.SliceQueryFilter.getSSTableColumnIterator(SliceQueryFilter.java:101) at org.apache.cassandra.db.filter.QueryFilter.getSSTableColumnIterator(QueryFilter.java:68) at org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:275) at org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:65) at org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1363) at org.apache.cassandra.db.ColumnFamilyStore.getThroughCache(ColumnFamilyStore.java:1176) at org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1209) at org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1132) at org.apache.cassandra.db.Table.getRow(Table.java:355) at org.apache.cassandra.db.SliceByNamesReadCommand.getRow(SliceByNamesReadCommand.java:64) at org.apache.cassandra.db.CounterMutation.makeReplicationMutation(CounterMutation.java:90) at org.apache.cassandra.service.StorageProxy$7$1.runMayThrow(StorageProxy.java:796) at org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1578) ... 
3 more ERROR 18:30:16,044 Exception in thread Thread[ReadStage:77,5,main] java.lang.RuntimeException: java.lang.NullPointerException at org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1582) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown Source) at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) at java.lang.Thread.run(Unknown Source) Caused by: java.lang.NullPointerException at org.apache.cassandra.dht.BigIntegerToken.compareTo(BigIntegerToken.java:38) at org.apache.cassandra.db.DecoratedKey.compareTo(DecoratedKey.java:85) at org.apache.cassandra.db.DecoratedKey.compareTo(DecoratedKey.java:36) at java.util.Collections.indexedBinarySearch(Unknown Source) at java.util.Collections.binarySearch(Unknown Source)
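For reference, the suggested workaround is a one-line addition to conf/cassandra-env.sh. A minimal sketch (the surrounding contents of cassandra-env.sh vary by version; the `JVM_OPTS` accumulation pattern shown is the file's usual convention):

```sh
# conf/cassandra-env.sh -- append to the JVM options Cassandra starts with.
# Disables compressed ordinary object pointers, as suggested above for
# JVM crashes attributed to CompressedOops on the Oracle JDK.
JVM_OPTS="$JVM_OPTS -XX:-UseCompressedOops"
```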
[jira] [Commented] (CASSANDRA-5518) Clean out token range bisection on bootstrap
[ https://issues.apache.org/jira/browse/CASSANDRA-5518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13646593#comment-13646593 ] Jonathan Ellis commented on CASSANDRA-5518: --- What that code is doing is saying, "If I don't know what token I'm supposed to assassinate, pick a random one." (Prior to this patch it was saying, "pick a bootstrap token," which is just as broken.) So basically that comment is there to remind someone, probably you, to fix it better, but I consider it out of scope for this ticket since removing the bisect garbage makes it no worse. :) Clean out token range bisection on bootstrap Key: CASSANDRA-5518 URL: https://issues.apache.org/jira/browse/CASSANDRA-5518 Project: Cassandra Issue Type: Task Components: Core Reporter: Jonathan Ellis Assignee: Jonathan Ellis Priority: Minor Fix For: 2.0 Bootstrapping a node by bisecting an existing node's range has never been very useful, and with vnodes it's thoroughly obsolete. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (CASSANDRA-5518) Clean out token range bisection on bootstrap
[ https://issues.apache.org/jira/browse/CASSANDRA-5518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13646594#comment-13646594 ] Jonathan Ellis commented on CASSANDRA-5518: --- (Probably the Right Thing is to just give up and acknowledge we're screwed, but again, the original code was going to great lengths to avoid this and I wanted to make this semantically neutral.)
[jira] [Commented] (CASSANDRA-5426) Redesign repair messages
[ https://issues.apache.org/jira/browse/CASSANDRA-5426?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13646605#comment-13646605 ] Sylvain Lebresne commented on CASSANDRA-5426: - The approach looks good to me: I definitely like the idea of having a common message/header for all repair messages. Same for breaking down ARS into separate files. One thing I'm not sure of is that it seems that when we get an error, we log it but we don't error out the repair session itself. Maybe we should, otherwise I fear most people won't notice something went wrong. Also, when we fail, maybe we could send an error message (typically the exception message) for easier debugging/reporting. I also wonder if maybe we should have more of a fail-fast policy when there are errors. For instance, if one node fails its validation phase, it might be worth failing right away and letting the user re-trigger a repair once they have fixed whatever was the source of the error, rather than still differencing/syncing the other nodes (but I admit that both solutions are possible). Going a bit further, I think we should add 2 messages to interrupt the validation and sync phases. If only because that could be useful to users if they need to stop a repair for some reason, but also, if we get an error during validation from one node, we could use that to interrupt the other nodes and thus fail fast while minimizing the amount of work done uselessly. But anyway, I guess that part can be done in a follow-up ticket. Other than that, a few remarks/nits on the refactor: - In RepairMessageType, if gossip is any proof, then it could be wise to add more FUTURE types, say 4 or 5, just in case. As an aside, I tend not to be a fan of relying on an enum ordinal for serialization since it's extra fragile (you should not reorder things for instance, which could easily slip by mistake imo).
I personally prefer assigning the ordinal manually (like in transport.Message.Type for instance) even if that's a bit more verbose. Anyway, if people like it the way it is, so be it, but I wanted to mention it nonetheless. - For the hashCode methods (Differencer, NodePair, RepairJobDesc,...), I'd prefer using guava's Objects.hashCode() (and Objects.equal() for equals() when there is null). - Do we really need RepairMessageHeader? What about giving RepairMessage a RepairJobDesc, a RepairMessageType and a body, rather than creating yet another class? - In RepairMessage, not sure it's a good idea to allow a {{null}} body, especially since RepairMessageVerbHandler doesn't really handle it. I'd rather assert it's not {{null}} and assert we do always have a body serializer in the RepairMessage serializer (since that's really a programming error if we don't). - The code to create the repair messages feels a bit verbose. What about adding a static helper in RepairMessage: {noformat} public static <T> MessageOut<RepairMessage<T>> createMessage(RepairJobDesc desc, RepairMessageType type, T body); {noformat} or even maybe one helper for each RepairMessageType? - I would move the gossiper/failure registration in ARS.addToActiveSessions. - I'd remove Validator.rangeToValidate and just inline desc.range. - Out of curiosity, what do you mean by the TODO in the comment of Validator.add()? What is there to do, typically? Because MT has some notion of valid/invalid ranges, but that's historical and not used. Validator is really just an MT builder. So it feels to me that mentioning cases 2 and 4 to later say we don't consider them will be more confusing than helpful for people looking at the code for the first time. As a side note, I think we could simplify the hell out of the MerkleTree class, but that's another story. - For MerkleTree.fullRange, maybe it's time to add it to the MT serializer rather than restoring it manually, which is ugly and error prone. 
Also, for the partitioner, let's maybe have MT use DatabaseDescriptor.getPartitioner() directly rather than restoring them manually in Differencer.run(). I also noted that we can remove all the old compat stuff since we don't have backward compatibility issues with repair, but you already told me you had started doing it :). Redesign repair messages Key: CASSANDRA-5426 URL: https://issues.apache.org/jira/browse/CASSANDRA-5426 Project: Cassandra Issue Type: Improvement Reporter: Yuki Morishita Assignee: Yuki Morishita Priority: Minor Labels: repair Fix For: 2.0 Many people have been reporting 'repair hang' when something goes wrong. Two major causes of hang are 1) validation failure and 2) streaming failure. Currently, when those failures happen, the failed node would not respond back to the repair initiator. The goal of this ticket is to redesign
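Sylvain's point about enum-ordinal serialization can be sketched in a few lines (the enum name and constants below are illustrative stand-ins, not Cassandra's actual RepairMessageType): each constant carries an explicitly assigned id that the serializer reads and writes, so reordering or inserting constants later cannot silently change the wire format the way ordinal() would.

```java
// Sketch of manual-id enum serialization; names are hypothetical,
// not Cassandra's real types.
public class EnumIdDemo
{
    enum RepairMessageType
    {
        VALIDATION_REQUEST(0),
        VALIDATION_COMPLETE(1),
        SYNC_REQUEST(2),
        SYNC_COMPLETE(3);

        final int id;  // written to the wire instead of ordinal()

        RepairMessageType(int id) { this.id = id; }

        static RepairMessageType fromId(int id)
        {
            for (RepairMessageType t : values())
                if (t.id == id)
                    return t;
            throw new IllegalArgumentException("unknown message type id " + id);
        }
    }

    public static void main(String[] args)
    {
        // Round-trip every constant through its explicit id, as a
        // serializer/deserializer pair would.
        for (RepairMessageType t : RepairMessageType.values())
            if (RepairMessageType.fromId(t.id) != t)
                throw new AssertionError(t);
        System.out.println(RepairMessageType.fromId(2)); // prints SYNC_REQUEST
    }
}
```

transport.Message.Type, which Sylvain cites, follows this same manual-assignment pattern in the Cassandra source.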
[jira] [Created] (CASSANDRA-5531) Disallow renaming columns one at a time when the table doesn't have CQL3 metadata yet
Sylvain Lebresne created CASSANDRA-5531: --- Summary: Disallow renaming columns one at a time when the table doesn't have CQL3 metadata yet Key: CASSANDRA-5531 URL: https://issues.apache.org/jira/browse/CASSANDRA-5531 Project: Cassandra Issue Type: Bug Affects Versions: 1.2.0 Reporter: Sylvain Lebresne Assignee: Sylvain Lebresne Priority: Minor Fix For: 1.2.5 As noted in CASSANDRA-5489, if you have a thrift CF, say: {noformat} [default@ks] create column family test with comparator='CompositeType(Int32Type, Int32Type, Int32Type)' and key_validation_class=UTF8Type and default_validation_class=UTF8Type; {noformat} And then, trying to use it from CQL3, you rename the columns one at a time, you can get: {noformat} cqlsh:ks> DESC COLUMNFAMILY test; CREATE TABLE test ( key text, column1 int, column2 int, column3 int, value text, PRIMARY KEY (key, column1, column2, column3) ) WITH COMPACT STORAGE ... cqlsh:ks> ALTER TABLE test RENAME column2 TO foo; TSocket read 0 bytes {noformat} Now, it happens that renaming the columns one at a time is a bad idea anyway, as it can confuse the CQL3 code in some cases. So I suggest to disallow that and to force renaming all columns in one request the first time you use a thrift CF from CQL3. To be clear, you will still be able to rename columns one at a time in general, it's just for the first rename on a metadata-less CF. So overall that's a very small limitation and it simplifies our lives code-wise. See CASSANDRA-5489 for a bit more context here.
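With the proposed restriction, the workflow against the metadata-less CF from the description becomes something like the following (a sketch of the intended behavior, not output from a real session):

```sql
-- Renaming one clustering column at a time is rejected while the CF
-- still lacks CQL3 metadata:
ALTER TABLE test RENAME column2 TO b;   -- InvalidRequest after this patch

-- Renaming all the clustering columns in a single statement is the
-- supported path:
ALTER TABLE test RENAME column1 TO a AND column2 TO b AND column3 TO c;
```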
[jira] [Commented] (CASSANDRA-5518) Clean out token range bisection on bootstrap
[ https://issues.apache.org/jira/browse/CASSANDRA-5518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13646616#comment-13646616 ] Brandon Williams commented on CASSANDRA-5518: - Hey, it's called 'unsafe' for a reason, but it tries to be as robust as possible in case of a gossip bug where you have no way to evict something without a full ring restart. Anyway, lgtm, +1.
git commit: Disallow renaming columns one at a time when the table doesn't have CQL3 metadata yet
Updated Branches: refs/heads/cassandra-1.2 60f09f012 - 199cd0b78 Disallow renaming columns one at a time when when the table don't have CQL3 metadata yet patch by slebresne; reviewed by iamaleksey for CASSANDRA-5531 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/199cd0b7 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/199cd0b7 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/199cd0b7 Branch: refs/heads/cassandra-1.2 Commit: 199cd0b785a73393f451f526930cb17f67706462 Parents: 60f09f0 Author: Sylvain Lebresne sylv...@datastax.com Authored: Wed May 1 16:23:40 2013 +0200 Committer: Sylvain Lebresne sylv...@datastax.com Committed: Wed May 1 16:23:40 2013 +0200 -- CHANGES.txt|2 + pylib/cqlshlib/cql3handling.py |6 +++- .../cql3/statements/AlterTableStatement.java | 29 +++ 3 files changed, 36 insertions(+), 1 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/199cd0b7/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index bfece4f..0045e04 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -16,6 +16,8 @@ * Set isRunning flag later in binary protocol server (CASSANDRA-5467) * Fix use of CQL3 functions with descencind clustering order (CASSANDRA-5472) * Prevent repair when protocol version does not match (CASSANDRA-5523) + * Disallow renaming columns one at a time for thrift table in CQL3 + (CASSANDRA-5531) Merged from 1.1 * Add retry mechanism to OTC for non-droppable_verbs (CASSANDRA-5393) * Use allocator information to improve memtable memory usage estimate http://git-wip-us.apache.org/repos/asf/cassandra/blob/199cd0b7/pylib/cqlshlib/cql3handling.py -- diff --git a/pylib/cqlshlib/cql3handling.py b/pylib/cqlshlib/cql3handling.py index 00e2d0f..15f7c54 100644 --- a/pylib/cqlshlib/cql3handling.py +++ b/pylib/cqlshlib/cql3handling.py @@ -1469,7 +1469,7 @@ class CqlTableDef: cf.partition_key_validator = lookup_casstype(cf.key_validator) cf.comparator = 
lookup_casstype(cf.comparator) cf.default_validator = lookup_casstype(cf.default_validator) -cf.coldefs = coldefs +cf.coldefs = cf.filter_regular_coldefs(coldefs) cf.compact_storage = cf.is_compact_storage() cf.key_aliases = cf.get_key_aliases() cf.partition_key_components = cf.key_aliases @@ -1478,6 +1478,10 @@ class CqlTableDef: cf.columns = cf.get_columns() return cf +def filter_regular_coldefs(self, cols): +return [ c for c in cols if c.get('type', 'regular') == 'regular' ] + + # not perfect, but good enough; please read CFDefinition constructor comments # returns False if we are dealing with a CQL3 table, True otherwise. # 'compact' here means 'needs WITH COMPACT STORAGE option for CREATE TABLE in CQL3'. http://git-wip-us.apache.org/repos/asf/cassandra/blob/199cd0b7/src/java/org/apache/cassandra/cql3/statements/AlterTableStatement.java -- diff --git a/src/java/org/apache/cassandra/cql3/statements/AlterTableStatement.java b/src/java/org/apache/cassandra/cql3/statements/AlterTableStatement.java index b07a8a8..c6af2a0 100644 --- a/src/java/org/apache/cassandra/cql3/statements/AlterTableStatement.java +++ b/src/java/org/apache/cassandra/cql3/statements/AlterTableStatement.java @@ -22,6 +22,10 @@ import java.util.ArrayList; import java.util.HashMap; import java.util.List; import java.util.Map; +import java.util.Set; + +import com.google.common.base.Predicate; +import com.google.common.collect.Sets; import org.apache.cassandra.auth.Permission; import org.apache.cassandra.config.CFMetaData; @@ -187,6 +191,12 @@ public class AlterTableStatement extends SchemaAlteringStatement cfProps.applyToCFMetadata(cfm); break; case RENAME: + +if (cfm.getKeyAliases().size() < cfDef.keys.size() && !renamesAllAliases(cfDef, renames.keySet(), CFDefinition.Name.Kind.KEY_ALIAS, cfDef.keys.size())) +throw new InvalidRequestException("When upgrading from Thrift, all the columns of the (composite) partition key must be renamed together."); +if (cfm.getColumnAliases().size() < cfDef.columns.size() 
&& !renamesAllAliases(cfDef, renames.keySet(), CFDefinition.Name.Kind.COLUMN_ALIAS, cfDef.columns.size())) +throw new InvalidRequestException("When upgrading from Thrift, all the columns of the (composite) clustering key must be renamed together."); + for
git commit: Swap THshaServer for TThreadedSelectorServer Patch by tjake; reviewed by jbellis for CASSANDRA-5530
Updated Branches: refs/heads/trunk 559674593 - 5dad16045 Swap THshaServer for TThreadedSelectorServer Patch by tjake; reviewed by jbellis for CASSANDRA-5530 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/5dad1604 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/5dad1604 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/5dad1604 Branch: refs/heads/trunk Commit: 5dad16045eaa71240b4d190ee9166ef7b1db2788 Parents: 5596745 Author: Jake Luciani j...@apache.org Authored: Wed May 1 09:58:58 2013 -0400 Committer: Jake Luciani j...@apache.org Committed: Wed May 1 09:58:58 2013 -0400 -- CHANGES.txt|1 + .../apache/cassandra/thrift/CustomTHsHaServer.java |6 -- 2 files changed, 5 insertions(+), 2 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/5dad1604/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index 31054e3..f73349a 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -38,6 +38,7 @@ * Add an official way to disable compactions (CASSANDRA-5074) * Reenable ALTER TABLE DROP with new semantics (CASSANDRA-3919) * Add binary protocol versioning (CASSANDRA-5436) + * Swap THshaServer for TThreadedSelectorServer (CASSANDRA-5530) 1.2.5 * fix compaction throttling bursty-ness (CASSANDRA-4316) http://git-wip-us.apache.org/repos/asf/cassandra/blob/5dad1604/src/java/org/apache/cassandra/thrift/CustomTHsHaServer.java -- diff --git a/src/java/org/apache/cassandra/thrift/CustomTHsHaServer.java b/src/java/org/apache/cassandra/thrift/CustomTHsHaServer.java index 557a5d8..411e082 100644 --- a/src/java/org/apache/cassandra/thrift/CustomTHsHaServer.java +++ b/src/java/org/apache/cassandra/thrift/CustomTHsHaServer.java @@ -22,6 +22,7 @@ import java.util.concurrent.ExecutorService; import java.util.concurrent.SynchronousQueue; import java.util.concurrent.TimeUnit; +import org.apache.thrift.server.TThreadedSelectorServer; import org.slf4j.Logger; import org.slf4j.LoggerFactory; 
@@ -41,7 +42,7 @@ import org.apache.thrift.transport.TTransportException; * it is spread across multiple threads. Number of selector threads can be the * number of CPUs available. */ -public class CustomTHsHaServer extends THsHaServer +public class CustomTHsHaServer extends TThreadedSelectorServer { private static final Logger LOGGER = LoggerFactory.getLogger(CustomTHsHaServer.class.getName()); @@ -89,11 +90,12 @@ public class CustomTHsHaServer extends THsHaServer TimeUnit.SECONDS, new SynchronousQueue<Runnable>(), new NamedThreadFactory("RPC-Thread"), "RPC-THREAD-POOL"); - THsHaServer.Args serverArgs = new THsHaServer.Args(serverTransport).inputTransportFactory(args.inTransportFactory) + TThreadedSelectorServer.Args serverArgs = new TThreadedSelectorServer.Args(serverTransport).inputTransportFactory(args.inTransportFactory) .outputTransportFactory(args.outTransportFactory) .inputProtocolFactory(args.tProtocolFactory) .outputProtocolFactory(args.tProtocolFactory) .processor(args.processor) + .selectorThreads(Runtime.getRuntime().availableProcessors()) .executorService(executorService); // Check for available processors in the system which will be equal to the IO Threads. return new CustomTHsHaServer(serverArgs);
[jira] [Updated] (CASSANDRA-5531) Disallow renaming columns one at a time when the table doesn't have CQL3 metadata yet
[ https://issues.apache.org/jira/browse/CASSANDRA-5531?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sylvain Lebresne updated CASSANDRA-5531: Attachment: 5531.txt To be clear, that ticket is just to track the bits that go into 1.2 discussed on CASSANDRA-5489. I'm attaching the patch committed for info.
[jira] [Commented] (CASSANDRA-5530) Switch from THSHAServer to TThreadedSelectorServer
[ https://issues.apache.org/jira/browse/CASSANDRA-5530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13646618#comment-13646618 ] T Jake Luciani commented on CASSANDRA-5530: --- Committed and set to testing :) Switch from THSHAServer to TThreadedSelectorServer -- Key: CASSANDRA-5530 URL: https://issues.apache.org/jira/browse/CASSANDRA-5530 Project: Cassandra Issue Type: Improvement Components: Core Reporter: T Jake Luciani Assignee: T Jake Luciani Fix For: 2.0 Attachments: 5530.txt TThreadedSelectorServer is new in Thrift 0.9. It builds on HSHA by allowing for a set of threads for IO and a set for work request processing... I've attached the performance numbers below. It's a lot closer to TThreadedServer. ThreadedServer (Default) {code} Write Averages from the middle 80% of values: interval_op_rate : 14811 interval_key_rate : 14811 latency median: 1.7 latency 95th percentile : 5.3 latency 99.9th percentile : 142.6 Total operation time : 00:01:16 END Read Averages from the middle 80% of values: interval_op_rate : 16898 interval_key_rate : 16898 latency median: 2.2 latency 95th percentile : 8.5 latency 99.9th percentile : 165.7 Total operation time : 00:01:05 END {code} HSHA (CURRENT) {code} Write Averages from the middle 80% of values: interval_op_rate : 8939 interval_key_rate : 8939 latency median: 5.0 latency 95th percentile : 10.1 latency 99.9th percentile : 105.4 Total operation time : 00:01:56 END Read Averages from the middle 80% of values: interval_op_rate : 9608 interval_key_rate : 9608 latency median: 5.1 latency 95th percentile : 7.7 latency 99.9th percentile : 51.6 Total operation time : 00:01:49 END {code} TThreadedSelectorServer (NEW) {code} Write Averages from the middle 80% of values: interval_op_rate : 11640 interval_key_rate : 11640 latency median: 3.1 latency 95th percentile : 10.6 latency 99.9th percentile : 135.9 Total operation time : 00:01:30 END Read Averages from the middle 80% of values: interval_op_rate : 
15247 interval_key_rate : 15247 latency median: 2.8 latency 95th percentile : 7.1 latency 99.9th percentile : 40.3 Total operation time : 00:01:06 END {code}
[jira] [Commented] (CASSANDRA-5489) Fix 2.0 key and column aliases serialization and cqlsh DESC SCHEMA
[ https://issues.apache.org/jira/browse/CASSANDRA-5489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13646619#comment-13646619 ] Sylvain Lebresne commented on CASSANDRA-5489: - Alright, I've committed the bits mentioned above, but for the sake of JIRA tracking I've created CASSANDRA-5531 for that committed part. I'll focus on fixing the remaining parts in trunk on this issue. Fix 2.0 key and column aliases serialization and cqlsh DESC SCHEMA -- Key: CASSANDRA-5489 URL: https://issues.apache.org/jira/browse/CASSANDRA-5489 Project: Cassandra Issue Type: Bug Components: Core, Tools Affects Versions: 2.0 Reporter: Aleksey Yeschenko Assignee: Sylvain Lebresne Priority: Minor Fix For: 2.0 Attachments: 5489-1.2.txt, 5489.txt CASSANDRA-5125 made a slight change to how key_aliases and column_aliases are serialized in schema. Prior to that we never kept nulls in the json pseudo-lists. This does break cqlsh and probably breaks 1.2 nodes receiving such migrations as well. The patch reverts this behavior and also slightly modifies cqlsh itself to ignore non-regular columns from the system.schema_columns table. This patch breaks nothing, since 2.0 already handles 1.2 non-null padded alias lists.
[jira] [Resolved] (CASSANDRA-5531) Disallow renaming columns one at a time when the table doesn't have CQL3 metadata yet
[ https://issues.apache.org/jira/browse/CASSANDRA-5531?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sylvain Lebresne resolved CASSANDRA-5531. - Resolution: Fixed
[jira] [Updated] (CASSANDRA-4180) Single-pass compaction for LCR
[ https://issues.apache.org/jira/browse/CASSANDRA-4180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Ellis updated CASSANDRA-4180: -- Reviewer: krummas (was: jasobrown) Rebased again, to https://github.com/jbellis/cassandra/commits/4180-5. [~krummas], can you review? Single-pass compaction for LCR -- Key: CASSANDRA-4180 URL: https://issues.apache.org/jira/browse/CASSANDRA-4180 Project: Cassandra Issue Type: Improvement Components: Core Reporter: Sylvain Lebresne Assignee: Jonathan Ellis Labels: compaction Fix For: 2.0 Attachments: scrub-error.txt LazilyCompactedRow reads all data twice to compact a row, which is obviously inefficient. The main reason we do that is to compute the row header. However, CASSANDRA-2319 has removed the main part of that row header. What remains is the size in bytes and the number of columns, but it should be relatively simple to remove those, which would then remove the need for the two-phase compaction. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (CASSANDRA-5528) CLUSTERING ORDER BY support for cqlsh's DESCRIBE
[ https://issues.apache.org/jira/browse/CASSANDRA-5528?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tyler Hobbs updated CASSANDRA-5528: --- Attachment: 5528-clustering-order-v2.txt v2 patch makes the suggested simplifications and omits the CLUSTERING ORDER clause if there aren't any DESC items. {quote} also, I don't fully understand why {code} if layout.compact_storage and not issubclass(layout.comparator, CompositeType) {code} is not just {code} if not issubclass(layout.comparator, CompositeType) {code} {quote} I think this was needed for some 1.1 cases where a composite comparator wasn't used, but I presume we don't care about that for a 1.2 change to cqlsh, is that correct? CLUSTERING ORDER BY support for cqlsh's DESCRIBE Key: CASSANDRA-5528 URL: https://issues.apache.org/jira/browse/CASSANDRA-5528 Project: Cassandra Issue Type: Improvement Components: Tools Reporter: Tyler Hobbs Assignee: Tyler Hobbs Priority: Minor Attachments: 5528-clustering-order-v1.txt, 5528-clustering-order-v2.txt, cql3_test_cases cqlsh currently does not output any sort of {{CLUSTERING ORDER BY}} options with {{DESCRIBE}} and, furthermore, {{DESC}} orderings will result in bad column type definitions. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
git commit: cqlsh: add CLUSTERING ORDER BY support to DESCRIBE
Updated Branches: refs/heads/cassandra-1.2 199cd0b78 -> 24f6387bc cqlsh: add CLUSTERING ORDER BY support to DESCRIBE patch by Tyler Hobbs; reviewed by Aleksey Yeschenko for CASSANDRA-5528 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/24f6387b Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/24f6387b Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/24f6387b Branch: refs/heads/cassandra-1.2 Commit: 24f6387bcddc72856569e86a7b3e7a9da86d0037 Parents: 199cd0b Author: Aleksey Yeschenko alek...@apache.org Authored: Wed May 1 19:34:20 2013 +0300 Committer: Aleksey Yeschenko alek...@apache.org Committed: Wed May 1 19:42:55 2013 +0300 -- CHANGES.txt |1 + bin/cqlsh | 40 +++- 2 files changed, 36 insertions(+), 5 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/24f6387b/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index 0045e04..7429401 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -18,6 +18,7 @@ * Prevent repair when protocol version does not match (CASSANDRA-5523) * Disallow renaming columns one at a time for thrift table in CQL3 (CASSANDRA-5531) + * cqlsh: add CLUSTERING ORDER BY support to DESCRIBE (CASSANDRA-5528) Merged from 1.1 * Add retry mechanism to OTC for non-droppable_verbs (CASSANDRA-5393) * Use allocator information to improve memtable memory usage estimate http://git-wip-us.apache.org/repos/asf/cassandra/blob/24f6387b/bin/cqlsh -- diff --git a/bin/cqlsh b/bin/cqlsh index 5292d5e..853e3fd 100755 --- a/bin/cqlsh +++ b/bin/cqlsh @@ -32,7 +32,7 @@ exit 1 from __future__ import with_statement description = "CQL Shell for Apache Cassandra" -version = "2.3.0" +version = "3.0.0" from StringIO import StringIO from itertools import groupby @@ -103,7 +103,7 @@ except ImportError, e: import cql.decoders from cql.cursor import _VOID_DESCRIPTION from cql.cqltypes import (cql_types, cql_typename, lookup_casstype, lookup_cqltype, - CassandraType) + 
CassandraType, ReversedType, CompositeType) # cqlsh should run correctly when run out of a Cassandra source tree, # out of an unpacked Cassandra tarball, and after a proper package install. @@ -1308,7 +1308,13 @@ class Shell(cmd.Cmd): indexed_columns = [] for col in layout.columns[1:]: colname = self.cql_protect_name(col.name) -out.write(",\n %s %s" % (colname, col.cqltype.cql_parameterized_type())) +coltype = col.cqltype + +# Reversed types only matter for clustering order, not column definitions +if issubclass(coltype, ReversedType): +coltype = coltype.subtypes[0] + +out.write(",\n %s %s" % (colname, coltype.cql_parameterized_type())) if col.index_name is not None: indexed_columns.append(col) @@ -1329,8 +1335,32 @@ class Shell(cmd.Cmd): out.write(' WITH COMPACT STORAGE') joiner = 'AND' -# TODO: this should display CLUSTERING ORDER BY information too. -# work out how to determine that from a layout. +# check if we need a CLUSTERING ORDER BY clause +if layout.column_aliases: +# get a list of clustering component types +if issubclass(layout.comparator, CompositeType): +clustering_types = layout.comparator.subtypes +else: +clustering_types = [layout.comparator] + +# only write CLUSTERING ORDER clause if we have >= 1 DESC item +if any(issubclass(t, ReversedType) for t in clustering_types): +if layout.compact_storage: +out.write(' AND\n ') +else: +out.write(' WITH') +out.write(' CLUSTERING ORDER BY (') + +clustering_names = self.cql_protect_names(layout.column_aliases) + +inner = [] +for colname, coltype in zip(clustering_names, clustering_types): +ordering = "DESC" if issubclass(coltype, ReversedType) else "ASC" +inner.append("%s %s" % (colname, ordering)) +out.write(", ".join(inner)) + +out.write(")") +joiner = "AND" cf_opts = [] compaction_strategy = trim_if_present(getattr(layout, 'compaction_strategy_class'),
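The core of the patch above — unwrap ReversedType for column definitions, and emit a CLUSTERING ORDER BY clause only when at least one clustering component is reversed — can be distilled into a standalone sketch (comparator type names as plain strings stand in for cqlsh's real issubclass checks against the driver's type classes):

```python
def clustering_order_clause(column_aliases, clustering_types):
    """Return a CLUSTERING ORDER BY clause, or '' when every component
    is ASC (the default ordering, so the clause can be omitted).

    clustering_types are comparator component names such as
    'ReversedType(DateType)' -- a string stand-in for the type classes
    cqlsh actually inspects.
    """
    orderings = ["DESC" if t.startswith("ReversedType") else "ASC"
                 for t in clustering_types]
    if "DESC" not in orderings:
        return ""
    inner = ", ".join("%s %s" % pair
                      for pair in zip(column_aliases, orderings))
    return "CLUSTERING ORDER BY (%s)" % inner

print(clustering_order_clause(["bucket", "ts"],
                              ["Int32Type", "ReversedType(DateType)"]))
# CLUSTERING ORDER BY (bucket ASC, ts DESC)
```

Note that even ASC components must be listed once the clause is emitted, which is why the patch zips every clustering name with its ordering rather than listing only the DESC ones.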
[3/3] git commit: Merge branch 'cassandra-1.2' into trunk
Merge branch 'cassandra-1.2' into trunk Conflicts: bin/cqlsh src/java/org/apache/cassandra/cql3/statements/AlterTableStatement.java Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/59172810 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/59172810 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/59172810 Branch: refs/heads/trunk Commit: 5917281000bf51d10ffade5ac357f9cbf81daee6 Parents: 5dad160 24f6387 Author: Aleksey Yeschenko alek...@apache.org Authored: Wed May 1 20:03:26 2013 +0300 Committer: Aleksey Yeschenko alek...@apache.org Committed: Wed May 1 20:03:26 2013 +0300 -- CHANGES.txt|3 + bin/cqlsh | 41 +-- pylib/cqlshlib/cql3handling.py |6 ++- .../cql3/statements/AlterTableStatement.java | 27 ++ 4 files changed, 71 insertions(+), 6 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/59172810/CHANGES.txt -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/59172810/bin/cqlsh -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/59172810/pylib/cqlshlib/cql3handling.py -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/59172810/src/java/org/apache/cassandra/cql3/statements/AlterTableStatement.java -- diff --cc src/java/org/apache/cassandra/cql3/statements/AlterTableStatement.java index 945d202,c6af2a0..03e1b4b --- a/src/java/org/apache/cassandra/cql3/statements/AlterTableStatement.java +++ b/src/java/org/apache/cassandra/cql3/statements/AlterTableStatement.java @@@ -186,18 -191,81 +190,41 @@@ public class AlterTableStatement extend cfProps.applyToCFMetadata(cfm); break; case RENAME: - -if (cfm.getKeyAliases().size() < cfDef.keys.size() && !renamesAllAliases(cfDef, renames.keySet(), CFDefinition.Name.Kind.KEY_ALIAS, cfDef.keys.size())) ++if (cfm.partitionKeyColumns().size() < cfDef.keys.size() && !renamesAllAliases(cfDef, renames.keySet(), CFDefinition.Name.Kind.KEY_ALIAS, cfDef.keys.size())) + throw new InvalidRequestException("When upgrading from Thrift, all the columns of the (composite) partition key must be renamed together."); -if (cfm.getColumnAliases().size() < cfDef.columns.size() && !renamesAllAliases(cfDef, renames.keySet(), CFDefinition.Name.Kind.COLUMN_ALIAS, cfDef.columns.size())) ++if (cfm.clusteringKeyColumns().size() < cfDef.columns.size() && !renamesAllAliases(cfDef, renames.keySet(), CFDefinition.Name.Kind.COLUMN_ALIAS, cfDef.columns.size())) + throw new InvalidRequestException("When upgrading from Thrift, all the columns of the (composite) clustering key must be renamed together."); + for (Map.Entry<ColumnIdentifier, ColumnIdentifier> entry : renames.entrySet()) { -CFDefinition.Name from = cfDef.get(entry.getKey()); +ColumnIdentifier from = entry.getKey(); ColumnIdentifier to = entry.getValue(); -if (from == null) -throw new InvalidRequestException(String.format("Column %s was not found in table %s", entry.getKey(), columnFamily())); - -CFDefinition.Name exists = cfDef.get(to); -if (exists != null) -throw new InvalidRequestException(String.format("Cannot rename column %s in table %s to %s; another column of that name already exist", from, columnFamily(), to)); -switch (from.kind) -{ -case KEY_ALIAS: -cfm.keyAliases(rename(from.position, to, cfm.getKeyAliases())); -break; -case COLUMN_ALIAS: -cfm.columnAliases(rename(from.position, to, cfm.getColumnAliases())); -break; -case VALUE_ALIAS: -cfm.valueAlias(to.key); -break; -case COLUMN_METADATA: -throw new InvalidRequestException(String.format("Cannot rename non PRIMARY KEY part %s", from)); -} +cfm.renameColumn(from.key, from.toString(), to.key, to.toString()); }
[1/3] git commit: Disallow renaming columns one at a time when the table doesn't have CQL3 metadata yet
Updated Branches: refs/heads/trunk 5dad16045 -> 591728100 Disallow renaming columns one at a time when the table doesn't have CQL3 metadata yet patch by slebresne; reviewed by iamaleksey for CASSANDRA-5531 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/199cd0b7 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/199cd0b7 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/199cd0b7 Branch: refs/heads/trunk Commit: 199cd0b785a73393f451f526930cb17f67706462 Parents: 60f09f0 Author: Sylvain Lebresne sylv...@datastax.com Authored: Wed May 1 16:23:40 2013 +0200 Committer: Sylvain Lebresne sylv...@datastax.com Committed: Wed May 1 16:23:40 2013 +0200 -- CHANGES.txt|2 + pylib/cqlshlib/cql3handling.py |6 +++- .../cql3/statements/AlterTableStatement.java | 29 +++ 3 files changed, 36 insertions(+), 1 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/199cd0b7/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index bfece4f..0045e04 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -16,6 +16,8 @@ * Set isRunning flag later in binary protocol server (CASSANDRA-5467) * Fix use of CQL3 functions with descending clustering order (CASSANDRA-5472) * Prevent repair when protocol version does not match (CASSANDRA-5523) + * Disallow renaming columns one at a time for thrift table in CQL3 + (CASSANDRA-5531) Merged from 1.1 * Add retry mechanism to OTC for non-droppable_verbs (CASSANDRA-5393) * Use allocator information to improve memtable memory usage estimate http://git-wip-us.apache.org/repos/asf/cassandra/blob/199cd0b7/pylib/cqlshlib/cql3handling.py -- diff --git a/pylib/cqlshlib/cql3handling.py b/pylib/cqlshlib/cql3handling.py index 00e2d0f..15f7c54 100644 --- a/pylib/cqlshlib/cql3handling.py +++ b/pylib/cqlshlib/cql3handling.py @@ -1469,7 +1469,7 @@ class CqlTableDef: cf.partition_key_validator = lookup_casstype(cf.key_validator) cf.comparator = 
lookup_casstype(cf.comparator) cf.default_validator = lookup_casstype(cf.default_validator) -cf.coldefs = coldefs +cf.coldefs = cf.filter_regular_coldefs(coldefs) cf.compact_storage = cf.is_compact_storage() cf.key_aliases = cf.get_key_aliases() cf.partition_key_components = cf.key_aliases @@ -1478,6 +1478,10 @@ class CqlTableDef: cf.columns = cf.get_columns() return cf +def filter_regular_coldefs(self, cols): +return [ c for c in cols if c.get('type', 'regular') == 'regular' ] + + # not perfect, but good enough; please read CFDefinition constructor comments # returns False if we are dealing with a CQL3 table, True otherwise. # 'compact' here means 'needs WITH COMPACT STORAGE option for CREATE TABLE in CQL3'. http://git-wip-us.apache.org/repos/asf/cassandra/blob/199cd0b7/src/java/org/apache/cassandra/cql3/statements/AlterTableStatement.java -- diff --git a/src/java/org/apache/cassandra/cql3/statements/AlterTableStatement.java b/src/java/org/apache/cassandra/cql3/statements/AlterTableStatement.java index b07a8a8..c6af2a0 100644 --- a/src/java/org/apache/cassandra/cql3/statements/AlterTableStatement.java +++ b/src/java/org/apache/cassandra/cql3/statements/AlterTableStatement.java @@ -22,6 +22,10 @@ import java.util.ArrayList; import java.util.HashMap; import java.util.List; import java.util.Map; +import java.util.Set; + +import com.google.common.base.Predicate; +import com.google.common.collect.Sets; import org.apache.cassandra.auth.Permission; import org.apache.cassandra.config.CFMetaData; @@ -187,6 +191,12 @@ public class AlterTableStatement extends SchemaAlteringStatement cfProps.applyToCFMetadata(cfm); break; case RENAME: + +if (cfm.getKeyAliases().size() < cfDef.keys.size() && !renamesAllAliases(cfDef, renames.keySet(), CFDefinition.Name.Kind.KEY_ALIAS, cfDef.keys.size())) +throw new InvalidRequestException("When upgrading from Thrift, all the columns of the (composite) partition key must be renamed together."); +if (cfm.getColumnAliases().size() < cfDef.columns.size() && !renamesAllAliases(cfDef, renames.keySet(), CFDefinition.Name.Kind.COLUMN_ALIAS, cfDef.columns.size())) +throw new InvalidRequestException("When upgrading from Thrift, all the columns of the (composite) clustering key must be renamed together."); + for (Map.Entry<ColumnIdentifier,
[2/3] git commit: cqlsh: add CLUSTERING ORDER BY support to DESCRIBE
cqlsh: add CLUSTERING ORDER BY support to DESCRIBE patch by Tyler Hobbs; reviewed by Aleksey Yeschenko for CASSANDRA-5528 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/24f6387b Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/24f6387b Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/24f6387b Branch: refs/heads/trunk Commit: 24f6387bcddc72856569e86a7b3e7a9da86d0037 Parents: 199cd0b Author: Aleksey Yeschenko alek...@apache.org Authored: Wed May 1 19:34:20 2013 +0300 Committer: Aleksey Yeschenko alek...@apache.org Committed: Wed May 1 19:42:55 2013 +0300 -- CHANGES.txt |1 + bin/cqlsh | 40 +++- 2 files changed, 36 insertions(+), 5 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/24f6387b/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index 0045e04..7429401 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -18,6 +18,7 @@ * Prevent repair when protocol version does not match (CASSANDRA-5523) * Disallow renaming columns one at a time for thrift table in CQL3 (CASSANDRA-5531) + * cqlsh: add CLUSTERING ORDER BY support to DESCRIBE (CASSANDRA-5528) Merged from 1.1 * Add retry mechanism to OTC for non-droppable_verbs (CASSANDRA-5393) * Use allocator information to improve memtable memory usage estimate http://git-wip-us.apache.org/repos/asf/cassandra/blob/24f6387b/bin/cqlsh -- diff --git a/bin/cqlsh b/bin/cqlsh index 5292d5e..853e3fd 100755 --- a/bin/cqlsh +++ b/bin/cqlsh @@ -32,7 +32,7 @@ exit 1 from __future__ import with_statement description = "CQL Shell for Apache Cassandra" -version = "2.3.0" +version = "3.0.0" from StringIO import StringIO from itertools import groupby @@ -103,7 +103,7 @@ except ImportError, e: import cql.decoders from cql.cursor import _VOID_DESCRIPTION from cql.cqltypes import (cql_types, cql_typename, lookup_casstype, lookup_cqltype, - CassandraType) + CassandraType, ReversedType, CompositeType) # cqlsh should run correctly when 
run out of a Cassandra source tree, # out of an unpacked Cassandra tarball, and after a proper package install. @@ -1308,7 +1308,13 @@ class Shell(cmd.Cmd): indexed_columns = [] for col in layout.columns[1:]: colname = self.cql_protect_name(col.name) -out.write(",\n %s %s" % (colname, col.cqltype.cql_parameterized_type())) +coltype = col.cqltype + +# Reversed types only matter for clustering order, not column definitions +if issubclass(coltype, ReversedType): +coltype = coltype.subtypes[0] + +out.write(",\n %s %s" % (colname, coltype.cql_parameterized_type())) if col.index_name is not None: indexed_columns.append(col) @@ -1329,8 +1335,32 @@ class Shell(cmd.Cmd): out.write(' WITH COMPACT STORAGE') joiner = 'AND' -# TODO: this should display CLUSTERING ORDER BY information too. -# work out how to determine that from a layout. +# check if we need a CLUSTERING ORDER BY clause +if layout.column_aliases: +# get a list of clustering component types +if issubclass(layout.comparator, CompositeType): +clustering_types = layout.comparator.subtypes +else: +clustering_types = [layout.comparator] + +# only write CLUSTERING ORDER clause if we have >= 1 DESC item +if any(issubclass(t, ReversedType) for t in clustering_types): +if layout.compact_storage: +out.write(' AND\n ') +else: +out.write(' WITH') +out.write(' CLUSTERING ORDER BY (') + +clustering_names = self.cql_protect_names(layout.column_aliases) + +inner = [] +for colname, coltype in zip(clustering_names, clustering_types): +ordering = "DESC" if issubclass(coltype, ReversedType) else "ASC" +inner.append("%s %s" % (colname, ordering)) +out.write(", ".join(inner)) + +out.write(")") +joiner = "AND" cf_opts = [] compaction_strategy = trim_if_present(getattr(layout, 'compaction_strategy_class'),
[jira] [Commented] (CASSANDRA-5528) CLUSTERING ORDER BY support for cqlsh's DESCRIBE
[ https://issues.apache.org/jira/browse/CASSANDRA-5528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13646716#comment-13646716 ] Aleksey Yeschenko commented on CASSANDRA-5528: -- Committed in 24f6387bcddc72856569e86a7b3e7a9da86d0037, thanks. CLUSTERING ORDER BY support for cqlsh's DESCRIBE Key: CASSANDRA-5528 URL: https://issues.apache.org/jira/browse/CASSANDRA-5528 Project: Cassandra Issue Type: Improvement Components: Tools Reporter: Tyler Hobbs Assignee: Tyler Hobbs Priority: Minor Attachments: 5528-clustering-order-v1.txt, 5528-clustering-order-v2.txt, cql3_test_cases cqlsh currently does not output any sort of {{CLUSTERING ORDER BY}} options with {{DESCRIBE}} and, furthermore, {{DESC}} orderings will result in bad column type definitions. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
git commit: Ninja-fix test_cqlsh_output.py
Updated Branches: refs/heads/cassandra-1.2 24f6387bc -> 1c86fa3c1 Ninja-fix test_cqlsh_output.py Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/1c86fa3c Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/1c86fa3c Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/1c86fa3c Branch: refs/heads/cassandra-1.2 Commit: 1c86fa3c1871c1315df67647cb1d46762e43c279 Parents: 24f6387 Author: Aleksey Yeschenko alek...@apache.org Authored: Wed May 1 22:04:45 2013 +0300 Committer: Aleksey Yeschenko alek...@apache.org Committed: Wed May 1 22:04:45 2013 +0300 -- pylib/cqlshlib/test/test_cqlsh_output.py |4 ++-- 1 files changed, 2 insertions(+), 2 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/1c86fa3c/pylib/cqlshlib/test/test_cqlsh_output.py -- diff --git a/pylib/cqlshlib/test/test_cqlsh_output.py b/pylib/cqlshlib/test/test_cqlsh_output.py index 8127adf..e67cc9c 100644 --- a/pylib/cqlshlib/test/test_cqlsh_output.py +++ b/pylib/cqlshlib/test/test_cqlsh_output.py @@ -737,8 +737,8 @@ class TestCqlshOutput(BaseTestCase): asciicol ascii, blobcol blob, varcharcol text, - textcol text, - varintcol varint + varintcol varint, + textcol text ) WITH comment='' AND comparator=text AND
[1/2] git commit: Ninja-fix test_cqlsh_output.py
Updated Branches: refs/heads/trunk 591728100 -> 5db484f60 Ninja-fix test_cqlsh_output.py Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/1c86fa3c Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/1c86fa3c Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/1c86fa3c Branch: refs/heads/trunk Commit: 1c86fa3c1871c1315df67647cb1d46762e43c279 Parents: 24f6387 Author: Aleksey Yeschenko alek...@apache.org Authored: Wed May 1 22:04:45 2013 +0300 Committer: Aleksey Yeschenko alek...@apache.org Committed: Wed May 1 22:04:45 2013 +0300 -- pylib/cqlshlib/test/test_cqlsh_output.py |4 ++-- 1 files changed, 2 insertions(+), 2 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/1c86fa3c/pylib/cqlshlib/test/test_cqlsh_output.py -- diff --git a/pylib/cqlshlib/test/test_cqlsh_output.py b/pylib/cqlshlib/test/test_cqlsh_output.py index 8127adf..e67cc9c 100644 --- a/pylib/cqlshlib/test/test_cqlsh_output.py +++ b/pylib/cqlshlib/test/test_cqlsh_output.py @@ -737,8 +737,8 @@ class TestCqlshOutput(BaseTestCase): asciicol ascii, blobcol blob, varcharcol text, - textcol text, - varintcol varint + varintcol varint, + textcol text ) WITH comment='' AND comparator=text AND
[jira] [Created] (CASSANDRA-5532) Maven package installation broken by recent build changes
Sam Tunnicliffe created CASSANDRA-5532: -- Summary: Maven package installation broken by recent build changes Key: CASSANDRA-5532 URL: https://issues.apache.org/jira/browse/CASSANDRA-5532 Project: Cassandra Issue Type: Bug Components: Tools Affects Versions: 2.0 Reporter: Sam Tunnicliffe Assignee: Sam Tunnicliffe Priority: Minor Attachments: 0001-Remove-antcall-around-m-a-t-retrieve-build.patch CASSANDRA-3818 provides the ability to disable maven ant tests during the build. Part of the change is to refactor the maven-ant-tasks-retrieve-build target to be wrapped by an antcall, with a guard condition to check if m-a-t should be used. The use of antcall causes the target and those targets it depends on to be called in a separate scope to the main build, which unfortunately means that the repository refs and ant macros which get defined there are not available once the antcall is completed. The final effect is that the mvn-install task fails as the install macro is not defined in its scope, and the artifacts task fails due to the repository refs being similarly undefined. I haven't tried it, but I suspect the publish task would be affected in the same way. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (CASSANDRA-5532) Maven package installation broken by recent build changes
[ https://issues.apache.org/jira/browse/CASSANDRA-5532?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sam Tunnicliffe updated CASSANDRA-5532: --- Attachment: 0001-Remove-antcall-around-m-a-t-retrieve-build.patch Patch to revert the antcall wrapping around maven-ant-tasks-retrieve-build. I've added the check on without.maven directly to the target, which seems to me to have the same desired effect as the original patch. Maven package installation broken by recent build changes - Key: CASSANDRA-5532 URL: https://issues.apache.org/jira/browse/CASSANDRA-5532 Project: Cassandra Issue Type: Bug Components: Tools Affects Versions: 2.0 Reporter: Sam Tunnicliffe Assignee: Sam Tunnicliffe Priority: Minor Attachments: 0001-Remove-antcall-around-m-a-t-retrieve-build.patch CASSANDRA-3818 provides the ability to disable maven ant tests during the build. Part of the change is to refactor the maven-ant-tasks-retrieve-build target to be wrapped by an antcall, with a guard condition to check if m-a-t should be used. The use of antcall causes the target and those targets it depends on to be called in a separate scope to the main build, which unfortunately means that the repository refs and ant macros which get defined there are not available once the antcall is completed. The final effect is that the mvn-install task fails as the install macro is not defined in its scope, and the artifacts task fails due to the repository refs being similarly undefined. I haven't tried it, but I suspect the publish task would be affected in the same way. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (CASSANDRA-5532) Maven package installation broken by recent build changes
[ https://issues.apache.org/jira/browse/CASSANDRA-5532?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Ellis updated CASSANDRA-5532: -- Reviewer: dbrosius Maven package installation broken by recent build changes - Key: CASSANDRA-5532 URL: https://issues.apache.org/jira/browse/CASSANDRA-5532 Project: Cassandra Issue Type: Bug Components: Tools Affects Versions: 2.0 Reporter: Sam Tunnicliffe Assignee: Sam Tunnicliffe Priority: Minor Attachments: 0001-Remove-antcall-around-m-a-t-retrieve-build.patch CASSANDRA-3818 provides the ability to disable maven ant tests during the build. Part of the change is to refactor the maven-ant-tasks-retrieve-build target to be wrapped by an antcall, with a guard condition to check if m-a-t should be used. The use of antcall causes the target and those targets it depends on to be called in a separate scope to the main build, which unfortunately means that the repository refs and ant macros which get defined there are not available once the antcall is completed. The final effect is that the mvn-install task fails as the install macro is not defined in its scope, and the artifacts task fails due to the repository refs being similarly undefined. I haven't tried it, but I suspect the publish task would be affected in the same way. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
git commit: Removed token range bisection patch by jbellis; reviewed by brandonwilliams for CASSANDRA-5518
Updated Branches: refs/heads/trunk 5db484f60 -> e26b726cc Removed token range bisection patch by jbellis; reviewed by brandonwilliams for CASSANDRA-5518 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e26b726c Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e26b726c Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e26b726c Branch: refs/heads/trunk Commit: e26b726ccbc140f3971ce24371bd555db497e94e Parents: 5db484f Author: Jonathan Ellis jbel...@apache.org Authored: Fri Apr 26 23:54:03 2013 -0500 Committer: Jonathan Ellis jbel...@apache.org Committed: Wed May 1 14:44:33 2013 -0500 -- CHANGES.txt|1 + NEWS.txt |4 + .../org/apache/cassandra/dht/BootStrapper.java | 139 +-- src/java/org/apache/cassandra/gms/Gossiper.java| 27 +-- .../org/apache/cassandra/net/MessagingService.java |3 +- .../apache/cassandra/service/StorageService.java | 45 +- .../cassandra/service/StorageServiceMBean.java |5 - .../org/apache/cassandra/dht/BootStrapperTest.java | 111 8 files changed, 23 insertions(+), 312 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/e26b726c/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index 0774d90..d880c4d 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -1,4 +1,5 @@ 2.0 + * Removed token range bisection (CASSANDRA-5518) * Removed compatibility with pre-1.2.5 sstables and network messages (CASSANDRA-5511) * removed PBSPredictor (CASSANDRA-5455) http://git-wip-us.apache.org/repos/asf/cassandra/blob/e26b726c/NEWS.txt -- diff --git a/NEWS.txt b/NEWS.txt index 1a1425c..1e86cab 100644 --- a/NEWS.txt +++ b/NEWS.txt @@ -20,6 +20,10 @@ Upgrading - Replication and strategy options do not accept unknown options anymore. This was already the case for CQL3 in 1.2 but this is now the case for thrift too. +- auto_bootstrap of a single-token node with no initial_token will + now pick a random token instead of bisecting an existing token 
We recommend upgrading to vnodes; failing that, we + recommend specifying initial_token. - reduce_cache_sizes_at, reduce_cache_capacity_to, and flush_largest_memtables_at options have been removed from cassandra.yaml. - CacheServiceMBean.reduceCacheSizes() has been removed. http://git-wip-us.apache.org/repos/asf/cassandra/blob/e26b726c/src/java/org/apache/cassandra/dht/BootStrapper.java -- diff --git a/src/java/org/apache/cassandra/dht/BootStrapper.java b/src/java/org/apache/cassandra/dht/BootStrapper.java index a1dfce8..f354b08 100644 --- a/src/java/org/apache/cassandra/dht/BootStrapper.java +++ b/src/java/org/apache/cassandra/dht/BootStrapper.java @@ -22,29 +22,21 @@ import java.io.DataOutput; import java.io.IOException; import java.net.InetAddress; import java.util.*; -import java.util.concurrent.TimeUnit; -import java.util.concurrent.locks.Condition; import org.slf4j.Logger; import org.slf4j.LoggerFactory; -import org.apache.cassandra.config.Schema; -import org.apache.cassandra.exceptions.ConfigurationException; import org.apache.cassandra.config.DatabaseDescriptor; +import org.apache.cassandra.config.Schema; import org.apache.cassandra.db.Table; +import org.apache.cassandra.db.TypeSizes; +import org.apache.cassandra.exceptions.ConfigurationException; +import org.apache.cassandra.gms.FailureDetector; +import org.apache.cassandra.io.IVersionedSerializer; import org.apache.cassandra.locator.AbstractReplicationStrategy; import org.apache.cassandra.locator.TokenMetadata; -import org.apache.cassandra.net.IAsyncCallback; -import org.apache.cassandra.net.IVerbHandler; -import org.apache.cassandra.net.MessagingService; import org.apache.cassandra.service.StorageService; import org.apache.cassandra.streaming.OperationType; -import org.apache.cassandra.utils.FBUtilities; -import org.apache.cassandra.utils.SimpleCondition; -import org.apache.cassandra.db.TypeSizes; -import org.apache.cassandra.gms.FailureDetector; -import org.apache.cassandra.io.IVersionedSerializer; 
-import org.apache.cassandra.net.*; public class BootStrapper { @@ -111,8 +103,9 @@ public class BootStrapper int numTokens = DatabaseDescriptor.getNumTokens(); if (numTokens < 1) throw new ConfigurationException("num_tokens must be >= 1"); + if (numTokens == 1) -return Collections.singleton(getBalancedToken(metadata, load)); +logger.warn("Picking random token
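The behavioral change above is simple to state: a single-token node that auto-bootstraps without an initial_token now draws a uniformly random token instead of bisecting an existing node's range. A sketch for a Murmur3-style token space (the range is the Murmur3Partitioner's; the helper itself is illustrative, not BootStrapper's actual code):

```python
import random

# Murmur3Partitioner token range: signed 64-bit integers.
MIN_TOKEN = -2**63
MAX_TOKEN = 2**63 - 1

def pick_bootstrap_token(rng=None):
    """Post-CASSANDRA-5518 behavior sketch: draw a uniform random token
    rather than bisecting the most-loaded existing range."""
    rng = rng or random.Random()
    return rng.randint(MIN_TOKEN, MAX_TOKEN)

token = pick_bootstrap_token(random.Random(42))
assert MIN_TOKEN <= token <= MAX_TOKEN
```

This is why the NEWS.txt entry recommends vnodes (or an explicit initial_token): a single random token gives no balance guarantee on its own.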
[jira] [Reopened] (CASSANDRA-5378) Fat Client: No longer works in 1.2
[ https://issues.apache.org/jira/browse/CASSANDRA-5378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] T Jake Luciani reopened CASSANDRA-5378: --- The fix for the test (5378-v2.txt) breaks the fat client. It can no longer get the schema from the non-fat clients and instead throws: {code} java.lang.NullPointerException at org.apache.cassandra.service.MigrationManager.maybeScheduleSchemaPull(MigrationManager.java:123) at org.apache.cassandra.service.MigrationManager.onAlive(MigrationManager.java:98) at org.apache.cassandra.gms.Gossiper.markAlive(Gossiper.java:773) at org.apache.cassandra.gms.Gossiper.handleMajorStateChange(Gossiper.java:816) at org.apache.cassandra.gms.Gossiper.applyStateLocally(Gossiper.java:901) at org.apache.cassandra.gms.GossipDigestAck2VerbHandler.doVerb(GossipDigestAck2VerbHandler.java:50) at org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:56) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) at java.lang.Thread.run(Thread.java:662) {code} Can we put the change back in or somehow fix this NPE? Fat Client: No longer works in 1.2 -- Key: CASSANDRA-5378 URL: https://issues.apache.org/jira/browse/CASSANDRA-5378 Project: Cassandra Issue Type: Bug Affects Versions: 1.2.0 Reporter: Carl Yeksigian Assignee: Carl Yeksigian Labels: client Fix For: 1.2.4 Attachments: 5378-1.2.txt, 5378.txt, 5378-v2.txt The current client only example doesn't compile. After doing some updates, the fat client still won't work, mainly because the schema is not being pushed to the fat client. I've made changes to the client to support CQL3 commands, to the ServiceManager to wait until a migration has completed before starting the client, and to the MigrationManager to not try to pull schemas from a fat client. -- This message is automatically generated by JIRA. 
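The NPE above fires when maybeScheduleSchemaPull dereferences gossip state that a fat client never populated. A minimal, self-contained sketch of the kind of null-guard that would avoid it; the map-based peer state and the "SCHEMA" key are stand-ins for illustration, not Cassandra's actual classes:

```java
import java.util.HashMap;
import java.util.Map;

public class SchemaPullGuard {
    // Stand-in for a peer's gossip application state; a fat client that
    // never announced a schema simply has no "SCHEMA" entry at all.
    static boolean shouldSchedulePull(Map<String, String> peerState, String localSchemaVersion) {
        String remoteSchema = peerState.get("SCHEMA");
        if (remoteSchema == null)
            return false; // guard: peer (e.g. a fat client) has no schema to pull
        return !remoteSchema.equals(localSchemaVersion);
    }

    public static void main(String[] args) {
        Map<String, String> fatClient = new HashMap<>();  // no SCHEMA entry
        Map<String, String> normalNode = new HashMap<>();
        normalNode.put("SCHEMA", "v2");

        assert !shouldSchedulePull(fatClient, "v1") : "fat client must be skipped, not NPE";
        assert shouldSchedulePull(normalNode, "v1") : "differing versions trigger a pull";
        System.out.println("ok");
    }
}
```

The point is only that the null case must be handled before any schema-version comparison, which is what the stack trace suggests was missing.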
[jira] [Updated] (CASSANDRA-4316) Compaction Throttle too bursty with large rows
[ https://issues.apache.org/jira/browse/CASSANDRA-4316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ryan McGuire updated CASSANDRA-4316: Tester: enigmacurry Compaction Throttle too bursty with large rows -- Key: CASSANDRA-4316 URL: https://issues.apache.org/jira/browse/CASSANDRA-4316 Project: Cassandra Issue Type: Improvement Components: Core Affects Versions: 0.8.0 Reporter: Wayne Lewis Assignee: Jonathan Ellis Fix For: 1.2.5 Attachments: 4316-1.2.txt, 4316-1.2.txt, 4316-1.2-v2.txt, 4316-v3.txt In org.apache.cassandra.db.compaction.CompactionIterable the check for compaction throttling occurs once every 1000 rows. In our workload this is much too large, as we have many large rows (16 - 100 MB). With a 100 MB row, about 100 GB is read (and possibly written) before the compaction throttle sleeps. This causes bursts of essentially unthrottled compaction IO followed by a long sleep, which yields inconsistent performance and high error rates during the bursts. We applied a workaround to check the throttle every row, which solved our performance and error issues: line 116 in org.apache.cassandra.db.compaction.CompactionIterable: if ((row++ % 1000) == 0) replaced with if ((row++ % 1) == 0) I think the better solution is to calculate how often the throttle should be checked based on the throttle rate, so sleeps are applied more consistently. E.g. if 16 MB/sec is the limit, then check for sleep after every 16 MB is read, so sleeps are spaced out about every second.
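The suggestion in the last paragraph — derive the check interval from the throttle rate instead of a fixed 1000 rows — can be sketched as below. The method name and the mean-row-size input are illustrative, not the patch that was committed:

```java
public class ThrottleInterval {
    /**
     * How many rows to process between throttle checks so that sleeps are
     * spaced roughly one second apart: one second's worth of bytes at the
     * configured rate, divided by the observed mean row size.
     */
    static long rowsBetweenChecks(long throttleBytesPerSec, long meanRowSizeBytes) {
        if (throttleBytesPerSec <= 0 || meanRowSizeBytes <= 0)
            return 1000; // throttling disabled or no size data yet: keep the old default
        return Math.max(1, throttleBytesPerSec / meanRowSizeBytes);
    }

    public static void main(String[] args) {
        long mb = 1024L * 1024;
        // 16 MB/s limit with 100 MB rows: check every row, as in the workaround
        assert rowsBetweenChecks(16 * mb, 100 * mb) == 1;
        // 16 MB/s limit with 1 KB rows: checking every ~16384 rows still sleeps about once a second
        assert rowsBetweenChecks(16 * mb, 1024) == 16384;
        System.out.println("ok");
    }
}
```

This naturally degenerates to per-row checks for the large-row workload described above, while keeping the check cheap for narrow-row workloads.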
[jira] [Updated] (CASSANDRA-5506) Reduce memory consumption of IndexSummary
[ https://issues.apache.org/jira/browse/CASSANDRA-5506?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ryan McGuire updated CASSANDRA-5506: Tester: enigmacurry Reduce memory consumption of IndexSummary - Key: CASSANDRA-5506 URL: https://issues.apache.org/jira/browse/CASSANDRA-5506 Project: Cassandra Issue Type: Improvement Components: Core Reporter: Nick Puz Assignee: Jonathan Ellis Fix For: 1.2.5 I am evaluating Cassandra for a use case with many tiny rows, which would result in a node with 1-3TB of storage having billions of rows. Before loading that much data I am hitting GC issues, and when looking at the heap dump I noticed that 70+% of the memory was used by IndexSummaries. The two major issues seem to be: 1) the positions are stored as an ArrayList<Long>, which results in each position taking 24 bytes (class + flags + 8-byte long). This might make sense when the file is initially written, but once it has been serialized it would be a lot more memory efficient to just have a long[] (really an int[] would be fine unless 2GB sstables are allowed). 2) The DecoratedKey for a byte[16] key takes 195 bytes -- this is the overhead of the ByteBuffer in the key and overhead in the token. To somewhat work around the problem I have increased index_sample, but with this many rows that didn't really help; it starts to have diminishing returns. NOTE: This heap dump was from Linux with a 64-bit Oracle VM.
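Point 1 can be made concrete with back-of-envelope arithmetic. The 24-byte boxed-Long figure is from the report above; the extra 4-byte reference slot assumes compressed oops on a 64-bit JVM:

```java
public class IndexPositionsFootprint {
    // ArrayList<Long>: ~24 bytes per boxed Long (object header + long payload,
    // aligned) plus a 4-byte reference slot in the backing Object[].
    static long boxedBytes(long entries)     { return entries * (24 + 4); }

    // long[]: 8 bytes per entry, one contiguous allocation, no per-entry header.
    static long primitiveBytes(long entries) { return entries * 8; }

    public static void main(String[] args) {
        long entries = 1_000_000; // one million index samples
        assert boxedBytes(entries) == 28_000_000L;
        assert primitiveBytes(entries) == 8_000_000L;
        // the boxed layout costs 3.5x the primitive array
        assert boxedBytes(entries) * 2 == primitiveBytes(entries) * 7;
        System.out.println("ok");
    }
}
```

At billions of rows the 3.5x factor is the difference between a summary that fits comfortably in heap and one that dominates it, which matches the heap-dump observation.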
[jira] [Commented] (CASSANDRA-5513) java.lang.ClassCastException: org.apache.cassandra.db.DeletedColumn cannot be cast to java.math.BigInteger
[ https://issues.apache.org/jira/browse/CASSANDRA-5513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13647053#comment-13647053 ] Jouni Kontinen commented on CASSANDRA-5513: --- Cannot reproduce, as it happens randomly. I have identical nodes, but this is only happening on one node. After the first oops/SIGSEGV, I added that env option, but it still crashed again yesterday. This time there was no stack trace in output.log, only SIGSEGV with: # V [libjvm.so+0x3cfadc] Par_MarkFromRootsClosure::scan_oops_in_oop(HeapWord*)+0x21c -- Looks like another oops..
[jira] [Commented] (CASSANDRA-5521) move IndexSummary off heap
[ https://issues.apache.org/jira/browse/CASSANDRA-5521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13647094#comment-13647094 ] Pavel Yaskevich commented on CASSANDRA-5521: if we do so, we would have to change the partitioner interface (decorateKey and getToken), change DecoratedKey to have two key fields, and change at least MurmurHash to accept both BB and M at the same time. In my opinion that's just too many changes just because we don't require JNA, but with the hybrid approach we don't have to do any of that work. Besides, as mentioned, users would want to run with JNA in production anyway.
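The on-heap half of the hybrid proposal — splitting the long[] into pages so no single contiguous allocation is needed for big SSTables — might look like this sketch; the page size and class name are illustrative:

```java
public class PagedLongArray {
    private static final int PAGE_SIZE = 4096; // entries per page, hypothetical
    private final long[][] pages;
    private final int size;

    PagedLongArray(int size) {
        this.size = size;
        int pageCount = (size + PAGE_SIZE - 1) / PAGE_SIZE;
        pages = new long[pageCount][];
        for (int p = 0; p < pageCount; p++) {
            int remaining = size - p * PAGE_SIZE;
            // each page is a small allocation, friendlier to a fragmented heap
            pages[p] = new long[Math.min(PAGE_SIZE, remaining)];
        }
    }

    long get(int i)         { return pages[i / PAGE_SIZE][i % PAGE_SIZE]; }
    void set(int i, long v) { pages[i / PAGE_SIZE][i % PAGE_SIZE] = v; }
    int size()              { return size; }

    public static void main(String[] args) {
        PagedLongArray positions = new PagedLongArray(10_000); // spans 3 pages
        positions.set(0, 42);
        positions.set(9_999, 7); // last entry, in the final partial page
        assert positions.get(0) == 42;
        assert positions.get(9_999) == 7;
        System.out.println("ok");
    }
}
```

Lookup cost is one extra array dereference per access, in exchange for never asking the GC for a multi-hundred-megabyte contiguous block.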
[jira] [Commented] (CASSANDRA-5532) Maven package installation broken by recent build changes
[ https://issues.apache.org/jira/browse/CASSANDRA-5532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13647137#comment-13647137 ] Dave Brosius commented on CASSANDRA-5532: - +1 Maven package installation broken by recent build changes - Key: CASSANDRA-5532 URL: https://issues.apache.org/jira/browse/CASSANDRA-5532 Project: Cassandra Issue Type: Bug Components: Tools Affects Versions: 2.0 Reporter: Sam Tunnicliffe Assignee: Sam Tunnicliffe Priority: Minor Attachments: 0001-Remove-antcall-around-m-a-t-retrieve-build.patch CASSANDRA-3818 provides the ability to disable maven ant tests during the build. Part of the change is to refactor the maven-ant-tasks-retrieve-build target to be wrapped by an antcall, with a guard condition to check if m-a-t should be used. The use of antcall causes the target and those targets it depends on to be called in a separate scope from the main build, which unfortunately means that the repository refs and ant macros which get defined there are not available once the antcall is completed. The final effect is that the mvn-install task fails as the install macro is not defined in its scope, and the artifacts task fails due to the repository refs being similarly undefined. I haven't tried it, but I suspect the publish task would be affected in the same way.
[5/5] git commit: Merge branch 'cassandra-1.2' into trunk
Merge branch 'cassandra-1.2' into trunk Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/25438e6e Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/25438e6e Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/25438e6e Branch: refs/heads/trunk Commit: 25438e6ec38eceb56e7a32529913d85593edcdd2 Parents: e26b726 72e031e Author: Brandon Williams brandonwilli...@apache.org Authored: Wed May 1 20:11:34 2013 -0500 Committer: Brandon Williams brandonwilli...@apache.org Committed: Wed May 1 20:11:34 2013 -0500 -- .../apache/cassandra/dht/Murmur3Partitioner.java |2 +- .../apache/cassandra/dht/RandomPartitioner.java|2 +- 2 files changed, 2 insertions(+), 2 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/25438e6e/src/java/org/apache/cassandra/dht/Murmur3Partitioner.java -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/25438e6e/src/java/org/apache/cassandra/dht/RandomPartitioner.java --
[3/5] git commit: Merge branch 'cassandra-1.1' into cassandra-1.2
Merge branch 'cassandra-1.1' into cassandra-1.2 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/37a0d324 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/37a0d324 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/37a0d324 Branch: refs/heads/trunk Commit: 37a0d3241c40cc885266d2f888146ea4129beefa Parents: 927c4a4 de212e5 Author: Brandon Williams brandonwilli...@apache.org Authored: Wed May 1 20:09:42 2013 -0500 Committer: Brandon Williams brandonwilli...@apache.org Committed: Wed May 1 20:09:42 2013 -0500 -- .../apache/cassandra/dht/RandomPartitioner.java|2 +- 1 files changed, 1 insertions(+), 1 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/37a0d324/src/java/org/apache/cassandra/dht/RandomPartitioner.java --
[2/5] git commit: Give users a clue how they called it.
Give users a clue how they called it. Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/de212e59 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/de212e59 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/de212e59 Branch: refs/heads/trunk Commit: de212e59a60cb173e44555cf2bd292779e27a4c0 Parents: b6730aa Author: Brandon Williams brandonwilli...@apache.org Authored: Wed May 1 20:09:21 2013 -0500 Committer: Brandon Williams brandonwilli...@apache.org Committed: Wed May 1 20:09:21 2013 -0500 -- .../apache/cassandra/dht/RandomPartitioner.java|2 +- 1 files changed, 1 insertions(+), 1 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/de212e59/src/java/org/apache/cassandra/dht/RandomPartitioner.java -- diff --git a/src/java/org/apache/cassandra/dht/RandomPartitioner.java b/src/java/org/apache/cassandra/dht/RandomPartitioner.java index 15364d3..7e29a7d 100644 --- a/src/java/org/apache/cassandra/dht/RandomPartitioner.java +++ b/src/java/org/apache/cassandra/dht/RandomPartitioner.java @@ -160,7 +160,7 @@ public class RandomPartitioner extends AbstractPartitioner<BigIntegerToken> Iterator i = sortedTokens.iterator(); // 0-case -if (!i.hasNext()) { throw new RuntimeException("No nodes present in the cluster. How did you call this?"); } +if (!i.hasNext()) { throw new RuntimeException("No nodes present in the cluster. Has this node finished starting up?"); } // 1-case if (sortedTokens.size() == 1) { ownerships.put((Token)i.next(), new Float(1.0));
[1/5] git commit: Give users a clue how they called it.
Updated Branches: refs/heads/cassandra-1.1 b6730aaa5 -> de212e59a refs/heads/trunk e26b726cc -> 25438e6ec Give users a clue how they called it. Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/de212e59 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/de212e59 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/de212e59 Branch: refs/heads/cassandra-1.1 Commit: de212e59a60cb173e44555cf2bd292779e27a4c0 Parents: b6730aa Author: Brandon Williams brandonwilli...@apache.org Authored: Wed May 1 20:09:21 2013 -0500 Committer: Brandon Williams brandonwilli...@apache.org Committed: Wed May 1 20:09:21 2013 -0500 -- .../apache/cassandra/dht/RandomPartitioner.java|2 +- 1 files changed, 1 insertions(+), 1 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/de212e59/src/java/org/apache/cassandra/dht/RandomPartitioner.java -- diff --git a/src/java/org/apache/cassandra/dht/RandomPartitioner.java b/src/java/org/apache/cassandra/dht/RandomPartitioner.java index 15364d3..7e29a7d 100644 --- a/src/java/org/apache/cassandra/dht/RandomPartitioner.java +++ b/src/java/org/apache/cassandra/dht/RandomPartitioner.java @@ -160,7 +160,7 @@ public class RandomPartitioner extends AbstractPartitioner<BigIntegerToken> Iterator i = sortedTokens.iterator(); // 0-case -if (!i.hasNext()) { throw new RuntimeException("No nodes present in the cluster. How did you call this?"); } +if (!i.hasNext()) { throw new RuntimeException("No nodes present in the cluster. Has this node finished starting up?"); } // 1-case if (sortedTokens.size() == 1) { ownerships.put((Token)i.next(), new Float(1.0));
[4/5] git commit: Give users a clue how they called it.
Give users a clue how they called it. Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/72e031e6 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/72e031e6 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/72e031e6 Branch: refs/heads/trunk Commit: 72e031e6e81da79896f44450e0e683a3439ad1ae Parents: 37a0d32 Author: Brandon Williams brandonwilli...@apache.org Authored: Wed May 1 20:11:22 2013 -0500 Committer: Brandon Williams brandonwilli...@apache.org Committed: Wed May 1 20:11:22 2013 -0500 -- .../apache/cassandra/dht/Murmur3Partitioner.java |2 +- 1 files changed, 1 insertions(+), 1 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/72e031e6/src/java/org/apache/cassandra/dht/Murmur3Partitioner.java -- diff --git a/src/java/org/apache/cassandra/dht/Murmur3Partitioner.java b/src/java/org/apache/cassandra/dht/Murmur3Partitioner.java index a969320..502f5cc 100644 --- a/src/java/org/apache/cassandra/dht/Murmur3Partitioner.java +++ b/src/java/org/apache/cassandra/dht/Murmur3Partitioner.java @@ -121,7 +121,7 @@ public class Murmur3Partitioner extends AbstractPartitioner<LongToken> // 0-case if (!i.hasNext()) -throw new RuntimeException("No nodes present in the cluster. How did you call this?"); +throw new RuntimeException("No nodes present in the cluster. Has this node finished starting up?"); // 1-case if (sortedTokens.size() == 1) ownerships.put((Token) i.next(), new Float(1.0));
[1/2] git commit: Give users a clue how they called it.
Updated Branches: refs/heads/cassandra-1.2 1c86fa3c1 -> dedeb0be8 Give users a clue how they called it. Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f82021b1 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f82021b1 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f82021b1 Branch: refs/heads/cassandra-1.2 Commit: f82021b1b977d8eb7f9ca794cf05c9c38fddd9a2 Parents: 1c86fa3 Author: Brandon Williams brandonwilli...@apache.org Authored: Wed May 1 20:09:21 2013 -0500 Committer: Brandon Williams brandonwilli...@apache.org Committed: Wed May 1 20:11:55 2013 -0500 -- .../apache/cassandra/dht/RandomPartitioner.java|2 +- 1 files changed, 1 insertions(+), 1 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/f82021b1/src/java/org/apache/cassandra/dht/RandomPartitioner.java -- diff --git a/src/java/org/apache/cassandra/dht/RandomPartitioner.java b/src/java/org/apache/cassandra/dht/RandomPartitioner.java index 26b399e..f99bcc0 100644 --- a/src/java/org/apache/cassandra/dht/RandomPartitioner.java +++ b/src/java/org/apache/cassandra/dht/RandomPartitioner.java @@ -161,7 +161,7 @@ public class RandomPartitioner extends AbstractPartitioner<BigIntegerToken> Iterator i = sortedTokens.iterator(); // 0-case -if (!i.hasNext()) { throw new RuntimeException("No nodes present in the cluster. How did you call this?"); } +if (!i.hasNext()) { throw new RuntimeException("No nodes present in the cluster. Has this node finished starting up?"); } // 1-case if (sortedTokens.size() == 1) { ownerships.put((Token)i.next(), new Float(1.0));
[2/2] git commit: Give users a clue how they called it.
Give users a clue how they called it. Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/dedeb0be Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/dedeb0be Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/dedeb0be Branch: refs/heads/cassandra-1.2 Commit: dedeb0be8a8173a045e976b9878ff72110242f00 Parents: f82021b Author: Brandon Williams brandonwilli...@apache.org Authored: Wed May 1 20:11:22 2013 -0500 Committer: Brandon Williams brandonwilli...@apache.org Committed: Wed May 1 20:11:56 2013 -0500 -- .../apache/cassandra/dht/Murmur3Partitioner.java |2 +- 1 files changed, 1 insertions(+), 1 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/dedeb0be/src/java/org/apache/cassandra/dht/Murmur3Partitioner.java -- diff --git a/src/java/org/apache/cassandra/dht/Murmur3Partitioner.java b/src/java/org/apache/cassandra/dht/Murmur3Partitioner.java index a969320..502f5cc 100644 --- a/src/java/org/apache/cassandra/dht/Murmur3Partitioner.java +++ b/src/java/org/apache/cassandra/dht/Murmur3Partitioner.java @@ -121,7 +121,7 @@ public class Murmur3Partitioner extends AbstractPartitioner<LongToken> // 0-case if (!i.hasNext()) -throw new RuntimeException("No nodes present in the cluster. How did you call this?"); +throw new RuntimeException("No nodes present in the cluster. Has this node finished starting up?"); // 1-case if (sortedTokens.size() == 1) ownerships.put((Token) i.next(), new Float(1.0));
git commit: SchemaLoader.oldCfIdGenerator no longer used - removed
Updated Branches: refs/heads/trunk 25438e6ec - 3f394828f SchemaLoader.oldCfIdGenerator no longer used - removed Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3f394828 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3f394828 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3f394828 Branch: refs/heads/trunk Commit: 3f394828f9c6cca20d8e376b9465fa2069efd542 Parents: 25438e6 Author: Dave Brosius dbros...@apache.org Authored: Wed May 1 21:03:33 2013 -0400 Committer: Dave Brosius dbros...@apache.org Committed: Wed May 1 21:13:00 2013 -0400 -- test/unit/org/apache/cassandra/SchemaLoader.java |3 --- 1 files changed, 0 insertions(+), 3 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/3f394828/test/unit/org/apache/cassandra/SchemaLoader.java -- diff --git a/test/unit/org/apache/cassandra/SchemaLoader.java b/test/unit/org/apache/cassandra/SchemaLoader.java index 945d8e8..338abb9 100644 --- a/test/unit/org/apache/cassandra/SchemaLoader.java +++ b/test/unit/org/apache/cassandra/SchemaLoader.java @@ -21,7 +21,6 @@ import java.io.File; import java.io.IOException; import java.nio.ByteBuffer; import java.util.*; -import java.util.concurrent.atomic.AtomicInteger; import com.google.common.base.Charsets; import org.apache.cassandra.db.index.PerRowSecondaryIndexTest; @@ -52,8 +51,6 @@ public class SchemaLoader { private static Logger logger = LoggerFactory.getLogger(SchemaLoader.class); -private static AtomicInteger oldCfIdGenerator = new AtomicInteger(1000); - @BeforeClass public static void loadSchema() throws IOException {
[jira] [Commented] (CASSANDRA-5513) java.lang.ClassCastException: org.apache.cassandra.db.DeletedColumn cannot be cast to java.math.BigInteger
[ https://issues.apache.org/jira/browse/CASSANDRA-5513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13647161#comment-13647161 ] Brandon Williams commented on CASSANDRA-5513: - Your java6 is at least 5 revs behind, try the newest one.
git commit: minor javadoc fixes
Updated Branches: refs/heads/trunk 3f394828f - a861c53e6 minor javadoc fixes Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a861c53e Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a861c53e Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a861c53e Branch: refs/heads/trunk Commit: a861c53e60402e4c7a5663d387c8f991285b3d98 Parents: 3f39482 Author: Dave Brosius dbros...@apache.org Authored: Wed May 1 21:36:58 2013 -0400 Committer: Dave Brosius dbros...@apache.org Committed: Wed May 1 21:36:58 2013 -0400 -- .../apache/cassandra/cql3/ColumnNameBuilder.java |2 +- .../org/apache/cassandra/db/ColumnFamilyStore.java |2 -- .../org/apache/cassandra/hadoop/ConfigHelper.java |4 ++-- 3 files changed, 3 insertions(+), 5 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/a861c53e/src/java/org/apache/cassandra/cql3/ColumnNameBuilder.java -- diff --git a/src/java/org/apache/cassandra/cql3/ColumnNameBuilder.java b/src/java/org/apache/cassandra/cql3/ColumnNameBuilder.java index b40efbb..b6625ab 100644 --- a/src/java/org/apache/cassandra/cql3/ColumnNameBuilder.java +++ b/src/java/org/apache/cassandra/cql3/ColumnNameBuilder.java @@ -34,7 +34,7 @@ public interface ColumnNameBuilder /** * Add a new ByteBuffer as the next component for this name. - * @param bb the ByteBuffer to add + * @param t the ByteBuffer to add * @param op the relationship this component should respect. * @throws IllegalStateException if the builder if full, i.e. if enough component has been added. 
* @return this builder http://git-wip-us.apache.org/repos/asf/cassandra/blob/a861c53e/src/java/org/apache/cassandra/db/ColumnFamilyStore.java -- diff --git a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java index 8903a46..b07d97c 100644 --- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java +++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java @@ -1778,8 +1778,6 @@ public class ColumnFamilyStore implements ColumnFamilyStoreMBean /** * Truncate deletes the entire column family's data with no expensive tombstone creation - * @return a Future to the delete operation. Call the future's get() to make - * sure the column family has been deleted */ public void truncateBlocking() { http://git-wip-us.apache.org/repos/asf/cassandra/blob/a861c53e/src/java/org/apache/cassandra/hadoop/ConfigHelper.java -- diff --git a/src/java/org/apache/cassandra/hadoop/ConfigHelper.java b/src/java/org/apache/cassandra/hadoop/ConfigHelper.java index 023c840..9d8b47e 100644 --- a/src/java/org/apache/cassandra/hadoop/ConfigHelper.java +++ b/src/java/org/apache/cassandra/hadoop/ConfigHelper.java @@ -496,7 +496,7 @@ public class ConfigHelper /** * @param conf The configuration to use. - * @return Value (converts MBs to Bytes) set by {@link setThriftFramedTransportSizeInMb(Configuration, int)} or default of 15MB + * @return Value (converts MBs to Bytes) set by {@link #setThriftFramedTransportSizeInMb(Configuration, int)} or default of 15MB */ public static int getThriftFramedTransportSize(Configuration conf) { @@ -510,7 +510,7 @@ public class ConfigHelper /** * @param conf The configuration to use. - * @return Value (converts MBs to Bytes) set by {@link setThriftMaxMessageLengthInMb(Configuration, int)} or default of 16MB + * @return Value (converts MBs to Bytes) set by {@link #setThriftMaxMessageLengthInMb(Configuration, int)} or default of 16MB */ public static int getThriftMaxMessageLength(Configuration conf) {
[jira] [Commented] (CASSANDRA-5521) move IndexSummary off heap
[ https://issues.apache.org/jira/browse/CASSANDRA-5521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13647188#comment-13647188 ] Jonathan Ellis commented on CASSANDRA-5521: --- Just add an IPartitioner.compareToken method that does what we need, then. Much better than creating extra objects that we don't care about. I like where we are now, with JNA being mostly optional (more so in 2.0, where we require Java 7, so we don't need JNA for snapshot). Remember, we don't support JNA at all on Windows. I'd rather use less JNA than more.
[jira] [Commented] (CASSANDRA-5521) move IndexSummary off heap
[ https://issues.apache.org/jira/browse/CASSANDRA-5521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13647197#comment-13647197 ]

Pavel Yaskevich commented on CASSANDRA-5521:
-------------------------------------------

But don't you still have to generate the token from Memory and make changes to getToken(BB) and the underlying methods, or is that the proposed interface for compareToken?

bq. I like where we are now, with JNA being mostly optional (more so in 2.0 where we require Java7, so we don't need JNA for snapshot). Remember, we don't support JNA at all on Windows. I'd rather use less JNA than more.

Just to clarify: the only thing we need from JNA in this case is Pointer.getByteBuffer(), which is actually a JNI method that Unsafe unfortunately doesn't provide. I agree, though, that we should rely less on JNA, but we still pay the memory price even when off-heap; so if in the non-JNA case we at least make the summary GC/allocation friendly, that's still a good improvement overall with almost no code changes.
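The allocation concern above (a byte[] per getKey(int) call during binary search) can be illustrated with a JDK-only sketch. This is not Cassandra's actual IndexSummary and the names are invented for illustration: keys are packed into one direct (off-heap) buffer with an int offset table, and the search compares via absolute gets so no per-key byte[] or container is created.

```java
import java.nio.ByteBuffer;

// Illustrative sketch only (not Cassandra's IndexSummary): summary keys packed
// into a single direct (off-heap) buffer plus an offset table, searched with
// absolute gets so no byte[] is allocated per comparison.
class OffHeapSummarySketch
{
    private final ByteBuffer keys;   // all keys, back to back, off-heap
    private final int[] offsets;     // offsets[i] = start of key i; offsets[n] = total bytes

    OffHeapSummarySketch(byte[][] sortedKeys)
    {
        offsets = new int[sortedKeys.length + 1];
        for (int i = 0; i < sortedKeys.length; i++)
            offsets[i + 1] = offsets[i] + sortedKeys[i].length;
        keys = ByteBuffer.allocateDirect(offsets[sortedKeys.length]);
        for (byte[] key : sortedKeys)
            keys.put(key);
    }

    // Unsigned lexicographic compare of key i against target, copy-free.
    private int compare(int i, byte[] target)
    {
        int len = offsets[i + 1] - offsets[i];
        int n = Math.min(len, target.length);
        for (int j = 0; j < n; j++)
        {
            int a = keys.get(offsets[i] + j) & 0xFF;  // absolute get: no position churn
            int b = target[j] & 0xFF;
            if (a != b)
                return a - b;
        }
        return len - target.length;
    }

    // Same contract as Arrays.binarySearch: index if found, else -(insertion point) - 1.
    int binarySearch(byte[] target)
    {
        int low = 0, high = offsets.length - 2;
        while (low <= high)
        {
            int mid = (low + high) >>> 1;
            int cmp = compare(mid, target);
            if (cmp < 0) low = mid + 1;
            else if (cmp > 0) high = mid - 1;
            else return mid;
        }
        return -(low + 1);
    }
}
```

The same layout would work with JNA's Memory in place of the direct ByteBuffer; the point is that comparison needs only absolute reads, not per-key materialization.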
[jira] [Commented] (CASSANDRA-5521) move IndexSummary off heap
[ https://issues.apache.org/jira/browse/CASSANDRA-5521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13647215#comment-13647215 ]

Jonathan Ellis commented on CASSANDRA-5521:
-------------------------------------------

Thinking about it, {{getToken(Memory, offset, length)}} is probably the right thing to add to IPartitioner. The rest can live in IndexSummary. That doesn't sound like a huge burden to me. And as I said, creating no Buffer (or DK) objects is better than creating native ones. :)
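A hedged sketch of the shape being proposed: Memory is a Cassandra/JNA-internal type, so a direct ByteBuffer with absolute reads stands in for it here, and FNV-1a stands in for the partitioner's real hash. Every name below is an illustrative assumption, not the actual IPartitioner API; only the (memory, offset, length) signature is the point.

```java
import java.nio.ByteBuffer;

// Hypothetical shape of the proposed hook (illustrative, not the real
// IPartitioner API): compute a token straight from off-heap storage, creating
// no ByteBuffer or DecoratedKey container per lookup.
interface OffHeapTokenFactory
{
    long getToken(ByteBuffer memory, int offset, int length);
}

class Fnv1aTokenFactory implements OffHeapTokenFactory
{
    public long getToken(ByteBuffer memory, int offset, int length)
    {
        long hash = 0xcbf29ce484222325L;           // FNV-1a 64-bit offset basis
        for (int i = 0; i < length; i++)
        {
            hash ^= memory.get(offset + i) & 0xFF; // absolute get: no objects created
            hash *= 0x100000001b3L;                // FNV-1a 64-bit prime
        }
        return hash;
    }
}
```

The resulting Token can then be a regular on-heap object, as noted in the comment above; only the key bytes it was derived from stay off-heap.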
[jira] [Commented] (CASSANDRA-5521) move IndexSummary off heap
[ https://issues.apache.org/jira/browse/CASSANDRA-5521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13647226#comment-13647226 ]

Pavel Yaskevich commented on CASSANDRA-5521:
-------------------------------------------

So tokens that keep a bb around right now would have to keep Memory plus offset and size references? I'm not against this, just trying to clarify for myself whether we would rather keep some kind of ROBuffer container for DK and Token to unify the interface...
[jira] [Commented] (CASSANDRA-5521) move IndexSummary off heap
[ https://issues.apache.org/jira/browse/CASSANDRA-5521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13647236#comment-13647236 ]

Jonathan Ellis commented on CASSANDRA-5521:
-------------------------------------------

No, the Token wouldn't share any bytes with the key (with the exception of BOP, and I don't care about optimizing for that case), so there's no reason not to create a regular, on-heap Token.
git commit: fix mvn-install target patch by stunnicliffe reviewed by dbrosius for CASSANDRA-5532
Updated Branches: refs/heads/trunk a861c53e6 -> 2bc79a074

fix mvn-install target

patch by stunnicliffe reviewed by dbrosius for CASSANDRA-5532

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/2bc79a07
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/2bc79a07
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/2bc79a07

Branch: refs/heads/trunk
Commit: 2bc79a07474e48d57d9c17d2e597048006ff7bf2
Parents: a861c53
Author: Dave Brosius dbros...@apache.org
Authored: Wed May 1 23:48:12 2013 -0400
Committer: Dave Brosius dbros...@apache.org
Committed: Wed May 1 23:48:12 2013 -0400
--
 build.xml | 6 +-
 1 files changed, 1 insertions(+), 5 deletions(-)
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/2bc79a07/build.xml
--
diff --git a/build.xml b/build.xml
index ec2e90a..207ba1a 100644
--- a/build.xml
+++ b/build.xml
@@ -523,7 +523,7 @@
     /artifact:pom
   /target

-  target name="_maven-ant-tasks-retrieve-build" depends="maven-declare-dependencies"
+  target name="maven-ant-tasks-retrieve-build" depends="maven-declare-dependencies" unless="without.maven"
     artifact:dependencies pomRefId="build-deps-pom"
       filesetId="build-dependency-jars"
       sourcesFilesetId="build-dependency-sources"
@@ -548,10 +548,6 @@
     /copy
   /target

-  target name="maven-ant-tasks-retrieve-build" unless="without.maven"
-    antcall target="_maven-ant-tasks-retrieve-build" /
-  /target
-
   target name="maven-ant-tasks-retrieve-test" depends="maven-ant-tasks-init"
     artifact:dependencies pomRefId="test-deps-pom" filesetId="test-dependency-jars"
[jira] [Commented] (CASSANDRA-5521) move IndexSummary off heap
[ https://issues.apache.org/jira/browse/CASSANDRA-5521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13647264#comment-13647264 ]

Vijay commented on CASSANDRA-5521:
----------------------------------

Honestly, glad to see the thread going through the same thinking process I went through. Changing the Partitioner is a bigger change... before we go there, I wonder whether this optimization is going to help us? A BB is not cheap, but it is going to be good garbage which will live and die in the young generation. I can think of 2 other options...

1) We can serialize and deserialize the Token in IndexSummary for RP (for BOP we can serialize the key/byte[]) so we can compare incrementally too (taking the hit during flush).
2) We can use a memory-mapped file instead and get a ByteBuffer (this could work in our favor: for a new SSTable that is never queried there is zero overhead in memory). :)
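Option 2 above can be sketched in a few lines of plain JDK code; the class and method names here are illustrative, not anything from the Cassandra codebase. A READ_ONLY mapping lets the OS page the summary in on demand, so a mapping for an SSTable that is never queried keeps almost nothing resident.

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;

// Sketch of option 2 (illustrative names): memory-map a summary file read-only
// so the OS pages it in lazily instead of keeping the summary on the heap.
class MappedSummarySketch
{
    static MappedByteBuffer map(Path file) throws IOException
    {
        try (RandomAccessFile raf = new RandomAccessFile(file.toFile(), "r");
             FileChannel channel = raf.getChannel())
        {
            // A READ_ONLY mapping remains valid after the channel is closed.
            return channel.map(FileChannel.MapMode.READ_ONLY, 0, channel.size());
        }
    }
}
```

The trade-off is that a cold lookup now costs a page fault instead of a heap read, which is why this only clearly wins for summaries that are rarely touched.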
[jira] [Comment Edited] (CASSANDRA-5521) move IndexSummary off heap
[ https://issues.apache.org/jira/browse/CASSANDRA-5521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13647264#comment-13647264 ]

Vijay edited comment on CASSANDRA-5521 at 5/2/13 4:09 AM:
----------------------------------------------------------

Honestly, glad to see the thread going through the same thinking process I went through. Changing the Partitioner is a bigger change... but before we go there, I wonder whether this optimization is going to help us? A BB is not cheap, but it is going to be good garbage which will live and die in the young generation. I can think of 2 other options...

1) We can serialize and deserialize the Token in IndexSummary (we still need an additional function to serialize and deserialize from Memory; for BOP we can serialize the key/byte[], which also removes the token-calculation overhead) so we can also try to compare incrementally.
2) We can use a memory-mapped file instead and get a ByteBuffer (this could work in our favor: for a new SSTable that is never queried there is zero overhead in memory). :)
[jira] [Commented] (CASSANDRA-5521) move IndexSummary off heap
[ https://issues.apache.org/jira/browse/CASSANDRA-5521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13647289#comment-13647289 ]

Pavel Yaskevich commented on CASSANDRA-5521:
-------------------------------------------

bq. For BB is not cheap, but it is going to be good garbage which will live and die in young generation.

Indeed, those are just containers, so the actual data is not copied and those buffers are pretty GC friendly, as you mentioned. But I think the option of using Memory with the token is OK if we can encapsulate it properly.
[jira] [Updated] (CASSANDRA-5525) Adding nodes to 1.2 cluster w/ vnodes streamed more data than average node load
[ https://issues.apache.org/jira/browse/CASSANDRA-5525?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

John Watson updated CASSANDRA-5525:
-----------------------------------
Attachment: cass-ring.txt

Adding nodes to 1.2 cluster w/ vnodes streamed more data than average node load
--
Key: CASSANDRA-5525
URL: https://issues.apache.org/jira/browse/CASSANDRA-5525
Project: Cassandra
Issue Type: Bug
Reporter: John Watson
Attachments: cass-ring.txt, Screen Shot 2013-04-25 at 12.35.24 PM.png

12-node cluster upgraded from 1.1.9 to 1.2.3, enabled 'num_tokens: 256', restarted, and ran upgradesstables and cleanup. Tried to join 2 additional nodes into the ring; however, 1 of the new nodes ran out of disk space. This started causing 'no host id' alerts in the live cluster when attempting to store hints for that node.

{noformat}
ERROR 10:12:02,408 Exception in thread Thread[MutationStage:190,5,main]
java.lang.AssertionError: Missing host ID
{noformat}

I killed the other node to stop it from continuing to join, since the live cluster was now in some sort of broken state, dropping mutation messages on 3 nodes. This was fixed by restarting them; however, 1 node never stopped, so I had to decommission it (leaving the original cluster at 11 nodes).
Ring pre-join:

{noformat}
Load       Tokens  Owns (effective)  Host ID
147.55 GB  256     16.7%             754f9f4c-4ba7-4495-97e7-1f5b6755cb27
124.99 GB  256     16.7%             93f4400a-09d9-4ca0-b6a6-9bcca2427450
136.63 GB  256     16.7%             ff821e8e-b2ca-48a9-ac3f-8234b16329ce
141.78 GB  253     100.0%            339c474f-cf19-4ada-9a47-8b10912d5eb3
137.74 GB  256     16.7%             6d726cbf-147d-426e-a735-e14928c95e45
135.9 GB   256     16.7%             e59a02b3-8b91-4abd-990e-b3cb2a494950
165.96 GB  256     16.7%             83ca527c-60c5-4ea0-89a8-de53b92b99c8
135.41 GB  256     16.7%             c3ea4026-551b-4a14-a346-480e8c1fe283
143.38 GB  256     16.7%             df7ba879-74ad-400b-b371-91b45dcbed37
178.05 GB  256     25.0%             78192d73-be0b-4d49-a129-9bec0770efed
194.92 GB  256     25.0%             361d7e31-b155-4ce1-8890-451b3ddf46cf
150.5 GB   256     16.7%             9889280a-1433-439e-bb84-6b7e7f44d761
{noformat}

Ring after decomm of bad node:

{noformat}
Load       Tokens  Owns (effective)  Host ID
80.95 GB   256     16.7%             754f9f4c-4ba7-4495-97e7-1f5b6755cb27
87.15 GB   256     16.7%             93f4400a-09d9-4ca0-b6a6-9bcca2427450
98.16 GB   256     16.7%             ff821e8e-b2ca-48a9-ac3f-8234b16329ce
142.6 GB   253     100.0%            339c474f-cf19-4ada-9a47-8b10912d5eb3
77.64 GB   256     16.7%             e59a02b3-8b91-4abd-990e-b3cb2a494950
194.31 GB  256     25.0%             6d726cbf-147d-426e-a735-e14928c95e45
221.94 GB  256     33.3%             83ca527c-60c5-4ea0-89a8-de53b92b99c8
87.61 GB   256     16.7%             c3ea4026-551b-4a14-a346-480e8c1fe283
101.02 GB  256     16.7%             df7ba879-74ad-400b-b371-91b45dcbed37
172.44 GB  256     25.0%             78192d73-be0b-4d49-a129-9bec0770efed
108.5 GB   256     16.7%             9889280a-1433-439e-bb84-6b7e7f44d761
{noformat}
[jira] [Commented] (CASSANDRA-5525) Adding nodes to 1.2 cluster w/ vnodes streamed more data than average node load
[ https://issues.apache.org/jira/browse/CASSANDRA-5525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13647326#comment-13647326 ]

John Watson commented on CASSANDRA-5525:
----------------------------------------

Attached the output. We use RF=3