[jira] [Updated] (CASSANDRA-2882) describe_ring should include datacenter/topology information
[ https://issues.apache.org/jira/browse/CASSANDRA-2882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Mark Guzman updated CASSANDRA-2882:
-----------------------------------
    Attachment: 0001-adding-an-additional-parameter-to-the-TokenRange-res.patch

v1 patch adding datacenter info and port where possible. Needs some cleanup and review.

> describe_ring should include datacenter/topology information
> ------------------------------------------------------------
>
>                 Key: CASSANDRA-2882
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-2882
>             Project: Cassandra
>          Issue Type: Improvement
>          Components: API, Core
>            Reporter: Mark Guzman
>            Assignee: Pavel Yaskevich
>            Priority: Minor
>              Labels: lhf
>             Fix For: 1.0
>
>         Attachments: 0001-adding-an-additional-parameter-to-the-TokenRange-res.patch
>
>
> describe_ring is great for getting a list of nodes in the cluster, but it
> doesn't provide any information about the network topology, which prevents
> its use in a multi-DC setup. It would be nice if we added another list to
> the TokenRange object containing the DC information.
> Optimally, I could ask any Cassandra node for this information and, on the
> client side, prefer local nodes but be able to fail over to remote nodes
> without requiring another lookup.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
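The request above is essentially to carry topology metadata alongside each token range so a client never needs a second lookup. A minimal client-side sketch of that idea, under the assumption of one datacenter entry per endpoint (all names here are illustrative, not the actual Thrift change in the attached patch):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: a token range that carries a datacenter entry per
// endpoint, letting clients prefer local replicas with remote fallback.
class DcAwareTokenRange {
    final String startToken;
    final String endToken;
    final List<String> endpoints;
    final List<String> datacenters; // parallel to endpoints

    DcAwareTokenRange(String startToken, String endToken,
                      List<String> endpoints, List<String> datacenters) {
        if (endpoints.size() != datacenters.size())
            throw new IllegalArgumentException("need one datacenter entry per endpoint");
        this.startToken = startToken;
        this.endToken = endToken;
        this.endpoints = endpoints;
        this.datacenters = datacenters;
    }

    /** Client-side use: local-DC replicas first, remote ones kept as fallback. */
    List<String> endpointsPreferring(String localDc) {
        List<String> local = new ArrayList<>(), remote = new ArrayList<>();
        for (int i = 0; i < endpoints.size(); i++)
            (datacenters.get(i).equals(localDc) ? local : remote).add(endpoints.get(i));
        local.addAll(remote);
        return local;
    }
}
```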
[jira] [Commented] (CASSANDRA-2388) ColumnFamilyRecordReader fails for a given split because a host is down, even if records could reasonably be read from other replica.
[ https://issues.apache.org/jira/browse/CASSANDRA-2388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13100093#comment-13100093 ]

Mck SembWever commented on CASSANDRA-2388:
------------------------------------------

In the meantime, could we make this behavior configurable? E.g. replace CFRR:176 with something like:

{noformat}
if (ConfigHelper.isDataLocalityDisabled())
{
    return split.getLocations()[0];
}
else
{
    throw new UnsupportedOperationException("no local connection available");
}
{noformat}

> ColumnFamilyRecordReader fails for a given split because a host is down, even
> if records could reasonably be read from other replica.
> -----------------------------------------------------------------------------
>
>                 Key: CASSANDRA-2388
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-2388
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Hadoop
>    Affects Versions: 0.6
>            Reporter: Eldon Stegall
>            Assignee: Mck SembWever
>              Labels: hadoop, inputformat
>             Fix For: 0.8.6
>
>         Attachments: 0002_On_TException_try_next_split.patch,
> CASSANDRA-2388-addition1.patch, CASSANDRA-2388-extended.patch,
> CASSANDRA-2388.patch, CASSANDRA-2388.patch, CASSANDRA-2388.patch,
> CASSANDRA-2388.patch
>
>
> ColumnFamilyRecordReader only tries the first location for a given split. We
> should try multiple locations for a given split.
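The proposed switch can be sketched as a standalone method; `ConfigHelper.isDataLocalityDisabled` is the hypothetical flag from the comment above, modeled here as a plain boolean parameter:

```java
// Sketch of the fallback proposed for ColumnFamilyRecordReader (CFRR:176):
// when strict data locality is disabled, accept the split's first listed
// (possibly remote) location instead of failing the whole task.
class LocalityFallback {
    static String pickLocation(boolean dataLocalityDisabled, String[] splitLocations) {
        if (dataLocalityDisabled)
            return splitLocations[0]; // tolerate reading from a remote replica
        throw new UnsupportedOperationException("no local connection available");
    }
}
```

The trade-off is deliberate: with the flag off, a down host still fails the split fast; with it on, the job degrades to remote reads rather than failing.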
[jira] [Commented] (CASSANDRA-2961) Expire dead gossip states based on time
[ https://issues.apache.org/jira/browse/CASSANDRA-2961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13100089#comment-13100089 ]

Jérémy Sevellec commented on CASSANDRA-2961:
--------------------------------------------

OK, I like it. A few points :-) :
- hamcrest: true, in my case I only use hamcrest's "is" in the asserts, but there are a lot of other matchers that make assertions more readable. It was meant to help later on, but I can remove it if you want; tell me which you prefer.
- VersionedValue.getExpireTime: true. Should I put it in the Gossiper, or in a utility class?
- addExpireTimeIfFound: OK, I'll call it once from excise, but I'd like to keep the method to isolate the logic, if that's OK with you.
- DEBUG log: oh, there were some (for my tests), but I removed them before creating the patch... I'll add them back.

> Expire dead gossip states based on time
> ---------------------------------------
>
>                 Key: CASSANDRA-2961
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-2961
>             Project: Cassandra
>          Issue Type: Improvement
>    Affects Versions: 1.0
>            Reporter: Brandon Williams
>            Priority: Minor
>             Fix For: 1.0
>
>         Attachments: trunk-2961-v2.patch, trunk-2961.patch
>
>
> Currently dead states are held until aVeryLongTime, 3 days. The problem is
> that if a node reboots within this period, it begins a new 3 days and will
> repopulate the ring with the dead state. While mostly harmless, perpetuating
> the state forever is at least wasting a small amount of bandwidth. Instead,
> we can expire states based on a ttl, which will require that the cluster be
> loosely time synced; within the quarantine period of 60s.
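The TTL-based expiry described in the ticket can be sketched as per-endpoint expiry timestamps with a periodic purge; the names below are illustrative, not the patch's actual API. The key property is that a rebooting node must not restart the clock:

```java
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

// Sketch of time-based expiry for dead gossip states: each dead endpoint
// carries an absolute expiry timestamp, and a periodic sweep drops entries
// whose time has passed, so a rebooting node cannot resurrect them forever.
class DeadStateExpirer {
    private final Map<String, Long> expireTimes = new HashMap<>();

    void markDead(String endpoint, long nowMillis, long ttlMillis) {
        // Keep the earliest expiry seen: a reboot must not extend the state's life.
        expireTimes.merge(endpoint, nowMillis + ttlMillis, Math::min);
    }

    /** Removes expired entries; returns how many were purged. */
    int purgeExpired(long nowMillis) {
        int purged = 0;
        for (Iterator<Long> it = expireTimes.values().iterator(); it.hasNext();) {
            if (it.next() <= nowMillis) {
                it.remove();
                purged++;
            }
        }
        return purged;
    }

    boolean isTracked(String endpoint) {
        return expireTimes.containsKey(endpoint);
    }
}
```

Because the expiry time is absolute, this only works if cluster clocks agree to within the quarantine period, as the ticket notes.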
[jira] [Commented] (CASSANDRA-3149) Update CQL type names to match expected (SQL) behavior
[ https://issues.apache.org/jira/browse/CASSANDRA-3149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13100042#comment-13100042 ]

Radim Kolar commented on CASSANDRA-3149:
----------------------------------------

What CQL type do you want for the new Int32Type?

> Update CQL type names to match expected (SQL) behavior
> ------------------------------------------------------
>
>                 Key: CASSANDRA-3149
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-3149
>             Project: Cassandra
>          Issue Type: Improvement
>    Affects Versions: 0.8.0
>            Reporter: Jonathan Ellis
>            Assignee: Jonathan Ellis
>            Priority: Minor
>              Labels: cql
>             Fix For: 1.0
>
>         Attachments: 3149.txt
>
>
> As discussed in CASSANDRA-3031, we should make the following changes:
> - rename bytea to blob
> - rename date to timestamp
> - remove int, pending addition of CASSANDRA-3031 (bigint and varint will be
> unchanged)
[jira] [Commented] (CASSANDRA-3150) ColumnFormatRecordReader loops forever
[ https://issues.apache.org/jira/browse/CASSANDRA-3150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13099943#comment-13099943 ]

Mck SembWever commented on CASSANDRA-3150:
------------------------------------------

I'll try to put debug logging in so I can get a log of the get_slice_range calls from CFRR... (this may take some days)

> ColumnFormatRecordReader loops forever
> --------------------------------------
>
>                 Key: CASSANDRA-3150
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-3150
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Hadoop
>    Affects Versions: 0.8.4
>            Reporter: Mck SembWever
>            Assignee: Mck SembWever
>            Priority: Critical
>         Attachments: CASSANDRA-3150.patch
>
>
> From http://thread.gmane.org/gmane.comp.db.cassandra.user/20039
> {quote}
> bq. Cassandra-0.8.4 w/ ByteOrderedPartitioner
> bq. CFIF's inputSplitSize=196608
> bq. 3 map tasks (from 4013) is still running after read 25 million rows.
> bq. Can this be a bug in StorageService.getSplits(..) ?
> getSplits looks pretty foolproof to me but I guess we'd need to add
> more debug logging to rule out a bug there for sure.
> I guess the main alternative would be a bug in the recordreader paging.
> {quote}
[jira] [Commented] (CASSANDRA-2449) Deprecate or modify per-cf memtable sizes in favor of the global threshold
[ https://issues.apache.org/jira/browse/CASSANDRA-2449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13099765#comment-13099765 ]

Hudson commented on CASSANDRA-2449:
-----------------------------------

Integrated in Cassandra #1087 (See [https://builds.apache.org/job/Cassandra/1087/])
    remove explicit per-CF memtable thresholds
    patch by jbellis; reviewed by brandonwilliams for CASSANDRA-2449

jbellis : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1166520
Files :
* /cassandra/trunk/NEWS.txt
* /cassandra/trunk/conf/cassandra.yaml
* /cassandra/trunk/interface/cassandra.thrift
* /cassandra/trunk/interface/thrift/gen-java/org/apache/cassandra/thrift/CfDef.java
* /cassandra/trunk/interface/thrift/gen-java/org/apache/cassandra/thrift/Constants.java
* /cassandra/trunk/src/avro/internode.genavro
* /cassandra/trunk/src/java/org/apache/cassandra/config/CFMetaData.java
* /cassandra/trunk/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
* /cassandra/trunk/src/java/org/apache/cassandra/cql/CreateColumnFamilyStatement.java
* /cassandra/trunk/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
* /cassandra/trunk/src/java/org/apache/cassandra/db/ColumnFamilyStoreMBean.java
* /cassandra/trunk/src/java/org/apache/cassandra/db/Memtable.java
* /cassandra/trunk/src/java/org/apache/cassandra/db/Table.java
* /cassandra/trunk/src/java/org/apache/cassandra/db/index/SecondaryIndex.java
* /cassandra/trunk/src/java/org/apache/cassandra/db/index/SecondaryIndexManager.java
* /cassandra/trunk/src/java/org/apache/cassandra/db/index/keys/KeysIndex.java
* /cassandra/trunk/src/java/org/apache/cassandra/service/StorageService.java
* /cassandra/trunk/test/unit/org/apache/cassandra/db/DefsTest.java

> Deprecate or modify per-cf memtable sizes in favor of the global threshold
> --------------------------------------------------------------------------
>
>                 Key: CASSANDRA-2449
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-2449
>             Project: Cassandra
>          Issue Type: Improvement
>          Components: Core
>            Reporter: Stu Hood
>            Assignee: Jonathan Ellis
>             Fix For: 1.0
>
>         Attachments: 2449.txt
>
>
> The new memtable_total_space_in_mb setting is an excellent way to cap memory
> usage for memtables, and one could argue that it should replace the per-cf
> memtable sizes entirely. On the other hand, people may still want a knob to
> tune to flush certain cfs less frequently.
> I think a best of both worlds approach might be to deprecate the
> memtable_(throughput|operations) settings, and replace them with a preference
> value, which controls the relative memory usage of one CF versus another (all
> CFs at 1 would mean equal preference). For backwards compatibility, we could
> continue to read from the _throughput value and treat it as the preference
> value, while logging a warning.
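The "preference value" idea in the description amounts to dividing the global memtable budget in proportion to per-CF weights; a rough sketch under that reading (all names hypothetical, not an actual Cassandra API):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch: divide the global memtable_total_space_in_mb budget among CFs in
// proportion to a per-CF preference weight; equal weights mean equal shares.
class MemtableBudget {
    static Map<String, Double> apportion(long totalSpaceMb, Map<String, Double> preferences) {
        double sum = 0;
        for (double p : preferences.values())
            sum += p;
        Map<String, Double> shares = new LinkedHashMap<>();
        for (Map.Entry<String, Double> e : preferences.entrySet())
            shares.put(e.getKey(), totalSpaceMb * e.getValue() / sum);
        return shares;
    }
}
```

Under this scheme, raising one CF's preference implicitly shrinks every other CF's share, which is exactly the "relative, not absolute" behavior the ticket asks for.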
[jira] [Commented] (CASSANDRA-3118) nodetool can not decommission a node
[ https://issues.apache.org/jira/browse/CASSANDRA-3118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13099750#comment-13099750 ]

deng commented on CASSANDRA-3118:
---------------------------------

Thanks for your help!

> nodetool can not decommission a node
> ------------------------------------
>
>                 Key: CASSANDRA-3118
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-3118
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Tools
>    Affects Versions: 0.8.4
>         Environment: Cassandra0.84
>            Reporter: deng
>         Attachments: 3118-debug.txt
>
>
> When I run nodetool ring I get the result below, and then when I try to decommission
> the 100.86.17.90 node I get the error:
> [root@ip bin]# ./nodetool -h10.86.12.225 ring
> Address          DC          Rack   Status State   Load       Owns    Token
>                                                                      154562542458917734942660802527609328132
> 100.86.17.90     datacenter1 rack1  Up     Leaving 1.08 MB    11.21%  3493450320433654773610109291263389161
> 100.86.12.225    datacenter1 rack1  Up     Normal  558.25 MB  14.25%  27742979166206700793970535921354744095
> 100.86.12.224    datacenter1 rack1  Up     Normal  5.01 GB    6.58%   38945137636148605752956920077679425910
> ERROR:
> [root@ip bin]# ./nodetool -h100.86.17.90 decommission
> Exception in thread "main" java.lang.UnsupportedOperationException
>         at java.util.AbstractList.remove(AbstractList.java:144)
>         at java.util.AbstractList$Itr.remove(AbstractList.java:360)
>         at java.util.AbstractCollection.removeAll(AbstractCollection.java:337)
>         at org.apache.cassandra.service.StorageService.calculatePendingRanges(StorageService.java:1041)
>         at org.apache.cassandra.service.StorageService.calculatePendingRanges(StorageService.java:1006)
>         at org.apache.cassandra.service.StorageService.handleStateLeaving(StorageService.java:877)
>         at org.apache.cassandra.service.StorageService.onChange(StorageService.java:732)
>         at org.apache.cassandra.gms.Gossiper.doNotifications(Gossiper.java:839)
>         at org.apache.cassandra.gms.Gossiper.addLocalApplicationState(Gossiper.java:986)
>         at org.apache.cassandra.service.StorageService.startLeaving(StorageService.java:1836)
>         at org.apache.cassandra.service.StorageService.decommission(StorageService.java:1855)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>         at java.lang.reflect.Method.invoke(Method.java:597)
>         at com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:93)
>         at com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:27)
>         at com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:208)
>         at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:120)
>         at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:262)
>         at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:836)
>         at com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:761)
>         at javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1426)
>         at javax.management.remote.rmi.RMIConnectionImpl.access$200(RMIConnectionImpl.java:72)
>         at javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1264)
>         at javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1359)
>         at javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:788)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>         at java.lang.reflect.Method.invoke(Method.java:597)
>         at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:305)
>         at sun.rmi.transport.Transport$1.run(Transport.java:159)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at sun.rmi.transport.Transport.serviceCall(Transport.java:155)
>         at sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:535)
>         at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:
[jira] [Updated] (CASSANDRA-2961) Expire dead gossip states based on time
[ https://issues.apache.org/jira/browse/CASSANDRA-2961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Brandon Williams updated CASSANDRA-2961:
----------------------------------------
    Priority: Minor  (was: Major)
[jira] [Updated] (CASSANDRA-3156) assertion error in RowRepairResolver
[ https://issues.apache.org/jira/browse/CASSANDRA-3156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jonathan Ellis updated CASSANDRA-3156:
--------------------------------------
    Affects Version/s: 1.0

> assertion error in RowRepairResolver
> ------------------------------------
>
>                 Key: CASSANDRA-3156
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-3156
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Core
>    Affects Versions: 1.0
>            Reporter: Brandon Williams
>            Priority: Blocker
>             Fix For: 1.0
>
>
> Only seems to happen on a coordinator who does not have a copy of the data:
> DEBUG 03:15:59,866 Processing response on a callback from 3840@/10.179.64.227
> DEBUG 03:15:59,866 Preprocessed data response
> DEBUG 03:15:59,866 Processing response on a callback from 3841@/10.179.111.137
> DEBUG 03:15:59,866 Preprocessed digest response
> DEBUG 03:15:59,865 Processing response on a callback from 3837@/10.179.111.137
> DEBUG 03:15:59,865 Preprocessed data response
> DEBUG 03:15:59,865 Preprocessed data response
> DEBUG 03:15:59,867 Preprocessed digest response
> DEBUG 03:15:59,867 resolving 2 responses
> ERROR 03:15:59,866 Fatal exception in thread Thread[ReadRepairStage:526,5,main]
> java.lang.AssertionError
>         at org.apache.cassandra.service.RowRepairResolver.resolve(RowRepairResolver.java:77)
>         at org.apache.cassandra.service.AsyncRepairCallback$1.runMayThrow(AsyncRepairCallback.java:54)
>         at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:30)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>         at java.lang.Thread.run(Thread.java:662)
> ERROR 03:15:59,866 Fatal exception in thread Thread[ReadRepairStage:525,5,main]
> java.lang.AssertionError
>         at org.apache.cassandra.service.RowRepairResolver.resolve(RowRepairResolver.java:77)
>         at org.apache.cassandra.service.AsyncRepairCallback$1.runMayThrow(AsyncRepairCallback.java:54)
>         at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:30)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>         at java.lang.Thread.run(Thread.java:662)
> ERROR 03:15:59,867 Fatal exception in thread Thread[ReadRepairStage:528,5,main]
> java.lang.AssertionError
>         at org.apache.cassandra.service.RowRepairResolver.resolve(RowRepairResolver.java:77)
>         at org.apache.cassandra.service.AsyncRepairCallback$1.runMayThrow(AsyncRepairCallback.java:54)
>         at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:30)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>         at java.lang.Thread.run(Thread.java:662)
> DEBUG 03:15:59,867 resolving 2 responses
> DEBUG 03:15:59,867 resolving 2 responses
> DEBUG 03:15:59,867 resolving 2 responses
[jira] [Commented] (CASSANDRA-3156) assertion error in RowRepairResolver
[ https://issues.apache.org/jira/browse/CASSANDRA-3156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13099745#comment-13099745 ]

Jonathan Ellis commented on CASSANDRA-3156:
-------------------------------------------

So, how RR is supposed to work is like this:

Optimistic phase:
the coordinator sends a data read to the closest replica and digest requests to the others.

If there is a mismatch (optimism fail), we go to the repair phase:
the coordinator sends data reads to all replicas to merge + repair.

The failing assert is saying "I got a digest reply during the repair phase", i.e. we sent a data request but got a digest back. No idea how this is happening.
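The two phases Jonathan describes can be sketched as a check that the repair phase tolerates only data responses; the AssertionError in the logs corresponds to the case this sketch rejects (types simplified and hypothetical, not the actual RowRepairResolver code):

```java
import java.util.List;

// Sketch of the read-repair contract: the optimistic phase mixes one data
// read with digests; the repair phase requests full data from every replica,
// so a digest reply in that phase is a protocol violation (the AssertionError
// reported in this ticket).
class ReadResolverSketch {
    enum Kind { DATA, DIGEST }

    static void checkRepairPhase(List<Kind> responses) {
        for (Kind k : responses)
            if (k != Kind.DATA)
                throw new AssertionError("digest reply received during repair phase");
    }
}
```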
buildbot success in ASF Buildbot on cassandra-trunk
The Buildbot has detected a restored build on builder cassandra-trunk while building ASF Buildbot.
Full details are available at: http://ci.apache.org/builders/cassandra-trunk/builds/1628

Buildbot URL: http://ci.apache.org/
Buildslave for this Build: isis_ubuntu
Build Reason: scheduler
Build Source Stamp: [branch cassandra/trunk] 1166521
Blamelist: jbellis

Build succeeded!

sincerely,
 -The Buildbot
svn commit: r1166521 - in /cassandra/trunk/src/java/org/apache/cassandra: cli/CliClient.java thrift/ThriftValidation.java
Author: jbellis
Date: Thu Sep  8 03:42:40 2011
New Revision: 1166521

URL: http://svn.apache.org/viewvc?rev=1166521&view=rev
Log:
fix build for removed Thrift fields

Modified:
    cassandra/trunk/src/java/org/apache/cassandra/cli/CliClient.java
    cassandra/trunk/src/java/org/apache/cassandra/thrift/ThriftValidation.java

Modified: cassandra/trunk/src/java/org/apache/cassandra/cli/CliClient.java
URL: http://svn.apache.org/viewvc/cassandra/trunk/src/java/org/apache/cassandra/cli/CliClient.java?rev=1166521&r1=1166520&r2=1166521&view=diff
==============================================================================
--- cassandra/trunk/src/java/org/apache/cassandra/cli/CliClient.java (original)
+++ cassandra/trunk/src/java/org/apache/cassandra/cli/CliClient.java Thu Sep  8 03:42:40 2011
@@ -1218,10 +1218,8 @@ public class CliClient
                     cfDef.setColumn_metadata(getCFColumnMetaFromTree(cfDef, arrayOfMetaAttributes));
                     break;
                 case MEMTABLE_OPERATIONS:
-                    cfDef.setMemtable_operations_in_millions(Double.parseDouble(mValue));
                     break;
                 case MEMTABLE_THROUGHPUT:
-                    cfDef.setMemtable_throughput_in_mb(Integer.parseInt(mValue));
                     break;
                 case ROW_CACHE_SAVE_PERIOD:
                     cfDef.setRow_cache_save_period_in_seconds(Integer.parseInt(mValue));
@@ -1635,8 +1633,6 @@ public class CliClient
                   normaliseType(cfDef.default_validation_class, "org.apache.cassandra.db.marshal"));
         writeAttr(sb, false, "key_validation_class",
                   normaliseType(cfDef.key_validation_class, "org.apache.cassandra.db.marshal"));
-        writeAttr(sb, false, "memtable_operations", cfDef.memtable_operations_in_millions);
-        writeAttr(sb, false, "memtable_throughput", cfDef.memtable_throughput_in_mb);
         writeAttr(sb, false, "rows_cached", cfDef.row_cache_size);
         writeAttr(sb, false, "row_cache_save_period", cfDef.row_cache_save_period_in_seconds);
         writeAttr(sb, false, "keys_cached", cfDef.key_cache_size);
@@ -1928,8 +1924,6 @@ public class CliClient
                                 cf_def.row_cache_size, cf_def.row_cache_save_period_in_seconds,
                                 cf_def.row_cache_keys_to_save == Integer.MAX_VALUE ? "all" : cf_def.row_cache_keys_to_save);
         sessionState.out.printf("      Key cache size / save period in seconds: %s/%s%n",
                                 cf_def.key_cache_size, cf_def.key_cache_save_period_in_seconds);
-        sessionState.out.printf("      Memtable thresholds: %s/%s (millions of ops/MB)%n",
-                                cf_def.memtable_operations_in_millions, cf_def.memtable_throughput_in_mb);
         sessionState.out.printf("      GC grace seconds: %s%n", cf_def.gc_grace_seconds);
         sessionState.out.printf("      Compaction min/max thresholds: %s/%s%n", cf_def.min_compaction_threshold, cf_def.max_compaction_threshold);
         sessionState.out.printf("      Read repair chance: %s%n", cf_def.read_repair_chance);

Modified: cassandra/trunk/src/java/org/apache/cassandra/thrift/ThriftValidation.java
URL: http://svn.apache.org/viewvc/cassandra/trunk/src/java/org/apache/cassandra/thrift/ThriftValidation.java?rev=1166521&r1=1166520&r2=1166521&view=diff
==============================================================================
--- cassandra/trunk/src/java/org/apache/cassandra/thrift/ThriftValidation.java (original)
+++ cassandra/trunk/src/java/org/apache/cassandra/thrift/ThriftValidation.java Thu Sep  8 03:42:40 2011
@@ -26,7 +26,6 @@ import java.util.*;
 import org.apache.cassandra.config.*;
 import org.apache.cassandra.db.*;
 import org.apache.cassandra.db.index.SecondaryIndex;
-import org.apache.cassandra.db.index.SecondaryIndexManager;
 import org.apache.cassandra.db.marshal.*;
 import org.apache.cassandra.db.migration.Migration;
 import org.apache.cassandra.dht.IPartitioner;
@@ -650,7 +649,6 @@ public class ThriftValidation
                 }
             }
             validateMinMaxCompactionThresholds(cf_def);
-            validateMemtableSettings(cf_def);
         }
         catch (ConfigurationException e)
         {
@@ -712,14 +710,6 @@ public class ThriftValidation
         }
     }

-    public static void validateMemtableSettings(org.apache.cassandra.thrift.CfDef cf_def) throws ConfigurationException
-    {
-        if (cf_def.isSetMemtable_throughput_in_mb())
-            DatabaseDescriptor.validateMemtableThroughput(cf_def.memtable_throughput_in_mb);
-        if (cf_def.isSetMemtable_operations_in_millions())
-            DatabaseDescriptor.validateMemtableOperations(cf_def.memtable_operations_in_millions);
-    }
-
     public static void validateKeyspaceNotYetExisting(String newKsName) throws InvalidRequestException
     {
         // keyspace names must be unique case-insensitively because the keyspace name becomes the directory
buildbot failure in ASF Buildbot on cassandra-trunk
The Buildbot has detected a new failure on builder cassandra-trunk while building ASF Buildbot.
Full details are available at: http://ci.apache.org/builders/cassandra-trunk/builds/1627

Buildbot URL: http://ci.apache.org/
Buildslave for this Build: isis_ubuntu
Build Reason: scheduler
Build Source Stamp: [branch cassandra/trunk] 1166520
Blamelist: jbellis

BUILD FAILED: failed compile

sincerely,
 -The Buildbot
[jira] [Issue Comment Edited] (CASSANDRA-3156) assertion error in RowRepairResolver
[ https://issues.apache.org/jira/browse/CASSANDRA-3156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13099732#comment-13099732 ]

Brandon Williams edited comment on CASSANDRA-3156 at 9/8/11 3:23 AM:
---------------------------------------------------------------------

Also some spurious digest mismatches mixed in, even though I have no reason to suspect there is actually a mismatch in my dev env (3 nodes, rf=2):

DEBUG 03:15:59,823 Digest mismatch:
org.apache.cassandra.service.DigestMismatchException: Mismatch for key DecoratedKey(20580074455139572311737153648595094740, 30363933) (fb3f10b793298382b554737490bc78b5 vs db8d74ec919be7c1a1dda15c85754eb0)
        at org.apache.cassandra.service.RowDigestResolver.resolve(RowDigestResolver.java:105)
        at org.apache.cassandra.service.RowDigestResolver.resolve(RowDigestResolver.java:30)
        at org.apache.cassandra.service.ReadCallback$AsyncRepairRunner.runMayThrow(ReadCallback.java:229)
        at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:30)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
        at java.lang.Thread.run(Thread.java:662)

But this seems to happen when the coordinator _does_ have a copy of the data.
[jira] [Commented] (CASSANDRA-3156) assertion error in RowRepairResolver
[ https://issues.apache.org/jira/browse/CASSANDRA-3156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13099732#comment-13099732 ] Brandon Williams commented on CASSANDRA-3156: - Also some spurious digest mismatches mixed in, even though I have no reason to suspect there is actually a mismatch in my dev env (3 nodes, rf=2): DEBUG 03:15:59,823 Digest mismatch: org.apache.cassandra.service.DigestMismatchException: Mismatch for key DecoratedKey(20580074455139572311737153648595094740, 30363933) (fb3f10b793298382b554737490bc78b5 vs db8d74ec919be7c1a1dda15c85754eb0) at org.apache.cassandra.service.RowDigestResolver.resolve(RowDigestResolver.java:105) at org.apache.cassandra.service.RowDigestResolver.resolve(RowDigestResolver.java:30) at org.apache.cassandra.service.ReadCallback$AsyncRepairRunner.runMayThrow(ReadCallback.java:229) at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:30) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) at java.lang.Thread.run(Thread.java:662) > assertion error in RowRepairResolver > > > Key: CASSANDRA-3156 > URL: https://issues.apache.org/jira/browse/CASSANDRA-3156 > Project: Cassandra > Issue Type: Bug > Components: Core >Reporter: Brandon Williams >Priority: Blocker > Fix For: 1.0 > > > Only seems to happen on a coordinator who does not have a copy of the data: > DEBUG 03:15:59,866 Processing response on a callback from 3840@/10.179.64.227 > DEBUG 03:15:59,866 Preprocessed data response > DEBUG 03:15:59,866 Processing response on a callback from 3841@/10.179.111.137 > DEBUG 03:15:59,866 Preprocessed digest response > DEBUG 03:15:59,865 Processing response on a callback from 3837@/10.179.111.137 > DEBUG 03:15:59,865 Preprocessed data response > DEBUG 03:15:59,865 Preprocessed data response > DEBUG 03:15:59,867 Preprocessed digest response > DEBUG 03:15:59,867 
resolving 2 responses > ERROR 03:15:59,866 Fatal exception in thread > Thread[ReadRepairStage:526,5,main] > java.lang.AssertionError > at > org.apache.cassandra.service.RowRepairResolver.resolve(RowRepairResolver.java:77) > at > org.apache.cassandra.service.AsyncRepairCallback$1.runMayThrow(AsyncRepairCallback.java:54) > at > org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:30) > at > java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) > at java.lang.Thread.run(Thread.java:662) > ERROR 03:15:59,866 Fatal exception in thread > Thread[ReadRepairStage:525,5,main] > java.lang.AssertionError > at > org.apache.cassandra.service.RowRepairResolver.resolve(RowRepairResolver.java:77) > at > org.apache.cassandra.service.AsyncRepairCallback$1.runMayThrow(AsyncRepairCallback.java:54) > at > org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:30) > at > java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) > at java.lang.Thread.run(Thread.java:662) > ERROR 03:15:59,867 Fatal exception in thread > Thread[ReadRepairStage:528,5,main] > java.lang.AssertionError > at > org.apache.cassandra.service.RowRepairResolver.resolve(RowRepairResolver.java:77) > at > org.apache.cassandra.service.AsyncRepairCallback$1.runMayThrow(AsyncRepairCallback.java:54) > at > org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:30) > at > java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) > at java.lang.Thread.run(Thread.java:662) > DEBUG 03:15:59,867 resolving 2 responses > DEBUG 03:15:59,867 resolving 2 responses > DEBUG 03:15:59,867 resolving 2 responses -- This message is automatically generated by JIRA. 
For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (CASSANDRA-2449) Deprecate or modify per-cf memtable sizes in favor of the global threshold
[ https://issues.apache.org/jira/browse/CASSANDRA-2449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13099731#comment-13099731 ] Brandon Williams commented on CASSANDRA-2449: - +1 > Deprecate or modify per-cf memtable sizes in favor of the global threshold > -- > > Key: CASSANDRA-2449 > URL: https://issues.apache.org/jira/browse/CASSANDRA-2449 > Project: Cassandra > Issue Type: Improvement > Components: Core >Reporter: Stu Hood >Assignee: Jonathan Ellis > Fix For: 1.0 > > Attachments: 2449.txt > > > The new memtable_total_space_in_mb setting is an excellent way to cap memory > usage for memtables, and one could argue that it should replace the per-cf > memtable sizes entirely. On the other hand, people may still want a knob to > tune to flush certain cfs less frequently. > I think a best of both worlds approach might be to deprecate the > memtable_(throughput|operations) settings, and replace them with a preference > value, which controls the relative memory usage of one CF versus another (all > CFs at 1 would mean equal preference). For backwards compatibility, we could > continue to read from the _throughput value and treat it as the preference > value, while logging a warning. -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
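The "preference value" idea sketched in the ticket can be made concrete: each CF's slice of memtable_total_space_in_mb is its preference divided by the sum of all preferences, so all CFs at 1 get equal shares. This is only an illustrative sketch of the proposal; MemtableShares and its method names are hypothetical, not Cassandra's API.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch of the preference-weighted split proposed above:
// share(cf) = totalSpaceMb * pref(cf) / sum(prefs).
public class MemtableShares
{
    public static Map<String, Double> shares(Map<String, Double> preferences, double totalSpaceMb)
    {
        double sum = 0;
        for (double p : preferences.values())
            sum += p;
        Map<String, Double> result = new LinkedHashMap<>();
        for (Map.Entry<String, Double> e : preferences.entrySet())
            result.put(e.getKey(), totalSpaceMb * e.getValue() / sum);
        return result;
    }
}
```

With preferences {cf1=3, cf2=1} and a 1024 MB global cap, cf1 would be allowed three times the memtable space of cf2, which matches the backwards-compatibility idea of reading the old _throughput value as a preference.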
[jira] [Updated] (CASSANDRA-2819) Split rpc timeout for read and write ops
[ https://issues.apache.org/jira/browse/CASSANDRA-2819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Melvin Wang updated CASSANDRA-2819: --- Attachment: c2819.patch make ReadCallback, RepairCallback cancellable through timeoutReporter; make it possible to get the timeout value by verb so that DroppableRunnable can drop according to different verbs. > Split rpc timeout for read and write ops > > > Key: CASSANDRA-2819 > URL: https://issues.apache.org/jira/browse/CASSANDRA-2819 > Project: Cassandra > Issue Type: New Feature > Components: Core >Reporter: Stu Hood >Assignee: Melvin Wang > Fix For: 1.0 > > Attachments: 2819-v4.txt, c2819.patch, rpc-jira.patch > > > Given the vastly different latency characteristics of reads and writes, it > makes sense for them to have independent rpc timeouts internally. -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
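The "timeout value by verb" part of the patch description can be sketched as a simple lookup: reads and writes carry independent timeouts, and every other verb falls back to the generic rpc timeout. Names here (VerbTimeouts, the Verb constants) are illustrative stand-ins, not the patch's actual API.

```java
// Illustrative sketch, assuming a split read/write rpc timeout: a
// DroppableRunnable-style consumer would ask for the timeout of its own
// verb instead of a single global rpc_timeout.
public class VerbTimeouts
{
    enum Verb { READ, MUTATION, OTHER }

    private final long readTimeoutMs;
    private final long writeTimeoutMs;
    private final long defaultTimeoutMs;

    public VerbTimeouts(long readTimeoutMs, long writeTimeoutMs, long defaultTimeoutMs)
    {
        this.readTimeoutMs = readTimeoutMs;
        this.writeTimeoutMs = writeTimeoutMs;
        this.defaultTimeoutMs = defaultTimeoutMs;
    }

    public long getTimeout(Verb verb)
    {
        switch (verb)
        {
            case READ:     return readTimeoutMs;
            case MUTATION: return writeTimeoutMs;
            default:       return defaultTimeoutMs;
        }
    }
}
```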
[jira] [Created] (CASSANDRA-3156) assertion error in RowRepairResolver
assertion error in RowRepairResolver Key: CASSANDRA-3156 URL: https://issues.apache.org/jira/browse/CASSANDRA-3156 Project: Cassandra Issue Type: Bug Components: Core Reporter: Brandon Williams Priority: Blocker Fix For: 1.0 Only seems to happen on a coordinator who does not have a copy of the data: DEBUG 03:15:59,866 Processing response on a callback from 3840@/10.179.64.227 DEBUG 03:15:59,866 Preprocessed data response DEBUG 03:15:59,866 Processing response on a callback from 3841@/10.179.111.137 DEBUG 03:15:59,866 Preprocessed digest response DEBUG 03:15:59,865 Processing response on a callback from 3837@/10.179.111.137 DEBUG 03:15:59,865 Preprocessed data response DEBUG 03:15:59,865 Preprocessed data response DEBUG 03:15:59,867 Preprocessed digest response DEBUG 03:15:59,867 resolving 2 responses ERROR 03:15:59,866 Fatal exception in thread Thread[ReadRepairStage:526,5,main] java.lang.AssertionError at org.apache.cassandra.service.RowRepairResolver.resolve(RowRepairResolver.java:77) at org.apache.cassandra.service.AsyncRepairCallback$1.runMayThrow(AsyncRepairCallback.java:54) at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:30) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) at java.lang.Thread.run(Thread.java:662) ERROR 03:15:59,866 Fatal exception in thread Thread[ReadRepairStage:525,5,main] java.lang.AssertionError at org.apache.cassandra.service.RowRepairResolver.resolve(RowRepairResolver.java:77) at org.apache.cassandra.service.AsyncRepairCallback$1.runMayThrow(AsyncRepairCallback.java:54) at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:30) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) at java.lang.Thread.run(Thread.java:662) ERROR 03:15:59,867 Fatal exception in thread 
Thread[ReadRepairStage:528,5,main] java.lang.AssertionError at org.apache.cassandra.service.RowRepairResolver.resolve(RowRepairResolver.java:77) at org.apache.cassandra.service.AsyncRepairCallback$1.runMayThrow(AsyncRepairCallback.java:54) at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:30) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) at java.lang.Thread.run(Thread.java:662) DEBUG 03:15:59,867 resolving 2 responses DEBUG 03:15:59,867 resolving 2 responses DEBUG 03:15:59,867 resolving 2 responses -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (CASSANDRA-3154) Bad equality check in ColumnFamilyStore.isCompleteSSTables()
[ https://issues.apache.org/jira/browse/CASSANDRA-3154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13099727#comment-13099727 ] Tupshin Harper commented on CASSANDRA-3154: --- +1 to getting rid of the code instead. > Bad equality check in ColumnFamilyStore.isCompleteSSTables() > > > Key: CASSANDRA-3154 > URL: https://issues.apache.org/jira/browse/CASSANDRA-3154 > Project: Cassandra > Issue Type: Bug > Components: Core >Affects Versions: 1.0 >Reporter: Tupshin Harper >Assignee: Tupshin Harper >Priority: Minor > Fix For: 1.0 > > Attachments: 3154.txt, CASSANDRA-3154.diff > > > The equality check in isCompleteSSTables() always fails because it tries to > call equals() with a Set and a List. This might result in failure to purge > tombstones in some cases. -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
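The bug class described above is easy to demonstrate in isolation: by the java.util collections contract, Set.equals requires its argument to also be a Set, so comparing a Set against a List always returns false even when the elements match. This standalone sketch is not the Cassandra code, just the failure mode and the obvious repair.

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Minimal illustration of the isCompleteSSTables() bug class: a Set is
// never equal to a List, so the broken check silently always fails.
public class SetListEquality
{
    public static boolean brokenCheck(Set<String> a, List<String> b)
    {
        return a.equals(b); // always false: Set.equals demands a Set argument
    }

    public static boolean fixedCheck(Set<String> a, List<String> b)
    {
        return a.equals(new HashSet<>(b)); // compare set against set
    }
}
```

A check that can never succeed is exactly why removing the code path (as the comment above prefers) is safer than patching it.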
[jira] [Commented] (CASSANDRA-2247) Cleanup unused imports and generics
[ https://issues.apache.org/jira/browse/CASSANDRA-2247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13099707#comment-13099707 ] Hudson commented on CASSANDRA-2247: --- Integrated in Cassandra #1086 (See [https://builds.apache.org/job/Cassandra/1086/]) add generic wildcards patch by Norman Maurer and Tupshin Harper for CASSANDRA-2247 jbellis : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1166499 Files : * /cassandra/trunk/src/java/org/apache/cassandra/db/compaction/AbstractCompactedRow.java * /cassandra/trunk/src/java/org/apache/cassandra/db/compaction/PrecompactedRow.java * /cassandra/trunk/src/java/org/apache/cassandra/db/filter/IFilter.java * /cassandra/trunk/src/java/org/apache/cassandra/db/filter/NamesQueryFilter.java * /cassandra/trunk/src/java/org/apache/cassandra/db/filter/QueryFilter.java * /cassandra/trunk/src/java/org/apache/cassandra/db/filter/SliceQueryFilter.java * /cassandra/trunk/src/java/org/apache/cassandra/db/migration/Migration.java * /cassandra/trunk/src/java/org/apache/cassandra/io/sstable/IndexSummary.java * /cassandra/trunk/src/java/org/apache/cassandra/io/sstable/KeyIterator.java * /cassandra/trunk/src/java/org/apache/cassandra/io/sstable/ReducingKeyIterator.java * /cassandra/trunk/src/java/org/apache/cassandra/io/sstable/SSTableIdentityIterator.java * /cassandra/trunk/src/java/org/apache/cassandra/io/sstable/SSTableScanner.java * /cassandra/trunk/src/java/org/apache/cassandra/io/sstable/SSTableWriter.java * /cassandra/trunk/src/java/org/apache/cassandra/locator/AbstractEndpointSnitch.java * /cassandra/trunk/src/java/org/apache/cassandra/net/AsyncResult.java * /cassandra/trunk/src/java/org/apache/cassandra/net/Header.java * /cassandra/trunk/src/java/org/apache/cassandra/net/MessageDeliveryTask.java * /cassandra/trunk/src/java/org/apache/cassandra/service/AbstractRowResolver.java * /cassandra/trunk/src/java/org/apache/cassandra/service/DigestMismatchException.java * 
/cassandra/trunk/src/java/org/apache/cassandra/service/EmbeddedCassandraService.java * /cassandra/trunk/src/java/org/apache/cassandra/service/MigrationManager.java * /cassandra/trunk/src/java/org/apache/cassandra/service/RowRepairResolver.java > Cleanup unused imports and generics > --- > > Key: CASSANDRA-2247 > URL: https://issues.apache.org/jira/browse/CASSANDRA-2247 > Project: Cassandra > Issue Type: Improvement > Components: Core >Reporter: Norman Maurer >Assignee: Norman Maurer > Fix For: 1.0 > > Attachments: CASSANDRA-2247-part1.diff, > CASSANDRA-2247-part2-rebased.diff, CASSANDRA-2247-part2-rebasedv2.diff, > CASSANDRA-2247-part2.diff > > > In current cassandra trunk are many classes which import packages which are > never used. The same is true for Loggers which are often instanced and then > not used. Beside this I see many warnings related to generic usage. Would be > nice to clean this up a bit. -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (CASSANDRA-3153) Add support for BigDecimal Java data type to JDBC ResultSet
[ https://issues.apache.org/jira/browse/CASSANDRA-3153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13099708#comment-13099708 ] Hudson commented on CASSANDRA-3153: --- Integrated in Cassandra #1086 (See [https://builds.apache.org/job/Cassandra/1086/]) add jdbc BigDecimal support patch by Rick Shaw; reviewed by jbellis for CASSANDRA-3153 jbellis : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1166497 Files : * /cassandra/trunk/drivers/java/CHANGES.txt * /cassandra/trunk/drivers/java/src/org/apache/cassandra/cql/jdbc/CResultSet.java * /cassandra/trunk/src/java/org/apache/cassandra/cql/jdbc/TypesMap.java > Add support for BigDecimal Java data type to JDBC ResultSet > --- > > Key: CASSANDRA-3153 > URL: https://issues.apache.org/jira/browse/CASSANDRA-3153 > Project: Cassandra > Issue Type: Sub-task > Components: Drivers >Affects Versions: 0.8.4 >Reporter: Rick Shaw >Assignee: Rick Shaw >Priority: Trivial > Labels: JDBC, > Attachments: ResultSet-support-for-BigDecimal-v1.txt > > > This patch adds support for {{BigDecimal}} to the {{ResultSet}} using the > recently added {{DecimalType}} data type. > It supports translation from a column that contained the following Java (CQL) > datatypes: > - {{Long - (bigint)}} > - {{Double - (double)}} > - {{BigInteger - (varint)}} > - {{BigDecimal - (decimal)}} > - {{String - (ascii,text,varchar)}} -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (CASSANDRA-3155) Secondary index can report its memory consumption
Secondary index can report its memory consumption -- Key: CASSANDRA-3155 URL: https://issues.apache.org/jira/browse/CASSANDRA-3155 Project: Cassandra Issue Type: Improvement Components: Core Reporter: Jason Rutherglen Assignee: Jason Rutherglen Priority: Minor Fix For: 1.0 A secondary index will consume RAM which should be reported back to Cassandra to be factored into its flush-by-RAM amount. -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira

[jira] [Resolved] (CASSANDRA-2247) Cleanup unused imports and generics
[ https://issues.apache.org/jira/browse/CASSANDRA-2247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Ellis resolved CASSANDRA-2247. --- Resolution: Fixed Fix Version/s: 1.0 Reviewer: tupshin Assignee: Norman Maurer committed, thanks! (I did put a couple of the unused loggers back, since it's handy to have them there ready to go when you're troubleshooting, and they're harmless otherwise.) > Cleanup unused imports and generics > --- > > Key: CASSANDRA-2247 > URL: https://issues.apache.org/jira/browse/CASSANDRA-2247 > Project: Cassandra > Issue Type: Improvement > Components: Core >Reporter: Norman Maurer >Assignee: Norman Maurer > Fix For: 1.0 > > Attachments: CASSANDRA-2247-part1.diff, > CASSANDRA-2247-part2-rebased.diff, CASSANDRA-2247-part2-rebasedv2.diff, > CASSANDRA-2247-part2.diff > > > In current cassandra trunk are many classes which import packages which are > never used. The same is true for Loggers which are often instanced and then > not used. Beside this I see many warnings related to generic usage. Would be > nice to clean this up a bit. -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Reopened] (CASSANDRA-2247) Cleanup unused imports and generics
[ https://issues.apache.org/jira/browse/CASSANDRA-2247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Ellis reopened CASSANDRA-2247: --- > Cleanup unused imports and generics > --- > > Key: CASSANDRA-2247 > URL: https://issues.apache.org/jira/browse/CASSANDRA-2247 > Project: Cassandra > Issue Type: Improvement > Components: Core >Reporter: Norman Maurer > Fix For: 1.0 > > Attachments: CASSANDRA-2247-part1.diff, > CASSANDRA-2247-part2-rebased.diff, CASSANDRA-2247-part2-rebasedv2.diff, > CASSANDRA-2247-part2.diff > > > In current cassandra trunk are many classes which import packages which are > never used. The same is true for Loggers which are often instanced and then > not used. Beside this I see many warnings related to generic usage. Would be > nice to clean this up a bit. -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
svn commit: r1166499 - in /cassandra/trunk/src/java/org/apache/cassandra: db/compaction/ db/filter/ db/migration/ io/sstable/ locator/ net/ service/
Author: jbellis Date: Thu Sep 8 01:58:42 2011 New Revision: 1166499 URL: http://svn.apache.org/viewvc?rev=1166499&view=rev Log: add generic wildcards patch by Norman Maurer and Tupshin Harper for CASSANDRA-2247 Modified: cassandra/trunk/src/java/org/apache/cassandra/db/compaction/AbstractCompactedRow.java cassandra/trunk/src/java/org/apache/cassandra/db/compaction/PrecompactedRow.java cassandra/trunk/src/java/org/apache/cassandra/db/filter/IFilter.java cassandra/trunk/src/java/org/apache/cassandra/db/filter/NamesQueryFilter.java cassandra/trunk/src/java/org/apache/cassandra/db/filter/QueryFilter.java cassandra/trunk/src/java/org/apache/cassandra/db/filter/SliceQueryFilter.java cassandra/trunk/src/java/org/apache/cassandra/db/migration/Migration.java cassandra/trunk/src/java/org/apache/cassandra/io/sstable/IndexSummary.java cassandra/trunk/src/java/org/apache/cassandra/io/sstable/KeyIterator.java cassandra/trunk/src/java/org/apache/cassandra/io/sstable/ReducingKeyIterator.java cassandra/trunk/src/java/org/apache/cassandra/io/sstable/SSTableIdentityIterator.java cassandra/trunk/src/java/org/apache/cassandra/io/sstable/SSTableScanner.java cassandra/trunk/src/java/org/apache/cassandra/io/sstable/SSTableWriter.java cassandra/trunk/src/java/org/apache/cassandra/locator/AbstractEndpointSnitch.java cassandra/trunk/src/java/org/apache/cassandra/net/AsyncResult.java cassandra/trunk/src/java/org/apache/cassandra/net/Header.java cassandra/trunk/src/java/org/apache/cassandra/net/MessageDeliveryTask.java cassandra/trunk/src/java/org/apache/cassandra/service/AbstractRowResolver.java cassandra/trunk/src/java/org/apache/cassandra/service/DigestMismatchException.java cassandra/trunk/src/java/org/apache/cassandra/service/EmbeddedCassandraService.java cassandra/trunk/src/java/org/apache/cassandra/service/MigrationManager.java cassandra/trunk/src/java/org/apache/cassandra/service/RowRepairResolver.java Modified: 
cassandra/trunk/src/java/org/apache/cassandra/db/compaction/AbstractCompactedRow.java URL: http://svn.apache.org/viewvc/cassandra/trunk/src/java/org/apache/cassandra/db/compaction/AbstractCompactedRow.java?rev=1166499&r1=1166498&r2=1166499&view=diff == --- cassandra/trunk/src/java/org/apache/cassandra/db/compaction/AbstractCompactedRow.java (original) +++ cassandra/trunk/src/java/org/apache/cassandra/db/compaction/AbstractCompactedRow.java Thu Sep 8 01:58:42 2011 @@ -21,7 +21,6 @@ package org.apache.cassandra.db.compacti */ -import java.io.Closeable; import java.io.DataOutput; import java.io.IOException; import java.security.MessageDigest; @@ -35,9 +34,9 @@ import org.apache.cassandra.db.Decorated */ public abstract class AbstractCompactedRow { -public final DecoratedKey key; +public final DecoratedKey key; -public AbstractCompactedRow(DecoratedKey key) +public AbstractCompactedRow(DecoratedKey key) { this.key = key; } Modified: cassandra/trunk/src/java/org/apache/cassandra/db/compaction/PrecompactedRow.java URL: http://svn.apache.org/viewvc/cassandra/trunk/src/java/org/apache/cassandra/db/compaction/PrecompactedRow.java?rev=1166499&r1=1166498&r2=1166499&view=diff == --- cassandra/trunk/src/java/org/apache/cassandra/db/compaction/PrecompactedRow.java (original) +++ cassandra/trunk/src/java/org/apache/cassandra/db/compaction/PrecompactedRow.java Thu Sep 8 01:58:42 2011 @@ -49,7 +49,7 @@ public class PrecompactedRow extends Abs private final int gcBefore; // For testing purposes -public PrecompactedRow(DecoratedKey key, ColumnFamily compacted) +public PrecompactedRow(DecoratedKey key, ColumnFamily compacted) { super(key); this.compactedCf = compacted; @@ -57,14 +57,14 @@ public class PrecompactedRow extends Abs } /** it is caller's responsibility to call removeDeleted + removeOldShards from the cf before calling this constructor */ -public PrecompactedRow(DecoratedKey key, CompactionController controller, ColumnFamily cf) +public PrecompactedRow(DecoratedKey key, 
CompactionController controller, ColumnFamily cf) { super(key); this.gcBefore = controller.gcBefore; compactedCf = cf; } -public static ColumnFamily removeDeletedAndOldShards(DecoratedKey key, CompactionController controller, ColumnFamily cf) +public static ColumnFamily removeDeletedAndOldShards(DecoratedKey key, CompactionController controller, ColumnFamily cf) { return removeDeletedAndOldShards(controller.shouldPurge(key), controller, cf); } Modified: cassandra/trunk/src/java/org/apache/cassandra/db/filter/IFilter.java URL: http://svn.apache.org/viewvc/cassandra/trunk/src/java/org/apache/cassandra/db/filter/IFilt
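The archive has stripped the angle brackets out of the diff above, so the `-`/`+` lines look identical; the change is the kind a "generic wildcards" cleanup makes, replacing raw types with wildcard-parameterized ones. This sketch shows the difference on a hypothetical parameterized type (RawVsWildcard stands in for something like DecoratedKey):

```java
// Hypothetical stand-in for a parameterized key type, to show raw-type
// use versus wildcard use; not Cassandra code.
public class RawVsWildcard<T>
{
    private final T token;
    public RawVsWildcard(T token) { this.token = token; }
    public T token() { return token; }

    // Raw-typed parameter: compiles, but each use is an unchecked warning.
    static Object tokenOfRaw(RawVsWildcard key) { return key.token(); }

    // Unbounded wildcard: same flexibility, no unchecked warnings.
    static Object tokenOfWildcard(RawVsWildcard<?> key) { return key.token(); }
}
```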
svn commit: r1166497 - in /cassandra/trunk: drivers/java/CHANGES.txt drivers/java/src/org/apache/cassandra/cql/jdbc/CResultSet.java src/java/org/apache/cassandra/cql/jdbc/TypesMap.java
Author: jbellis Date: Thu Sep 8 01:40:14 2011 New Revision: 1166497 URL: http://svn.apache.org/viewvc?rev=1166497&view=rev Log: add jdbc BigDecimal support patch by Rick Shaw; reviewed by jbellis for CASSANDRA-3153 Modified: cassandra/trunk/drivers/java/CHANGES.txt cassandra/trunk/drivers/java/src/org/apache/cassandra/cql/jdbc/CResultSet.java cassandra/trunk/src/java/org/apache/cassandra/cql/jdbc/TypesMap.java Modified: cassandra/trunk/drivers/java/CHANGES.txt URL: http://svn.apache.org/viewvc/cassandra/trunk/drivers/java/CHANGES.txt?rev=1166497&r1=1166496&r2=1166497&view=diff == --- cassandra/trunk/drivers/java/CHANGES.txt (original) +++ cassandra/trunk/drivers/java/CHANGES.txt Thu Sep 8 01:40:14 2011 @@ -2,3 +2,4 @@ * improve JDBC spec compliance (CASSANDRA-2720, 2754, 3052, 3089) * cooperate with other jdbc drivers (CASSANDRA-2842) * fix unbox-to-NPE with null primitives (CASSANDRA-2956) + * add BigDecimal support (CASSANDRA-3153) Modified: cassandra/trunk/drivers/java/src/org/apache/cassandra/cql/jdbc/CResultSet.java URL: http://svn.apache.org/viewvc/cassandra/trunk/drivers/java/src/org/apache/cassandra/cql/jdbc/CResultSet.java?rev=1166497&r1=1166496&r2=1166497&view=diff == --- cassandra/trunk/drivers/java/src/org/apache/cassandra/cql/jdbc/CResultSet.java (original) +++ cassandra/trunk/drivers/java/src/org/apache/cassandra/cql/jdbc/CResultSet.java Thu Sep 8 01:40:14 2011 @@ -165,28 +165,60 @@ class CResultSet extends AbstractResultS throw new SQLFeatureNotSupportedException(NOT_SUPPORTED); } -// Big Decimal (awaiting a new AbstractType implementation) -public BigDecimal getBigDecimal(int arg0) throws SQLException +public BigDecimal getBigDecimal(int index) throws SQLException { -throw new SQLFeatureNotSupportedException(NOT_SUPPORTED); +checkIndex(index); +return getBigDecimal(values.get(index - 1)); } -public BigDecimal getBigDecimal(int arg0, int arg1) throws SQLException +/** @deprecated */ +public BigDecimal getBigDecimal(int index, int scale) throws 
SQLException { -throw new SQLFeatureNotSupportedException(NOT_SUPPORTED); +checkIndex(index); +return (getBigDecimal(values.get(index - 1))).setScale(scale); } -public BigDecimal getBigDecimal(String arg0) throws SQLException +public BigDecimal getBigDecimal(String name) throws SQLException { -throw new SQLFeatureNotSupportedException(NOT_SUPPORTED); +checkName(name); +return getBigDecimal(valueMap.get(name)); } -public BigDecimal getBigDecimal(String arg0, int arg1) throws SQLException +/** @deprecated */ +public BigDecimal getBigDecimal(String name, int scale) throws SQLException { -throw new SQLFeatureNotSupportedException(NOT_SUPPORTED); +checkName(name); +return (getBigDecimal(valueMap.get(name))).setScale(scale); } +private BigDecimal getBigDecimal(TypedColumn column) throws SQLException +{ +checkNotClosed(); +Object value = column.getValue(); +wasNull = value == null; + +if (wasNull) return BigDecimal.ZERO; + +if (value instanceof BigDecimal) return (BigDecimal) value; + +if (value instanceof Long) return BigDecimal.valueOf((Long) value); + +if (value instanceof Double) return BigDecimal.valueOf((Double) value); + +if (value instanceof BigInteger) return new BigDecimal((BigInteger) value); + +try +{ +if (value instanceof String) return (new BigDecimal((String) value)); +} +catch (NumberFormatException e) +{ +throw new SQLSyntaxErrorException(e); +} + +throw new SQLSyntaxErrorException(String.format(NOT_TRANSLATABLE, value.getClass().getSimpleName(), "BigDecimal")); +} public BigInteger getBigInteger(int index) throws SQLException { checkIndex(index); Modified: cassandra/trunk/src/java/org/apache/cassandra/cql/jdbc/TypesMap.java URL: http://svn.apache.org/viewvc/cassandra/trunk/src/java/org/apache/cassandra/cql/jdbc/TypesMap.java?rev=1166497&r1=1166496&r2=1166497&view=diff == --- cassandra/trunk/src/java/org/apache/cassandra/cql/jdbc/TypesMap.java (original) +++ cassandra/trunk/src/java/org/apache/cassandra/cql/jdbc/TypesMap.java Thu Sep 8 01:40:14 2011 @@ 
-14,6 +14,7 @@ public class TypesMap map.put("org.apache.cassandra.db.marshal.BytesType", JdbcBytes.instance); map.put("org.apache.cassandra.db.marshal.ColumnCounterType", JdbcCounterColumn.instance); map.put("org.apache.cassandra.db.marshal.DateType", JdbcDate.instance); +map.put("org.apache.cassandra.db.marshal
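The conversion ladder the patch adds to CResultSet.getBigDecimal can be exercised standalone: widen Long, Double, BigInteger, or String values to BigDecimal, treating null as zero the way the patch does. ToBigDecimal here is a self-contained mirror for illustration, not the driver class itself.

```java
import java.math.BigDecimal;
import java.math.BigInteger;

// Standalone mirror of the getBigDecimal conversion ladder in the patch
// above: null -> ZERO, numeric types widened, String parsed, anything
// else rejected.
public class ToBigDecimal
{
    public static BigDecimal convert(Object value)
    {
        if (value == null) return BigDecimal.ZERO;
        if (value instanceof BigDecimal) return (BigDecimal) value;
        if (value instanceof Long) return BigDecimal.valueOf((Long) value);
        if (value instanceof Double) return BigDecimal.valueOf((Double) value);
        if (value instanceof BigInteger) return new BigDecimal((BigInteger) value);
        if (value instanceof String) return new BigDecimal((String) value); // may throw NumberFormatException
        throw new IllegalArgumentException("not translatable to BigDecimal: " + value.getClass());
    }
}
```

Note that the real patch wraps the String parse and rethrows NumberFormatException as SQLSyntaxErrorException to stay inside the JDBC error model.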
[jira] [Updated] (CASSANDRA-3154) Bad equality check in ColumnFamilyStore.isCompleteSSTables()
[ https://issues.apache.org/jira/browse/CASSANDRA-3154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Ellis updated CASSANDRA-3154: -- Attachment: 3154.txt I'd rather get rid of that code. It's not useful because - for leveled compactions, you are effectively guaranteed that once you have more than a couple sstables, you'll never compact all sstables at once - for non-leveled compactions, you have a small enough number of sstables that isKeyInRemainingSSTables is fine without adding additional optimization for the "major" case This patch gets rid of isMajor, and additionally renames CompactionType to OperationType to better reflect the "compaction" stage's role as generic background IO manager. > Bad equality check in ColumnFamilyStore.isCompleteSSTables() > > > Key: CASSANDRA-3154 > URL: https://issues.apache.org/jira/browse/CASSANDRA-3154 > Project: Cassandra > Issue Type: Bug > Components: Core >Affects Versions: 1.0 >Reporter: Tupshin Harper >Assignee: Tupshin Harper > Fix For: 1.0 > > Attachments: 3154.txt, CASSANDRA-3154.diff > > > The equality check in isCompleteSSTables() always fails because it tries to > call equals() with a Set and a List. This might result in failure to purge > tombstones in some cases. -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (CASSANDRA-3154) Bad equality check in ColumnFamilyStore.isCompleteSSTables()
[ https://issues.apache.org/jira/browse/CASSANDRA-3154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Ellis updated CASSANDRA-3154: -- Reviewer: bcoverston Priority: Minor (was: Major) Affects Version/s: 1.0 Fix Version/s: 1.0 > Bad equality check in ColumnFamilyStore.isCompleteSSTables() > > > Key: CASSANDRA-3154 > URL: https://issues.apache.org/jira/browse/CASSANDRA-3154 > Project: Cassandra > Issue Type: Bug > Components: Core >Affects Versions: 1.0 >Reporter: Tupshin Harper >Assignee: Tupshin Harper >Priority: Minor > Fix For: 1.0 > > Attachments: 3154.txt, CASSANDRA-3154.diff > > > The equality check in isCompleteSSTables() always fails because it tries to > call equals() with a Set and a List. This might result in failure to purge > tombstones in some cases. -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (CASSANDRA-3152) Logic of AbstractNetworkTopologySnitch.compareEndpoints is wrong
[ https://issues.apache.org/jira/browse/CASSANDRA-3152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13099695#comment-13099695 ] Hudson commented on CASSANDRA-3152: --- Integrated in Cassandra-0.8 #320 (See [https://builds.apache.org/job/Cassandra-0.8/320/]) allow topology sort to work with non-unique rack names between datacenters patch by Vijay; reviewed by jbellis for CASSANDRA-3152 jbellis : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1166484 Files : * /cassandra/branches/cassandra-0.8/src/java/org/apache/cassandra/locator/AbstractNetworkTopologySnitch.java > Logic of AbstractNetworkTopologySnitch.compareEndpoints is wrong > > > Key: CASSANDRA-3152 > URL: https://issues.apache.org/jira/browse/CASSANDRA-3152 > Project: Cassandra > Issue Type: Bug > Components: Core > Environment: JVM >Reporter: Vijay >Assignee: Vijay >Priority: Minor > Fix For: 0.8.6 > > Attachments: 0001-fix-dc-rack-sorting-on-ANTS.patch > > > The current logic in ANTS.cE compares the rack first and then the DC; > this logic breaks when racks in different DCs share the same rack name... > Example: > "us-east,1a", InetAddress.getByName("127.0.0.1") > "us-east,1b", InetAddress.getByName("127.0.0.2") > "us-east,1c", InetAddress.getByName("127.0.0.3") > "us-west,1a", InetAddress.getByName("127.0.0.4") > "us-west,1b", InetAddress.getByName("127.0.0.5") > "us-west,1c", InetAddress.getByName("127.0.0.6") > Expected: > /127.0.0.1,/127.0.0.3,/127.0.0.2,/127.0.0.4,/127.0.0.5,/127.0.0.6 > Current: > /127.0.0.1,/127.0.0.4,/127.0.0.3,/127.0.0.2,/127.0.0.5,/127.0.0.6 -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (CASSANDRA-3152) Logic of AbstractNetworkTopologySnitch.compareEndpoints is wrong
[ https://issues.apache.org/jira/browse/CASSANDRA-3152?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Ellis updated CASSANDRA-3152: -- Priority: Minor (was: Major) Affects Version/s: (was: 0.8.4) > Logic of AbstractNetworkTopologySnitch.compareEndpoints is wrong > > > Key: CASSANDRA-3152 > URL: https://issues.apache.org/jira/browse/CASSANDRA-3152 > Project: Cassandra > Issue Type: Bug > Components: Core > Environment: JVM >Reporter: Vijay >Assignee: Vijay >Priority: Minor > Fix For: 0.8.6 > > Attachments: 0001-fix-dc-rack-sorting-on-ANTS.patch > > > The current logic in ANTS.cE compares the rack first and then the DC; > this logic breaks when racks in different DCs share the same rack name... > Example: > "us-east,1a", InetAddress.getByName("127.0.0.1") > "us-east,1b", InetAddress.getByName("127.0.0.2") > "us-east,1c", InetAddress.getByName("127.0.0.3") > "us-west,1a", InetAddress.getByName("127.0.0.4") > "us-west,1b", InetAddress.getByName("127.0.0.5") > "us-west,1c", InetAddress.getByName("127.0.0.6") > Expected: > /127.0.0.1,/127.0.0.3,/127.0.0.2,/127.0.0.4,/127.0.0.5,/127.0.0.6 > Current: > /127.0.0.1,/127.0.0.4,/127.0.0.3,/127.0.0.2,/127.0.0.5,/127.0.0.6 -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
svn commit: r1166484 - /cassandra/branches/cassandra-0.8/src/java/org/apache/cassandra/locator/AbstractNetworkTopologySnitch.java
Author: jbellis
Date: Thu Sep 8 00:54:06 2011
New Revision: 1166484
URL: http://svn.apache.org/viewvc?rev=1166484&view=rev
Log: allow topology sort to work with non-unique rack names between datacenters
patch by Vijay; reviewed by jbellis for CASSANDRA-3152

Modified: cassandra/branches/cassandra-0.8/src/java/org/apache/cassandra/locator/AbstractNetworkTopologySnitch.java
URL: http://svn.apache.org/viewvc/cassandra/branches/cassandra-0.8/src/java/org/apache/cassandra/locator/AbstractNetworkTopologySnitch.java?rev=1166484&r1=1166483&r2=1166484&view=diff
==============================================================================
--- cassandra/branches/cassandra-0.8/src/java/org/apache/cassandra/locator/AbstractNetworkTopologySnitch.java (original)
+++ cassandra/branches/cassandra-0.8/src/java/org/apache/cassandra/locator/AbstractNetworkTopologySnitch.java Thu Sep 8 00:54:06 2011
@@ -84,14 +84,6 @@ public abstract class AbstractNetworkTop
         if (address.equals(a2) && !address.equals(a1))
             return 1;
-        String addressRack = getRack(address);
-        String a1Rack = getRack(a1);
-        String a2Rack = getRack(a2);
-        if (addressRack.equals(a1Rack) && !addressRack.equals(a2Rack))
-            return -1;
-        if (addressRack.equals(a2Rack) && !addressRack.equals(a1Rack))
-            return 1;
-
         String addressDatacenter = getDatacenter(address);
         String a1Datacenter = getDatacenter(a1);
         String a2Datacenter = getDatacenter(a2);
@@ -100,6 +92,13 @@ public abstract class AbstractNetworkTop
         if (addressDatacenter.equals(a2Datacenter) && !addressDatacenter.equals(a1Datacenter))
             return 1;
+        String addressRack = getRack(address);
+        String a1Rack = getRack(a1);
+        String a2Rack = getRack(a2);
+        if (addressRack.equals(a1Rack) && !addressRack.equals(a2Rack))
+            return -1;
+        if (addressRack.equals(a2Rack) && !addressRack.equals(a1Rack))
+            return 1;
         return 0;
     }
 }
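The effect of the reordered comparison in the commit above can be sketched as a standalone comparator. The class, record, and method names below are illustrative stand-ins, not the actual snitch API: sorting datacenter-first means a remote node that merely shares a rack label like "1a" no longer jumps ahead of nodes in the local datacenter.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class DcFirstComparator {
    // Illustrative endpoint model: datacenter and rack labels plus an address.
    record Endpoint(String dc, String rack, String ip) {}

    // Sort endpoints by proximity to 'self': same datacenter first, then same
    // rack within it. Comparing rack before DC (the pre-patch order) wrongly
    // promoted a remote-DC node that happened to share a rack label like "1a".
    static Comparator<Endpoint> proximityTo(Endpoint self) {
        return (a, b) -> {
            int byDc = Boolean.compare(b.dc().equals(self.dc()), a.dc().equals(self.dc()));
            if (byDc != 0)
                return byDc;
            return Boolean.compare(b.rack().equals(self.rack()), a.rack().equals(self.rack()));
        };
    }

    public static void main(String[] args) {
        Endpoint self = new Endpoint("us-east", "1a", "127.0.0.1");
        List<Endpoint> nodes = new ArrayList<>(List.of(
                new Endpoint("us-west", "1a", "127.0.0.4"),
                new Endpoint("us-east", "1b", "127.0.0.2"),
                new Endpoint("us-east", "1a", "127.0.0.1")));
        nodes.sort(proximityTo(self));
        // Local-DC nodes sort ahead of us-west despite the shared "1a" rack:
        // prints 127.0.0.1, then 127.0.0.2, then 127.0.0.4
        nodes.forEach(e -> System.out.println(e.ip()));
    }
}
```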
[jira] [Updated] (CASSANDRA-3152) Logic of AbstractNetworkTopologySnitch.compareEndpoints is wrong
[ https://issues.apache.org/jira/browse/CASSANDRA-3152?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vijay updated CASSANDRA-3152: - Attachment: 0001-fix-dc-rack-sorting-on-ANTS.patch Tested and passed; basically moved the DC comparison logic up in ANTS > Logic of AbstractNetworkTopologySnitch.compareEndpoints is wrong > > > Key: CASSANDRA-3152 > URL: https://issues.apache.org/jira/browse/CASSANDRA-3152 > Project: Cassandra > Issue Type: Bug > Components: Core >Affects Versions: 0.8.4 > Environment: JVM >Reporter: Vijay >Assignee: Vijay > Fix For: 0.8.5 > > Attachments: 0001-fix-dc-rack-sorting-on-ANTS.patch > > > The current logic in ANTS.cE is to compare the rack and then compare the DCs; > the problem is that when we have the same rack name but the racks are in > different DCs, this logic breaks... > Example: > "us-east,1a", InetAddress.getByName("127.0.0.1") > "us-east,1b", InetAddress.getByName("127.0.0.2") > "us-east,1c", InetAddress.getByName("127.0.0.3") > "us-west,1a", InetAddress.getByName("127.0.0.4") > "us-west,1b", InetAddress.getByName("127.0.0.5") > "us-west,1c", InetAddress.getByName("127.0.0.6") > Expected: > /127.0.0.1,/127.0.0.3,/127.0.0.2,/127.0.0.4,/127.0.0.5,/127.0.0.6 > Current: > /127.0.0.1,/127.0.0.4,/127.0.0.3,/127.0.0.2,/127.0.0.5,/127.0.0.6 -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (CASSANDRA-3154) Bad equality check in ColumnFamilyStore.isCompleteSSTables()
[ https://issues.apache.org/jira/browse/CASSANDRA-3154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13099660#comment-13099660 ] Benjamin Coverston commented on CASSANDRA-3154: --- The cardinality restriction of .isEqualCollection is probably more restrictive than we need, but this does indeed fix the existing shallow equality problem. +1 > Bad equality check in ColumnFamilyStore.isCompleteSSTables() > > > Key: CASSANDRA-3154 > URL: https://issues.apache.org/jira/browse/CASSANDRA-3154 > Project: Cassandra > Issue Type: Bug > Components: Core >Reporter: Tupshin Harper >Assignee: Tupshin Harper > Attachments: CASSANDRA-3154.diff > > > The equality check in isCompleteSSTables() always fails because it tries to > call equals() with a Set and a List. This might result in failure to purge > tombstones in some cases. -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
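The underlying pitfall is worth spelling out: `java.util.Set.equals` is specified to return false for any object that is not itself a Set, so a Set compared against a List can never be equal, even with identical elements in the same order. A minimal stdlib-only illustration (the variable names are hypothetical, not taken from ColumnFamilyStore):

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class SetListEquality {
    public static void main(String[] args) {
        Set<String> compacting = new HashSet<>(List.of("sstable-1", "sstable-2"));
        List<String> current = List.of("sstable-1", "sstable-2");

        // Set.equals(Object) is specified to return false for any non-Set,
        // so this check can never succeed, regardless of the elements:
        System.out.println(compacting.equals(current)); // false

        // An order-insensitive comparison must normalize both sides first
        // (this is what an isEqualCollection-style helper does internally):
        System.out.println(compacting.equals(new HashSet<>(current))); // true
    }
}
```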
[jira] [Updated] (CASSANDRA-3154) Bad equality check in ColumnFamilyStore.isCompleteSSTables()
[ https://issues.apache.org/jira/browse/CASSANDRA-3154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tupshin Harper updated CASSANDRA-3154: -- Attachment: CASSANDRA-3154.diff > Bad equality check in ColumnFamilyStore.isCompleteSSTables() > > > Key: CASSANDRA-3154 > URL: https://issues.apache.org/jira/browse/CASSANDRA-3154 > Project: Cassandra > Issue Type: Bug > Components: Core >Reporter: Tupshin Harper >Assignee: Tupshin Harper > Attachments: CASSANDRA-3154.diff > > > The equality check in isCompleteSSTables() always fails because it tries to > call equals() with a Set and a List. This might result in failure to purge > tombstones in some cases. -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (CASSANDRA-3154) Bad equality check in ColumnFamilyStore.isCompleteSSTables()
Bad equality check in ColumnFamilyStore.isCompleteSSTables() Key: CASSANDRA-3154 URL: https://issues.apache.org/jira/browse/CASSANDRA-3154 Project: Cassandra Issue Type: Bug Components: Core Reporter: Tupshin Harper Assignee: Tupshin Harper The equality check in isCompleteSSTables() always fails because it tries to call equals() with a Set and a List. This might result in failure to purge tombstones in some cases. -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (CASSANDRA-3153) Add support for BigDecimal Java data type to JDBC ResultSet
[ https://issues.apache.org/jira/browse/CASSANDRA-3153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rick Shaw updated CASSANDRA-3153: - Issue Type: Sub-task (was: Improvement) Parent: CASSANDRA-2876 > Add support for BigDecimal Java data type to JDBC ResultSet > --- > > Key: CASSANDRA-3153 > URL: https://issues.apache.org/jira/browse/CASSANDRA-3153 > Project: Cassandra > Issue Type: Sub-task > Components: Drivers >Affects Versions: 0.8.4 >Reporter: Rick Shaw >Assignee: Rick Shaw >Priority: Trivial > Labels: JDBC, > Fix For: 0.8.6 > > Attachments: ResultSet-support-for-BigDecimal-v1.txt > > > This patch adds support for {{BigDecimal}} to the {{ResultSet}} using the > recently added {{DecimalType}} data type. > It supports translation from a column that contained the following Java (CQL) > datatypes: > - {{Long - (bigint)}} > - {{Double - (double)}} > - {{BigInteger - (varint)}} > - {{BigDecimal - (decimal)}} > - {{String - (ascii,text,varchar)}} -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (CASSANDRA-3153) Add support for BigDecimal Java data type to JDBC ResultSet
[ https://issues.apache.org/jira/browse/CASSANDRA-3153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rick Shaw updated CASSANDRA-3153: - Attachment: ResultSet-support-for-BigDecimal-v1.txt > Add support for BigDecimal Java data type to JDBC ResultSet > --- > > Key: CASSANDRA-3153 > URL: https://issues.apache.org/jira/browse/CASSANDRA-3153 > Project: Cassandra > Issue Type: Improvement > Components: Drivers >Affects Versions: 0.8.4 >Reporter: Rick Shaw >Assignee: Rick Shaw >Priority: Trivial > Labels: JDBC, > Fix For: 0.8.6 > > Attachments: ResultSet-support-for-BigDecimal-v1.txt > > > This patch adds support for {{BigDecimal}} to the {{ResultSet}} using the > recently added {{DecimalType}} data type. > It supports translation from a column that contained the following Java (CQL) > datatypes: > - {{Long - (bigint)}} > - {{Double - (double)}} > - {{BigInteger - (varint)}} > - {{BigDecimal - (decimal)}} > - {{String - (ascii,text,varchar)}} -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (CASSANDRA-3153) Add support for BigDecimal Java data type to JDBC ResultSet
Add support for BigDecimal Java data type to JDBC ResultSet --- Key: CASSANDRA-3153 URL: https://issues.apache.org/jira/browse/CASSANDRA-3153 Project: Cassandra Issue Type: Improvement Components: Drivers Affects Versions: 0.8.4 Reporter: Rick Shaw Assignee: Rick Shaw Priority: Trivial Fix For: 0.8.6 This patch adds support for {{BigDecimal}} to the {{ResultSet}} using the recently added {{DecimalType}} data type. It supports translation from a column that contained the following Java (CQL) datatypes: - {{Long - (bigint)}} - {{Double - (double)}} - {{BigInteger - (varint)}} - {{BigDecimal - (decimal)}} - {{String - (ascii,text,varchar)}} -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (CASSANDRA-622) Improve commitlog performance
[ https://issues.apache.org/jira/browse/CASSANDRA-622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13099614#comment-13099614 ] Jonathan Ellis commented on CASSANDRA-622: -- bq. reserving space for each with a [AtomicInteger] first For an example of something similar, look at how SlabAllocator.Region.allocate uses this approach to reserve parts of a region for the ByteBuffers it allocates. > Improve commitlog performance > - > > Key: CASSANDRA-622 > URL: https://issues.apache.org/jira/browse/CASSANDRA-622 > Project: Cassandra > Issue Type: Improvement >Reporter: Jonathan Ellis >Priority: Minor > Labels: gsoc, gsoc2010 > > PostgreSQL uses fixed-size commitlog files that it pre-allocates (filling > with zeros) so "appending" to the log can use the cheaper fsync-without-metadata > (a length change is "metadata"). Then, when a commitlog is not needed, it > "recycles" it by renaming it to a higher number. Commitlog entries have an > increasing id, and if you come to an out-of-sequence (earlier) id, then you > must have reached the end of the commitlog and are reading from the > "recycled" part. -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
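The reservation pattern being referenced can be sketched as follows; this is a simplified, hypothetical stand-in for SlabAllocator.Region.allocate, not the actual Cassandra code. Each writer claims an offset range with a compare-and-set on an AtomicInteger, so exactly one thread wins each range and no lock is needed around the subsequent write.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class Region {
    private final byte[] data;
    private final AtomicInteger nextOffset = new AtomicInteger(0);

    Region(int capacity) {
        data = new byte[capacity];
    }

    /** Reserve 'size' bytes; returns the start offset, or -1 if the region is full. */
    int allocate(int size) {
        while (true) {
            int old = nextOffset.get();
            if (old + size > data.length)
                return -1; // caller should move on to a fresh region
            // CAS ensures exactly one thread claims each offset range, so the
            // winner can write into data[old .. old+size) without locking.
            if (nextOffset.compareAndSet(old, old + size))
                return old;
        }
    }

    public static void main(String[] args) {
        Region r = new Region(8);
        System.out.println(r.allocate(5)); // 0
        System.out.println(r.allocate(3)); // 5
        System.out.println(r.allocate(1)); // -1: region exhausted
    }
}
```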
[jira] [Created] (CASSANDRA-3152) Logic of AbstractNetworkTopologySnitch.compareEndpoints is wrong
Logic of AbstractNetworkTopologySnitch.compareEndpoints is wrong Key: CASSANDRA-3152 URL: https://issues.apache.org/jira/browse/CASSANDRA-3152 Project: Cassandra Issue Type: Bug Components: Core Affects Versions: 0.8.4 Environment: JVM Reporter: Vijay Assignee: Vijay Fix For: 0.8.5 Current logic in ANTS.cE is to compare the rack and then compare the DC's, the problem is when we have the same rack name but the racks are in a diffrent DC's this logic breaks... Example: "us-east,1a", InetAddress.getByName("127.0.0.1") "us-east,1b", InetAddress.getByName("127.0.0.2") "us-east,1c", InetAddress.getByName("127.0.0.3") "us-west,1a", InetAddress.getByName("127.0.0.4") "us-west,1b", InetAddress.getByName("127.0.0.5") "us-west,1c", InetAddress.getByName("127.0.0.6") Expected: /127.0.0.1,/127.0.0.3,/127.0.0.2,/127.0.0.4,/127.0.0.5,/127.0.0.6 Current: /127.0.0.1,/127.0.0.4,/127.0.0.3,/127.0.0.2,/127.0.0.5,/127.0.0.6 -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (CASSANDRA-2247) Cleanup unused imports and generics
[ https://issues.apache.org/jira/browse/CASSANDRA-2247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tupshin Harper updated CASSANDRA-2247: -- Attachment: CASSANDRA-2247-part2-rebasedv2.diff Fixed spacing and improperly reordered imports > Cleanup unused imports and generics > --- > > Key: CASSANDRA-2247 > URL: https://issues.apache.org/jira/browse/CASSANDRA-2247 > Project: Cassandra > Issue Type: Improvement > Components: Core >Reporter: Norman Maurer > Attachments: CASSANDRA-2247-part1.diff, > CASSANDRA-2247-part2-rebased.diff, CASSANDRA-2247-part2-rebasedv2.diff, > CASSANDRA-2247-part2.diff > > > In current cassandra trunk are many classes which import packages which are > never used. The same is true for Loggers which are often instanced and then > not used. Beside this I see many warnings related to generic usage. Would be > nice to clean this up a bit. -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (CASSANDRA-3119) Cli syntax for creating keyspace is inconsistent in 1.0
[ https://issues.apache.org/jira/browse/CASSANDRA-3119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13099568#comment-13099568 ] Hudson commented on CASSANDRA-3119: --- Integrated in Cassandra #1085 (See [https://builds.apache.org/job/Cassandra/1085/]) Fix inconsistency of the CLI syntax when {} should be used instead of [{}] patch by Jake Luciani; reviewed by Pavel Yaskevich for CASSANDRA-3119 xedin : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1166367 Files : * /cassandra/trunk/CHANGES.txt * /cassandra/trunk/src/java/org/apache/cassandra/cli/CliClient.java * /cassandra/trunk/test/unit/org/apache/cassandra/cli/CliTest.java > Cli syntax for creating keyspace is inconsistent in 1.0 > --- > > Key: CASSANDRA-3119 > URL: https://issues.apache.org/jira/browse/CASSANDRA-3119 > Project: Cassandra > Issue Type: Bug >Affects Versions: 1.0 >Reporter: Sylvain Lebresne >Assignee: T Jake Luciani >Priority: Minor > Labels: cli > Fix For: 1.0 > > Attachments: v1-0001-CASSANDRA-3119-warn-on-old-cli-syntax.txt > > > In 0.8, to create a keyspace you could do: > {noformat} > create keyspace test with placement_strategy = > 'org.apache.cassandra.locator.SimpleStrategy' and strategy_options = > [{replication_factor:3}] > {noformat} > In current trunk, if you try that, you get back "null". Turns out this is > because the syntax for strategy_options has changed and you should not use > the brackets, i.e: > {noformat} > strategy_options = {replication_factor:3} > {noformat} > (and note that conversely, this syntax doesn't work in 0.8). > I'm not sure what motivated that change, but this is very user-unfriendly. The > help does correctly mention the new syntax, but it is the kind of change > that takes you 5 minutes to notice. It will also break people's scripts for no > good reason that I can see. 
> We should either: > # revert to the old syntax > # support both the new and old syntax > # at least print a meaningful error message when the old syntax is used > Imho, the last solution is by far the worst solution. -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (CASSANDRA-2961) Expire dead gossip states based on time
[ https://issues.apache.org/jira/browse/CASSANDRA-2961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13099466#comment-13099466 ] Brandon Williams commented on CASSANDRA-2961: - A few things: * I don't think it's worth pulling in the hamcrest dependency for 'is' instead of writing assertEquals(1L, expireTime) * VersionedValue.getExpireTime feels like the wrong place to me for that logic, but I could be wrong * rather than having multiple calls to addExpireTimeIfFound, let's put this in excise() * some DEBUG logging to know when an endpoint is going to be expired (and whether a timestamp was supplied or not) could be helpful in the future > Expire dead gossip states based on time > --- > > Key: CASSANDRA-2961 > URL: https://issues.apache.org/jira/browse/CASSANDRA-2961 > Project: Cassandra > Issue Type: Improvement >Affects Versions: 1.0 >Reporter: Brandon Williams > Labels: patch > Fix For: 1.0 > > Attachments: trunk-2961-v2.patch, trunk-2961.patch > > > Currently dead states are held until aVeryLongTime, 3 days. The problem is > that if a node reboots within this period, it begins a new 3 days and will > repopulate the ring with the dead state. While mostly harmless, perpetuating > the state forever at least wastes a small amount of bandwidth. Instead, > we can expire states based on a ttl, which will require that the cluster be > loosely time-synced, within the quarantine period of 60s. -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
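The time-based expiry under discussion can be sketched generically. This is an illustrative data structure, not the actual Gossiper/VersionedValue API: each dead state carries an absolute expiry timestamp, and a periodic sweep drops any entry whose time has passed, which is why the cluster only needs loose clock sync.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class DeadStateRegistry {
    // endpoint -> absolute expiry time in millis (illustrative, not the Gossiper API)
    private final Map<String, Long> expireTimes = new ConcurrentHashMap<>();

    void markDead(String endpoint, long expireAtMillis) {
        expireTimes.put(endpoint, expireAtMillis);
    }

    /** Drop any dead state whose expiry has passed; tolerates loosely synced clocks. */
    void evictExpired(long nowMillis) {
        expireTimes.entrySet().removeIf(e -> e.getValue() <= nowMillis);
    }

    boolean isTracked(String endpoint) {
        return expireTimes.containsKey(endpoint);
    }

    public static void main(String[] args) {
        DeadStateRegistry reg = new DeadStateRegistry();
        reg.markDead("10.0.0.1", 1_000L);
        reg.markDead("10.0.0.2", 5_000L);
        reg.evictExpired(2_000L); // sweep at t=2000ms
        System.out.println(reg.isTracked("10.0.0.1")); // false: expired
        System.out.println(reg.isTracked("10.0.0.2")); // true: still within its TTL
    }
}
```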
[jira] [Commented] (CASSANDRA-2247) Cleanup unused imports and generics
[ https://issues.apache.org/jira/browse/CASSANDRA-2247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13099312#comment-13099312 ] Jonathan Ellis commented on CASSANDRA-2247: --- There are a few lines that add unnecessary whitespace to a blank line, and some places where import order is rearranged incorrectly (see wiki.apache.org/cassandra/CodeStyle). I wouldn't normally cavil here but it *is* a cleanup patch. :) > Cleanup unused imports and generics > --- > > Key: CASSANDRA-2247 > URL: https://issues.apache.org/jira/browse/CASSANDRA-2247 > Project: Cassandra > Issue Type: Improvement > Components: Core >Reporter: Norman Maurer > Attachments: CASSANDRA-2247-part1.diff, > CASSANDRA-2247-part2-rebased.diff, CASSANDRA-2247-part2.diff > > > In current cassandra trunk there are many classes which import packages that are > never used. The same is true for Loggers which are often instantiated and then > not used. Besides this I see many warnings related to generics usage. It would be > nice to clean this up a bit. -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (CASSANDRA-2449) Deprecate or modify per-cf memtable sizes in favor of the global threshold
[ https://issues.apache.org/jira/browse/CASSANDRA-2449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Ellis updated CASSANDRA-2449: -- Attachment: 2449.txt Removing optional fields is backwards-compatible in both Thrift and Avro so I went ahead and removed them in the new patch. > Deprecate or modify per-cf memtable sizes in favor of the global threshold > -- > > Key: CASSANDRA-2449 > URL: https://issues.apache.org/jira/browse/CASSANDRA-2449 > Project: Cassandra > Issue Type: Improvement > Components: Core >Reporter: Stu Hood >Assignee: Jonathan Ellis > Fix For: 1.0 > > Attachments: 2449.txt > > > The new memtable_total_space_in_mb setting is an excellent way to cap memory > usage for memtables, and one could argue that it should replace the per-cf > memtable sizes entirely. On the other hand, people may still want a knob to > tune to flush certain cfs less frequently. > I think a best of both worlds approach might be to deprecate the > memtable_(throughput|operations) settings, and replace them with a preference > value, which controls the relative memory usage of one CF versus another (all > CFs at 1 would mean equal preference). For backwards compatibility, we could > continue to read from the _throughput value and treat it as the preference > value, while logging a warning. -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (CASSANDRA-2449) Deprecate or modify per-cf memtable sizes in favor of the global threshold
[ https://issues.apache.org/jira/browse/CASSANDRA-2449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Ellis updated CASSANDRA-2449: -- Attachment: (was: 2449.txt) > Deprecate or modify per-cf memtable sizes in favor of the global threshold > -- > > Key: CASSANDRA-2449 > URL: https://issues.apache.org/jira/browse/CASSANDRA-2449 > Project: Cassandra > Issue Type: Improvement > Components: Core >Reporter: Stu Hood >Assignee: Jonathan Ellis > Fix For: 1.0 > > > The new memtable_total_space_in_mb setting is an excellent way to cap memory > usage for memtables, and one could argue that it should replace the per-cf > memtable sizes entirely. On the other hand, people may still want a knob to > tune to flush certain cfs less frequently. > I think a best of both worlds approach might be to deprecate the > memtable_(throughput|operations) settings, and replace them with a preference > value, which controls the relative memory usage of one CF versus another (all > CFs at 1 would mean equal preference). For backwards compatibility, we could > continue to read from the _throughput value and treat it as the preference > value, while logging a warning. -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Issue Comment Edited] (CASSANDRA-2904) get_range_slices with no columns could be made faster by scanning the index file
[ https://issues.apache.org/jira/browse/CASSANDRA-2904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13099296#comment-13099296 ] Tupshin Harper edited comment on CASSANDRA-2904 at 9/7/11 8:46 PM: --- Added a patch that adds an SSTableIndexScanner and related changes per Jonathan's suggestion was (Author: tupshin): Adds SSTableIndexScanner per Jonathan's suggestion > get_range_slices with no columns could be made faster by scanning the index > file > > > Key: CASSANDRA-2904 > URL: https://issues.apache.org/jira/browse/CASSANDRA-2904 > Project: Cassandra > Issue Type: Improvement > Components: Core >Reporter: Jean-Francois Im >Priority: Minor > Attachments: CASSANDRA-2904-v1.diff > > > When scanning a column family using get_range_slices() and a predicate that > contains no columns, the scan operates on the actual data, not the index file. > Our use case for this is that we have a column family that has relatively > wide rows (varying from 10kb to over 100kb of data per row) and we need to > iterate through all the keys to figure out which rows we are interested in; > obviously, going through the index file rather than the data is faster in this > case (on the order of minutes versus hours). -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (CASSANDRA-2904) get_range_slices with no columns could be made faster by scanning the index file
[ https://issues.apache.org/jira/browse/CASSANDRA-2904?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tupshin Harper updated CASSANDRA-2904: -- Attachment: CASSANDRA-2904-v1.diff > get_range_slices with no columns could be made faster by scanning the index > file > > > Key: CASSANDRA-2904 > URL: https://issues.apache.org/jira/browse/CASSANDRA-2904 > Project: Cassandra > Issue Type: Improvement > Components: Core >Reporter: Jean-Francois Im >Priority: Minor > Attachments: CASSANDRA-2904-v1.diff > > > When scanning a column family using get_range_slices() and a predicate that > contains no columns, the scan operates on the actual data, not the index file. > Our use case for this is that we have a column family that has relatively > wide rows (varying from 10kb to over 100kb of data per row) and we need to > iterate through all the keys to figure out which rows we are interested in; > obviously, going through the index file rather than the data is faster in this > case (on the order of minutes versus hours). -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (CASSANDRA-2449) Deprecate or modify per-cf memtable sizes in favor of the global threshold
[ https://issues.apache.org/jira/browse/CASSANDRA-2449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Ellis updated CASSANDRA-2449: -- Attachment: 2449.txt the memtable threshold fields were already optional in both Thrift and Avro, so I've just taken them out of our internal CFMetadata and anything that still touched that. > Deprecate or modify per-cf memtable sizes in favor of the global threshold > -- > > Key: CASSANDRA-2449 > URL: https://issues.apache.org/jira/browse/CASSANDRA-2449 > Project: Cassandra > Issue Type: Improvement > Components: Core >Reporter: Stu Hood >Assignee: Jonathan Ellis > Fix For: 1.0 > > Attachments: 2449.txt > > > The new memtable_total_space_in_mb setting is an excellent way to cap memory > usage for memtables, and one could argue that it should replace the per-cf > memtable sizes entirely. On the other hand, people may still want a knob to > tune to flush certain cfs less frequently. > I think a best of both worlds approach might be to deprecate the > memtable_(throughput|operations) settings, and replace them with a preference > value, which controls the relative memory usage of one CF versus another (all > CFs at 1 would mean equal preference). For backwards compatibility, we could > continue to read from the _throughput value and treat it as the preference > value, while logging a warning. -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (CASSANDRA-3150) ColumnFormatRecordReader loops forever
[ https://issues.apache.org/jira/browse/CASSANDRA-3150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13099281#comment-13099281 ] Jonathan Ellis commented on CASSANDRA-3150: --- bq. so that doesn't explain this bug? I'm afraid not. bq. Never worked. Then I guess your tokens still aren't balanced (and nodetool is smoking crack). It's virtually impossible to pick balanced tokens until we do CASSANDRA-2917. > ColumnFormatRecordReader loops forever > -- > > Key: CASSANDRA-3150 > URL: https://issues.apache.org/jira/browse/CASSANDRA-3150 > Project: Cassandra > Issue Type: Bug > Components: Hadoop >Affects Versions: 0.8.4 >Reporter: Mck SembWever >Assignee: Mck SembWever >Priority: Critical > Attachments: CASSANDRA-3150.patch > > > From http://thread.gmane.org/gmane.comp.db.cassandra.user/20039 > {quote} > bq. Cassandra-0.8.4 w/ ByteOrderedPartitioner > bq. CFIF's inputSplitSize=196608 > bq. 3 map tasks (from 4013) is still running after read 25 million rows. > bq. Can this be a bug in StorageService.getSplits(..) ? > getSplits looks pretty foolproof to me but I guess we'd need to add > more debug logging to rule out a bug there for sure. > I guess the main alternative would be a bug in the recordreader paging. > {quote} -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (CASSANDRA-3100) Secondary index still does minor compacting after deleting index
[ https://issues.apache.org/jira/browse/CASSANDRA-3100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13099282#comment-13099282 ] Jeremy Hanna commented on CASSANDRA-3100: - I'm not 100% sure - we've since moved to 0.8.4 and I haven't seen it happen again. > Secondary index still does minor compacting after deleting index > > > Key: CASSANDRA-3100 > URL: https://issues.apache.org/jira/browse/CASSANDRA-3100 > Project: Cassandra > Issue Type: Bug >Affects Versions: 0.7.8 >Reporter: Jeremy Hanna > Fix For: 0.7.10 > > > We deleted all of our secondary indexes. A couple of days later I was > watching compactionstats on one of the nodes and it was in the process of > minor compacting one of the deleted secondary indexes. I double checked the > keyspace definitions on the CLI and there were no secondary indexes defined. -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (CASSANDRA-2247) Cleanup unused imports and generics
[ https://issues.apache.org/jira/browse/CASSANDRA-2247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tupshin Harper updated CASSANDRA-2247: -- Attachment: CASSANDRA-2247-part2-rebased.diff Rebased the previous part2 patch against current trunk > Cleanup unused imports and generics > --- > > Key: CASSANDRA-2247 > URL: https://issues.apache.org/jira/browse/CASSANDRA-2247 > Project: Cassandra > Issue Type: Improvement > Components: Core >Reporter: Norman Maurer > Attachments: CASSANDRA-2247-part1.diff, > CASSANDRA-2247-part2-rebased.diff, CASSANDRA-2247-part2.diff > > > In current cassandra trunk are many classes which import packages which are > never used. The same is true for Loggers which are often instanced and then > not used. Beside this I see many warnings related to generic usage. Would be > nice to clean this up a bit. -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (CASSANDRA-2434) node bootstrapping can violate consistency
[ https://issues.apache.org/jira/browse/CASSANDRA-2434?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] paul cannon updated CASSANDRA-2434: --- Attachment: 2434-testery.patch.txt Patch 2434-testery.patch.txt adds a bit to unit tests to exercise o.a.c.dht.BootStrapper.getRangesWithStrictSource(). > node bootstrapping can violate consistency > -- > > Key: CASSANDRA-2434 > URL: https://issues.apache.org/jira/browse/CASSANDRA-2434 > Project: Cassandra > Issue Type: Bug >Reporter: Peter Schuller >Assignee: paul cannon > Fix For: 1.1 > > Attachments: 2434-2.patch.txt, 2434-testery.patch.txt > > > My reading (a while ago) of the code indicates that there is no logic > involved during bootstrapping that avoids consistency level violations. If I > recall correctly it just grabs neighbors that are currently up. > There are at least two issues I have with this behavior: > * If I have a cluster where I have applications relying on QUORUM with RF=3, > and bootstrapping completes based on only one node, I have just violated the > supposedly guaranteed consistency semantics of the cluster. > * Nodes can flap up and down at any time, so even if a human takes care to > look at which nodes are up and thinks about it carefully before > bootstrapping, there's no guarantee. > A complication is that not only does it depend on the use-case whether this is an > issue (if all you ever do is CL.ONE, it's fine); even in a cluster > which is otherwise used for QUORUM operations you may wish to accept > less-than-quorum nodes during bootstrap in various emergency situations. > A potential easy fix is to have bootstrap take an argument which is the > number of hosts to bootstrap from, or to assume QUORUM if none is given. > (A related concern is bootstrapping across data centers. You may *want* to > bootstrap to a local node and then do a repair to avoid sending loads of data > across DCs while still achieving consistency. 
Or even if you don't care > about the consistency issues, I don't think there is currently a way to > bootstrap from local nodes only.) > Thoughts? -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (CASSANDRA-3150) ColumnFormatRecordReader loops forever
[ https://issues.apache.org/jira/browse/CASSANDRA-3150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13099269#comment-13099269 ] Mck SembWever commented on CASSANDRA-3150: -- bq. BOP is sorting the shorter token correctly since 3 < 7. Sorry, so that doesn't explain this bug? bq. Load won't decrease until you run cleanup. Never worked. repair and cleanup are run every night; the move was done one week ago and more than a couple of weeks ago. > ColumnFormatRecordReader loops forever > -- > > Key: CASSANDRA-3150 > URL: https://issues.apache.org/jira/browse/CASSANDRA-3150 > Project: Cassandra > Issue Type: Bug > Components: Hadoop >Affects Versions: 0.8.4 >Reporter: Mck SembWever >Assignee: Mck SembWever >Priority: Critical > Attachments: CASSANDRA-3150.patch > > > From http://thread.gmane.org/gmane.comp.db.cassandra.user/20039 > {quote} > bq. Cassandra-0.8.4 w/ ByteOrderedPartitioner > bq. CFIF's inputSplitSize=196608 > bq. 3 map tasks (from 4013) are still running after reading 25 million rows. > bq. Can this be a bug in StorageService.getSplits(..) ? > getSplits looks pretty foolproof to me but I guess we'd need to add > more debug logging to rule out a bug there for sure. > I guess the main alternative would be a bug in the recordreader paging. > {quote} -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (CASSANDRA-2434) node bootstrapping can violate consistency
[ https://issues.apache.org/jira/browse/CASSANDRA-2434?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] paul cannon updated CASSANDRA-2434: --- Attachment: 2434-2.patch.txt updated patch fixes the docstring for getRangesWithStrictSource(). > node bootstrapping can violate consistency > -- > > Key: CASSANDRA-2434 > URL: https://issues.apache.org/jira/browse/CASSANDRA-2434 > Project: Cassandra > Issue Type: Bug >Reporter: Peter Schuller >Assignee: paul cannon > Fix For: 1.1 > > Attachments: 2434-2.patch.txt > > > My reading (a while ago) of the code indicates that there is no logic > involved during bootstrapping that avoids consistency level violations. If I > recall correctly it just grabs neighbors that are currently up. > There are at least two issues I have with this behavior: > * If I have a cluster where I have applications relying on QUORUM with RF=3, > and bootstrapping complete based on only one node, I have just violated the > supposedly guaranteed consistency semantics of the cluster. > * Nodes can flap up and down at any time, so even if a human takes care to > look at which nodes are up and things about it carefully before > bootstrapping, there's no guarantee. > A complication is that not only does it depend on use-case where this is an > issue (if all you ever do you do at CL.ONE, it's fine); even in a cluster > which is otherwise used for QUORUM operations you may wish to accept > less-than-quorum nodes during bootstrap in various emergency situations. > A potential easy fix is to have bootstrap take an argument which is the > number of hosts to bootstrap from, or to assume QUORUM if none is given. > (A related concern is bootstrapping across data centers. You may *want* to > bootstrap to a local node and then do a repair to avoid sending loads of data > across DC:s while still achieving consistency. 
Or even if you don't care > about the consistency issues, I don't think there is currently a way to > bootstrap from local nodes only.) > Thoughts? -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (CASSANDRA-2434) node bootstrapping can violate consistency
[ https://issues.apache.org/jira/browse/CASSANDRA-2434?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] paul cannon updated CASSANDRA-2434: --- Attachment: (was: 2434.patch.txt) > node bootstrapping can violate consistency > -- > > Key: CASSANDRA-2434 > URL: https://issues.apache.org/jira/browse/CASSANDRA-2434 > Project: Cassandra > Issue Type: Bug >Reporter: Peter Schuller >Assignee: paul cannon > Fix For: 1.1 > > Attachments: 2434-2.patch.txt > > > My reading (a while ago) of the code indicates that there is no logic > involved during bootstrapping that avoids consistency level violations. If I > recall correctly it just grabs neighbors that are currently up. > There are at least two issues I have with this behavior: > * If I have a cluster where I have applications relying on QUORUM with RF=3, > and bootstrapping complete based on only one node, I have just violated the > supposedly guaranteed consistency semantics of the cluster. > * Nodes can flap up and down at any time, so even if a human takes care to > look at which nodes are up and things about it carefully before > bootstrapping, there's no guarantee. > A complication is that not only does it depend on use-case where this is an > issue (if all you ever do you do at CL.ONE, it's fine); even in a cluster > which is otherwise used for QUORUM operations you may wish to accept > less-than-quorum nodes during bootstrap in various emergency situations. > A potential easy fix is to have bootstrap take an argument which is the > number of hosts to bootstrap from, or to assume QUORUM if none is given. > (A related concern is bootstrapping across data centers. You may *want* to > bootstrap to a local node and then do a repair to avoid sending loads of data > across DC:s while still achieving consistency. Or even if you don't care > about the consistency issues, I don't think there is currently a way to > bootstrap from local nodes only.) > Thoughts? 
-- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
svn commit: r1166367 - in /cassandra/trunk: CHANGES.txt src/java/org/apache/cassandra/cli/CliClient.java test/unit/org/apache/cassandra/cli/CliTest.java
Author: xedin
Date: Wed Sep 7 20:07:38 2011
New Revision: 1166367

URL: http://svn.apache.org/viewvc?rev=1166367&view=rev
Log: Fix inconsistency of the CLI syntax when {} should be used instead of [{}]
patch by Jake Luciani; reviewed by Pavel Yaskevich for CASSANDRA-3119

Modified:
    cassandra/trunk/CHANGES.txt
    cassandra/trunk/src/java/org/apache/cassandra/cli/CliClient.java
    cassandra/trunk/test/unit/org/apache/cassandra/cli/CliTest.java

Modified: cassandra/trunk/CHANGES.txt
URL: http://svn.apache.org/viewvc/cassandra/trunk/CHANGES.txt?rev=1166367&r1=1166366&r2=1166367&view=diff
==
--- cassandra/trunk/CHANGES.txt (original)
+++ cassandra/trunk/CHANGES.txt Wed Sep 7 20:07:38 2011
@@ -64,7 +64,8 @@
  * fix of the CQL count() behavior (CASSANDRA-3068)
  * use TreeMap backed column families for the SSTable simple writers
    (CASSANDRA-3148)
-
+ * fix inconsistency of the CLI syntax when {} should be used instead of [{}]
+   (CASSANDRA-3119)

0.8.5
 * fix NPE when encryption_options is unspecified (CASSANDRA-3007)

Modified: cassandra/trunk/src/java/org/apache/cassandra/cli/CliClient.java
URL: http://svn.apache.org/viewvc/cassandra/trunk/src/java/org/apache/cassandra/cli/CliClient.java?rev=1166367&r1=1166366&r2=1166367&view=diff
==
--- cassandra/trunk/src/java/org/apache/cassandra/cli/CliClient.java (original)
+++ cassandra/trunk/src/java/org/apache/cassandra/cli/CliClient.java Wed Sep 7 20:07:38 2011
@@ -25,6 +25,8 @@
 import java.nio.ByteBuffer;
 import java.nio.charset.CharacterCodingException;
 import java.util.*;
+import antlr.Token;
+
 import com.google.common.base.Predicate;
 import com.google.common.collect.Collections2;
 import org.apache.commons.lang.StringUtils;
@@ -2438,9 +2440,20 @@ public class CliClient
      */
     private Map getStrategyOptionsFromTree(Tree options)
     {
+        // Check for old [{}] syntax
+        if (options.getText().equalsIgnoreCase("ARRAY"))
+        {
+            System.err.println("WARNING: [{}] strategy_options syntax is deprecated, please use {}");
+
+            if (options.getChildCount() == 0)
+                return Collections.EMPTY_MAP;
+
+            return getStrategyOptionsFromTree(options.getChild(0));
+        }
+
         // this map will be returned
         Map strategyOptions = new HashMap();
-
+
         // each child node is ^(PAIR $key $value)
         for (int j = 0; j < options.getChildCount(); j++)
         {

Modified: cassandra/trunk/test/unit/org/apache/cassandra/cli/CliTest.java
URL: http://svn.apache.org/viewvc/cassandra/trunk/test/unit/org/apache/cassandra/cli/CliTest.java?rev=1166367&r1=1166366&r2=1166367&view=diff
==
--- cassandra/trunk/test/unit/org/apache/cassandra/cli/CliTest.java (original)
+++ cassandra/trunk/test/unit/org/apache/cassandra/cli/CliTest.java Wed Sep 7 20:07:38 2011
@@ -129,6 +129,7 @@ public class CliTest extends CleanupHelp
         "drop index on CF3.'big world';",
         "update keyspace TestKeySpace with placement_strategy='org.apache.cassandra.locator.LocalStrategy' and durable_writes = false;",
         "update keyspace TestKeySpace with strategy_options={DC1:3, DC2:4, DC5:1};",
+        "update keyspace TestKeySpace with strategy_options=[{DC1:3, DC2:4, DC5:1}];",
         "assume 123 comparator as utf8;",
         "assume 123 sub_comparator as integer;",
         "assume 123 validator as lexicaluuid;",
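The deprecation check in the CliClient hunk above can be exercised in isolation with a minimal stand-in for the parse tree; `Node` below is a hypothetical simplification of the ANTLR `Tree` type the real code receives, so this is a behavioral sketch rather than the committed code.

```java
import java.util.Arrays;
import java.util.List;

// Behavioral sketch of the committed check: an old-style [{...}] argument
// parses to an ARRAY node wrapping the real option map; the check warns and
// unwraps it, while new-style {...} input passes through untouched.
public class StrategyOptionsSketch {
    static class Node {
        final String text;
        final List<Node> children;
        Node(String text, Node... children) {
            this.text = text;
            this.children = Arrays.asList(children);
        }
    }

    static Node unwrapDeprecatedArray(Node options) {
        if (options.text.equalsIgnoreCase("ARRAY")) {
            System.err.println("WARNING: [{}] strategy_options syntax is deprecated, please use {}");
            if (options.children.isEmpty())
                return new Node("MAP"); // empty [] behaves like an empty option map
            return options.children.get(0); // unwrap to the inner {} map node
        }
        return options; // already the new {} syntax
    }

    public static void main(String[] args) {
        Node map = new Node("MAP");
        Node old = new Node("ARRAY", map);
        System.out.println(unwrapDeprecatedArray(old).text); // MAP
        System.out.println(unwrapDeprecatedArray(map).text); // MAP
    }
}
```

Both syntaxes end up at the same map node, which is what lets the CLI accept old scripts while steering users toward the new form.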
[jira] [Updated] (CASSANDRA-2961) Expire dead gossip states based on time
[ https://issues.apache.org/jira/browse/CASSANDRA-2961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jérémy Sevellec updated CASSANDRA-2961: --- Attachment: trunk-2961-v2.patch > Expire dead gossip states based on time > --- > > Key: CASSANDRA-2961 > URL: https://issues.apache.org/jira/browse/CASSANDRA-2961 > Project: Cassandra > Issue Type: Improvement >Affects Versions: 1.0 >Reporter: Brandon Williams > Labels: patch > Fix For: 1.0 > > Attachments: trunk-2961-v2.patch, trunk-2961.patch > > > Currently dead states are held until aVeryLongTime, 3 days. The problem is > that if a node reboots within this period, it begins a new 3 days and will > repopulate the ring with the dead state. While mostly harmless, perpetuating > the state forever is at least wasting a small amount of bandwidth. Instead, > we can expire states based on a ttl, which will require that the cluster be > loosely time synced; within the quarantine period of 60s. -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
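The TTL-based expiry described in the ticket reduces to stamping each dead state with an expiry time and dropping it once that time passes, so a reboot no longer restarts the three-day clock. A minimal sketch with hypothetical names (the real Gossiper tracks far more per endpoint):

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of time-based expiry of dead gossip states. Names are
// hypothetical; this only illustrates the expire-by-timestamp idea.
public class DeadStateExpiry {
    // endpoint -> absolute time (millis) at which its dead state expires
    private final Map<String, Long> expiryTimes = new HashMap<>();

    void markDead(String endpoint, long nowMillis, long ttlMillis) {
        expiryTimes.put(endpoint, nowMillis + ttlMillis);
    }

    // Drop states whose TTL has elapsed. Because the deadline is absolute,
    // a node restarting within the window does not extend it.
    void expire(long nowMillis) {
        expiryTimes.values().removeIf(expireAt -> expireAt <= nowMillis);
    }

    int deadStateCount() {
        return expiryTimes.size();
    }
}
```

The loose time-sync requirement in the ticket follows directly: all nodes must agree on "now" to within the quarantine period for the absolute deadlines to be comparable.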
[jira] [Issue Comment Edited] (CASSANDRA-3150) ColumnFormatRecordReader loops forever
[ https://issues.apache.org/jira/browse/CASSANDRA-3150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13099263#comment-13099263 ] Jonathan Ellis edited comment on CASSANDRA-3150 at 9/7/11 7:57 PM: --- If Token's Comparable implementation is broken, anything is possible. But I don't think it is. In your case, for instance, BOP is sorting the shorter token correctly since 3 < 7. Load won't decrease until you run cleanup. was (Author: jbellis): If Token's Comparable implementation is broken, anything is possible. But I don't think it is. In your case, for instance, OPP is sorting the shorter token correctly since 3 < 7. Load won't decrease until you run cleanup. > ColumnFormatRecordReader loops forever > -- > > Key: CASSANDRA-3150 > URL: https://issues.apache.org/jira/browse/CASSANDRA-3150 > Project: Cassandra > Issue Type: Bug > Components: Hadoop >Affects Versions: 0.8.4 >Reporter: Mck SembWever >Assignee: Mck SembWever >Priority: Critical > Attachments: CASSANDRA-3150.patch > > > From http://thread.gmane.org/gmane.comp.db.cassandra.user/20039 > {quote} > bq. Cassandra-0.8.4 w/ ByteOrderedPartitioner > bq. CFIF's inputSplitSize=196608 > bq. 3 map tasks (from 4013) is still running after read 25 million rows. > bq. Can this be a bug in StorageService.getSplits(..) ? > getSplits looks pretty foolproof to me but I guess we'd need to add > more debug logging to rule out a bug there for sure. > I guess the main alternative would be a bug in the recordreader paging. > {quote} -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (CASSANDRA-3150) ColumnFormatRecordReader loops forever
[ https://issues.apache.org/jira/browse/CASSANDRA-3150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13099263#comment-13099263 ] Jonathan Ellis commented on CASSANDRA-3150: --- If Token's Comparable implementation is broken, anything is possible. But I don't think it is. In your case, for instance, OPP is sorting the shorter token correctly since 3 < 7. Load won't decrease until you run cleanup. > ColumnFormatRecordReader loops forever > -- > > Key: CASSANDRA-3150 > URL: https://issues.apache.org/jira/browse/CASSANDRA-3150 > Project: Cassandra > Issue Type: Bug > Components: Hadoop >Affects Versions: 0.8.4 >Reporter: Mck SembWever >Assignee: Mck SembWever >Priority: Critical > Attachments: CASSANDRA-3150.patch > > > From http://thread.gmane.org/gmane.comp.db.cassandra.user/20039 > {quote} > bq. Cassandra-0.8.4 w/ ByteOrderedPartitioner > bq. CFIF's inputSplitSize=196608 > bq. 3 map tasks (from 4013) is still running after read 25 million rows. > bq. Can this be a bug in StorageService.getSplits(..) ? > getSplits looks pretty foolproof to me but I guess we'd need to add > more debug logging to rule out a bug there for sure. > I guess the main alternative would be a bug in the recordreader paging. > {quote} -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (CASSANDRA-3149) Update CQL type names to match expected (SQL) behavior
[ https://issues.apache.org/jira/browse/CASSANDRA-3149?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Ellis updated CASSANDRA-3149: -- Attachment: 3149.txt updated patch to add boolean to textile doc, which was also missing. Also updates NEWS. > Update CQL type names to match expected (SQL) behavor > - > > Key: CASSANDRA-3149 > URL: https://issues.apache.org/jira/browse/CASSANDRA-3149 > Project: Cassandra > Issue Type: Improvement >Affects Versions: 0.8.0 >Reporter: Jonathan Ellis >Assignee: Jonathan Ellis >Priority: Minor > Labels: cql > Fix For: 1.0 > > Attachments: 3149.txt > > > As discussed in CASSANDRA-3031, we should make the following changes: > - rename bytea to blob > - rename date to timestamp > - remove int, pending addition of CASSANDRA-3031 (bigint and varint will be > unchanged) -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (CASSANDRA-3149) Update CQL type names to match expected (SQL) behavior
[ https://issues.apache.org/jira/browse/CASSANDRA-3149?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Ellis updated CASSANDRA-3149: -- Attachment: (was: 3149.txt) > Update CQL type names to match expected (SQL) behavor > - > > Key: CASSANDRA-3149 > URL: https://issues.apache.org/jira/browse/CASSANDRA-3149 > Project: Cassandra > Issue Type: Improvement >Affects Versions: 0.8.0 >Reporter: Jonathan Ellis >Assignee: Jonathan Ellis >Priority: Minor > Labels: cql > Fix For: 1.0 > > > As discussed in CASSANDRA-3031, we should make the following changes: > - rename bytea to blob > - rename date to timestamp > - remove int, pending addition of CASSANDRA-3031 (bigint and varint will be > unchanged) -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Issue Comment Edited] (CASSANDRA-3150) ColumnFormatRecordReader loops forever
[ https://issues.apache.org/jira/browse/CASSANDRA-3150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13099254#comment-13099254 ] Mck SembWever edited comment on CASSANDRA-3150 at 9/7/11 7:43 PM: -- What about the case where tokens of different length exist. Could get_range_slices be busted there? I don't know if this is actually possible but from {noformat} Address Status State LoadOwnsToken Token(bytes[76118303760208547436305468318170713656]) 152.90.241.22 Up Normal 270.46 GB 33.33% Token(bytes[30303030303031333131313739353337303038d4e7f72db2ed11e09d7c68b59973a5d8]) 152.90.241.24 Up Normal 247.89 GB 33.33% Token(bytes[303030303030313331323631393735313231381778518cc00711e0acb968b59973a5d8]) 152.90.241.23 Up Normal 1.1 TB 33.33% Token(bytes[76118303760208547436305468318170713656]) {noformat} you see the real tokens are very long compared to the initial_tokens the cluster was configured with. (The two long tokens have been moved off their initial_tokens, and to note the load on .23 never decreased to ~300GB as it should have...). was (Author: michaelsembwever): What about the case where tokens of different length exist. I don't know if this is actually possible but from {noformat} Address Status State LoadOwnsToken Token(bytes[76118303760208547436305468318170713656]) 152.90.241.22 Up Normal 270.46 GB 33.33% Token(bytes[30303030303031333131313739353337303038d4e7f72db2ed11e09d7c68b59973a5d8]) 152.90.241.24 Up Normal 247.89 GB 33.33% Token(bytes[303030303030313331323631393735313231381778518cc00711e0acb968b59973a5d8]) 152.90.241.23 Up Normal 1.1 TB 33.33% Token(bytes[76118303760208547436305468318170713656]) {noformat} you see the real tokens are very long compared to the initial_tokens the cluster was configured with. (The two long tokens have been moved off their initial_tokens, and to note the load on .23 never decreased to ~300GB as it should have...). 
> ColumnFormatRecordReader loops forever > -- > > Key: CASSANDRA-3150 > URL: https://issues.apache.org/jira/browse/CASSANDRA-3150 > Project: Cassandra > Issue Type: Bug > Components: Hadoop >Affects Versions: 0.8.4 >Reporter: Mck SembWever >Assignee: Mck SembWever >Priority: Critical > Attachments: CASSANDRA-3150.patch > > > From http://thread.gmane.org/gmane.comp.db.cassandra.user/20039 > {quote} > bq. Cassandra-0.8.4 w/ ByteOrderedPartitioner > bq. CFIF's inputSplitSize=196608 > bq. 3 map tasks (from 4013) is still running after read 25 million rows. > bq. Can this be a bug in StorageService.getSplits(..) ? > getSplits looks pretty foolproof to me but I guess we'd need to add > more debug logging to rule out a bug there for sure. > I guess the main alternative would be a bug in the recordreader paging. > {quote} -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Issue Comment Edited] (CASSANDRA-3150) ColumnFormatRecordReader loops forever
[ https://issues.apache.org/jira/browse/CASSANDRA-3150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13099254#comment-13099254 ] Mck SembWever edited comment on CASSANDRA-3150 at 9/7/11 7:38 PM: -- What about the case where tokens of different length exist. I don't know if this is actually possible but from {noformat} Address Status State LoadOwnsToken Token(bytes[76118303760208547436305468318170713656]) 152.90.241.22 Up Normal 270.46 GB 33.33% Token(bytes[30303030303031333131313739353337303038d4e7f72db2ed11e09d7c68b59973a5d8]) 152.90.241.24 Up Normal 247.89 GB 33.33% Token(bytes[303030303030313331323631393735313231381778518cc00711e0acb968b59973a5d8]) 152.90.241.23 Up Normal 1.1 TB 33.33% Token(bytes[76118303760208547436305468318170713656]) {noformat} you see the real tokens are very long compared to the initial_tokens the cluster was configured with. (The two long tokens have been moved off their initial_tokens, and to note the load on .23 never decreased to ~300GB as it should have...). was (Author: michaelsembwever): What about the case where tokens of different length exist. I don't know if this is actually possible but from {noformat} Address Status State LoadOwnsToken Token(bytes[76118303760208547436305468318170713656]) 152.90.241.22 Up Normal 270.46 GB 33.33% Token(bytes[30303030303031333131313739353337303038d4e7f72db2ed11e09d7c68b59973a5d8]) 152.90.241.24 Up Normal 247.89 GB 33.33% Token(bytes[303030303030313331323631393735313231381778518cc00711e0acb968b59973a5d8]) 152.90.241.23 Up Normal 1.1 TB 33.33% Token(bytes[76118303760208547436305468318170713656]) {noformat} you see the real tokens are very long compared to the initial_tokens the cluster was configured with. (The two long tokens has since been moved, and to note the load on .23 never decreased to ~300GB as it should have...). 
> ColumnFormatRecordReader loops forever > -- > > Key: CASSANDRA-3150 > URL: https://issues.apache.org/jira/browse/CASSANDRA-3150 > Project: Cassandra > Issue Type: Bug > Components: Hadoop >Affects Versions: 0.8.4 >Reporter: Mck SembWever >Assignee: Mck SembWever >Priority: Critical > Attachments: CASSANDRA-3150.patch > > > From http://thread.gmane.org/gmane.comp.db.cassandra.user/20039 > {quote} > bq. Cassandra-0.8.4 w/ ByteOrderedPartitioner > bq. CFIF's inputSplitSize=196608 > bq. 3 map tasks (from 4013) is still running after read 25 million rows. > bq. Can this be a bug in StorageService.getSplits(..) ? > getSplits looks pretty foolproof to me but I guess we'd need to add > more debug logging to rule out a bug there for sure. > I guess the main alternative would be a bug in the recordreader paging. > {quote} -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (CASSANDRA-3150) ColumnFormatRecordReader loops forever
[ https://issues.apache.org/jira/browse/CASSANDRA-3150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13099254#comment-13099254 ] Mck SembWever commented on CASSANDRA-3150: -- What about the case where tokens of different lengths exist? I don't know if this is actually possible, but from
{noformat}
Address        Status State   Load       Owns    Token
                                                 Token(bytes[76118303760208547436305468318170713656])
152.90.241.22  Up     Normal  270.46 GB  33.33%  Token(bytes[30303030303031333131313739353337303038d4e7f72db2ed11e09d7c68b59973a5d8])
152.90.241.24  Up     Normal  247.89 GB  33.33%  Token(bytes[303030303030313331323631393735313231381778518cc00711e0acb968b59973a5d8])
152.90.241.23  Up     Normal  1.1 TB     33.33%  Token(bytes[76118303760208547436305468318170713656])
{noformat}
you see that the real tokens are very long compared to the initial_tokens the cluster was configured with. (The two long tokens have since been moved, and note that the load on .23 never decreased to ~300GB as it should have...). > ColumnFormatRecordReader loops forever > -- > > Key: CASSANDRA-3150 > URL: https://issues.apache.org/jira/browse/CASSANDRA-3150 > Project: Cassandra > Issue Type: Bug > Components: Hadoop >Affects Versions: 0.8.4 >Reporter: Mck SembWever >Assignee: Mck SembWever >Priority: Critical > Attachments: CASSANDRA-3150.patch > > > From http://thread.gmane.org/gmane.comp.db.cassandra.user/20039 > {quote} > bq. Cassandra-0.8.4 w/ ByteOrderedPartitioner > bq. CFIF's inputSplitSize=196608 > bq. 3 map tasks (from 4013) is still running after read 25 million rows. > bq. Can this be a bug in StorageService.getSplits(..) ? > getSplits looks pretty foolproof to me but I guess we'd need to add > more debug logging to rule out a bug there for sure. > I guess the main alternative would be a bug in the recordreader paging. > {quote} -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
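Whether a byte-ordered partitioner handles tokens of different lengths correctly comes down to unsigned lexicographic comparison: bytes are compared left to right, and on a common prefix the shorter token sorts first. A sketch of that ordering (illustrative, not the actual ByteOrderedPartitioner code; the token values below are made up):

```java
// Sketch of unsigned lexicographic byte comparison, the order a
// byte-ordered partitioner imposes on tokens. Not the Cassandra code.
public class TokenOrder {
    static int compareUnsigned(byte[] a, byte[] b) {
        int n = Math.min(a.length, b.length);
        for (int i = 0; i < n; i++) {
            int ai = a[i] & 0xFF, bi = b[i] & 0xFF;
            if (ai != bi)
                return ai < bi ? -1 : 1;
        }
        // Equal prefix: the shorter token sorts first.
        return Integer.compare(a.length, b.length);
    }

    public static void main(String[] args) {
        // A long token starting with '3' still sorts before a much shorter
        // one starting with '7', because 0x33 < 0x37 at the first byte;
        // length only matters when one token is a prefix of the other.
        byte[] longer  = "3000000000000000".getBytes();
        byte[] shorter = "76118303".getBytes();
        System.out.println(compareUnsigned(longer, shorter) < 0); // true
    }
}
```

So tokens of different lengths are a well-defined case for this ordering, which is consistent with the "3 < 7" argument in the comments above.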
[jira] [Commented] (CASSANDRA-3150) ColumnFormatRecordReader loops forever
[ https://issues.apache.org/jira/browse/CASSANDRA-3150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13099247#comment-13099247 ] Jonathan Ellis commented on CASSANDRA-3150: --- bq. a gap so small there exists no rows in between Right. So you page through with startToken increasing, until either you hit endToken or you get no rows back. (Recall that the rows from get_range_slices come back in token order.) > ColumnFormatRecordReader loops forever > -- > > Key: CASSANDRA-3150 > URL: https://issues.apache.org/jira/browse/CASSANDRA-3150 > Project: Cassandra > Issue Type: Bug > Components: Hadoop >Affects Versions: 0.8.4 >Reporter: Mck SembWever >Assignee: Mck SembWever >Priority: Critical > Attachments: CASSANDRA-3150.patch > > > From http://thread.gmane.org/gmane.comp.db.cassandra.user/20039 > {quote} > bq. Cassandra-0.8.4 w/ ByteOrderedPartitioner > bq. CFIF's inputSplitSize=196608 > bq. 3 map tasks (from 4013) is still running after read 25 million rows. > bq. Can this be a bug in StorageService.getSplits(..) ? > getSplits looks pretty foolproof to me but I guess we'd need to add > more debug logging to rule out a bug there for sure. > I guess the main alternative would be a bug in the recordreader paging. > {quote} -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (CASSANDRA-3119) Cli syntax for creating keyspace is inconsistent in 1.0
[ https://issues.apache.org/jira/browse/CASSANDRA-3119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Ellis updated CASSANDRA-3119: -- Reviewer: xedin > Cli syntax for creating keyspace is inconsistent in 1.0 > --- > > Key: CASSANDRA-3119 > URL: https://issues.apache.org/jira/browse/CASSANDRA-3119 > Project: Cassandra > Issue Type: Bug >Affects Versions: 1.0 >Reporter: Sylvain Lebresne >Assignee: T Jake Luciani >Priority: Minor > Labels: cli > Fix For: 1.0 > > Attachments: v1-0001-CASSANDRA-3119-warn-on-old-cli-syntax.txt > > > In 0.8, to create a keyspace you could do: > {noformat} > create keyspace test with placement_strategy = > 'org.apache.cassandra.locator.SimpleStrategy' and strategy_options = > [{replication_factor:3}] > {noformat} > In current trunk, if you try that, you get back "null". Turns out this is > because the syntax for strategy_options has changed and you should not use > the brackets, i.e: > {noformat} > strategy_options = {replication_factor:3} > {noformat} > (and note that reversely, this syntax doesn't work in 0.8). > I'm not sure what motivated that change but this is very user unfriendly. The > help does correctly mention the new syntax, but it is the kind of changes > that takes you 5 minutes to notice. It will also break people scripts for no > good reason that I can see. > We should either: > # revert to the old syntax > # support both the new and old syntax > # at least print a meaningful error message when the old syntax is used > Imho, the last solution is by far the worst solution. -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Issue Comment Edited] (CASSANDRA-3150) ColumnFormatRecordReader loops forever
[ https://issues.apache.org/jira/browse/CASSANDRA-3150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13099234#comment-13099234 ] Mck SembWever edited comment on CASSANDRA-3150 at 9/7/11 7:24 PM: -- Here keyRange is startToken to split.getEndToken() startToken is updated each iterate to the last row read (each iterate is batchRowCount rows). What happens if split.getEndToken() doesn't correspond to any of the rowKeys? To me it reads that startToken will hop over split.getEndToken() and get_range_slices(..) will start querying against wrapping ranges. This will still return rows and so the iteration will continue, now forever. The only way out for this code today is a) startToken equals split.getEndToken(), or b) get_range_slices(..) is called with startToken equals split.getEndToken() OR a gap so small there exists no rows in between. was (Author: michaelsembwever): Here keyRange is startToken to split.getEndToken() startToken is updated each iterate to the last row read (each iterate is batchRowCount rows). What happens if split.getEndToken() doesn't correspond to any of the rowKeys? To me it reads that startToken will hop over split.getEndToken() and get_range_slices(..) will start returning wrapping ranges. This will still return rows and so the iteration will continue, now forever. The only way out for this code today is a) startToken equals split.getEndToken(), or b) get_range_slices(..) is called with startToken equals split.getEndToken() OR a gap so small there exists no rows in between. > ColumnFormatRecordReader loops forever > -- > > Key: CASSANDRA-3150 > URL: https://issues.apache.org/jira/browse/CASSANDRA-3150 > Project: Cassandra > Issue Type: Bug > Components: Hadoop >Affects Versions: 0.8.4 >Reporter: Mck SembWever >Assignee: Mck SembWever >Priority: Critical > Attachments: CASSANDRA-3150.patch > > > From http://thread.gmane.org/gmane.comp.db.cassandra.user/20039 > {quote} > bq. 
Cassandra-0.8.4 w/ ByteOrderedPartitioner > bq. CFIF's inputSplitSize=196608 > bq. 3 map tasks (from 4013) is still running after read 25 million rows. > bq. Can this be a bug in StorageService.getSplits(..) ? > getSplits looks pretty foolproof to me but I guess we'd need to add > more debug logging to rule out a bug there for sure. > I guess the main alternative would be a bug in the recordreader paging. > {quote} -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (CASSANDRA-3150) ColumnFormatRecordReader loops forever
[ https://issues.apache.org/jira/browse/CASSANDRA-3150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13099243#comment-13099243 ] Jonathan Ellis commented on CASSANDRA-3150: --- the next startToken always comes from the most recently returned range (s.t. start < range <= end), so unless there's a bad bug in get_range_slices it can't ever sort after endToken. > ColumnFormatRecordReader loops forever > -- > > Key: CASSANDRA-3150 > URL: https://issues.apache.org/jira/browse/CASSANDRA-3150 > Project: Cassandra > Issue Type: Bug > Components: Hadoop >Affects Versions: 0.8.4 >Reporter: Mck SembWever >Assignee: Mck SembWever >Priority: Critical > Attachments: CASSANDRA-3150.patch > > > From http://thread.gmane.org/gmane.comp.db.cassandra.user/20039 > {quote} > bq. Cassandra-0.8.4 w/ ByteOrderedPartitioner > bq. CFIF's inputSplitSize=196608 > bq. 3 map tasks (from 4013) is still running after read 25 million rows. > bq. Can this be a bug in StorageService.getSplits(..) ? > getSplits looks pretty foolproof to me but I guess we'd need to add > more debug logging to rule out a bug there for sure. > I guess the main alternative would be a bug in the recordreader paging. > {quote} -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
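The invariant described here (every returned token t satisfies start < t <= end, and the last token of a page becomes the next start) can be sketched as a terminating paging loop. `PageSource.fetchPage` is a hypothetical stand-in for get_range_slices, and tokens are simplified to longs:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the record-reader paging loop under discussion. fetchPage
// returns up to batchSize tokens t with start < t <= end, in token order,
// mirroring the get_range_slices contract described in the comments.
public class PagingSketch {
    interface PageSource {
        List<Long> fetchPage(long start, long end, int batchSize);
    }

    static List<Long> readAll(PageSource source, long start, long end, int batchSize) {
        List<Long> all = new ArrayList<>();
        while (true) {
            List<Long> page = source.fetchPage(start, end, batchSize);
            if (page.isEmpty())
                break;                         // no rows left in (start, end]
            all.addAll(page);
            start = page.get(page.size() - 1); // last token becomes next start
            if (start == end)
                break;                         // reached the split's end token
        }
        return all;
    }

    public static void main(String[] args) {
        // Tokens 1..10 paged in batches of 4 over the range (0, 10].
        PageSource src = (s, e, n) -> {
            List<Long> page = new ArrayList<>();
            for (long t = s + 1; t <= e && page.size() < n; t++)
                page.add(t);
            return page;
        };
        System.out.println(readAll(src, 0, 10, 4).size()); // 10
    }
}
```

As long as fetchPage honors start < t <= end, the start token is strictly increasing and bounded by end, so the loop terminates; the reported infinite loop would require a source that violates that contract (e.g. by returning wrapped ranges).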
[jira] [Commented] (CASSANDRA-3119) Cli syntax for creating keyspace is inconsistent in 1.0
[ https://issues.apache.org/jira/browse/CASSANDRA-3119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13099236#comment-13099236 ] T Jake Luciani commented on CASSANDRA-3119: --- The patch accepts both new and old syntax, also warns the user when they use [{}] > Cli syntax for creating keyspace is inconsistent in 1.0 > --- > > Key: CASSANDRA-3119 > URL: https://issues.apache.org/jira/browse/CASSANDRA-3119 > Project: Cassandra > Issue Type: Bug >Affects Versions: 1.0 >Reporter: Sylvain Lebresne >Assignee: T Jake Luciani >Priority: Minor > Labels: cli > Fix For: 1.0 > > Attachments: v1-0001-CASSANDRA-3119-warn-on-old-cli-syntax.txt > > > In 0.8, to create a keyspace you could do: > {noformat} > create keyspace test with placement_strategy = > 'org.apache.cassandra.locator.SimpleStrategy' and strategy_options = > [{replication_factor:3}] > {noformat} > In current trunk, if you try that, you get back "null". Turns out this is > because the syntax for strategy_options has changed and you should not use > the brackets, i.e: > {noformat} > strategy_options = {replication_factor:3} > {noformat} > (and note that reversely, this syntax doesn't work in 0.8). > I'm not sure what motivated that change but this is very user unfriendly. The > help does correctly mention the new syntax, but it is the kind of changes > that takes you 5 minutes to notice. It will also break people scripts for no > good reason that I can see. > We should either: > # revert to the old syntax > # support both the new and old syntax > # at least print a meaningful error message when the old syntax is used > Imho, the last solution is by far the worst solution. -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Issue Comment Edited] (CASSANDRA-3150) ColumnFormatRecordReader loops forever
[ https://issues.apache.org/jira/browse/CASSANDRA-3150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13099234#comment-13099234 ] Mck SembWever edited comment on CASSANDRA-3150 at 9/7/11 7:17 PM: -- Here keyRange is startToken to split.getEndToken() startToken is updated each iterate to the last row read (each iterate is batchRowCount rows). What happens if split.getEndToken() doesn't correspond to any of the rowKeys? To me it reads that startToken will hop over split.getEndToken() and get_range_slices(..) will start returning wrapping ranges. This will still return rows and so the iteration will continue, now forever. The only way out for this code today is a) startToken equals split.getEndToken(), or b) get_range_slices(..) is called with startToken equals split.getEndToken() OR a gap so small there exists no rows in between. was (Author: michaelsembwever): Here keyRange is startToken to split.getEndToken() startToken is updated each iterate to the last row read (each iterate is batchRowCount rows). What happens is split.getEndToken() doesn't correspond to any of the rowKeys? To me it reads that startToken will hop over split.getEndToken() and get_rage_slices(..) will start returning wrapping ranges. This will still return rows and so the iteration will continue, now forever. The only way out for this code today is a) startToken equals split.getEndToken(), or b) get_range_slices(..) is called with startToken equals split.getEndToken() OR a gap so small there exists no rows in between. > ColumnFormatRecordReader loops forever > -- > > Key: CASSANDRA-3150 > URL: https://issues.apache.org/jira/browse/CASSANDRA-3150 > Project: Cassandra > Issue Type: Bug > Components: Hadoop >Affects Versions: 0.8.4 >Reporter: Mck SembWever >Assignee: Mck SembWever >Priority: Critical > Attachments: CASSANDRA-3150.patch > > > From http://thread.gmane.org/gmane.comp.db.cassandra.user/20039 > {quote} > bq. 
Cassandra-0.8.4 w/ ByteOrderedPartitioner > bq. CFIF's inputSplitSize=196608 > bq. 3 map tasks (from 4013) is still running after read 25 million rows. > bq. Can this be a bug in StorageService.getSplits(..) ? > getSplits looks pretty foolproof to me but I guess we'd need to add > more debug logging to rule out a bug there for sure. > I guess the main alternative would be a bug in the recordreader paging. > {quote} -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
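Mck's scenario can be modeled with a toy version of the paging loop (plain Java, hypothetical integer tokens; the real code pages via Thrift's get_range_slices). With an end token that matches no row key, a startToken==endToken exit never fires, and a start token that hops past the end would wrap and return rows again; an empty batch is what actually stops the loop:

```java
import java.util.*;

// Toy model of CFRR's token paging (hypothetical integer tokens, not the
// Thrift API). A range with start >= end "wraps" the ring, as Cassandra's
// Range does, so a start token that hops past the end keeps matching rows.
public class PagingSketch {
    public static List<Integer> rangeSlice(NavigableSet<Integer> rows, int start, int end, int batch) {
        List<Integer> hits = new ArrayList<>();
        if (start < end) {
            hits.addAll(rows.subSet(start, false, end, true)); // ordinary (start, end]
        } else {                                               // wrapping range
            hits.addAll(rows.tailSet(start, false));
            hits.addAll(rows.headSet(end, true));
        }
        return hits.size() > batch ? hits.subList(0, batch) : hits;
    }

    public static void main(String[] args) {
        NavigableSet<Integer> rows = new TreeSet<>(Arrays.asList(1, 3, 5, 9));
        int start = 0, end = 7; // end token 7 matches no row key
        List<Integer> batch;
        while (!(batch = rangeSlice(rows, start, end, 2)).isEmpty()) {
            start = batch.get(batch.size() - 1); // advance to last row read
        }
        // start finishes at 5 and never equals end, so an equality check alone
        // would not fire; a start past end wraps and keeps returning rows:
        System.out.println(start + " / wrapped=" + rangeSlice(rows, 9, end, 2));
    }
}
```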
[jira] [Updated] (CASSANDRA-3119) Cli syntax for creating keyspace is inconsistent in 1.0
[ https://issues.apache.org/jira/browse/CASSANDRA-3119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] T Jake Luciani updated CASSANDRA-3119: -- Attachment: v1-0001-CASSANDRA-3119-warn-on-old-cli-syntax.txt > Cli syntax for creating keyspace is inconsistent in 1.0 > --- > > Key: CASSANDRA-3119 > URL: https://issues.apache.org/jira/browse/CASSANDRA-3119 > Project: Cassandra > Issue Type: Bug >Affects Versions: 1.0 >Reporter: Sylvain Lebresne >Assignee: T Jake Luciani >Priority: Minor > Labels: cli > Fix For: 1.0 > > Attachments: v1-0001-CASSANDRA-3119-warn-on-old-cli-syntax.txt > > > In 0.8, to create a keyspace you could do: > {noformat} > create keyspace test with placement_strategy = > 'org.apache.cassandra.locator.SimpleStrategy' and strategy_options = > [{replication_factor:3}] > {noformat} > In current trunk, if you try that, you get back "null". Turns out this is > because the syntax for strategy_options has changed and you should not use > the brackets, i.e: > {noformat} > strategy_options = {replication_factor:3} > {noformat} > (and note that conversely, this syntax doesn't work in 0.8). > I'm not sure what motivated that change but this is very user unfriendly. The > help does correctly mention the new syntax, but it is the kind of change > that takes you 5 minutes to notice. It will also break people's scripts for no > good reason that I can see. > We should either: > # revert to the old syntax > # support both the new and old syntax > # at least print a meaningful error message when the old syntax is used > Imho, the last solution is by far the worst. -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (CASSANDRA-3150) ColumnFormatRecordReader loops forever
[ https://issues.apache.org/jira/browse/CASSANDRA-3150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13099234#comment-13099234 ] Mck SembWever commented on CASSANDRA-3150: -- Here keyRange is startToken to split.getEndToken() startToken is updated each iterate to the last row read (each iterate is batchRowCount rows). What happens if split.getEndToken() doesn't correspond to any of the rowKeys? To me it reads that startToken will hop over split.getEndToken() and get_range_slices(..) will start returning wrapping ranges. This will still return rows and so the iteration will continue, now forever. The only way out for this code today is a) startToken equals split.getEndToken(), or b) get_range_slices(..) is called with startToken equals split.getEndToken() OR a gap so small there exists no rows in between. > ColumnFormatRecordReader loops forever > -- > > Key: CASSANDRA-3150 > URL: https://issues.apache.org/jira/browse/CASSANDRA-3150 > Project: Cassandra > Issue Type: Bug > Components: Hadoop >Affects Versions: 0.8.4 >Reporter: Mck SembWever >Assignee: Mck SembWever >Priority: Critical > Attachments: CASSANDRA-3150.patch > > > From http://thread.gmane.org/gmane.comp.db.cassandra.user/20039 > {quote} > bq. Cassandra-0.8.4 w/ ByteOrderedPartitioner > bq. CFIF's inputSplitSize=196608 > bq. 3 map tasks (from 4013) is still running after read 25 million rows. > bq. Can this be a bug in StorageService.getSplits(..) ? > getSplits looks pretty foolproof to me but I guess we'd need to add > more debug logging to rule out a bug there for sure. > I guess the main alternative would be a bug in the recordreader paging. > {quote} -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Assigned] (CASSANDRA-1599) Add sort/order support for secondary indexing
[ https://issues.apache.org/jira/browse/CASSANDRA-1599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Ellis reassigned CASSANDRA-1599: - Assignee: (was: Jonathan Ellis) > Add sort/order support for secondary indexing > - > > Key: CASSANDRA-1599 > URL: https://issues.apache.org/jira/browse/CASSANDRA-1599 > Project: Cassandra > Issue Type: New Feature > Components: API >Reporter: Todd Nine > Original Estimate: 32h > Remaining Estimate: 32h > > For a lot of users paging is a standard use case on many web applications. > It would be nice to allow paging as part of a Boolean Expression. > Page -> start index >-> end index >-> page timestamp >-> Sort Order > When sorting, is it possible to sort both ASC and DESC? > -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (CASSANDRA-2474) CQL support for compound columns
[ https://issues.apache.org/jira/browse/CASSANDRA-2474?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Ellis updated CASSANDRA-2474: -- Fix Version/s: (was: 1.0) 1.1 > CQL support for compound columns > > > Key: CASSANDRA-2474 > URL: https://issues.apache.org/jira/browse/CASSANDRA-2474 > Project: Cassandra > Issue Type: Sub-task > Components: API, Core >Reporter: Eric Evans >Assignee: Pavel Yaskevich > Labels: cql > Fix For: 1.1 > > Attachments: screenshot-1.jpg, screenshot-2.jpg > > > For the most part, this boils down to supporting the specification of > compound column names (the CQL syntax is colon-delimited terms), and then > teaching the decoders (drivers) to create structures from the results. -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (CASSANDRA-3114) After Choosing EC2Snitch you can't migrate off w/o a full cluster restart
[ https://issues.apache.org/jira/browse/CASSANDRA-3114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13099225#comment-13099225 ] Brandon Williams commented on CASSANDRA-3114: - I don't see how making your dc/rack names your external IP address is going to solve anything. > After Choosing EC2Snitch you can't migrate off w/o a full cluster restart > - > > Key: CASSANDRA-3114 > URL: https://issues.apache.org/jira/browse/CASSANDRA-3114 > Project: Cassandra > Issue Type: Bug >Affects Versions: 0.7.8, 0.8.4 >Reporter: Benjamin Coverston > > Once you choose the Ec2Snitch the gossip messages will trigger this exception > if you try to move (for example) to the property file snitch: > ERROR [pool-2-thread-11] 2011-08-30 16:38:06,935 Cassandra.java (line 3041) > Internal error processing get_slice > java.lang.NullPointerException > at org.apache.cassandra.locator.Ec2Snitch.getDatacenter(Ec2Snitch.java:84) > at > org.apache.cassandra.locator.DynamicEndpointSnitch.getDatacenter(DynamicEndpointSnitch.java:122) > > at > org.apache.cassandra.service.DatacenterReadCallback.assureSufficientLiveNodes(DatacenterReadCallback.java:77) > > at org.apache.cassandra.service.StorageProxy.fetchRows(StorageProxy.java:516) > at org.apache.cassandra.service.StorageProxy.read(StorageProxy.java:480) > at > org.apache.cassandra.thrift.CassandraServer.readColumnFamily(CassandraServer.java:109) > > at > org.apache.cassandra.thrift.CassandraServer.getSlice(CassandraServer.java:263) > > at > org.apache.cassandra.thrift.CassandraServer.multigetSliceInternal(CassandraServer.java:345) > > at > org.apache.cassandra.thrift.CassandraServer.get_slice(CassandraServer.java:306) > > at > org.apache.cassandra.thrift.Cassandra$Processor$get_slice.process(Cassandra.java:3033) > > at > org.apache.cassandra.thrift.Cassandra$Processor.process(Cassandra.java:2889) > at > org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:187) > > 
at > java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) > > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) > > at java.lang.Thread.run(Thread.java:662) -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (CASSANDRA-3114) After Choosing EC2Snitch you can't migrate off w/o a full cluster restart
[ https://issues.apache.org/jira/browse/CASSANDRA-3114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13099226#comment-13099226 ] Jackson Chung commented on CASSANDRA-3114: -- "but you'd still have to name things with the ec2snitch conventions for things to not break" still hold true with the above. > After Choosing EC2Snitch you can't migrate off w/o a full cluster restart > - > > Key: CASSANDRA-3114 > URL: https://issues.apache.org/jira/browse/CASSANDRA-3114 > Project: Cassandra > Issue Type: Bug >Affects Versions: 0.7.8, 0.8.4 >Reporter: Benjamin Coverston > > Once you choose the Ec2Snitch the gossip messages will trigger this exception > if you try to move (for example) to the property file snitch: > ERROR [pool-2-thread-11] 2011-08-30 16:38:06,935 Cassandra.java (line 3041) > Internal error processing get_slice > java.lang.NullPointerException > at org.apache.cassandra.locator.Ec2Snitch.getDatacenter(Ec2Snitch.java:84) > at > org.apache.cassandra.locator.DynamicEndpointSnitch.getDatacenter(DynamicEndpointSnitch.java:122) > > at > org.apache.cassandra.service.DatacenterReadCallback.assureSufficientLiveNodes(DatacenterReadCallback.java:77) > > at org.apache.cassandra.service.StorageProxy.fetchRows(StorageProxy.java:516) > at org.apache.cassandra.service.StorageProxy.read(StorageProxy.java:480) > at > org.apache.cassandra.thrift.CassandraServer.readColumnFamily(CassandraServer.java:109) > > at > org.apache.cassandra.thrift.CassandraServer.getSlice(CassandraServer.java:263) > > at > org.apache.cassandra.thrift.CassandraServer.multigetSliceInternal(CassandraServer.java:345) > > at > org.apache.cassandra.thrift.CassandraServer.get_slice(CassandraServer.java:306) > > at > org.apache.cassandra.thrift.Cassandra$Processor$get_slice.process(Cassandra.java:3033) > > at > org.apache.cassandra.thrift.Cassandra$Processor.process(Cassandra.java:2889) > at > 
org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:187) > > at > java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) > > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) > > at java.lang.Thread.run(Thread.java:662) -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (CASSANDRA-3089) Support RowId in ResultSet
[ https://issues.apache.org/jira/browse/CASSANDRA-3089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13099222#comment-13099222 ] Hudson commented on CASSANDRA-3089: --- Integrated in Cassandra #1084 (See [https://builds.apache.org/job/Cassandra/1084/]) add getRowId support to CResultSet patch by Rick Shaw; reviewed by jbellis for CASSANDRA-3089 jbellis : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1166301 Files : * /cassandra/trunk/drivers/java/CHANGES.txt * /cassandra/trunk/drivers/java/src/org/apache/cassandra/cql/jdbc/AbstractResultSet.java * /cassandra/trunk/drivers/java/src/org/apache/cassandra/cql/jdbc/CResultSet.java * /cassandra/trunk/drivers/java/src/org/apache/cassandra/cql/jdbc/CassandraDatabaseMetaData.java * /cassandra/trunk/drivers/java/src/org/apache/cassandra/cql/jdbc/CassandraPreparedStatement.java > Support RowId in ResultSet > -- > > Key: CASSANDRA-3089 > URL: https://issues.apache.org/jira/browse/CASSANDRA-3089 > Project: Cassandra > Issue Type: Sub-task > Components: Drivers >Affects Versions: 0.8.4 >Reporter: Rick Shaw >Assignee: Rick Shaw >Priority: Trivial > Labels: JDBC, lhf > Attachments: add-rowid-support-v2.txt, add-rowid-support-v3.txt, > add-rowid-support-v4.txt, add-rowid-support.txt > > > Support the JDBC concept of {{RowId}} by using the C* row index value. -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (CASSANDRA-3114) After Choosing EC2Snitch you can't migrate off w/o a full cluster restart
[ https://issues.apache.org/jira/browse/CASSANDRA-3114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13099219#comment-13099219 ] Jackson Chung commented on CASSANDRA-3114: -- What if do this in the Abstract? {code:title=AbstractEndpointSnitch.java} public void gossiperStarting() { String dc = getDatacenter(FBUtilities.getBroadcastAddress()); String rack = getRack(FBUtilities.getBroadcastAddress()); logger.info(this.getClass().getSimpleName() +" adding ApplicationState DC=" + dc + " Rack=" + rack); Gossiper.instance.addLocalApplicationState(ApplicationState.DC, StorageService.instance.valueFactory.datacenter(dc)); Gossiper.instance.addLocalApplicationState(ApplicationState.RACK, StorageService.instance.valueFactory.rack(rack)); } {code} > After Choosing EC2Snitch you can't migrate off w/o a full cluster restart > - > > Key: CASSANDRA-3114 > URL: https://issues.apache.org/jira/browse/CASSANDRA-3114 > Project: Cassandra > Issue Type: Bug >Affects Versions: 0.7.8, 0.8.4 >Reporter: Benjamin Coverston > > Once you choose the Ec2Snitch the gossip messages will trigger this exception > if you try to move (for example) to the property file snitch: > ERROR [pool-2-thread-11] 2011-08-30 16:38:06,935 Cassandra.java (line 3041) > Internal error processing get_slice > java.lang.NullPointerException > at org.apache.cassandra.locator.Ec2Snitch.getDatacenter(Ec2Snitch.java:84) > at > org.apache.cassandra.locator.DynamicEndpointSnitch.getDatacenter(DynamicEndpointSnitch.java:122) > > at > org.apache.cassandra.service.DatacenterReadCallback.assureSufficientLiveNodes(DatacenterReadCallback.java:77) > > at org.apache.cassandra.service.StorageProxy.fetchRows(StorageProxy.java:516) > at org.apache.cassandra.service.StorageProxy.read(StorageProxy.java:480) > at > org.apache.cassandra.thrift.CassandraServer.readColumnFamily(CassandraServer.java:109) > > at > org.apache.cassandra.thrift.CassandraServer.getSlice(CassandraServer.java:263) > > at > 
org.apache.cassandra.thrift.CassandraServer.multigetSliceInternal(CassandraServer.java:345) > > at > org.apache.cassandra.thrift.CassandraServer.get_slice(CassandraServer.java:306) > > at > org.apache.cassandra.thrift.Cassandra$Processor$get_slice.process(Cassandra.java:3033) > > at > org.apache.cassandra.thrift.Cassandra$Processor.process(Cassandra.java:2889) > at > org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:187) > > at > java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) > > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) > > at java.lang.Thread.run(Thread.java:662) -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
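Jackson's snippet pushes DC/rack into gossip at startup; the complementary defensive pattern on the read side — fall back to a default instead of throwing NullPointerException when an endpoint has no gossiped state — can be sketched as follows (hypothetical class and names, not Cassandra's actual snitch API):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of a snitch that tolerates endpoints with no gossiped
// DC state (the Ec2Snitch NPE scenario from the stack trace above): look up
// the gossiped value, and fall back to a default rather than dereference null.
public class FallbackSnitchSketch {
    public static final String DEFAULT_DC = "DC1";
    private final Map<String, String> gossipedDc = new HashMap<>();

    // records the DC a peer announced via gossip
    public void onGossip(String endpoint, String dc) {
        gossipedDc.put(endpoint, dc);
    }

    // never NPEs: endpoints that haven't gossiped a DC get the default
    public String getDatacenter(String endpoint) {
        String dc = gossipedDc.get(endpoint);
        return dc == null ? DEFAULT_DC : dc;
    }
}
```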
[jira] [Commented] (CASSANDRA-3146) Minor changes to IntervalTree
[ https://issues.apache.org/jira/browse/CASSANDRA-3146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13099220#comment-13099220 ] Hudson commented on CASSANDRA-3146: --- Integrated in Cassandra #1084 (See [https://builds.apache.org/job/Cassandra/1084/]) intervaltree cleanup patch by Paul Cannon; reviewed by Ben Coverston for CASSANDRA-3146 jbellis : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1166305 Files : * /cassandra/trunk/src/java/org/apache/cassandra/utils/IntervalTree/IntervalNode.java * /cassandra/trunk/src/java/org/apache/cassandra/utils/IntervalTree/IntervalTree.java > Minor changes to IntervalTree > - > > Key: CASSANDRA-3146 > URL: https://issues.apache.org/jira/browse/CASSANDRA-3146 > Project: Cassandra > Issue Type: Improvement > Components: Core >Reporter: paul cannon >Assignee: paul cannon >Priority: Minor > Fix For: 1.0 > > Attachments: 3146.patch.txt > > > I have a few minor changes to IntervalTree that I feel improve its > performance and readability. None of this should have an effect on > correctness. > Details: > * rename IntervalNode members v_left/v_right to > intersects_left/intersects_right, to avoid confusion with the members > similarly named "left" and "right" > * remove the unused IntervalNode.interval member > * don't calculate the list of intersecting intervals twice in IntervalNode > constructor > * fix comment in IntervalNode constructor: s/i.min/i.max/ > * remove unused java.util.Collections import from IntervalTree.java > * remove unused code path (checking twice for null == node) in > IntervalTree.searchInternal() > * genericize Interval parameter type to IntervalTree.search() > There are still a lot of unchecked operations around the Interval generic > stuff, and the OCD guy inside me wants it to be completely type-safe, but in > real life this ought to be fine like it is. 
Plus the static Orderings in > Interval.java would need to be made instance variables and that would just be > annoying. > Ok, so, go ahead and ignore any of this if appropriate. It just helped me > feel better with the code. -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (CASSANDRA-3145) IntervalTree could miscalculate its max
[ https://issues.apache.org/jira/browse/CASSANDRA-3145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13099221#comment-13099221 ] Hudson commented on CASSANDRA-3145: --- Integrated in Cassandra #1084 (See [https://builds.apache.org/job/Cassandra/1084/]) fix IntervalTree max calculation patch by Paul Cannon; reviewed by Ben Coverston for CASSANDRA-3145 jbellis : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1166302 Files : * /cassandra/trunk/CHANGES.txt * /cassandra/trunk/src/java/org/apache/cassandra/db/ColumnFamilyStore.java * /cassandra/trunk/src/java/org/apache/cassandra/db/DataTracker.java * /cassandra/trunk/src/java/org/apache/cassandra/utils/IntervalTree/IntervalNode.java * /cassandra/trunk/src/java/org/apache/cassandra/utils/IntervalTree/IntervalTree.java > IntervalTree could miscalculate its max > --- > > Key: CASSANDRA-3145 > URL: https://issues.apache.org/jira/browse/CASSANDRA-3145 > Project: Cassandra > Issue Type: Bug > Components: Core >Reporter: paul cannon >Assignee: paul cannon >Priority: Minor > Fix For: 1.0 > > Attachments: 3145.patch.txt > > > The implementation of IntervalTree in trunk expects an ordered list of > Interval objects as the argument to its constructor. It uses the ordering > (only) to determine its minimum and maximum endpoints out of all Intervals > stored in it. However, no ordering should be able to guarantee the first > element has the set-wide minimum and that the last element has the set-wide > maximum; you have to order by minima or maxima or some combination. > I propose that the requirement for ordered input to the IntervalTree > constructor be dropped, seeing as how the elements will be sorted as > necessary inside the IntervalNode object anyway. The set-wide minimum and > maximum could be more straightforwardly calculated inside IntervalNode, and > just exposed via IntervalTree. -- This message is automatically generated by JIRA. 
For more information on JIRA, see: http://www.atlassian.com/software/jira
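The flaw described above is small enough to demonstrate directly: sort intervals by their minima and the last element's max need not be the set-wide maximum, because an early interval can contain every later one. A self-contained sketch (hypothetical Iv class, not Cassandra's Interval):

```java
import java.util.*;

// Sorting intervals by their minima does not put the set-wide maximum
// endpoint in the last element: an early interval can contain later ones.
public class IntervalMaxSketch {
    public static class Iv {
        public final int min, max;
        public Iv(int min, int max) { this.min = min; this.max = max; }
    }

    // what the pre-3145 code effectively assumed: last element of the sorted list
    public static int lastElementMax(List<Iv> sortedByMin) {
        return sortedByMin.get(sortedByMin.size() - 1).max;
    }

    // the straightforward calculation: scan every interval's max endpoint
    public static int trueMax(List<Iv> ivs) {
        int best = Integer.MIN_VALUE;
        for (Iv iv : ivs) best = Math.max(best, iv.max);
        return best;
    }

    public static void main(String[] args) {
        List<Iv> ivs = new ArrayList<>(Arrays.asList(new Iv(0, 10), new Iv(1, 2)));
        ivs.sort(Comparator.comparingInt(iv -> iv.min));
        // (0,10) sorts first, so the last element's max (2) understates the true max (10)
        System.out.println(lastElementMax(ivs) + " vs " + trueMax(ivs));
    }
}
```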
[jira] [Commented] (CASSANDRA-3150) ColumnFormatRecordReader loops forever
[ https://issues.apache.org/jira/browse/CASSANDRA-3150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13099217#comment-13099217 ] Jonathan Ellis commented on CASSANDRA-3150: --- The start==end check on 234 is a special case, because start==end is a wrapping Range. The main "stop when we're done logic" is this: {code} rows = client.get_range_slices(new ColumnParent(cfName), predicate, keyRange, consistencyLevel); // nothing new? reached the end if (rows.isEmpty()) { rows = null; return; } {code} > ColumnFormatRecordReader loops forever > -- > > Key: CASSANDRA-3150 > URL: https://issues.apache.org/jira/browse/CASSANDRA-3150 > Project: Cassandra > Issue Type: Bug > Components: Hadoop >Affects Versions: 0.8.4 >Reporter: Mck SembWever >Assignee: Mck SembWever >Priority: Critical > Attachments: CASSANDRA-3150.patch > > > From http://thread.gmane.org/gmane.comp.db.cassandra.user/20039 > {quote} > bq. Cassandra-0.8.4 w/ ByteOrderedPartitioner > bq. CFIF's inputSplitSize=196608 > bq. 3 map tasks (from 4013) is still running after read 25 million rows. > bq. Can this be a bug in StorageService.getSplits(..) ? > getSplits looks pretty foolproof to me but I guess we'd need to add > more debug logging to rule out a bug there for sure. > I guess the main alternative would be a bug in the recordreader paging. > {quote} -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (CASSANDRA-2474) CQL support for compound columns
[ https://issues.apache.org/jira/browse/CASSANDRA-2474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13099189#comment-13099189 ] Jonathan Ellis commented on CASSANDRA-2474: --- We also should support defining CompositeType columns w/o using the internal AbstractType names. > CQL support for compound columns > > > Key: CASSANDRA-2474 > URL: https://issues.apache.org/jira/browse/CASSANDRA-2474 > Project: Cassandra > Issue Type: Sub-task > Components: API, Core >Reporter: Eric Evans >Assignee: Pavel Yaskevich > Labels: cql > Fix For: 1.0 > > Attachments: screenshot-1.jpg, screenshot-2.jpg > > > For the most part, this boils down to supporting the specification of > compound column names (the CQL syntax is colon-delimited terms), and then > teaching the decoders (drivers) to create structures from the results. -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
buildbot success in ASF Buildbot on cassandra-trunk
The Buildbot has detected a restored build on builder cassandra-trunk while building ASF Buildbot. Full details are available at: http://ci.apache.org/builders/cassandra-trunk/builds/1622 Buildbot URL: http://ci.apache.org/ Buildslave for this Build: isis_ubuntu Build Reason: scheduler Build Source Stamp: [branch cassandra/trunk] 1166303 Blamelist: jbellis Build succeeded! sincerely, -The Buildbot
svn commit: r1166305 - in /cassandra/trunk/src/java/org/apache/cassandra/utils/IntervalTree: IntervalNode.java IntervalTree.java
Author: jbellis Date: Wed Sep 7 18:31:32 2011 New Revision: 1166305 URL: http://svn.apache.org/viewvc?rev=1166305&view=rev Log: intervaltree cleanup patch by Paul Cannon; reviewed by Ben Coverston for CASSANDRA-3146 Modified: cassandra/trunk/src/java/org/apache/cassandra/utils/IntervalTree/IntervalNode.java cassandra/trunk/src/java/org/apache/cassandra/utils/IntervalTree/IntervalTree.java Modified: cassandra/trunk/src/java/org/apache/cassandra/utils/IntervalTree/IntervalNode.java URL: http://svn.apache.org/viewvc/cassandra/trunk/src/java/org/apache/cassandra/utils/IntervalTree/IntervalNode.java?rev=1166305&r1=1166304&r2=1166305&view=diff == --- cassandra/trunk/src/java/org/apache/cassandra/utils/IntervalTree/IntervalNode.java (original) +++ cassandra/trunk/src/java/org/apache/cassandra/utils/IntervalTree/IntervalNode.java Wed Sep 7 18:31:32 2011 @@ -7,12 +7,11 @@ import com.google.common.collect.Immutab public class IntervalNode { -Interval interval; Comparable v_pt; Comparable v_min; Comparable v_max; -List v_left; -List v_right; +List intersects_left; +List intersects_right; IntervalNode left = null; IntervalNode right = null; @@ -21,9 +20,10 @@ public class IntervalNode if (toBisect.size() > 0) { findMinMedianMax(toBisect); -v_left = interval.minOrdering.sortedCopy(getIntersectingIntervals(toBisect)); -v_right = interval.maxOrdering.reverse().sortedCopy(getIntersectingIntervals(toBisect)); -//if i.min < v_pt then it goes to the left subtree +List intersects = getIntersectingIntervals(toBisect); +intersects_left = Interval.minOrdering.sortedCopy(intersects); +intersects_right = Interval.maxOrdering.reverse().sortedCopy(intersects); +//if i.max < v_pt then it goes to the left subtree List leftSegment = getLeftIntervals(toBisect); List rightSegment = getRightIntervals(toBisect); if (leftSegment.size() > 0) @@ -84,5 +84,4 @@ public class IntervalNode v_max = allEndpoints.get(allEndpoints.size() - 1); } } - } Modified: 
cassandra/trunk/src/java/org/apache/cassandra/utils/IntervalTree/IntervalTree.java URL: http://svn.apache.org/viewvc/cassandra/trunk/src/java/org/apache/cassandra/utils/IntervalTree/IntervalTree.java?rev=1166305&r1=1166304&r2=1166305&view=diff == --- cassandra/trunk/src/java/org/apache/cassandra/utils/IntervalTree/IntervalTree.java (original) +++ cassandra/trunk/src/java/org/apache/cassandra/utils/IntervalTree/IntervalTree.java Wed Sep 7 18:31:32 2011 @@ -1,6 +1,5 @@ package org.apache.cassandra.utils.IntervalTree; -import java.util.Collections; import java.util.LinkedList; import java.util.List; @@ -28,7 +27,7 @@ public class IntervalTree return head.v_min; } -public List search(Interval searchInterval) +public List search(Interval searchInterval) { List retlist = new LinkedList(); searchInternal(head, searchInterval, retlist); @@ -41,14 +40,12 @@ public class IntervalTree return; if (null == node || node.v_pt == null) return; -if (null == node) -return; //if searchInterval.contains(node.v_pt) //then add every interval contained in this node to the result set then search left and right for further //overlapping intervals if (searchInterval.contains(node.v_pt)) { -for (Interval interval : node.v_left) +for (Interval interval : node.intersects_left) { retList.add(interval.Data); } @@ -59,12 +56,12 @@ public class IntervalTree } //if v.pt < searchInterval.left -//add intervals in v with v[i].right >= searchitnerval.left +//add intervals in v with v[i].right >= searchInterval.left //L contains no overlaps //R May if (node.v_pt.compareTo(searchInterval.min) < 0) { -for (Interval interval : node.v_right) +for (Interval interval : node.intersects_right) { if (interval.max.compareTo(searchInterval.min) >= 0) { @@ -77,12 +74,12 @@ public class IntervalTree } //if v.pt > searchInterval.right -//add intervals in v with [i].left <= searchitnerval.right +//add intervals in v with [i].left <= searchInterval.right //R contains no overlaps //L May if 
(node.v_pt.compareTo(searchInterval.max) > 0) { -for (Interval interval : node.v_left) +for (Interval interval : node.intersects_left) { if (interval.min.compareTo(search
[jira] [Updated] (CASSANDRA-3150) ColumnFormatRecordReader loops forever
[ https://issues.apache.org/jira/browse/CASSANDRA-3150?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mck SembWever updated CASSANDRA-3150: - Attachment: CASSANDRA-3150.patch If the split's end token does not match any of the row key tokens the RowIterator will never stop (see RowIterator:243). This patch 1) presumes this is the problem, 2) compares each row token with the split end token and exits when need be (which only works on order-preserving partitioners), and 3) stops iterating when totalRowCount has been read. Just (3) has been tested and works. > ColumnFormatRecordReader loops forever > -- > > Key: CASSANDRA-3150 > URL: https://issues.apache.org/jira/browse/CASSANDRA-3150 > Project: Cassandra > Issue Type: Bug > Components: Hadoop >Affects Versions: 0.8.4 >Reporter: Mck SembWever >Assignee: Mck SembWever >Priority: Critical > Attachments: CASSANDRA-3150.patch > > > From http://thread.gmane.org/gmane.comp.db.cassandra.user/20039 > {quote} > bq. Cassandra-0.8.4 w/ ByteOrderedPartitioner > bq. CFIF's inputSplitSize=196608 > bq. 3 map tasks (from 4013) is still running after read 25 million rows. > bq. Can this be a bug in StorageService.getSplits(..) ? > getSplits looks pretty foolproof to me but I guess we'd need to add > more debug logging to rule out a bug there for sure. > I guess the main alternative would be a bug in the recordreader paging. > {quote} -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
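The patch's guards (2) and (3) can be sketched in isolation (hypothetical integer tokens in an order-preserving layout; not the actual CFRR/RowIterator code): stop once a row's token passes the split's end token, and stop once totalRowCount rows have been returned.

```java
import java.util.*;

// Toy record reader: stops when a row token passes the split end token
// (valid only for order-preserving partitioners) or when totalRowCount
// rows have been consumed. Hypothetical model, not the CFRR classes.
public class SplitReaderSketch {
    public static List<Integer> read(List<Integer> rowTokens, int endToken, int totalRowCount) {
        List<Integer> out = new ArrayList<>();
        for (int token : rowTokens) {
            if (token > endToken) break;            // guard 2: passed the split's end
            out.add(token);
            if (out.size() >= totalRowCount) break; // guard 3: row-count budget spent
        }
        return out;
    }

    public static void main(String[] args) {
        // end token 7 matches no row key, yet the token comparison still terminates
        System.out.println(read(Arrays.asList(1, 3, 5, 9), 7, 100));
        // a generous end token, bounded instead by the row-count budget
        System.out.println(read(Arrays.asList(1, 3, 5, 9), 100, 2));
    }
}
```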
svn commit: r1166303 - /cassandra/trunk/drivers/java/src/org/apache/cassandra/cql/jdbc/CassandraRowId.java
Author: jbellis Date: Wed Sep 7 18:28:38 2011 New Revision: 1166303 URL: http://svn.apache.org/viewvc?rev=1166303&view=rev Log: add CassandraRowId.java Added: cassandra/trunk/drivers/java/src/org/apache/cassandra/cql/jdbc/CassandraRowId.java Added: cassandra/trunk/drivers/java/src/org/apache/cassandra/cql/jdbc/CassandraRowId.java URL: http://svn.apache.org/viewvc/cassandra/trunk/drivers/java/src/org/apache/cassandra/cql/jdbc/CassandraRowId.java?rev=1166303&view=auto == --- cassandra/trunk/drivers/java/src/org/apache/cassandra/cql/jdbc/CassandraRowId.java (added) +++ cassandra/trunk/drivers/java/src/org/apache/cassandra/cql/jdbc/CassandraRowId.java Wed Sep 7 18:28:38 2011 @@ -0,0 +1,68 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ * + */ +package org.apache.cassandra.cql.jdbc; + +import java.nio.ByteBuffer; +import java.sql.RowId; + +import org.apache.cassandra.utils.ByteBufferUtil; + +class CassandraRowId implements RowId +{ +private final ByteBuffer bytes; + +public CassandraRowId (ByteBuffer bytes) +{ +this.bytes = bytes; +} + +public byte[] getBytes() +{ +return ByteBufferUtil.getArray(bytes); +} + +public String toString() +{ +return ByteBufferUtil.bytesToHex(bytes); +} + +public int hashCode() +{ +final int prime = 31; +int result = 1; +result = prime * result + ((bytes == null) ? 0 : bytes.hashCode()); +return result; +} + +public boolean equals(Object obj) +{ +if (this == obj) return true; +if (obj == null) return false; +if (getClass() != obj.getClass()) return false; +CassandraRowId other = (CassandraRowId) obj; +if (bytes == null) +{ +if (other.bytes != null) return false; +} +else if (!bytes.equals(other.bytes)) return false; +return true; +} +}
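The committed class wraps a row key's ByteBuffer behind java.sql.RowId. The same value-semantics pattern (byte access, hex rendering, equality delegated to buffer contents) can be shown standalone; HexRowId here is a hypothetical stand-in, since CassandraRowId itself is package-private:

```java
import java.nio.ByteBuffer;
import java.sql.RowId;

// Standalone sketch of the CassandraRowId pattern: a ByteBuffer-backed
// java.sql.RowId with hex rendering and content-based equals/hashCode.
public class HexRowId implements RowId {
    private final ByteBuffer bytes;

    public HexRowId(ByteBuffer bytes) { this.bytes = bytes; }

    public byte[] getBytes() {
        byte[] out = new byte[bytes.remaining()];
        bytes.duplicate().get(out); // duplicate() leaves our read position untouched
        return out;
    }

    @Override public String toString() {
        StringBuilder sb = new StringBuilder();
        for (byte b : getBytes()) sb.append(String.format("%02x", b));
        return sb.toString();
    }

    @Override public int hashCode() { return bytes.hashCode(); }

    @Override public boolean equals(Object obj) {
        return obj instanceof HexRowId && bytes.equals(((HexRowId) obj).bytes);
    }
}
```

ByteBuffer's own equals/hashCode compare remaining contents, so two ids built from equal byte arrays compare equal, which is what a JDBC driver needs from RowId.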
svn commit: r1166302 - in /cassandra/trunk: ./ src/java/org/apache/cassandra/db/ src/java/org/apache/cassandra/utils/IntervalTree/
Author: jbellis
Date: Wed Sep 7 18:28:09 2011
New Revision: 1166302

URL: http://svn.apache.org/viewvc?rev=1166302&view=rev
Log:
fix IntervalTree max calculation
patch by Paul Cannon; reviewed by Ben Coverston for CASSANDRA-3145

Modified:
    cassandra/trunk/CHANGES.txt
    cassandra/trunk/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
    cassandra/trunk/src/java/org/apache/cassandra/db/DataTracker.java
    cassandra/trunk/src/java/org/apache/cassandra/utils/IntervalTree/IntervalNode.java
    cassandra/trunk/src/java/org/apache/cassandra/utils/IntervalTree/IntervalTree.java

Modified: cassandra/trunk/CHANGES.txt
URL: http://svn.apache.org/viewvc/cassandra/trunk/CHANGES.txt?rev=1166302&r1=1166301&r2=1166302&view=diff
==============================================================================
--- cassandra/trunk/CHANGES.txt (original)
+++ cassandra/trunk/CHANGES.txt Wed Sep 7 18:28:09 2011
@@ -44,7 +44,7 @@
    Thrift<->Avro conversion methods (CASSANDRA-3032)
  * Add timeouts to client request schedulers (CASSANDRA-3079, 3096)
  * Cli to use hashes rather than array of hashes for strategy options (CASSANDRA-3081)
- * LeveledCompactionStrategy (CASSANDRA-1608, 3085, 3110, 3087)
+ * LeveledCompactionStrategy (CASSANDRA-1608, 3085, 3110, 3087, 3145)
  * Improvements of the CLI `describe` command (CASSANDRA-2630)
  * reduce window where dropped CF sstables may not be deleted (CASSANDRA-2942)
  * Expose gossip/FD info to JMX (CASSANDRA-2806)
@@ -65,6 +65,7 @@
  * use TreeMap backed column families for the SSTable simple writers
    (CASSANDRA-3148)
+
 0.8.5
  * fix NPE when encryption_options is unspecified (CASSANDRA-3007)
  * include column name in validation failure exceptions (CASSANDRA-2849)

Modified: cassandra/trunk/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
URL: http://svn.apache.org/viewvc/cassandra/trunk/src/java/org/apache/cassandra/db/ColumnFamilyStore.java?rev=1166302&r1=1166301&r2=1166302&view=diff
==============================================================================
--- cassandra/trunk/src/java/org/apache/cassandra/db/ColumnFamilyStore.java (original)
+++ cassandra/trunk/src/java/org/apache/cassandra/db/ColumnFamilyStore.java Wed Sep 7 18:28:09 2011
@@ -1303,7 +1303,7 @@ public class ColumnFamilyStore implement
             view = data.getView();
             // startAt == minimum is ok, but stopAt == minimum is confusing because all IntervalTree deals with
             // is Comparable, so it won't know to special-case that.
-            Comparable stopInTree = stopAt.isEmpty() ? view.intervalTree.max : stopAt;
+            Comparable stopInTree = stopAt.isEmpty() ? view.intervalTree.max() : stopAt;
             sstables = view.intervalTree.search(new Interval(startWith, stopInTree));
             if (SSTableReader.acquireReferences(sstables))
                 break;

Modified: cassandra/trunk/src/java/org/apache/cassandra/db/DataTracker.java
URL: http://svn.apache.org/viewvc/cassandra/trunk/src/java/org/apache/cassandra/db/DataTracker.java?rev=1166302&r1=1166301&r2=1166302&view=diff
==============================================================================
--- cassandra/trunk/src/java/org/apache/cassandra/db/DataTracker.java (original)
+++ cassandra/trunk/src/java/org/apache/cassandra/db/DataTracker.java Wed Sep 7 18:28:09 2011
@@ -516,11 +516,9 @@ public class DataTracker
     private IntervalTree buildIntervalTree(List sstables)
     {
-        List itsstList = ImmutableList.copyOf(Ordering.from(SSTable.sstableComparator).sortedCopy(sstables));
-        List intervals = new ArrayList(itsstList.size());
-        for (SSTableReader sstable : itsstList)
+        List intervals = new ArrayList(sstables.size());
+        for (SSTableReader sstable : sstables)
             intervals.add(new Interval(sstable.first, sstable.last, sstable));
-        assert intervals.size() == sstables.size();
         return new IntervalTree(intervals);
     }

Modified: cassandra/trunk/src/java/org/apache/cassandra/utils/IntervalTree/IntervalNode.java
URL: http://svn.apache.org/viewvc/cassandra/trunk/src/java/org/apache/cassandra/utils/IntervalTree/IntervalNode.java?rev=1166302&r1=1166301&r2=1166302&view=diff
==============================================================================
--- cassandra/trunk/src/java/org/apache/cassandra/utils/IntervalTree/IntervalNode.java (original)
+++ cassandra/trunk/src/java/org/apache/cassandra/utils/IntervalTree/IntervalNode.java Wed Sep 7 18:28:09 2011
@@ -1,13 +1,16 @@
 package org.apache.cassandra.utils.IntervalTree;

 import java.util.ArrayList;
+import java.util.Collections;
 import java.util.List;
-import java.util.concurrent.ConcurrentSkipListSet;

+import com.google.common.collect.ImmutableList;

 public class IntervalNode
 {
     Interval interval;
     Comparable v_pt;
+    Comparable v_min;
+    Comparable v_max;
     List v_left;
     List v_right;
buildbot failure in ASF Buildbot on cassandra-trunk
The Buildbot has detected a new failure on builder cassandra-trunk while building ASF Buildbot.

Full details are available at:
 http://ci.apache.org/builders/cassandra-trunk/builds/1620

Buildbot URL: http://ci.apache.org/

Buildslave for this Build: isis_ubuntu
Build Reason: scheduler
Build Source Stamp: [branch cassandra/trunk] 1166301
Blamelist: jbellis

BUILD FAILED: failed compile

sincerely,
 -The Buildbot
svn commit: r1166301 - in /cassandra/trunk/drivers/java: ./ src/org/apache/cassandra/cql/jdbc/
Author: jbellis
Date: Wed Sep 7 18:26:19 2011
New Revision: 1166301

URL: http://svn.apache.org/viewvc?rev=1166301&view=rev
Log:
add getRowId support to CResultSet
patch by Rick Shaw; reviewed by jbellis for CASSANDRA-3089

Modified:
    cassandra/trunk/drivers/java/CHANGES.txt
    cassandra/trunk/drivers/java/src/org/apache/cassandra/cql/jdbc/AbstractResultSet.java
    cassandra/trunk/drivers/java/src/org/apache/cassandra/cql/jdbc/CResultSet.java
    cassandra/trunk/drivers/java/src/org/apache/cassandra/cql/jdbc/CassandraDatabaseMetaData.java
    cassandra/trunk/drivers/java/src/org/apache/cassandra/cql/jdbc/CassandraPreparedStatement.java

Modified: cassandra/trunk/drivers/java/CHANGES.txt
URL: http://svn.apache.org/viewvc/cassandra/trunk/drivers/java/CHANGES.txt?rev=1166301&r1=1166300&r2=1166301&view=diff
==============================================================================
--- cassandra/trunk/drivers/java/CHANGES.txt (original)
+++ cassandra/trunk/drivers/java/CHANGES.txt Wed Sep 7 18:26:19 2011
@@ -1,4 +1,4 @@
 1.0.4
- * improve JDBC spec compliance (CASSANDRA-2720, 2754, 3052)
+ * improve JDBC spec compliance (CASSANDRA-2720, 2754, 3052, 3089)
  * cooperate with other jdbc drivers (CASSANDRA-2842)
  * fix unbox-to-NPE with null primitives (CASSANDRA-2956)

Modified: cassandra/trunk/drivers/java/src/org/apache/cassandra/cql/jdbc/AbstractResultSet.java
URL: http://svn.apache.org/viewvc/cassandra/trunk/drivers/java/src/org/apache/cassandra/cql/jdbc/AbstractResultSet.java?rev=1166301&r1=1166300&r2=1166301&view=diff
==============================================================================
--- cassandra/trunk/drivers/java/src/org/apache/cassandra/cql/jdbc/AbstractResultSet.java (original)
+++ cassandra/trunk/drivers/java/src/org/apache/cassandra/cql/jdbc/AbstractResultSet.java Wed Sep 7 18:26:19 2011
@@ -155,10 +155,6 @@ abstract class AbstractResultSet
         throw new SQLFeatureNotSupportedException(NOT_SUPPORTED);
     }

-    public RowId getRowId(int arg0) throws SQLException
-    {
-        throw new SQLFeatureNotSupportedException(NOT_SUPPORTED);
-    }

     public SQLXML getSQLXML(int arg0) throws SQLException
     {

Modified: cassandra/trunk/drivers/java/src/org/apache/cassandra/cql/jdbc/CResultSet.java
URL: http://svn.apache.org/viewvc/cassandra/trunk/drivers/java/src/org/apache/cassandra/cql/jdbc/CResultSet.java?rev=1166301&r1=1166300&r2=1166301&view=diff
==============================================================================
--- cassandra/trunk/drivers/java/src/org/apache/cassandra/cql/jdbc/CResultSet.java (original)
+++ cassandra/trunk/drivers/java/src/org/apache/cassandra/cql/jdbc/CResultSet.java Wed Sep 7 18:26:19 2011
@@ -592,10 +592,24 @@ class CResultSet extends AbstractResultS
         return rowNumber;
     }

-    // RowId (shall we just store the raw bytes as it is kept in C* ? Probably...
-    public RowId getRowId(String arg0) throws SQLException
+    public RowId getRowId(int index) throws SQLException
     {
-        throw new SQLFeatureNotSupportedException(NOT_SUPPORTED);
+        checkIndex(index);
+        return getRowId(values.get(index - 1));
+    }
+
+    public RowId getRowId(String name) throws SQLException
+    {
+        checkName(name);
+        return getRowId(valueMap.get(name));
+    }
+
+    private final RowId getRowId(TypedColumn column) throws SQLException
+    {
+        checkNotClosed();
+        ByteBuffer value = column.getRawColumn().value;
+        wasNull = value == null;
+        return value == null ? null : new CassandraRowId(value);
     }

     public short getShort(int index) throws SQLException

Modified: cassandra/trunk/drivers/java/src/org/apache/cassandra/cql/jdbc/CassandraDatabaseMetaData.java
URL: http://svn.apache.org/viewvc/cassandra/trunk/drivers/java/src/org/apache/cassandra/cql/jdbc/CassandraDatabaseMetaData.java?rev=1166301&r1=1166300&r2=1166301&view=diff
==============================================================================
--- cassandra/trunk/drivers/java/src/org/apache/cassandra/cql/jdbc/CassandraDatabaseMetaData.java (original)
+++ cassandra/trunk/drivers/java/src/org/apache/cassandra/cql/jdbc/CassandraDatabaseMetaData.java Wed Sep 7 18:26:19 2011
@@ -29,9 +29,6 @@ import java.sql.RowIdLifetime;
 import java.sql.SQLException;
 import java.sql.SQLFeatureNotSupportedException;

-import org.apache.cassandra.db.DBConstants;
-import org.apache.cassandra.utils.FBUtilities;
-
 class CassandraDatabaseMetaData implements DatabaseMetaData
 {
     private CassandraConnection connection;
@@ -360,7 +357,7 @@ class CassandraDatabaseMetaData implemen
     public RowIdLifetime getRowIdLifetime() throws SQLException
     {
-        return RowIdLifetime.ROWID_UNSUPPORTED;
+        return RowIdLifetime.ROWID_VALID_FOREVER;
     }

     public String getSQLKeywords() throws SQLException

Modified: cassandra/trunk/drivers/java/src/org/apache/cassandra/cql/jdbc/CassandraPreparedState
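For readers skimming the diff above, the essential behavior of the new private getRowId(TypedColumn) helper is the JDBC null contract: a null raw column value must set the wasNull flag and map to a null RowId. Below is a stand-alone sketch of just that contract (invented names; a plain byte array stands in for CassandraRowId, and the surrounding ResultSet machinery is omitted):

```java
import java.nio.ByteBuffer;

public class RowIdSketch {
    // Mirrors CResultSet's wasNull field, which ResultSet.wasNull() reports.
    static boolean wasNull;

    // Null value -> set wasNull and return null; otherwise copy out the raw
    // key bytes (the real code wraps them in a CassandraRowId instead).
    static byte[] getRowId(ByteBuffer value) {
        wasNull = (value == null);
        if (value == null)
            return null;
        byte[] raw = new byte[value.remaining()];
        value.duplicate().get(raw); // read via a duplicate; don't move the caller's position
        return raw;
    }

    public static void main(String[] args) {
        System.out.println(getRowId(null)); // prints "null"
        System.out.println(wasNull);        // prints "true"
        byte[] id = getRowId(ByteBuffer.wrap(new byte[] { 1, 2, 3 }));
        System.out.println(id.length + " " + wasNull); // prints "3 false"
    }
}
```

Reading through a duplicate of the buffer matters for the same reason it would in the driver: the raw value buffer may be consumed again by another getter.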
[jira] [Commented] (CASSANDRA-3068) Fix count()
[ https://issues.apache.org/jira/browse/CASSANDRA-3068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13099174#comment-13099174 ]

Hudson commented on CASSANDRA-3068:
---

Integrated in Cassandra #1083 (See [https://builds.apache.org/job/Cassandra/1083/])
fix of the CQL count() behavior
patch by Jonathan Ellis and Pavel Yaskevich; reviewed by Eric Evans and Pavel Yaskevich for CASSANDRA-3068

xedin : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1166273
Files :
* /cassandra/trunk/CHANGES.txt
* /cassandra/trunk/doc/cql/CQL.textile
* /cassandra/trunk/src/java/org/apache/cassandra/cql/QueryProcessor.java
* /cassandra/trunk/test/system/test_cql.py

> Fix count()
> ---
>
> Key: CASSANDRA-3068
> URL: https://issues.apache.org/jira/browse/CASSANDRA-3068
> Project: Cassandra
> Issue Type: Sub-task
> Components: API, Core
> Reporter: Jonathan Ellis
> Assignee: Jonathan Ellis
> Labels: cql
> Fix For: 1.0
>
> Attachments: 3068-v2.txt, 3068.txt, CASSANDRA-3068-v3.patch
>
> count() has been broken since it was introduced in CASSANDRA-1704.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (CASSANDRA-3148) Use TreeMap backed column families for the SSTable simple writers
[ https://issues.apache.org/jira/browse/CASSANDRA-3148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13099175#comment-13099175 ]

Hudson commented on CASSANDRA-3148:
---

Integrated in Cassandra #1083 (See [https://builds.apache.org/job/Cassandra/1083/])
Use TreeMap backed column families for the SSTable simple writers
patch by slebresne; reviewed by jbellis for CASSANDRA-3148

slebresne : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1166283
Files :
* /cassandra/trunk/CHANGES.txt
* /cassandra/trunk/src/java/org/apache/cassandra/db/ArrayBackedSortedColumns.java
* /cassandra/trunk/src/java/org/apache/cassandra/db/ColumnFamilySerializer.java
* /cassandra/trunk/src/java/org/apache/cassandra/db/ReadResponse.java
* /cassandra/trunk/src/java/org/apache/cassandra/db/Row.java
* /cassandra/trunk/src/java/org/apache/cassandra/db/RowMutation.java
* /cassandra/trunk/src/java/org/apache/cassandra/db/ThreadSafeSortedColumns.java
* /cassandra/trunk/src/java/org/apache/cassandra/db/TreeMapBackedSortedColumns.java
* /cassandra/trunk/src/java/org/apache/cassandra/io/sstable/SSTableSimpleUnsortedWriter.java
* /cassandra/trunk/src/java/org/apache/cassandra/io/sstable/SSTableSimpleWriter.java
* /cassandra/trunk/src/java/org/apache/cassandra/io/sstable/SSTableWriter.java
* /cassandra/trunk/test/unit/org/apache/cassandra/db/ArrayBackedSortedColumnsTest.java

> Use TreeMap backed column families for the SSTable simple writers
> ---
>
> Key: CASSANDRA-3148
> URL: https://issues.apache.org/jira/browse/CASSANDRA-3148
> Project: Cassandra
> Issue Type: Improvement
> Components: Core
> Affects Versions: 1.0
> Reporter: Sylvain Lebresne
> Assignee: Sylvain Lebresne
> Priority: Trivial
> Fix For: 1.0
>
> Attachments: 0001-Use-TreeMap-for-SSTable-simple-writers-CFs.patch
>
> SSTable*SimpleWriter classes are not intended to be used concurrently (and
> indeed they are not thread safe), so there is no point in using CLSM backed
> column families.
-- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (CASSANDRA-3146) Minor changes to IntervalTree
[ https://issues.apache.org/jira/browse/CASSANDRA-3146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13099171#comment-13099171 ]

Benjamin Coverston commented on CASSANDRA-3146:
---

+1 These changes look good.

> Minor changes to IntervalTree
> ---
>
> Key: CASSANDRA-3146
> URL: https://issues.apache.org/jira/browse/CASSANDRA-3146
> Project: Cassandra
> Issue Type: Improvement
> Components: Core
> Reporter: paul cannon
> Assignee: paul cannon
> Priority: Minor
> Fix For: 1.0
>
> Attachments: 3146.patch.txt
>
> I have a few minor changes to IntervalTree that I feel improve its
> performance and readability. None of this should have an effect on
> correctness.
> Details:
> * rename IntervalNode members v_left/v_right to intersects_left/intersects_right,
>   to avoid confusion with the members similarly named "left" and "right"
> * remove the unused IntervalNode.interval member
> * don't calculate the list of intersecting intervals twice in the IntervalNode
>   constructor
> * fix comment in IntervalNode constructor: s/i.min/i.max/
> * remove the unused java.util.Collections import from IntervalTree.java
> * remove the unused code path (checking twice for null == node) in
>   IntervalTree.searchInternal()
> * genericize the Interval parameter type to IntervalTree.search()
> There are still a lot of unchecked operations around the Interval generics,
> and the OCD guy inside me wants it to be completely type-safe, but in real
> life this ought to be fine as it is. Plus the static Orderings in
> Interval.java would need to be made instance variables, and that would just
> be annoying.
> Ok, so, go ahead and ignore any of this if appropriate. It just helped me
> feel better about the code.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (CASSANDRA-3145) IntervalTree could miscalculate its max
[ https://issues.apache.org/jira/browse/CASSANDRA-3145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13099172#comment-13099172 ]

Benjamin Coverston commented on CASSANDRA-3145:
---

+1 The patch looks good.

> IntervalTree could miscalculate its max
> ---
>
> Key: CASSANDRA-3145
> URL: https://issues.apache.org/jira/browse/CASSANDRA-3145
> Project: Cassandra
> Issue Type: Bug
> Components: Core
> Reporter: paul cannon
> Assignee: paul cannon
> Priority: Minor
> Fix For: 1.0
>
> Attachments: 3145.patch.txt
>
> The implementation of IntervalTree in trunk expects an ordered list of
> Interval objects as the argument to its constructor. It uses the ordering
> (only) to determine its minimum and maximum endpoints out of all the
> Intervals stored in it. However, no single ordering can guarantee both that
> the first element has the set-wide minimum and that the last element has
> the set-wide maximum; you would have to order by minima, or by maxima, or
> by some combination of the two.
> I propose that the requirement for ordered input to the IntervalTree
> constructor be dropped, seeing as how the elements will be sorted as
> necessary inside the IntervalNode object anyway. The set-wide minimum and
> maximum could be more straightforwardly calculated inside IntervalNode, and
> just exposed via IntervalTree.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
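The miscalculation described above is easy to see in miniature. The sketch below (invented names, not the actual IntervalTree code) shows why the set-wide bounds have to be computed by scanning all intervals rather than taken from the first and last elements of a sorted list:

```java
import java.util.List;

// A closed interval endpoint pair; stand-in for Cassandra's Interval class.
class Interval {
    final int min, max;
    Interval(int min, int max) { this.min = min; this.max = max; }
}

public class MinMaxSketch {
    // Compute the set-wide bounds by scanning every interval. No single sort
    // order can provide both: sorting by min puts the smallest min first, but
    // the largest max may sit anywhere in the list.
    static int[] bounds(List<Interval> intervals) {
        int lo = Integer.MAX_VALUE, hi = Integer.MIN_VALUE;
        for (Interval i : intervals) {
            lo = Math.min(lo, i.min);
            hi = Math.max(hi, i.max);
        }
        return new int[] { lo, hi };
    }

    public static void main(String[] args) {
        // Sorted by min: [1,10] comes first, yet it also holds the largest max,
        // so "take the last element's max" would wrongly report 5.
        List<Interval> sorted = List.of(new Interval(1, 10), new Interval(2, 5));
        int[] b = bounds(sorted);
        System.out.println(b[0] + ".." + b[1]); // prints "1..10"
    }
}
```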
[jira] [Created] (CASSANDRA-3151) CLI documentation should explain how to create column families with CompositeType's
CLI documentation should explain how to create column families with CompositeType's
---

Key: CASSANDRA-3151
URL: https://issues.apache.org/jira/browse/CASSANDRA-3151
Project: Cassandra
Issue Type: Improvement
Reporter: Ryan King
Priority: Minor

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (CASSANDRA-2819) Split rpc timeout for read and write ops
[ https://issues.apache.org/jira/browse/CASSANDRA-2819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13099155#comment-13099155 ]

Melvin Wang commented on CASSANDRA-2819:
---

I have been on vacation since the middle of last week. Picking this up now.

> Split rpc timeout for read and write ops
> ---
>
> Key: CASSANDRA-2819
> URL: https://issues.apache.org/jira/browse/CASSANDRA-2819
> Project: Cassandra
> Issue Type: New Feature
> Components: Core
> Reporter: Stu Hood
> Assignee: Melvin Wang
> Fix For: 1.0
>
> Attachments: 2819-v4.txt, rpc-jira.patch
>
> Given the vastly different latency characteristics of reads and writes, it
> makes sense for them to have independent rpc timeouts internally.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (CASSANDRA-2474) CQL support for compound columns
[ https://issues.apache.org/jira/browse/CASSANDRA-2474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13099152#comment-13099152 ]

Jonathan Ellis commented on CASSANDRA-2474:
---

+1

> CQL support for compound columns
> ---
>
> Key: CASSANDRA-2474
> URL: https://issues.apache.org/jira/browse/CASSANDRA-2474
> Project: Cassandra
> Issue Type: Sub-task
> Components: API, Core
> Reporter: Eric Evans
> Assignee: Pavel Yaskevich
> Labels: cql
> Fix For: 1.0
>
> Attachments: screenshot-1.jpg, screenshot-2.jpg
>
> For the most part, this boils down to supporting the specification of
> compound column names (the CQL syntax is colon-delimited terms), and then
> teaching the decoders (drivers) to create structures from the results.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (CASSANDRA-2474) CQL support for compound columns
[ https://issues.apache.org/jira/browse/CASSANDRA-2474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13099141#comment-13099141 ]

T Jake Luciani commented on CASSANDRA-2474:
---

bq. UPDATE tweets SET COMPOUND NAME ('2e1c3308', 'cscotta') = 'My motocycle...' WHERE KEY = ;

We can create a function like COMPOUND_NAME which will create a composite column under the hood; that will work for Hive too. The syntax would then look like:
{code}
UPDATE tweets:transposed
SET value = 'my motorcycle'
WHERE KEY = AND column = COMPOUND_NAME('2e1c3308', 'cscotta');
{code}

> CQL support for compound columns
> ---
>
> Key: CASSANDRA-2474
> URL: https://issues.apache.org/jira/browse/CASSANDRA-2474
> Project: Cassandra
> Issue Type: Sub-task
> Components: API, Core
> Reporter: Eric Evans
> Assignee: Pavel Yaskevich
> Labels: cql
> Fix For: 1.0
>
> Attachments: screenshot-1.jpg, screenshot-2.jpg
>
> For the most part, this boils down to supporting the specification of
> compound column names (the CQL syntax is colon-delimited terms), and then
> teaching the decoders (drivers) to create structures from the results.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (CASSANDRA-3150) ColumnFormatRecordReader loops forever
ColumnFormatRecordReader loops forever
---

Key: CASSANDRA-3150
URL: https://issues.apache.org/jira/browse/CASSANDRA-3150
Project: Cassandra
Issue Type: Bug
Components: Hadoop
Affects Versions: 0.8.4
Reporter: Mck SembWever
Assignee: Mck SembWever
Priority: Critical

From http://thread.gmane.org/gmane.comp.db.cassandra.user/20039
{quote}
bq. Cassandra-0.8.4 w/ ByteOrderedPartitioner
bq. CFIF's inputSplitSize=196608
bq. 3 map tasks (from 4013) are still running after reading 25 million rows.
bq. Can this be a bug in StorageService.getSplits(..) ?

getSplits looks pretty foolproof to me, but I guess we'd need to add more
debug logging to rule out a bug there for sure. The main alternative, I
guess, would be a bug in the recordreader paging.
{quote}

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Issue Comment Edited] (CASSANDRA-3137) Implement wrapping intersections for ConfigHelper's InputKeyRange
[ https://issues.apache.org/jira/browse/CASSANDRA-3137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13099124#comment-13099124 ]

Mck SembWever edited comment on CASSANDRA-3137 at 9/7/11 5:37 PM:
---

Indeed. I could be using this asap. The use case is...

We're using a ByteOrderedPartitioner because we run incremental hadoop jobs over one of our column families where "events" initially come in. This cf has RF=1 and time-based UUID keys that are manipulated so that their byte ordering is time ordered (the byte-unsigned timestamp put up front). Each column has a ttl of 3 months.

After 3 months of data we saw all the data on one node. Now I understand why: the token range is the timestamp range, which runs from 1970 to 2270, so of course our 3-month period fell on one node (with a 3-node cluster, even 100 years would fall on one node).

To properly manage this cf we need to either continuously move nodes around, a cumbersome operation, or change the key so it's prefixed with {{timestamp % 3months}}. This would allow 3 months of data to cycle over the whole cluster and wrap around again. Obviously we're leaning towards the latter solution as it simplifies operations. But it does require this patch.

(When CFIF supports IndexClause everything changes: we switch our cluster to RandomPartitioner, use secondary indexes, and never look back...)

> Implement wrapping intersections for ConfigHelper's InputKeyRange
> ---
>
> Key: CASSANDRA-3137
> URL: https://issues.apache.org/jira/browse/CASSANDRA-3137
> Project: Cassandra
> Issue Type: Improvement
> Components: Hadoop
> Affects Versions: 0.8.5
> Reporter: Mck SembWever
> Assignee: Mck SembWever
> Attachments: CASSANDRA-3137.patch, CASSANDRA-3137.patch
>
> Before there was no support for multiple intersections between the split's
> range and the job's configured range.
> After CASSANDRA-3108 it is now possible.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
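The {{timestamp % 3months}} scheme in the comment above can be sketched as follows. This is illustrative only, not Cassandra code; the window length, key layout, and class name are all assumptions. The point is that the key prefix repeats every window, so under a ByteOrderedPartitioner keys cycle across the token range instead of marching monotonically onto a single node:

```java
import java.nio.ByteBuffer;

public class CyclingKeySketch {
    // Assumed window: roughly 3 months, expressed in milliseconds.
    static final long WINDOW_MS = 90L * 24 * 60 * 60 * 1000;

    // Build a byte-ordered key: cycling time prefix followed by the rest of
    // the identifier (e.g. the remainder of a time-based UUID).
    static ByteBuffer makeKey(long timestampMs, byte[] suffix) {
        ByteBuffer key = ByteBuffer.allocate(8 + suffix.length);
        key.putLong(timestampMs % WINDOW_MS); // prefix repeats every WINDOW_MS
        key.put(suffix);
        key.flip();
        return key;
    }

    public static void main(String[] args) {
        long t = System.currentTimeMillis();
        // Two events exactly one window apart get identical prefixes,
        // i.e. the key space wraps around rather than growing forever.
        ByteBuffer a = makeKey(t, new byte[0]);
        ByteBuffer b = makeKey(t + WINDOW_MS, new byte[0]);
        System.out.println(a.equals(b)); // prints "true"
    }
}
```

Because the prefix wraps, a Hadoop job over "the last 3 months" needs a key range that can wrap around the ring, which is exactly what CASSANDRA-3137 asks for.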