[jira] [Commented] (CASSANDRA-12922) Bloom filter miss counts are not measured correctly
[ https://issues.apache.org/jira/browse/CASSANDRA-12922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15685877#comment-15685877 ] Branimir Lambov commented on CASSANDRA-12922: - The fix is ok, but we also need a test. Something like [{{SSTableReaderTest.testGetPositionsKeyCacheStats}}|https://github.com/mm-binary/cassandra/blob/845daa181f2a48a1c5c799266ac1205e70c5f351/test/unit/org/apache/cassandra/io/sstable/SSTableReaderTest.java#L294], using a small bloom filter or {{AlwaysPresentFilter}} and counting the false positives. > Bloom filter miss counts are not measured correctly > --- > > Key: CASSANDRA-12922 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12922 > Project: Cassandra > Issue Type: Bug >Reporter: Branimir Lambov >Assignee: Mahdi Mohammadi > > Bloom filter hits and misses are evaluated incorrectly in > {{BigTableReader.getPosition}}: we properly record hits, but not misses. In > particular, if we don't find a match for a key in the index, which is where > almost all non-matches will be rejected, [we don't record a bloom filter > false > positive|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/io/sstable/format/big/BigTableReader.java#L228]. > This leads to very misleading output from e.g. {{nodetool tablestats}}. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
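A test along the suggested lines could, in outline, look like this self-contained sketch. The class and method names are illustrative stand-ins, not Cassandra's actual {{BigTableReader}}/{{AlwaysPresentFilter}} API; the point is the shape of the assertion: with a filter that never rejects a key, every lookup of an absent key must show up in the false-positive count.

```java
import java.util.HashSet;
import java.util.Set;

// Illustrative sketch only: stand-ins for Cassandra's reader/filter classes.
public class BloomFilterMissCounting {
    final Set<String> index = new HashSet<>();  // stand-in for the primary index
    long truePositives = 0;
    long falsePositives = 0;

    // Stand-in for AlwaysPresentFilter: answers "maybe present" for every key.
    boolean mightContain(String key) { return true; }

    // Stand-in for getPosition: consult the filter, then the index,
    // and record the outcome on *both* paths (the bug only recorded hits).
    boolean getPosition(String key) {
        if (!mightContain(key))
            return false;              // true negative: nothing to record
        if (index.contains(key)) {
            truePositives++;           // filter said yes, index agreed
            return true;
        }
        falsePositives++;              // index miss: the path this ticket fixes
        return false;
    }

    public static void main(String[] args) {
        BloomFilterMissCounting reader = new BloomFilterMissCounting();
        reader.index.add("k1");
        reader.index.add("k2");
        for (String key : new String[] { "k1", "k2", "absent-1", "absent-2" })
            reader.getPosition(key);
        // Two hits and two false positives.
        System.out.println(reader.truePositives + " " + reader.falsePositives);
    }
}
```

With a real small bloom filter instead of the always-present stand-in, the assertion would use inequalities rather than exact counts, since real false positives are probabilistic.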
[jira] [Created] (CASSANDRA-12942) ClassCastException during Status
Chris Donati created CASSANDRA-12942: -- Summary: ClassCastException during Status Key: CASSANDRA-12942 URL: https://issues.apache.org/jira/browse/CASSANDRA-12942 Project: Cassandra Issue Type: Bug Components: Tools Environment: Cassandra 3.7 OpenJDK 8 Ubuntu 14.04 Reporter: Chris Donati Priority: Minor I often encounter a ClassCastException when trying to run `nodetool status` on a particular cluster. Occasionally, the command will work on one of the nodes (and report all of the nodes as 'UN'), but the majority of the time, nodetool raises the following exception:
{noformat}
error: null
-- StackTrace --
java.lang.ClassCastException
{noformat}
A couple of times, I've gotten lucky and nodetool has provided a more verbose error message:
{noformat}
error: org.apache.cassandra.dht.LocalPartitioner$LocalToken cannot be cast to org.apache.cassandra.dht.ByteOrderedPartitioner$BytesToken
-- StackTrace --
java.lang.ClassCastException: org.apache.cassandra.dht.LocalPartitioner$LocalToken cannot be cast to org.apache.cassandra.dht.ByteOrderedPartitioner$BytesToken
	at org.apache.cassandra.dht.ByteOrderedPartitioner$BytesToken.compareTo(ByteOrderedPartitioner.java:79)
	at org.apache.cassandra.dht.ByteOrderedPartitioner$BytesToken.compareTo(ByteOrderedPartitioner.java:55)
	at org.apache.cassandra.dht.Token$KeyBound.compareTo(Token.java:166)
	at org.apache.cassandra.dht.Token$KeyBound.compareTo(Token.java:145)
	at org.apache.cassandra.db.DecoratedKey.compareTo(DecoratedKey.java:93)
	at org.apache.cassandra.io.sstable.IndexSummary.binarySearch(IndexSummary.java:122)
	at org.apache.cassandra.io.sstable.format.SSTableReader.getSampleIndexesForRanges(SSTableReader.java:1345)
	at org.apache.cassandra.io.sstable.format.SSTableReader.getKeySamples(SSTableReader.java:1379)
	at org.apache.cassandra.db.ColumnFamilyStore.keySamples(ColumnFamilyStore.java:2058)
	at org.apache.cassandra.service.StorageService.keySamples(StorageService.java:3722)
	at org.apache.cassandra.service.StorageService.getSplits(StorageService.java:3678)
	at org.apache.cassandra.dht.ByteOrderedPartitioner.describeOwnership(ByteOrderedPartitioner.java:284)
	at org.apache.cassandra.service.StorageService.effectiveOwnership(StorageService.java:4460)
	at org.apache.cassandra.service.StorageService.effectiveOwnership(StorageService.java:184)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:71)
	at sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:275)
	at com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:112)
	at com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:46)
	at com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237)
	at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138)
	at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:252)
	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
	at com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:801)
	at javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1468)
	at javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:76)
	at javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1309)
	at javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1401)
	at javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:829)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:324)
	at sun.rmi.transport.Transport$1.run(Transport.java:200)
	at sun.rmi.transport.Transport$1.run(Transport.java:197)
	at java.security.Access
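The mismatch in the trace is between token types belonging to two different partitioners (the {{LocalToken}} presumably comes from a table backed by {{LocalPartitioner}}, such as an index or system table, whose key samples reach {{ByteOrderedPartitioner}}'s ownership math). A minimal, self-contained sketch of that failure mode, using illustrative stand-in classes rather than Cassandra's own:

```java
// Two token-like Comparable types that each blindly cast their argument to
// their own type, as partitioner-specific tokens do. Comparing across types
// throws ClassCastException, which is what the stack trace above shows.
public class TokenCastDemo {
    static class BytesToken implements Comparable<Object> {
        final byte[] token;
        BytesToken(byte[] t) { token = t; }
        @Override
        public int compareTo(Object o) {
            BytesToken other = (BytesToken) o;  // blows up if o is another token type
            return Integer.compare(token.length, other.token.length);
        }
    }

    static class LocalToken implements Comparable<Object> {
        @Override
        public int compareTo(Object o) { return 0; }
    }

    public static void main(String[] args) {
        try {
            new BytesToken(new byte[] { 1 }).compareTo(new LocalToken());
            System.out.println("comparison succeeded");
        } catch (ClassCastException e) {
            System.out.println("ClassCastException, as in the trace");
        }
    }
}
```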
[jira] [Comment Edited] (CASSANDRA-12922) Bloom filter miss counts are not measured correctly
[ https://issues.apache.org/jira/browse/CASSANDRA-12922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15685644#comment-15685644 ] Mahdi Mohammadi edited comment on CASSANDRA-12922 at 11/22/16 4:24 AM: --- ||trunk|3.X|| |[branch|https://github.com/apache/cassandra/compare/trunk...mm-binary:trunk-12922]|[branch|https://github.com/apache/cassandra/compare/trunk...mm-binary:12922-3.X]| was (Author: mahdix): ||trunk|3.X|| |[branch|https://github.com/apache/cassandra/compare/trunk...mm-binary:trunk-12922]| |[branch|https://github.com/apache/cassandra/compare/trunk...mm-binary:12922-3.X]|
[jira] [Commented] (CASSANDRA-12922) Bloom filter miss counts are not measured correctly
[ https://issues.apache.org/jira/browse/CASSANDRA-12922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15685653#comment-15685653 ] Mahdi Mohammadi commented on CASSANDRA-12922: - Can someone take a look?
[jira] [Updated] (CASSANDRA-12922) Bloom filter miss counts are not measured correctly
[ https://issues.apache.org/jira/browse/CASSANDRA-12922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mahdi Mohammadi updated CASSANDRA-12922: Status: Patch Available (was: Open)
[jira] [Comment Edited] (CASSANDRA-12922) Bloom filter miss counts are not measured correctly
[ https://issues.apache.org/jira/browse/CASSANDRA-12922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15685644#comment-15685644 ] Mahdi Mohammadi edited comment on CASSANDRA-12922 at 11/22/16 4:24 AM: --- ||trunk||3.X|| |[branch|https://github.com/apache/cassandra/compare/trunk...mm-binary:trunk-12922]|[branch|https://github.com/apache/cassandra/compare/trunk...mm-binary:12922-3.X]| was (Author: mahdix): ||trunk|3.X|| |[branch|https://github.com/apache/cassandra/compare/trunk...mm-binary:trunk-12922]|[branch|https://github.com/apache/cassandra/compare/trunk...mm-binary:12922-3.X]|
[jira] [Comment Edited] (CASSANDRA-12922) Bloom filter miss counts are not measured correctly
[ https://issues.apache.org/jira/browse/CASSANDRA-12922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15685644#comment-15685644 ] Mahdi Mohammadi edited comment on CASSANDRA-12922 at 11/22/16 4:23 AM: --- ||trunk|3.X|| |[branch|https://github.com/apache/cassandra/compare/trunk...mm-binary:trunk-12922]| |[branch|https://github.com/apache/cassandra/compare/trunk...mm-binary:12922-3.X]| was (Author: mahdix): ||trunk|| |[branch|https://github.com/apache/cassandra/compare/trunk...mm-binary:trunk-12922]|
[jira] [Commented] (CASSANDRA-12922) Bloom filter miss counts are not measured correctly
[ https://issues.apache.org/jira/browse/CASSANDRA-12922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15685644#comment-15685644 ] Mahdi Mohammadi commented on CASSANDRA-12922: - ||trunk|| |[branch|https://github.com/apache/cassandra/compare/trunk...mm-binary:trunk-12922]|
[jira] [Assigned] (CASSANDRA-12922) Bloom filter miss counts are not measured correctly
[ https://issues.apache.org/jira/browse/CASSANDRA-12922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mahdi Mohammadi reassigned CASSANDRA-12922: --- Assignee: Mahdi Mohammadi
[jira] [Comment Edited] (CASSANDRA-12941) Backport CASSANDRA-9967
[ https://issues.apache.org/jira/browse/CASSANDRA-12941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15685116#comment-15685116 ] Haijun Cao edited comment on CASSANDRA-12941 at 11/22/16 1:52 AM: -- @carlyeks is the author of CASSANDRA-9967 was (Author: haijuncao): [~carlyeks] can you take a look? thanks! > Backport CASSANDRA-9967 > --- > > Key: CASSANDRA-12941 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12941 > Project: Cassandra > Issue Type: Improvement > Components: Coordination, Observability > Reporter: Haijun Cao > Priority: Trivial > Fix For: 3.0.x > > Attachments: 12941-3.0.txt > > > Backport CASSANDRA-9967 > Materialized views are available in 3.0.x; it would be nice to check view build status by issuing one CQL query against the system_distributed table, hence the backport of CASSANDRA-9967 to 3.0.x.
[jira] [Commented] (CASSANDRA-12941) Backport CASSANDRA-9967
[ https://issues.apache.org/jira/browse/CASSANDRA-12941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15685116#comment-15685116 ] Haijun Cao commented on CASSANDRA-12941: [~carlyeks] can you take a look? thanks!
[jira] [Commented] (CASSANDRA-9967) Determine if a Materialized View is finished building, without having to query each node
[ https://issues.apache.org/jira/browse/CASSANDRA-9967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15684901#comment-15684901 ] Simon Zhou commented on CASSANDRA-9967: --- [~carlyeks] Is it possible to backport this patch to 3.0? Thanks. > Determine if a Materialized View is finished building, without having to > query each node > > > Key: CASSANDRA-9967 > URL: https://issues.apache.org/jira/browse/CASSANDRA-9967 > Project: Cassandra > Issue Type: New Feature > Components: Coordination, Observability > Reporter: Alan Boudreault > Assignee: Carl Yeksigian > Priority: Minor > Labels: lhf > Fix For: 3.6 > > > Since MVs are eventually consistent with their base table, it would be nice if we could easily know the state of the MV after its creation, so we could wait until the MV is built before doing some operations. > // cc [~mbroecheler] [~tjake] [~carlyeks] [~enigmacurry]
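For context, with this feature in place the check becomes a single query. The table and column names below are believed to match the {{system_distributed}} schema this ticket introduced in 3.x, but should be verified against the target version; keyspace and view names are illustrative:

```cql
-- One query shows build progress across the whole cluster:
SELECT host_id, status
FROM system_distributed.view_build_status
WHERE keyspace_name = 'ks' AND view_name = 'mv_by_user';
-- The view is fully built once every host reports a successful status.
```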
[jira] [Updated] (CASSANDRA-12941) Backport CASSANDRA-9967
[ https://issues.apache.org/jira/browse/CASSANDRA-12941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Haijun Cao updated CASSANDRA-12941: --- Attachment: 12941-3.0.txt
[jira] [Updated] (CASSANDRA-12941) Backport CASSANDRA-9967
[ https://issues.apache.org/jira/browse/CASSANDRA-12941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Haijun Cao updated CASSANDRA-12941: --- Status: Patch Available (was: Open)
[jira] [Created] (CASSANDRA-12941) Backport CASSANDRA-9967
Haijun Cao created CASSANDRA-12941: -- Summary: Backport CASSANDRA-9967 Key: CASSANDRA-12941 URL: https://issues.apache.org/jira/browse/CASSANDRA-12941 Project: Cassandra Issue Type: Improvement Components: Coordination, Observability Reporter: Haijun Cao Priority: Trivial Fix For: 3.0.x Backport CASSANDRA-9967 Materialized views are available in 3.0.x; it would be nice to check view build status by issuing one CQL query against the system_distributed table, hence the backport of CASSANDRA-9967 to 3.0.x.
[jira] [Commented] (CASSANDRA-10145) Change protocol to allow sending key space independent of query string
[ https://issues.apache.org/jira/browse/CASSANDRA-10145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15684806#comment-15684806 ] Jeremiah Jordan commented on CASSANDRA-10145: - [~stamhankar999] dtest CI has a bunch of errors. > Change protocol to allow sending key space independent of query string > -- > > Key: CASSANDRA-10145 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10145 > Project: Cassandra > Issue Type: Improvement > Reporter: Vishy Kasar > Assignee: Sandeep Tamhankar > Fix For: 3.x > > Attachments: 10145-trunk.txt > > > Currently the keyspace is either embedded in the query string or set through "use keyspace" on a connection by the client driver. > There are practical use cases where the client has the query and the keyspace independently. For that scenario to work, they have to create one client session per keyspace or resort to some string-replace hackery. > It would be nice if the protocol allowed sending the keyspace separately from the query.
[jira] [Commented] (CASSANDRA-12940) Large compaction backlogs should slow down repairs
[ https://issues.apache.org/jira/browse/CASSANDRA-12940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15684800#comment-15684800 ] Tom van der Woerdt commented on CASSANDRA-12940: To clarify: I'm asking for slower repairs, not faster compaction or repairs that stream less. No repair should go faster than the node can cope with, and right now repairs are just too fast :) > Large compaction backlogs should slow down repairs > -- > > Key: CASSANDRA-12940 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12940 > Project: Cassandra > Issue Type: Improvement > Reporter: Tom van der Woerdt > > Repairs cause a flood of small sstables. In some situations the small sstables come in so fast that it takes longer to commit the compaction transaction than it takes to stream in the tables. This causes a buildup of sstables, and that buildup causes compaction to go even slower (see CASSANDRA-12764). > For a cluster of mine this means running into nodes with >100 loadavg, with tables that have 10k sstables. After the repair finishes the nodes go back to normal, but it takes a while and affects query latency a lot. > The compaction paths could probably be faster, though I'm more interested in making repairs wait for compaction. When we have an L0 with 1+ tables, the repair path should probably wait a minute. > All I did was run {{nodetool repair}}: > {noformat} > SSTable count: 11755 > SSTables in each level: [11709/4, 23/10, 50, 0, 0, 0, 0, 0, 0] > {noformat} > {{nodetool compactionstats}} shows 17 pending tasks (seems a bit low) and {{nodetool netstats}} shows 1861 lines of text over 138 stream sessions.
[jira] [Updated] (CASSANDRA-10145) Change protocol to allow sending key space independent of query string
[ https://issues.apache.org/jira/browse/CASSANDRA-10145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jeremiah Jordan updated CASSANDRA-10145: Status: Patch Available (was: Awaiting Feedback)
[jira] [Updated] (CASSANDRA-12940) Large compaction backlogs should slow down repairs
[ https://issues.apache.org/jira/browse/CASSANDRA-12940?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tom van der Woerdt updated CASSANDRA-12940: --- Reproduced In: 3.0.9
[jira] [Created] (CASSANDRA-12940) Large compaction backlogs should slow down repairs
Tom van der Woerdt created CASSANDRA-12940: -- Summary: Large compaction backlogs should slow down repairs Key: CASSANDRA-12940 URL: https://issues.apache.org/jira/browse/CASSANDRA-12940 Project: Cassandra Issue Type: Improvement Reporter: Tom van der Woerdt Repairs cause a flood of small sstables. In some situations the small sstables come in so fast that it takes longer to commit the compaction transaction than it takes to stream in the tables. This causes a buildup of sstables, and that buildup causes compaction to go even slower (see CASSANDRA-12764). For a cluster of mine this means running into nodes with >100 loadavg, with tables that have 10k sstables. After the repair finishes the nodes go back to normal, but it takes a while and affects query latency a lot. The compaction paths could probably be faster, though I'm more interested in making repairs wait for compaction. When we have an L0 with 1+ tables, the repair path should probably wait a minute. All I did was run {{nodetool repair}}:
{noformat}
SSTable count: 11755
SSTables in each level: [11709/4, 23/10, 50, 0, 0, 0, 0, 0, 0]
{noformat}
{{nodetool compactionstats}} shows 17 pending tasks (seems a bit low) and {{nodetool netstats}} shows 1861 lines of text over 138 stream sessions.
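A hedged sketch of the requested behavior, not an existing Cassandra API: a delay function a repair session could consult before accepting the next incoming stream, growing with the compaction backlog above a threshold and capped so repairs still make progress. All names and tuning values are illustrative:

```java
// Illustrative backpressure calculation: pending would come from the
// compaction manager (what nodetool compactionstats reports); threshold
// and perTaskDelayMs are hypothetical tuning knobs.
public class RepairBackpressure {
    static long throttleMillis(int pending, int threshold, int perTaskDelayMs) {
        if (pending <= threshold)
            return 0;                                   // healthy node: no delay
        long delay = (long) (pending - threshold) * perTaskDelayMs;
        return Math.min(delay, 60_000);                 // cap at one minute
    }

    public static void main(String[] args) {
        System.out.println(throttleMillis(10, 20, 100));    // below threshold: 0
        System.out.println(throttleMillis(30, 20, 100));    // mild backlog: 1000
        System.out.println(throttleMillis(1000, 20, 100));  // heavy backlog: 60000
    }
}
```

The cap matters: without it, a node that falls far behind would stall the repair indefinitely instead of merely pacing it.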
[jira] [Updated] (CASSANDRA-11946) Use the return type when resolving function on ambiguous calls
[ https://issues.apache.org/jira/browse/CASSANDRA-11946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Benjamin Lerer updated CASSANDRA-11946: --- Fix Version/s: (was: 3.x) > Use the return type when resolving function on ambiguous calls > -- > > Key: CASSANDRA-11946 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11946 > Project: Cassandra > Issue Type: Improvement > Components: CQL > Reporter: Sylvain Lebresne > > Currently, when we have multiple overloads of a function, we only use the arguments to try to resolve the function. When that resolution is ambiguous, we currently throw an error, but in many cases (in the {{WHERE}} clause at least) we know which type the result is supposed to be, so we could use that information to try to disambiguate. > The main use case I'm thinking of is the {{now()}} function. Currently, we have it only for {{timeuuid}}. But we should likely provide the equivalent for other time-based types ({{timestamp}}, {{date}} and {{time}}). Except that currently we'd have to use other names than {{now}}, and that would probably be a bit ugly. If we implement what's above, we'll be able to have overloads of {{now()}} for all date types, and in many cases it'll work how users want out of the box (that is, {{WHERE t = now()}} will work whatever date-based type {{t}} is). And in the cases where you can't disambiguate, having to do {{(time)now()}} is not really worse than if we had a {{timeNow()}} function specific to the {{time}} type. > Also, in principle the change is just a few lines of code.
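To illustrate the proposal, here are hypothetical queries showing how return-type-driven resolution would behave once {{now()}} overloads existed for the other time types. The schema and the overloads are hypothetical, not current Cassandra behavior:

```cql
-- d is a 'date' column, t a 'time' column: the expected result type
-- of the comparison picks the right now() overload.
SELECT * FROM events WHERE d = now();   -- would resolve to the date overload
SELECT * FROM events WHERE t < now();   -- would resolve to the time overload

-- With no surrounding context, an explicit cast would still be needed:
SELECT (time) now() FROM events;
```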
[jira] [Resolved] (CASSANDRA-11946) Use the return type when resolving function on ambiguous calls
[ https://issues.apache.org/jira/browse/CASSANDRA-11946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Benjamin Lerer resolved CASSANDRA-11946. Resolution: Duplicate
[jira] [Updated] (CASSANDRA-11935) Add support for arithmetic operators
[ https://issues.apache.org/jira/browse/CASSANDRA-11935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Benjamin Lerer updated CASSANDRA-11935: --- Resolution: Fixed Status: Resolved (was: Patch Available) Committed into 3.X at 8b3de2f4908c4651491b0f20b80f7bb96cff26ed and merged into trunk. > Add support for arithmetic operators > > > Key: CASSANDRA-11935 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11935 > Project: Cassandra > Issue Type: Sub-task > Components: CQL > Reporter: Benjamin Lerer > Assignee: Benjamin Lerer > Fix For: 3.x > > > The goal of this ticket is to add support for arithmetic operators: > * {{-}}: Change the sign of the argument > * {{+}}: Addition operator > * {{-}}: Minus operator > * {{*}}: Multiplication operator > * {{/}}: Division operator > * {{%}}: Modulo operator > In this ticket we should focus on adding operators only for numeric types, to keep the scope as small as possible. Dates and string operations will be addressed in follow-up tickets. > The operator precedence should be: > # {{*}}, {{/}}, {{%}} > # {{+}}, {{-}} > Some implicit data conversion should be performed when operations are performed on different types (e.g. double + int).
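A sketch of the precedence and widening rules described above; the table and column names are illustrative, and exact syntax should be checked against the committed grammar:

```cql
-- *, /, % bind tighter than +, -
SELECT temperature * 9 / 5 + 32 AS fahrenheit FROM readings;  -- ((temperature * 9) / 5) + 32
SELECT reading_id % 2 + 1 FROM readings;                      -- (reading_id % 2) + 1
-- Mixed numeric operands are implicitly widened, e.g. int + double -> double.
```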
[jira] [Updated] (CASSANDRA-10722) Error in system.log file about the compaction
[ https://issues.apache.org/jira/browse/CASSANDRA-10722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joshua McKenzie updated CASSANDRA-10722: Labels: (was: test) > Error in system.log file about the compaction > - > > Key: CASSANDRA-10722 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10722 > Project: Cassandra > Issue Type: Test > Environment: 2 nodes, > each with 8GB RAM, > 500GB disk, > dual core >Reporter: Arunsandu >Priority: Minor > Attachments: Error_Compaction_systemlog.txt > > > I was performing load testing on my tables using cassandra-stress, and after the > test I dropped the keyspace (autogeneratedtest). I keep getting the following message > in system.log every minute: > component_tracking_by_scid-ka-6-Data.db > component_tracking_by_scid-ka-7-Data.db > component_tracking_by_scid-ka-8-Data.db > component_tracking_by_scid-ka-9-Data.db > component_tracking_by_scid-ka-10-Data.db > These SSTables no longer exist in my data directory. Just wanted to know if > deleting the saved_caches for this keyspace would fix my issue. If so, is it > good practice to delete saved_caches? 
> - > INFO [CompactionExecutor:5] 2015-11-17 09:27:58,894 CompactionTask.java:141 > - Compacting > [SSTableReader(path='/apps/apg-data.cassandra/data/autogeneratedtest/component_tracking_by_scid-bb55a0818a6111e59c5e677600703f12/autogeneratedtest-component_tracking_by_scid-ka-8-Data.db'), > > SSTableReader(path='/apps/apg-data.cassandra/data/autogeneratedtest/component_tracking_by_scid-bb55a0818a6111e59c5e677600703f12/autogeneratedtest-component_tracking_by_scid-ka-7-Data.db'), > > SSTableReader(path='/apps/apg-data.cassandra/data/autogeneratedtest/component_tracking_by_scid-bb55a0818a6111e59c5e677600703f12/autogeneratedtest-component_tracking_by_scid-ka-6-Data.db'), > > SSTableReader(path='/apps/apg-data.cassandra/data/autogeneratedtest/component_tracking_by_scid-bb55a0818a6111e59c5e677600703f12/autogeneratedtest-component_tracking_by_scid-ka-9-Data.db'), > > SSTableReader(path='/apps/apg-data.cassandra/data/autogeneratedtest/component_tracking_by_scid-bb55a0818a6111e59c5e677600703f12/autogeneratedtest-component_tracking_by_scid-ka-10-Data.db')] > ERROR [CompactionExecutor:5] 2015-11-17 09:27:58,895 > CassandraDaemon.java:222 - Exception in thread > Thread[CompactionExecutor:5,1,main] > java.lang.AssertionError: > /apps/apg-data.cassandra/data/autogeneratedtest/component_tracking_by_scid-bb55a0818a6111e59c5e677600703f12/autogeneratedtest-component_tracking_by_scid-ka-8-Data.db > at > org.apache.cassandra.io.sstable.SSTableReader.getApproximateKeyCount(SSTableReader.java:279) > ~[cassandra-all-2.1.8.689.jar:2.1.8.689] > at > org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:151) > ~[cassandra-all-2.1.8.689.jar:2.1.8.689] > at > org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) > ~[cassandra-all-2.1.8.689.jar:2.1.8.689] > at > org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:73) > ~[cassandra-all-2.1.8.689.jar:2.1.8.689] > at > 
org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59) > ~[cassandra-all-2.1.8.689.jar:2.1.8.689] > at > org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionCandidate.run(CompactionManager.java:236) > ~[cassandra-all-2.1.8.689.jar:2.1.8.689] > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > ~[na:1.8.0_31] > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > ~[na:1.8.0_31] > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > ~[na:1.8.0_31] > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > [na:1.8.0_31] > at java.lang.Thread.run(Thread.java:745) [na:1.8.0_31] -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-12654) Query Validation Error : CQL IN operator over last partitioning|clustering column (valid) is rejected if a query fetches collection columns
[ https://issues.apache.org/jira/browse/CASSANDRA-12654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alex Petrov updated CASSANDRA-12654: Fix Version/s: 4.x 3.0.x Status: Patch Available (was: Open) The limitation was imposed by the fact that in pre-[CASSANDRA-8099] storage, collections were implemented using {{ColumnToCollectionType}} and composites, which means that the very last part of the composite was the collection key identifier. After 8099, each row has its own clustering key and collection elements are now cells that carry a path element, so this limitation is gone. Note on the patch: {{selectsComplexColumn}} is now removed from the {{StatementRestrictions}} public API. Alternatively, we could deprecate this constructor now and remove it later. |[3.0|https://github.com/ifesdjeen/cassandra/tree/12654-3.0]|[utest|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-12654-3.0-testall/]|[dtest|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-12654-3.0-dtest/]| |[3.X|https://github.com/ifesdjeen/cassandra/tree/12654-3.X]|[utest|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-12654-3.X-testall/]|[dtest|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-12654-3.X-dtest/]| |[trunk|https://github.com/ifesdjeen/cassandra/tree/12654-trunk]|[utest|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-12654-trunk-testall/]|[dtest|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-12654-trunk-dtest/]| > Query Validation Error : CQL IN operator over last partitioning|clustering > column (valid) is rejected if a query fetches collection columns > --- > > Key: CASSANDRA-12654 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12654 > Project: Cassandra > Issue Type: Bug > Components: CQL >Reporter: Samba Siva Rao Kolusu >Assignee: Alex Petrov >Priority: Minor > Labels: easyfix > Fix For: 3.0.x, 3.x, 4.x > > > Although the IN operator is allowed over the last 
clustering or partitioning > columns, the CQL query validator rejects queries when they attempt to > fetch collection columns in their result set. > It seems a similar bug (CASSANDRA-5376) was raised some time ago, and a fix > (rather, a mask) was provided in 1.2.4 to give a better error message for such > queries. > Considering that Cassandra and CQL have evolved a great deal since that period, > it now seems possible to provide an actual fix for this problem, i.e. allowing > queries to fetch collection columns even when the IN operator is used. > Please read the following mail thread to understand the context: > https://lists.apache.org/thread.html/8e1765d14bd9798bf9c0938a793f1dbc9c9349062a8705db2e28d291@%3Cuser.cassandra.apache.org%3E -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-12809) dtest failure in upgrade_tests.cql_tests.TestCQLNodes2RF1_Upgrade_current_2_2_x_To_indev_3_0_x.boolean_test
[ https://issues.apache.org/jira/browse/CASSANDRA-12809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15684423#comment-15684423 ] Philip Thompson commented on CASSANDRA-12809: - Also, my debug output is still not being printed, bizarrely. > dtest failure in > upgrade_tests.cql_tests.TestCQLNodes2RF1_Upgrade_current_2_2_x_To_indev_3_0_x.boolean_test > --- > > Key: CASSANDRA-12809 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12809 > Project: Cassandra > Issue Type: Test >Reporter: Sean McCarthy >Assignee: Philip Thompson > Labels: dtest, test-failure > > example failure: > http://cassci.datastax.com/job/cassandra-3.0_dtest_upgrade/64/testReport/upgrade_tests.cql_tests/TestCQLNodes2RF1_Upgrade_current_2_2_x_To_indev_3_0_x/boolean_test > {code} > Error Message > Problem starting node node1 due to [Errno 2] No such file or directory: > '/tmp/dtest-QXmxBV/test/node1/cassandra.pid' > {code} > {code} > Stacktrace > File "/usr/lib/python2.7/unittest/case.py", line 329, in run > testMethod() > File "/home/automaton/cassandra-dtest/upgrade_tests/cql_tests.py", line > 2206, in boolean_test > for is_upgraded, cursor in self.do_upgrade(cursor): > File "/home/automaton/cassandra-dtest/upgrade_tests/upgrade_base.py", line > 153, in do_upgrade > node1.start(wait_for_binary_proto=True, wait_other_notice=True) > File "/usr/local/lib/python2.7/dist-packages/ccmlib/node.py", line 648, in > start > self._update_pid(process) > File "/usr/local/lib/python2.7/dist-packages/ccmlib/node.py", line 1780, in > _update_pid > raise NodeError('Problem starting node %s due to %s' % (self.name, e), > process) > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-12809) dtest failure in upgrade_tests.cql_tests.TestCQLNodes2RF1_Upgrade_current_2_2_x_To_indev_3_0_x.boolean_test
[ https://issues.apache.org/jira/browse/CASSANDRA-12809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15684416#comment-15684416 ] Philip Thompson commented on CASSANDRA-12809: - Okay, turns out that is also an upgrade test. This might be an upgrade specific problem in the test harness. > dtest failure in > upgrade_tests.cql_tests.TestCQLNodes2RF1_Upgrade_current_2_2_x_To_indev_3_0_x.boolean_test > --- > > Key: CASSANDRA-12809 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12809 > Project: Cassandra > Issue Type: Test >Reporter: Sean McCarthy >Assignee: Philip Thompson > Labels: dtest, test-failure > > example failure: > http://cassci.datastax.com/job/cassandra-3.0_dtest_upgrade/64/testReport/upgrade_tests.cql_tests/TestCQLNodes2RF1_Upgrade_current_2_2_x_To_indev_3_0_x/boolean_test > {code} > Error Message > Problem starting node node1 due to [Errno 2] No such file or directory: > '/tmp/dtest-QXmxBV/test/node1/cassandra.pid' > {code} > {code} > Stacktrace > File "/usr/lib/python2.7/unittest/case.py", line 329, in run > testMethod() > File "/home/automaton/cassandra-dtest/upgrade_tests/cql_tests.py", line > 2206, in boolean_test > for is_upgraded, cursor in self.do_upgrade(cursor): > File "/home/automaton/cassandra-dtest/upgrade_tests/upgrade_base.py", line > 153, in do_upgrade > node1.start(wait_for_binary_proto=True, wait_other_notice=True) > File "/usr/local/lib/python2.7/dist-packages/ccmlib/node.py", line 648, in > start > self._update_pid(process) > File "/usr/local/lib/python2.7/dist-packages/ccmlib/node.py", line 1780, in > _update_pid > raise NodeError('Problem starting node %s due to %s' % (self.name, e), > process) > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-12931) dtest failure in batch_test.TestBatch.logged_batch_doesnt_throw_uae_test
[ https://issues.apache.org/jira/browse/CASSANDRA-12931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15684388#comment-15684388 ] Philip Thompson commented on CASSANDRA-12931: - Testing here: http://cassci.datastax.com/view/Parameterized/job/parameterized_dtest_multiplexer/367/ > dtest failure in batch_test.TestBatch.logged_batch_doesnt_throw_uae_test > > > Key: CASSANDRA-12931 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12931 > Project: Cassandra > Issue Type: Test >Reporter: Michael Shuler >Assignee: Philip Thompson > Labels: dtest, test-failure > Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, > node2_debug.log, node2_gc.log, node3.log, node3_debug.log, node3_gc.log > > > example failure: > http://cassci.datastax.com/job/cassandra-3.X_dtest/37/testReport/batch_test/TestBatch/logged_batch_doesnt_throw_uae_test > {noformat} > Error Message > Error from server: code=1000 [Unavailable exception] message="Cannot achieve > consistency level ALL" info={'required_replicas': 3, 'alive_replicas': 2, > 'consistency': 'ALL'} > >> begin captured logging << > dtest: DEBUG: cluster ccm directory: /tmp/dtest-Ysb5Cf > dtest: DEBUG: Done setting configuration options: > { 'initial_token': None, > 'num_tokens': '32', > 'phi_convict_threshold': 5, > 'range_request_timeout_in_ms': 1, > 'read_request_timeout_in_ms': 1, > 'request_timeout_in_ms': 1, > 'truncate_request_timeout_in_ms': 1, > 'write_request_timeout_in_ms': 1} > cassandra.policies: INFO: Using datacenter 'datacenter1' for > DCAwareRoundRobinPolicy (via host '127.0.0.1'); if incorrect, please specify > a local_dc to the constructor, or limit contact points to local cluster nodes > cassandra.cluster: INFO: New Cassandra host > discovered > cassandra.cluster: INFO: New Cassandra host > discovered > dtest: DEBUG: Creating schema... > dtest: DEBUG: Retrying request after UE. Attempt #0 > dtest: DEBUG: Retrying request after UE. 
Attempt #1 > dtest: DEBUG: Retrying request after UE. Attempt #2 > dtest: DEBUG: Retrying request after UE. Attempt #3 > dtest: DEBUG: Retrying request after UE. Attempt #4 > - >> end captured logging << - > Stacktrace > File "/usr/lib/python2.7/unittest/case.py", line 329, in run > testMethod() > File "/home/automaton/cassandra-dtest/batch_test.py", line 193, in > logged_batch_doesnt_throw_uae_test > cl=ConsistencyLevel.ALL) > File "/home/automaton/cassandra-dtest/tools/assertions.py", line 164, in > assert_all > res = session.execute(simple_query) > File "/home/automaton/src/cassandra-driver/cassandra/cluster.py", line > 1998, in execute > return self.execute_async(query, parameters, trace, custom_payload, > timeout, execution_profile, paging_state).result() > File "/home/automaton/src/cassandra-driver/cassandra/cluster.py", line > 3784, in result > raise self._final_exception > 'Error from server: code=1000 [Unavailable exception] message="Cannot achieve > consistency level ALL" info={\'required_replicas\': 3, \'alive_replicas\': 2, > \'consistency\': \'ALL\'}\n >> begin captured logging << > \ndtest: DEBUG: cluster ccm directory: > /tmp/dtest-Ysb5Cf\ndtest: DEBUG: Done setting configuration options:\n{ > \'initial_token\': None,\n\'num_tokens\': \'32\',\n > \'phi_convict_threshold\': 5,\n\'range_request_timeout_in_ms\': 1,\n > \'read_request_timeout_in_ms\': 1,\n\'request_timeout_in_ms\': > 1,\n\'truncate_request_timeout_in_ms\': 1,\n > \'write_request_timeout_in_ms\': 1}\ncassandra.policies: INFO: Using > datacenter \'datacenter1\' for DCAwareRoundRobinPolicy (via host > \'127.0.0.1\'); if incorrect, please specify a local_dc to the constructor, > or limit contact points to local cluster nodes\ncassandra.cluster: INFO: New > Cassandra host discovered\ncassandra.cluster: > INFO: New Cassandra host discovered\ndtest: > DEBUG: Creating schema...\ndtest: DEBUG: Retrying request after UE. Attempt > #0\ndtest: DEBUG: Retrying request after UE. 
Attempt #1\ndtest: DEBUG: > Retrying request after UE. Attempt #2\ndtest: DEBUG: Retrying request after > UE. Attempt #3\ndtest: DEBUG: Retrying request after UE. Attempt > #4\n- >> end captured logging << -' > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (CASSANDRA-12931) dtest failure in batch_test.TestBatch.logged_batch_doesnt_throw_uae_test
[ https://issues.apache.org/jira/browse/CASSANDRA-12931?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Philip Thompson reassigned CASSANDRA-12931: --- Assignee: Philip Thompson (was: DS Test Eng) > dtest failure in batch_test.TestBatch.logged_batch_doesnt_throw_uae_test > > > Key: CASSANDRA-12931 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12931 > Project: Cassandra > Issue Type: Test >Reporter: Michael Shuler >Assignee: Philip Thompson > Labels: dtest, test-failure > Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, > node2_debug.log, node2_gc.log, node3.log, node3_debug.log, node3_gc.log > > > example failure: > http://cassci.datastax.com/job/cassandra-3.X_dtest/37/testReport/batch_test/TestBatch/logged_batch_doesnt_throw_uae_test > {noformat} > Error Message > Error from server: code=1000 [Unavailable exception] message="Cannot achieve > consistency level ALL" info={'required_replicas': 3, 'alive_replicas': 2, > 'consistency': 'ALL'} > >> begin captured logging << > dtest: DEBUG: cluster ccm directory: /tmp/dtest-Ysb5Cf > dtest: DEBUG: Done setting configuration options: > { 'initial_token': None, > 'num_tokens': '32', > 'phi_convict_threshold': 5, > 'range_request_timeout_in_ms': 1, > 'read_request_timeout_in_ms': 1, > 'request_timeout_in_ms': 1, > 'truncate_request_timeout_in_ms': 1, > 'write_request_timeout_in_ms': 1} > cassandra.policies: INFO: Using datacenter 'datacenter1' for > DCAwareRoundRobinPolicy (via host '127.0.0.1'); if incorrect, please specify > a local_dc to the constructor, or limit contact points to local cluster nodes > cassandra.cluster: INFO: New Cassandra host > discovered > cassandra.cluster: INFO: New Cassandra host > discovered > dtest: DEBUG: Creating schema... > dtest: DEBUG: Retrying request after UE. Attempt #0 > dtest: DEBUG: Retrying request after UE. Attempt #1 > dtest: DEBUG: Retrying request after UE. Attempt #2 > dtest: DEBUG: Retrying request after UE. 
Attempt #3 > dtest: DEBUG: Retrying request after UE. Attempt #4 > - >> end captured logging << - > Stacktrace > File "/usr/lib/python2.7/unittest/case.py", line 329, in run > testMethod() > File "/home/automaton/cassandra-dtest/batch_test.py", line 193, in > logged_batch_doesnt_throw_uae_test > cl=ConsistencyLevel.ALL) > File "/home/automaton/cassandra-dtest/tools/assertions.py", line 164, in > assert_all > res = session.execute(simple_query) > File "/home/automaton/src/cassandra-driver/cassandra/cluster.py", line > 1998, in execute > return self.execute_async(query, parameters, trace, custom_payload, > timeout, execution_profile, paging_state).result() > File "/home/automaton/src/cassandra-driver/cassandra/cluster.py", line > 3784, in result > raise self._final_exception > 'Error from server: code=1000 [Unavailable exception] message="Cannot achieve > consistency level ALL" info={\'required_replicas\': 3, \'alive_replicas\': 2, > \'consistency\': \'ALL\'}\n >> begin captured logging << > \ndtest: DEBUG: cluster ccm directory: > /tmp/dtest-Ysb5Cf\ndtest: DEBUG: Done setting configuration options:\n{ > \'initial_token\': None,\n\'num_tokens\': \'32\',\n > \'phi_convict_threshold\': 5,\n\'range_request_timeout_in_ms\': 1,\n > \'read_request_timeout_in_ms\': 1,\n\'request_timeout_in_ms\': > 1,\n\'truncate_request_timeout_in_ms\': 1,\n > \'write_request_timeout_in_ms\': 1}\ncassandra.policies: INFO: Using > datacenter \'datacenter1\' for DCAwareRoundRobinPolicy (via host > \'127.0.0.1\'); if incorrect, please specify a local_dc to the constructor, > or limit contact points to local cluster nodes\ncassandra.cluster: INFO: New > Cassandra host discovered\ncassandra.cluster: > INFO: New Cassandra host discovered\ndtest: > DEBUG: Creating schema...\ndtest: DEBUG: Retrying request after UE. Attempt > #0\ndtest: DEBUG: Retrying request after UE. Attempt #1\ndtest: DEBUG: > Retrying request after UE. Attempt #2\ndtest: DEBUG: Retrying request after > UE. 
Attempt #3\ndtest: DEBUG: Retrying request after UE. Attempt > #4\n- >> end captured logging << -' > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-12931) dtest failure in batch_test.TestBatch.logged_batch_doesnt_throw_uae_test
[ https://issues.apache.org/jira/browse/CASSANDRA-12931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15684339#comment-15684339 ] Philip Thompson commented on CASSANDRA-12931: - Actually, I think node3 just hasn't finished starting up. Will test a fix soon > dtest failure in batch_test.TestBatch.logged_batch_doesnt_throw_uae_test > > > Key: CASSANDRA-12931 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12931 > Project: Cassandra > Issue Type: Test >Reporter: Michael Shuler >Assignee: DS Test Eng > Labels: dtest, test-failure > Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, > node2_debug.log, node2_gc.log, node3.log, node3_debug.log, node3_gc.log > > > example failure: > http://cassci.datastax.com/job/cassandra-3.X_dtest/37/testReport/batch_test/TestBatch/logged_batch_doesnt_throw_uae_test > {noformat} > Error Message > Error from server: code=1000 [Unavailable exception] message="Cannot achieve > consistency level ALL" info={'required_replicas': 3, 'alive_replicas': 2, > 'consistency': 'ALL'} > >> begin captured logging << > dtest: DEBUG: cluster ccm directory: /tmp/dtest-Ysb5Cf > dtest: DEBUG: Done setting configuration options: > { 'initial_token': None, > 'num_tokens': '32', > 'phi_convict_threshold': 5, > 'range_request_timeout_in_ms': 1, > 'read_request_timeout_in_ms': 1, > 'request_timeout_in_ms': 1, > 'truncate_request_timeout_in_ms': 1, > 'write_request_timeout_in_ms': 1} > cassandra.policies: INFO: Using datacenter 'datacenter1' for > DCAwareRoundRobinPolicy (via host '127.0.0.1'); if incorrect, please specify > a local_dc to the constructor, or limit contact points to local cluster nodes > cassandra.cluster: INFO: New Cassandra host > discovered > cassandra.cluster: INFO: New Cassandra host > discovered > dtest: DEBUG: Creating schema... > dtest: DEBUG: Retrying request after UE. Attempt #0 > dtest: DEBUG: Retrying request after UE. Attempt #1 > dtest: DEBUG: Retrying request after UE. 
Attempt #2 > dtest: DEBUG: Retrying request after UE. Attempt #3 > dtest: DEBUG: Retrying request after UE. Attempt #4 > - >> end captured logging << - > Stacktrace > File "/usr/lib/python2.7/unittest/case.py", line 329, in run > testMethod() > File "/home/automaton/cassandra-dtest/batch_test.py", line 193, in > logged_batch_doesnt_throw_uae_test > cl=ConsistencyLevel.ALL) > File "/home/automaton/cassandra-dtest/tools/assertions.py", line 164, in > assert_all > res = session.execute(simple_query) > File "/home/automaton/src/cassandra-driver/cassandra/cluster.py", line > 1998, in execute > return self.execute_async(query, parameters, trace, custom_payload, > timeout, execution_profile, paging_state).result() > File "/home/automaton/src/cassandra-driver/cassandra/cluster.py", line > 3784, in result > raise self._final_exception > 'Error from server: code=1000 [Unavailable exception] message="Cannot achieve > consistency level ALL" info={\'required_replicas\': 3, \'alive_replicas\': 2, > \'consistency\': \'ALL\'}\n >> begin captured logging << > \ndtest: DEBUG: cluster ccm directory: > /tmp/dtest-Ysb5Cf\ndtest: DEBUG: Done setting configuration options:\n{ > \'initial_token\': None,\n\'num_tokens\': \'32\',\n > \'phi_convict_threshold\': 5,\n\'range_request_timeout_in_ms\': 1,\n > \'read_request_timeout_in_ms\': 1,\n\'request_timeout_in_ms\': > 1,\n\'truncate_request_timeout_in_ms\': 1,\n > \'write_request_timeout_in_ms\': 1}\ncassandra.policies: INFO: Using > datacenter \'datacenter1\' for DCAwareRoundRobinPolicy (via host > \'127.0.0.1\'); if incorrect, please specify a local_dc to the constructor, > or limit contact points to local cluster nodes\ncassandra.cluster: INFO: New > Cassandra host discovered\ncassandra.cluster: > INFO: New Cassandra host discovered\ndtest: > DEBUG: Creating schema...\ndtest: DEBUG: Retrying request after UE. Attempt > #0\ndtest: DEBUG: Retrying request after UE. Attempt #1\ndtest: DEBUG: > Retrying request after UE. 
Attempt #2\ndtest: DEBUG: Retrying request after > UE. Attempt #3\ndtest: DEBUG: Retrying request after UE. Attempt > #4\n- >> end captured logging << -' > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-12931) dtest failure in batch_test.TestBatch.logged_batch_doesnt_throw_uae_test
[ https://issues.apache.org/jira/browse/CASSANDRA-12931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15684323#comment-15684323 ] Philip Thompson commented on CASSANDRA-12931: - I see this in the logs: {code} DEBUG [MessagingService-Outgoing-/127.0.0.3-Gossip] 2016-11-17 04:04:17,318 OutboundTcpConnection.java:494 - Unable to connect to /127.0.0.3 java.net.ConnectException: Connection refused at sun.nio.ch.Net.connect0(Native Method) ~[na:1.8.0_45] at sun.nio.ch.Net.connect(Net.java:458) ~[na:1.8.0_45] at sun.nio.ch.Net.connect(Net.java:450) ~[na:1.8.0_45] at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:648) ~[na:1.8.0_45] at org.apache.cassandra.net.OutboundTcpConnectionPool.newSocket(OutboundTcpConnectionPool.java:151) ~[main/:na] at org.apache.cassandra.net.OutboundTcpConnectionPool.newSocket(OutboundTcpConnectionPool.java:132) ~[main/:na] at org.apache.cassandra.net.OutboundTcpConnection.connect(OutboundTcpConnection.java:396) [main/:na] at org.apache.cassandra.net.OutboundTcpConnection.run(OutboundTcpConnection.java:233) [main/:na] {code} So it's a connectivity failure. 
I think we've seen this elsewhere recently as well > dtest failure in batch_test.TestBatch.logged_batch_doesnt_throw_uae_test > > > Key: CASSANDRA-12931 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12931 > Project: Cassandra > Issue Type: Test >Reporter: Michael Shuler >Assignee: DS Test Eng > Labels: dtest, test-failure > Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, > node2_debug.log, node2_gc.log, node3.log, node3_debug.log, node3_gc.log > > > example failure: > http://cassci.datastax.com/job/cassandra-3.X_dtest/37/testReport/batch_test/TestBatch/logged_batch_doesnt_throw_uae_test > {noformat} > Error Message > Error from server: code=1000 [Unavailable exception] message="Cannot achieve > consistency level ALL" info={'required_replicas': 3, 'alive_replicas': 2, > 'consistency': 'ALL'} > >> begin captured logging << > dtest: DEBUG: cluster ccm directory: /tmp/dtest-Ysb5Cf > dtest: DEBUG: Done setting configuration options: > { 'initial_token': None, > 'num_tokens': '32', > 'phi_convict_threshold': 5, > 'range_request_timeout_in_ms': 1, > 'read_request_timeout_in_ms': 1, > 'request_timeout_in_ms': 1, > 'truncate_request_timeout_in_ms': 1, > 'write_request_timeout_in_ms': 1} > cassandra.policies: INFO: Using datacenter 'datacenter1' for > DCAwareRoundRobinPolicy (via host '127.0.0.1'); if incorrect, please specify > a local_dc to the constructor, or limit contact points to local cluster nodes > cassandra.cluster: INFO: New Cassandra host > discovered > cassandra.cluster: INFO: New Cassandra host > discovered > dtest: DEBUG: Creating schema... > dtest: DEBUG: Retrying request after UE. Attempt #0 > dtest: DEBUG: Retrying request after UE. Attempt #1 > dtest: DEBUG: Retrying request after UE. Attempt #2 > dtest: DEBUG: Retrying request after UE. Attempt #3 > dtest: DEBUG: Retrying request after UE. 
Attempt #4 > - >> end captured logging << - > Stacktrace > File "/usr/lib/python2.7/unittest/case.py", line 329, in run > testMethod() > File "/home/automaton/cassandra-dtest/batch_test.py", line 193, in > logged_batch_doesnt_throw_uae_test > cl=ConsistencyLevel.ALL) > File "/home/automaton/cassandra-dtest/tools/assertions.py", line 164, in > assert_all > res = session.execute(simple_query) > File "/home/automaton/src/cassandra-driver/cassandra/cluster.py", line > 1998, in execute > return self.execute_async(query, parameters, trace, custom_payload, > timeout, execution_profile, paging_state).result() > File "/home/automaton/src/cassandra-driver/cassandra/cluster.py", line > 3784, in result > raise self._final_exception > 'Error from server: code=1000 [Unavailable exception] message="Cannot achieve > consistency level ALL" info={\'required_replicas\': 3, \'alive_replicas\': 2, > \'consistency\': \'ALL\'}\n >> begin captured logging << > \ndtest: DEBUG: cluster ccm directory: > /tmp/dtest-Ysb5Cf\ndtest: DEBUG: Done setting configuration options:\n{ > \'initial_token\': None,\n\'num_tokens\': \'32\',\n > \'phi_convict_threshold\': 5,\n\'range_request_timeout_in_ms\': 1,\n > \'read_request_timeout_in_ms\': 1,\n\'request_timeout_in_ms\': > 1,\n\'truncate_request_timeout_in_ms\': 1,\n > \'write_request_timeout_in_ms\': 1}\ncassandra.policies: INFO: Using > datacenter \'datacenter1\' for DCAware
[jira] [Updated] (CASSANDRA-12931) dtest failure in batch_test.TestBatch.logged_batch_doesnt_throw_uae_test
[ https://issues.apache.org/jira/browse/CASSANDRA-12931?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Philip Thompson updated CASSANDRA-12931: Attachment: node3.log node3_gc.log node3_debug.log node2.log node2_gc.log node2_debug.log node1.log node1_gc.log node1_debug.log > dtest failure in batch_test.TestBatch.logged_batch_doesnt_throw_uae_test > > > Key: CASSANDRA-12931 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12931 > Project: Cassandra > Issue Type: Test >Reporter: Michael Shuler >Assignee: DS Test Eng > Labels: dtest, test-failure > Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, > node2_debug.log, node2_gc.log, node3.log, node3_debug.log, node3_gc.log > > > example failure: > http://cassci.datastax.com/job/cassandra-3.X_dtest/37/testReport/batch_test/TestBatch/logged_batch_doesnt_throw_uae_test > {noformat} > Error Message > Error from server: code=1000 [Unavailable exception] message="Cannot achieve > consistency level ALL" info={'required_replicas': 3, 'alive_replicas': 2, > 'consistency': 'ALL'} > >> begin captured logging << > dtest: DEBUG: cluster ccm directory: /tmp/dtest-Ysb5Cf > dtest: DEBUG: Done setting configuration options: > { 'initial_token': None, > 'num_tokens': '32', > 'phi_convict_threshold': 5, > 'range_request_timeout_in_ms': 1, > 'read_request_timeout_in_ms': 1, > 'request_timeout_in_ms': 1, > 'truncate_request_timeout_in_ms': 1, > 'write_request_timeout_in_ms': 1} > cassandra.policies: INFO: Using datacenter 'datacenter1' for > DCAwareRoundRobinPolicy (via host '127.0.0.1'); if incorrect, please specify > a local_dc to the constructor, or limit contact points to local cluster nodes > cassandra.cluster: INFO: New Cassandra host > discovered > cassandra.cluster: INFO: New Cassandra host > discovered > dtest: DEBUG: Creating schema... > dtest: DEBUG: Retrying request after UE. Attempt #0 > dtest: DEBUG: Retrying request after UE. Attempt #1 > dtest: DEBUG: Retrying request after UE. 
Attempt #2 > dtest: DEBUG: Retrying request after UE. Attempt #3 > dtest: DEBUG: Retrying request after UE. Attempt #4 > - >> end captured logging << - > Stacktrace > File "/usr/lib/python2.7/unittest/case.py", line 329, in run > testMethod() > File "/home/automaton/cassandra-dtest/batch_test.py", line 193, in > logged_batch_doesnt_throw_uae_test > cl=ConsistencyLevel.ALL) > File "/home/automaton/cassandra-dtest/tools/assertions.py", line 164, in > assert_all > res = session.execute(simple_query) > File "/home/automaton/src/cassandra-driver/cassandra/cluster.py", line > 1998, in execute > return self.execute_async(query, parameters, trace, custom_payload, > timeout, execution_profile, paging_state).result() > File "/home/automaton/src/cassandra-driver/cassandra/cluster.py", line > 3784, in result > raise self._final_exception > 'Error from server: code=1000 [Unavailable exception] message="Cannot achieve > consistency level ALL" info={\'required_replicas\': 3, \'alive_replicas\': 2, > \'consistency\': \'ALL\'}\n >> begin captured logging << > \ndtest: DEBUG: cluster ccm directory: > /tmp/dtest-Ysb5Cf\ndtest: DEBUG: Done setting configuration options:\n{ > \'initial_token\': None,\n\'num_tokens\': \'32\',\n > \'phi_convict_threshold\': 5,\n\'range_request_timeout_in_ms\': 1,\n > \'read_request_timeout_in_ms\': 1,\n\'request_timeout_in_ms\': > 1,\n\'truncate_request_timeout_in_ms\': 1,\n > \'write_request_timeout_in_ms\': 1}\ncassandra.policies: INFO: Using > datacenter \'datacenter1\' for DCAwareRoundRobinPolicy (via host > \'127.0.0.1\'); if incorrect, please specify a local_dc to the constructor, > or limit contact points to local cluster nodes\ncassandra.cluster: INFO: New > Cassandra host discovered\ncassandra.cluster: > INFO: New Cassandra host discovered\ndtest: > DEBUG: Creating schema...\ndtest: DEBUG: Retrying request after UE. Attempt > #0\ndtest: DEBUG: Retrying request after UE. Attempt #1\ndtest: DEBUG: > Retrying request after UE. 
Attempt #2\ndtest: DEBUG: Retrying request after > UE. Attempt #3\ndtest: DEBUG: Retrying request after UE. Attempt > #4\n- >> end captured logging << -' > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
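The failure above is the server rejecting a request at consistency level ALL because only 2 of 3 replicas were alive. The check behind that message can be sketched in a few lines; the function names here are illustrative, not Cassandra's actual API:

```python
# Minimal sketch of the consistency-level availability check behind the
# "Cannot achieve consistency level ALL" error: CL=ALL needs every replica,
# QUORUM needs a majority, ONE needs a single live replica.

def required_replicas(consistency, replication_factor):
    """Number of live replicas a request needs for the given CL."""
    if consistency == "ALL":
        return replication_factor
    if consistency == "QUORUM":
        return replication_factor // 2 + 1
    if consistency == "ONE":
        return 1
    raise ValueError("unknown consistency level: %s" % consistency)

def check_available(consistency, replication_factor, alive_replicas):
    """Raise (as the coordinator would) when too few replicas are alive."""
    required = required_replicas(consistency, replication_factor)
    if alive_replicas < required:
        raise RuntimeError(
            "Cannot achieve consistency level %s: required_replicas=%d, "
            "alive_replicas=%d" % (consistency, required, alive_replicas))

# The test's situation: RF=3, one node not yet up.
check_available("QUORUM", 3, 2)  # fine: a quorum of 3 is 2
```

With RF=3 and CL=ALL, `check_available("ALL", 3, 2)` raises, which matches the `required_replicas: 3, alive_replicas: 2` payload in the error.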
[jira] [Commented] (CASSANDRA-12932) dtest failure in cql_tests.StorageProxyCQLTester.type_test
[ https://issues.apache.org/jira/browse/CASSANDRA-12932?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15684298#comment-15684298 ] Philip Thompson commented on CASSANDRA-12932: - I assume the lack of connection and the missing logs mean the nodes were never created. > dtest failure in cql_tests.StorageProxyCQLTester.type_test > -- > > Key: CASSANDRA-12932 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12932 > Project: Cassandra > Issue Type: Test >Reporter: Michael Shuler >Assignee: DS Test Eng > Labels: dtest, test-failure > > example failure: > http://cassci.datastax.com/job/cassandra-3.X_novnode_dtest/10/testReport/cql_tests/StorageProxyCQLTester/type_test > {noformat} > Error Message > ('Unable to connect to any servers', {'127.0.0.1': error(111, "Tried > connecting to [('127.0.0.1', 9042)]. Last error: Connection refused")}) > >> begin captured logging << > dtest: DEBUG: cluster ccm directory: /tmp/dtest-7XmxR8 > dtest: DEBUG: Done setting configuration options: > { 'num_tokens': None, > 'phi_convict_threshold': 5, > 'range_request_timeout_in_ms': 1, > 'read_request_timeout_in_ms': 1, > 'request_timeout_in_ms': 1, > 'truncate_request_timeout_in_ms': 1, > 'write_request_timeout_in_ms': 1} > - >> end captured logging << - > Stacktrace > File "/usr/lib/python2.7/unittest/case.py", line 329, in run > testMethod() > File "/home/automaton/cassandra-dtest/cql_tests.py", line 195, in type_test > session = self.prepare() > File "/home/automaton/cassandra-dtest/cql_tests.py", line 54, in prepare > session = self.patient_cql_connection(node1, > protocol_version=protocol_version, user=user, password=password) > File "/home/automaton/cassandra-dtest/dtest.py", line 507, in > patient_cql_connection > bypassed_exception=NoHostAvailable > File "/home/automaton/cassandra-dtest/dtest.py", line 200, in > retry_till_success > return fun(*args, **kwargs) > File "/home/automaton/cassandra-dtest/dtest.py", line 440, in cql_connection > 
protocol_version, port=port, ssl_opts=ssl_opts) > File "/home/automaton/cassandra-dtest/dtest.py", line 468, in > _create_session > session = cluster.connect(wait_for_all_pools=True) > File "/home/automaton/src/cassandra-driver/cassandra/cluster.py", line > 1180, in connect > self.control_connection.connect() > File "/home/automaton/src/cassandra-driver/cassandra/cluster.py", line > 2597, in connect > self._set_new_connection(self._reconnect_internal()) > File "/home/automaton/src/cassandra-driver/cassandra/cluster.py", line > 2634, in _reconnect_internal > raise NoHostAvailable("Unable to connect to any servers", errors) > '(\'Unable to connect to any servers\', {\'127.0.0.1\': error(111, "Tried > connecting to [(\'127.0.0.1\', 9042)]. Last error: Connection > refused")})\n >> begin captured logging << > \ndtest: DEBUG: cluster ccm directory: > /tmp/dtest-7XmxR8\ndtest: DEBUG: Done setting configuration options:\n{ > \'num_tokens\': None,\n\'phi_convict_threshold\': 5,\n > \'range_request_timeout_in_ms\': 1,\n\'read_request_timeout_in_ms\': > 1,\n\'request_timeout_in_ms\': 1,\n > \'truncate_request_timeout_in_ms\': 1,\n > \'write_request_timeout_in_ms\': 1}\n- >> end > captured logging << -' > {noformat} > (generated no ccm node log at all) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
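The stack trace goes through the harness's `retry_till_success` helper before giving up with `NoHostAvailable`. A hedged re-sketch of that pattern (not the dtest repo's exact code) looks like this: keep calling the function until it succeeds or a deadline passes, swallowing only the expected "not up yet" exception type:

```python
import time

# Sketch of the retry-until-success pattern visible in the stack trace:
# retries `fun` until it returns, re-raising the bypassed exception only
# once the timeout has elapsed. Any other exception propagates immediately.
def retry_till_success(fun, *args, timeout=60, bypassed_exception=Exception, **kwargs):
    deadline = time.time() + timeout
    while True:
        try:
            return fun(*args, **kwargs)
        except bypassed_exception:
            if time.time() >= deadline:
                raise
            time.sleep(0.25)  # brief pause before the next attempt
```

In the failing run the bypassed exception was `NoHostAvailable`, so the connection refusal kept retrying until the timeout and then surfaced, consistent with the nodes never having been created.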
[jira] [Commented] (CASSANDRA-12939) dtest failure in cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest.test_bulk_round_trip_with_backoff
[ https://issues.apache.org/jira/browse/CASSANDRA-12939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15684291#comment-15684291 ] Philip Thompson commented on CASSANDRA-12939: - Looks like a straightforward timeout. Perhaps we should try multiple attempts on failure. > dtest failure in > cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest.test_bulk_round_trip_with_backoff > - > > Key: CASSANDRA-12939 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12939 > Project: Cassandra > Issue Type: Test >Reporter: Sean McCarthy >Assignee: DS Test Eng > Labels: dtest, test-failure > Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, > node2_debug.log, node2_gc.log, node3.log, node3_debug.log, node3_gc.log > > > example failure: > http://cassci.datastax.com/job/cassandra-3.X_dtest/40/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_bulk_round_trip_with_backoff > {code} > Error Message > 25 != 244475 > ... > dtest: DEBUG: Errors: > Using CQL driver: '/home/automaton/cassandra/bin/../lib/cassandra-driver-internal-only-3.7.0.post0-2481531.zip/cassandra-driver-3.7.0.post0-2481531/cassandra/__init__.py'> > Using connect timeout: 5 seconds > Using 'utf-8' encoding > Using ssl: False > :3:Error for (2730718820402670492, 3207787379576163567): > OperationTimedOut - errors={'127.0.0.1': 'Client request timeout. 
See > Session.execute[_async](timeout)'}, last_host=127.0.0.1 (permanently given up > after 1000 rows and 1 attempts) > :3:Exported 96 ranges out of 97 total ranges, some records might be > missing > {code}{code} > Stacktrace > File "/usr/lib/python2.7/unittest/case.py", line 329, in run > testMethod() > File "/home/automaton/cassandra-dtest/dtest.py", line 1099, in wrapped > f(obj) > File "/home/automaton/cassandra-dtest/tools/decorators.py", line 46, in > wrapped > f(obj) > File "/home/automaton/cassandra-dtest/cqlsh_tests/cqlsh_copy_tests.py", > line 2613, in test_bulk_round_trip_with_backoff > copy_from_options={'MAXINFLIGHTMESSAGES': 64, 'MAXPENDINGCHUNKS': 1}) > File "/home/automaton/cassandra-dtest/cqlsh_tests/cqlsh_copy_tests.py", > line 2508, in _test_bulk_round_trip > sum(1 for _ in open(tempfile2.name))) > File "/usr/lib/python2.7/unittest/case.py", line 513, in assertEqual > assertion_func(first, second, msg=msg) > File "/usr/lib/python2.7/unittest/case.py", line 506, in _baseAssertEqual > raise self.failureException(msg) > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
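The "multiple attempts on failure" idea from the comment above can be sketched as re-running the export only for the token ranges that failed, rather than permanently giving up after one attempt as the log shows (`permanently given up after ... 1 attempts`). `export_range` here is a stand-in callback, not cqlsh's real COPY code:

```python
# Sketch: retry only the token ranges whose export failed, up to
# max_attempts passes. export_range(r) returns True on success.
def export_with_retries(ranges, export_range, max_attempts=3):
    failed = list(ranges)
    for _ in range(max_attempts):
        if not failed:
            break
        # keep only the ranges that failed again this pass
        failed = [r for r in failed if not export_range(r)]
    return failed  # ranges still missing after all attempts
```

Under this scheme the single timed-out range (96 of 97 exported) would get further attempts before the test compares row counts.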
[5/5] cassandra git commit: Merge branch cassandra-3.X into trunk
Merge branch cassandra-3.X into trunk Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f782f148 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f782f148 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f782f148 Branch: refs/heads/trunk Commit: f782f148bdde962adf11d268db3435231dba5083 Parents: 58cf4c9 8b3de2f Author: Benjamin Lerer Authored: Mon Nov 21 18:54:34 2016 +0100 Committer: Benjamin Lerer Committed: Mon Nov 21 18:54:34 2016 +0100 -- CHANGES.txt | 5 + doc/source/cql/changes.rst | 4 +- doc/source/cql/definitions.rst | 4 +- doc/source/cql/index.rst| 1 + doc/source/cql/operators.rst| 57 ++ pylib/cqlshlib/cql3handling.py | 2 +- src/antlr/Lexer.g | 6 +- src/antlr/Parser.g | 248 +-- .../org/apache/cassandra/cql3/Constants.java| 58 +- src/java/org/apache/cassandra/cql3/Lists.java | 85 ++- src/java/org/apache/cassandra/cql3/Maps.java| 122 ++- src/java/org/apache/cassandra/cql3/Sets.java| 95 ++- src/java/org/apache/cassandra/cql3/Tuples.java | 147 +++- .../org/apache/cassandra/cql3/UserTypes.java| 115 ++- .../cassandra/cql3/functions/FunctionCall.java | 37 +- .../cql3/functions/FunctionResolver.java| 91 ++- .../cassandra/cql3/functions/OperationFcts.java | 380 ++ .../cql3/selection/CollectionFactory.java | 91 +++ .../cql3/selection/ForwardingFactory.java | 90 +++ .../cassandra/cql3/selection/ListSelector.java | 104 +++ .../cassandra/cql3/selection/MapSelector.java | 195 + .../cql3/selection/ScalarFunctionSelector.java | 9 - .../cassandra/cql3/selection/Selectable.java| 647 +++- .../cassandra/cql3/selection/Selector.java | 11 - .../cassandra/cql3/selection/SetSelector.java | 106 +++ .../cassandra/cql3/selection/TupleSelector.java | 101 +++ .../cql3/selection/UserTypeSelector.java| 177 + .../org/apache/cassandra/db/SystemKeyspace.java | 1 + .../cassandra/db/marshal/AbstractType.java | 13 +- .../cassandra/db/marshal/BooleanType.java | 2 +- 
.../apache/cassandra/db/marshal/ByteType.java | 56 +- .../cassandra/db/marshal/CounterColumnType.java | 40 +- .../apache/cassandra/db/marshal/DateType.java | 2 +- .../cassandra/db/marshal/DecimalType.java | 76 +- .../apache/cassandra/db/marshal/DoubleType.java | 69 +- .../apache/cassandra/db/marshal/EmptyType.java | 2 +- .../apache/cassandra/db/marshal/FloatType.java | 61 +- .../apache/cassandra/db/marshal/Int32Type.java | 48 +- .../cassandra/db/marshal/IntegerType.java | 69 +- .../cassandra/db/marshal/LexicalUUIDType.java | 2 +- .../apache/cassandra/db/marshal/LongType.java | 52 +- .../apache/cassandra/db/marshal/NumberType.java | 223 ++ .../cassandra/db/marshal/ReversedType.java | 2 +- .../apache/cassandra/db/marshal/ShortType.java | 51 +- .../cassandra/db/marshal/TimeUUIDType.java | 2 +- .../cassandra/db/marshal/TimestampType.java | 2 +- .../apache/cassandra/db/marshal/TupleType.java | 5 + .../apache/cassandra/db/marshal/UUIDType.java | 2 +- .../apache/cassandra/db/marshal/UserType.java | 5 + .../exceptions/OperationExecutionException.java | 57 ++ .../cassandra/serializers/ByteSerializer.java | 4 +- .../apache/cassandra/utils/ByteBufferUtil.java | 17 + .../org/apache/cassandra/cql3/CQLTester.java| 4 +- .../cql3/functions/OperationFctsTest.java | 744 +++ .../selection/SelectionColumnMappingTest.java | 94 +++ .../cql3/selection/TermSelectionTest.java | 386 +- .../cql3/validation/operations/SelectTest.java | 10 + 57 files changed, 4769 insertions(+), 320 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/f782f148/CHANGES.txt -- diff --cc CHANGES.txt index fa9233a,1f1625c..6cd725c --- a/CHANGES.txt +++ b/CHANGES.txt @@@ -1,10 -1,8 +1,15 @@@ +4.0 + * Add column definition kind to dropped columns in schema (CASSANDRA-12705) + * Add (automate) Nodetool Documentation (CASSANDRA-12672) + * Update bundled cqlsh python driver to 3.7.0 (CASSANDRA-12736) + * Reject invalid replication settings when creating or altering a keyspace (CASSANDRA-12681) + * 
Clean up the SSTableReader#getScanner API wrt removal of RateLimiter (CASSANDRA-12422) + + 3.12 + * Add support for arithmetic operators (CASSANDRA-11935) + + 3.11 + * AnticompactionRequestSerializer serializedSize is incorrect (CASSANDRA-12934) 3.10 * Don't shut down socket input/output
[3/5] cassandra git commit: Add support for arithmetic operators
http://git-wip-us.apache.org/repos/asf/cassandra/blob/8b3de2f4/src/java/org/apache/cassandra/cql3/functions/OperationFcts.java -- diff --git a/src/java/org/apache/cassandra/cql3/functions/OperationFcts.java b/src/java/org/apache/cassandra/cql3/functions/OperationFcts.java new file mode 100644 index 000..1f115a9 --- /dev/null +++ b/src/java/org/apache/cassandra/cql3/functions/OperationFcts.java @@ -0,0 +1,380 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.cassandra.cql3.functions; + +import java.nio.ByteBuffer; +import java.util.ArrayList; +import java.util.Collection; +import java.util.List; + +import org.apache.cassandra.config.SchemaConstants; +import org.apache.cassandra.db.marshal.*; +import org.apache.cassandra.exceptions.OperationExecutionException; +import org.apache.cassandra.transport.ProtocolVersion; + +/** + * Operation functions (Mathematics). 
+ * + */ +public final class OperationFcts +{ +private static enum OPERATION +{ +ADDITION('+', "_add") +{ +protected ByteBuffer execute(NumberType resultType, + NumberType leftType, + ByteBuffer left, + NumberType rightType, + ByteBuffer right) +{ +return resultType.add(leftType, left, rightType, right); +} +}, +SUBSTRACTION('-', "_substract") +{ +protected ByteBuffer execute(NumberType resultType, + NumberType leftType, + ByteBuffer left, + NumberType rightType, + ByteBuffer right) +{ +return resultType.substract(leftType, left, rightType, right); +} +}, +MULTIPLICATION('*', "_multiply") +{ +protected ByteBuffer execute(NumberType resultType, + NumberType leftType, + ByteBuffer left, + NumberType rightType, + ByteBuffer right) +{ +return resultType.multiply(leftType, left, rightType, right); +} +}, +DIVISION('/', "_divide") +{ +protected ByteBuffer execute(NumberType resultType, + NumberType leftType, + ByteBuffer left, + NumberType rightType, + ByteBuffer right) +{ +return resultType.divide(leftType, left, rightType, right); +} +}, +MODULO('%', "_modulo") +{ +protected ByteBuffer execute(NumberType resultType, + NumberType leftType, + ByteBuffer left, + NumberType rightType, + ByteBuffer right) +{ +return resultType.mod(leftType, left, rightType, right); +} +}; + +/** + * The operator symbol. + */ +private final char symbol; + +/** + * The name of the function associated to this operation + */ +private final String functionName; + +private OPERATION(char symbol, String functionName) +{ +this.symbol = symbol; +this.functionName = functionName; +} + +/** + * Executes the operation between the specified operand. + * + * @param resultType the result ype of the operation + * @param leftType the type of the left operand + * @param left the left operand + * @param rightType the type of the right operand + * @param right the right operand + * @return the operat
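The `OPERATION` enum quoted above pairs each operator symbol with a generated function name and a per-operation `execute` body that dispatches through `NumberType`. The same shape can be mirrored in a short Python sketch (plain `operator` functions stand in for the `NumberType` dispatch; this is an analogue, not the committed Java):

```python
import enum
import operator

# Python analogue of OperationFcts.OPERATION: each member carries its CQL
# symbol, the name of its generated function, and how to execute it.
class Operation(enum.Enum):
    ADDITION       = ("+", "_add",       operator.add)
    SUBSTRACTION   = ("-", "_substract", operator.sub)  # sic: the patch's spelling
    MULTIPLICATION = ("*", "_multiply",  operator.mul)
    DIVISION       = ("/", "_divide",    operator.truediv)
    MODULO         = ("%", "_modulo",    operator.mod)

    def __init__(self, symbol, function_name, fn):
        self.symbol = symbol
        self.function_name = function_name
        self.fn = fn

    @classmethod
    def from_symbol(cls, symbol):
        for op in cls:
            if op.symbol == symbol:
                return op
        raise ValueError("no operation for symbol %r" % symbol)

    def execute(self, left, right):
        return self.fn(left, right)

print(Operation.from_symbol("+").execute(2, 3))  # 5
```

The Java version does the same lookup from the parsed operator symbol, then resolves the `_add`/`_substract`/... function against the operand types.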
[2/5] cassandra git commit: Add support for arithmetic operators
http://git-wip-us.apache.org/repos/asf/cassandra/blob/8b3de2f4/src/java/org/apache/cassandra/db/marshal/DecimalType.java -- diff --git a/src/java/org/apache/cassandra/db/marshal/DecimalType.java b/src/java/org/apache/cassandra/db/marshal/DecimalType.java index f1586e0..b98bf00 100644 --- a/src/java/org/apache/cassandra/db/marshal/DecimalType.java +++ b/src/java/org/apache/cassandra/db/marshal/DecimalType.java @@ -18,6 +18,8 @@ package org.apache.cassandra.db.marshal; import java.math.BigDecimal; +import java.math.BigInteger; +import java.math.MathContext; import java.nio.ByteBuffer; import org.apache.cassandra.cql3.CQL3Type; @@ -29,7 +31,7 @@ import org.apache.cassandra.serializers.MarshalException; import org.apache.cassandra.transport.ProtocolVersion; import org.apache.cassandra.utils.ByteBufferUtil; -public class DecimalType extends AbstractType +public class DecimalType extends NumberType { public static final DecimalType instance = new DecimalType(); @@ -40,6 +42,12 @@ public class DecimalType extends AbstractType return true; } +@Override +public boolean isFloatingPoint() +{ +return true; +} + public int compareCustom(ByteBuffer o1, ByteBuffer o2) { if (!o1.hasRemaining() || !o2.hasRemaining()) @@ -95,4 +103,70 @@ public class DecimalType extends AbstractType { return DecimalSerializer.instance; } + +@Override +protected int toInt(ByteBuffer value) +{ +throw new UnsupportedOperationException(); +} + +@Override +protected float toFloat(ByteBuffer value) +{ +throw new UnsupportedOperationException(); +} + +@Override +protected long toLong(ByteBuffer value) +{ +throw new UnsupportedOperationException(); +} + +@Override +protected double toDouble(ByteBuffer value) +{ +throw new UnsupportedOperationException(); +} + +@Override +protected BigInteger toBigInteger(ByteBuffer value) +{ +throw new UnsupportedOperationException(); +} + +@Override +protected BigDecimal toBigDecimal(ByteBuffer value) +{ +return compose(value); +} + +public ByteBuffer add(NumberType 
leftType, ByteBuffer left, NumberType rightType, ByteBuffer right) +{ +return decompose(leftType.toBigDecimal(left).add(rightType.toBigDecimal(right), MathContext.DECIMAL128)); +} + +public ByteBuffer substract(NumberType leftType, ByteBuffer left, NumberType rightType, ByteBuffer right) +{ +return decompose(leftType.toBigDecimal(left).subtract(rightType.toBigDecimal(right), MathContext.DECIMAL128)); +} + +public ByteBuffer multiply(NumberType leftType, ByteBuffer left, NumberType rightType, ByteBuffer right) +{ +return decompose(leftType.toBigDecimal(left).multiply(rightType.toBigDecimal(right), MathContext.DECIMAL128)); +} + +public ByteBuffer divide(NumberType leftType, ByteBuffer left, NumberType rightType, ByteBuffer right) +{ +return decompose(leftType.toBigDecimal(left).divide(rightType.toBigDecimal(right), MathContext.DECIMAL128)); +} + +public ByteBuffer mod(NumberType leftType, ByteBuffer left, NumberType rightType, ByteBuffer right) +{ +return decompose(leftType.toBigDecimal(left).remainder(rightType.toBigDecimal(right), MathContext.DECIMAL128)); +} + +public ByteBuffer negate(ByteBuffer input) +{ +return decompose(toBigDecimal(input).negate()); +} } http://git-wip-us.apache.org/repos/asf/cassandra/blob/8b3de2f4/src/java/org/apache/cassandra/db/marshal/DoubleType.java -- diff --git a/src/java/org/apache/cassandra/db/marshal/DoubleType.java b/src/java/org/apache/cassandra/db/marshal/DoubleType.java index d2309ee..b72d3e9 100644 --- a/src/java/org/apache/cassandra/db/marshal/DoubleType.java +++ b/src/java/org/apache/cassandra/db/marshal/DoubleType.java @@ -28,7 +28,7 @@ import org.apache.cassandra.serializers.MarshalException; import org.apache.cassandra.transport.ProtocolVersion; import org.apache.cassandra.utils.ByteBufferUtil; -public class DoubleType extends AbstractType +public class DoubleType extends NumberType { public static final DoubleType instance = new DoubleType(); @@ -39,6 +39,12 @@ public class DoubleType extends AbstractType return true; } 
+@Override +public boolean isFloatingPoint() +{ +return true; +} + public int compareCustom(ByteBuffer o1, ByteBuffer o2) { if (!o1.hasRemaining() || !o2.hasRemaining()) @@ -53,17 +59,14 @@ public class DoubleType extends AbstractType if (source.isEmpty()) return ByteBufferUtil.EMPTY_BYTE_BUFFER; - Double d; try { - d = Double.valueOf(source); + return decomp
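Every `DecimalType` operation in the diff above runs under `MathContext.DECIMAL128`, i.e. 34 significant digits with round-half-even. Python's `decimal` module can mirror those semantics, which makes the behavior easy to check outside the JVM (this only illustrates the arithmetic, not Cassandra's `ByteBuffer` plumbing):

```python
from decimal import Decimal, Context

# IEEE 754 decimal128: 34 significant digits, round-half-even (the default
# rounding for decimal.Context), matching Java's MathContext.DECIMAL128.
DECIMAL128 = Context(prec=34)

def divide(left, right):
    # The bounded context rounds non-terminating quotients (e.g. 1/3)
    # to 34 digits, as BigDecimal.divide(..., DECIMAL128) does.
    return DECIMAL128.divide(Decimal(left), Decimal(right))

def mod(left, right):
    # BigDecimal.remainder and Decimal remainder agree: the result,
    # if non-zero, takes the sign of the dividend.
    return DECIMAL128.remainder(Decimal(left), Decimal(right))
```

For example, `divide(1, 3)` yields 0.3333... truncated at 34 digits rather than an unbounded expansion, which is why the patch threads a `MathContext` through every operation.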
[4/4] cassandra git commit: Add support for arithmetic operators
Add support for arithmetic operators patch by Benjamin Lerer; reviewed by Sylvain Lebresne for CASSANDRA-11935 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/8b3de2f4 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/8b3de2f4 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/8b3de2f4 Branch: refs/heads/cassandra-3.X Commit: 8b3de2f4908c4651491b0f20b80f7bb96cff26ed Parents: 075539a Author: Benjamin Lerer Authored: Mon Nov 21 18:04:42 2016 +0100 Committer: Benjamin Lerer Committed: Mon Nov 21 18:04:42 2016 +0100 -- CHANGES.txt | 3 + doc/source/cql/changes.rst | 4 +- doc/source/cql/definitions.rst | 4 +- doc/source/cql/index.rst| 1 + doc/source/cql/operators.rst| 57 ++ pylib/cqlshlib/cql3handling.py | 2 +- src/antlr/Lexer.g | 6 +- src/antlr/Parser.g | 248 +-- .../org/apache/cassandra/cql3/Constants.java| 58 +- src/java/org/apache/cassandra/cql3/Lists.java | 85 ++- src/java/org/apache/cassandra/cql3/Maps.java| 122 ++- src/java/org/apache/cassandra/cql3/Sets.java| 95 ++- src/java/org/apache/cassandra/cql3/Tuples.java | 147 +++- .../org/apache/cassandra/cql3/UserTypes.java| 115 ++- .../cassandra/cql3/functions/FunctionCall.java | 37 +- .../cql3/functions/FunctionResolver.java| 91 ++- .../cassandra/cql3/functions/OperationFcts.java | 380 ++ .../cql3/selection/CollectionFactory.java | 91 +++ .../cql3/selection/ForwardingFactory.java | 90 +++ .../cassandra/cql3/selection/ListSelector.java | 104 +++ .../cassandra/cql3/selection/MapSelector.java | 195 + .../cql3/selection/ScalarFunctionSelector.java | 9 - .../cassandra/cql3/selection/Selectable.java| 647 +++- .../cassandra/cql3/selection/Selector.java | 11 - .../cassandra/cql3/selection/SetSelector.java | 106 +++ .../cassandra/cql3/selection/TupleSelector.java | 101 +++ .../cql3/selection/UserTypeSelector.java| 177 + .../org/apache/cassandra/db/SystemKeyspace.java | 1 + .../cassandra/db/marshal/AbstractType.java | 13 +- 
.../cassandra/db/marshal/BooleanType.java | 2 +- .../apache/cassandra/db/marshal/ByteType.java | 56 +- .../cassandra/db/marshal/CounterColumnType.java | 40 +- .../apache/cassandra/db/marshal/DateType.java | 2 +- .../cassandra/db/marshal/DecimalType.java | 76 +- .../apache/cassandra/db/marshal/DoubleType.java | 69 +- .../apache/cassandra/db/marshal/EmptyType.java | 2 +- .../apache/cassandra/db/marshal/FloatType.java | 61 +- .../apache/cassandra/db/marshal/Int32Type.java | 48 +- .../cassandra/db/marshal/IntegerType.java | 69 +- .../cassandra/db/marshal/LexicalUUIDType.java | 2 +- .../apache/cassandra/db/marshal/LongType.java | 52 +- .../apache/cassandra/db/marshal/NumberType.java | 223 ++ .../cassandra/db/marshal/ReversedType.java | 2 +- .../apache/cassandra/db/marshal/ShortType.java | 51 +- .../cassandra/db/marshal/TimeUUIDType.java | 2 +- .../cassandra/db/marshal/TimestampType.java | 2 +- .../apache/cassandra/db/marshal/TupleType.java | 5 + .../apache/cassandra/db/marshal/UUIDType.java | 2 +- .../apache/cassandra/db/marshal/UserType.java | 5 + .../exceptions/OperationExecutionException.java | 57 ++ .../cassandra/serializers/ByteSerializer.java | 4 +- .../apache/cassandra/utils/ByteBufferUtil.java | 17 + .../org/apache/cassandra/cql3/CQLTester.java| 4 +- .../cql3/functions/OperationFctsTest.java | 744 +++ .../selection/SelectionColumnMappingTest.java | 94 +++ .../cql3/selection/TermSelectionTest.java | 386 +- .../cql3/validation/operations/SelectTest.java | 10 + 57 files changed, 4767 insertions(+), 320 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/8b3de2f4/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index 24641a6..1f1625c 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -1,3 +1,6 @@ +3.12 + * Add support for arithmetic operators (CASSANDRA-11935) + 3.11 * AnticompactionRequestSerializer serializedSize is incorrect (CASSANDRA-12934) http://git-wip-us.apache.org/repos/asf/cassandra/blob/8b3de2f4/doc/source/cql/changes.rst -- diff --git 
a/doc/source/cql/changes.rst b/doc/source/cql/changes.rst index 913bdb4..a33bb63 100644 --- a/doc/source/cql/changes.rst +++ b/doc/source/cql/changes.rst @@ -27,8 +27,8 @@ The following describes the change
[1/5] cassandra git commit: Add support for arithmetic operators
Repository: cassandra Updated Branches: refs/heads/trunk 58cf4c907 -> f782f148b http://git-wip-us.apache.org/repos/asf/cassandra/blob/8b3de2f4/test/unit/org/apache/cassandra/cql3/selection/SelectionColumnMappingTest.java -- diff --git a/test/unit/org/apache/cassandra/cql3/selection/SelectionColumnMappingTest.java b/test/unit/org/apache/cassandra/cql3/selection/SelectionColumnMappingTest.java index ece2d1d..975eb8e 100644 --- a/test/unit/org/apache/cassandra/cql3/selection/SelectionColumnMappingTest.java +++ b/test/unit/org/apache/cassandra/cql3/selection/SelectionColumnMappingTest.java @@ -39,6 +39,7 @@ import org.apache.cassandra.service.ClientState; import org.apache.cassandra.service.QueryState; import org.apache.cassandra.utils.ByteBufferUtil; +import static java.util.Arrays.asList; import static org.junit.Assert.assertEquals; import static org.junit.Assert.assertTrue; @@ -102,6 +103,14 @@ public class SelectionColumnMappingTest extends CQLTester testMixedColumnTypes(); testMultipleUnaliasedSelectionOfSameColumn(); testUserDefinedAggregate(); +testListLitteral(); +testEmptyListLitteral(); +testSetLitteral(); +testEmptySetLitteral(); +testMapLitteral(); +testEmptyMapLitteral(); +testUDTLitteral(); +testTupleLitteral(); } @Test @@ -407,6 +416,91 @@ public class SelectionColumnMappingTest extends CQLTester verify(expected, "SELECT v1, v1 FROM %s"); } +private void testListLitteral() throws Throwable +{ +ColumnSpecification listSpec = columnSpecification("[k, v1]", ListType.getInstance(Int32Type.instance, false)); +SelectionColumnMapping expected = SelectionColumnMapping.newMapping() + .addMapping(listSpec, asList(columnDefinition("k"), + columnDefinition("v1"))); + +verify(expected, "SELECT [k, v1] FROM %s"); +} + +private void testEmptyListLitteral() throws Throwable +{ +ColumnSpecification listSpec = columnSpecification("(list)[]", ListType.getInstance(Int32Type.instance, false)); +SelectionColumnMapping expected = SelectionColumnMapping.newMapping() + 
.addMapping(listSpec, (ColumnDefinition) null); + +verify(expected, "SELECT (list)[] FROM %s"); +} + +private void testSetLitteral() throws Throwable +{ +ColumnSpecification setSpec = columnSpecification("{k, v1}", SetType.getInstance(Int32Type.instance, false)); +SelectionColumnMapping expected = SelectionColumnMapping.newMapping() + .addMapping(setSpec, asList(columnDefinition("k"), + columnDefinition("v1"))); + +verify(expected, "SELECT {k, v1} FROM %s"); +} + +private void testEmptySetLitteral() throws Throwable +{ +ColumnSpecification setSpec = columnSpecification("(set){}", SetType.getInstance(Int32Type.instance, false)); +SelectionColumnMapping expected = SelectionColumnMapping.newMapping() + .addMapping(setSpec, (ColumnDefinition) null); + +verify(expected, "SELECT (set){} FROM %s"); +} + +private void testMapLitteral() throws Throwable +{ +ColumnSpecification mapSpec = columnSpecification("(map){'min': system.min(v1), 'max': system.max(v1)}", MapType.getInstance(UTF8Type.instance, Int32Type.instance, false)); +SelectionColumnMapping expected = SelectionColumnMapping.newMapping() + .addMapping(mapSpec, asList(columnDefinition("v1"))); + +verify(expected, "SELECT (map){'min': min(v1), 'max': max(v1)} FROM %s"); +} + +private void testEmptyMapLitteral() throws Throwable +{ +ColumnSpecification mapSpec = columnSpecification("(map){}", MapType.getInstance(UTF8Type.instance, Int32Type.instance, false)); +SelectionColumnMapping expected = SelectionColumnMapping.newMapping() + .addMapping(mapSpec, (ColumnDefinition) null); + +verify(expected, "SELECT (map){} FROM %s"); +} + +private void testUDTLitteral() throws Throwable +{ +UserType type = new UserType(KEYSPACE, ByteBufferUtil.bytes(typeName), + asList(FieldIdentifier.forUnquoted("f1"), + FieldIdentifier.forUnquoted("f2")), + asList(Int32Type.instance, +
[4/5] cassandra git commit: Add support for arithmetic operators
Add support for arithmetic operators patch by Benjamin Lerer; reviewed by Sylvain Lebresne for CASSANDRA-11935 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/8b3de2f4 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/8b3de2f4 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/8b3de2f4 Branch: refs/heads/trunk Commit: 8b3de2f4908c4651491b0f20b80f7bb96cff26ed Parents: 075539a Author: Benjamin Lerer Authored: Mon Nov 21 18:04:42 2016 +0100 Committer: Benjamin Lerer Committed: Mon Nov 21 18:04:42 2016 +0100 -- CHANGES.txt | 3 + doc/source/cql/changes.rst | 4 +- doc/source/cql/definitions.rst | 4 +- doc/source/cql/index.rst| 1 + doc/source/cql/operators.rst| 57 ++ pylib/cqlshlib/cql3handling.py | 2 +- src/antlr/Lexer.g | 6 +- src/antlr/Parser.g | 248 +-- .../org/apache/cassandra/cql3/Constants.java| 58 +- src/java/org/apache/cassandra/cql3/Lists.java | 85 ++- src/java/org/apache/cassandra/cql3/Maps.java| 122 ++- src/java/org/apache/cassandra/cql3/Sets.java| 95 ++- src/java/org/apache/cassandra/cql3/Tuples.java | 147 +++- .../org/apache/cassandra/cql3/UserTypes.java| 115 ++- .../cassandra/cql3/functions/FunctionCall.java | 37 +- .../cql3/functions/FunctionResolver.java| 91 ++- .../cassandra/cql3/functions/OperationFcts.java | 380 ++ .../cql3/selection/CollectionFactory.java | 91 +++ .../cql3/selection/ForwardingFactory.java | 90 +++ .../cassandra/cql3/selection/ListSelector.java | 104 +++ .../cassandra/cql3/selection/MapSelector.java | 195 + .../cql3/selection/ScalarFunctionSelector.java | 9 - .../cassandra/cql3/selection/Selectable.java| 647 +++- .../cassandra/cql3/selection/Selector.java | 11 - .../cassandra/cql3/selection/SetSelector.java | 106 +++ .../cassandra/cql3/selection/TupleSelector.java | 101 +++ .../cql3/selection/UserTypeSelector.java| 177 + .../org/apache/cassandra/db/SystemKeyspace.java | 1 + .../cassandra/db/marshal/AbstractType.java | 13 +- 
.../cassandra/db/marshal/BooleanType.java | 2 +- .../apache/cassandra/db/marshal/ByteType.java | 56 +- .../cassandra/db/marshal/CounterColumnType.java | 40 +- .../apache/cassandra/db/marshal/DateType.java | 2 +- .../cassandra/db/marshal/DecimalType.java | 76 +- .../apache/cassandra/db/marshal/DoubleType.java | 69 +- .../apache/cassandra/db/marshal/EmptyType.java | 2 +- .../apache/cassandra/db/marshal/FloatType.java | 61 +- .../apache/cassandra/db/marshal/Int32Type.java | 48 +- .../cassandra/db/marshal/IntegerType.java | 69 +- .../cassandra/db/marshal/LexicalUUIDType.java | 2 +- .../apache/cassandra/db/marshal/LongType.java | 52 +- .../apache/cassandra/db/marshal/NumberType.java | 223 ++ .../cassandra/db/marshal/ReversedType.java | 2 +- .../apache/cassandra/db/marshal/ShortType.java | 51 +- .../cassandra/db/marshal/TimeUUIDType.java | 2 +- .../cassandra/db/marshal/TimestampType.java | 2 +- .../apache/cassandra/db/marshal/TupleType.java | 5 + .../apache/cassandra/db/marshal/UUIDType.java | 2 +- .../apache/cassandra/db/marshal/UserType.java | 5 + .../exceptions/OperationExecutionException.java | 57 ++ .../cassandra/serializers/ByteSerializer.java | 4 +- .../apache/cassandra/utils/ByteBufferUtil.java | 17 + .../org/apache/cassandra/cql3/CQLTester.java| 4 +- .../cql3/functions/OperationFctsTest.java | 744 +++ .../selection/SelectionColumnMappingTest.java | 94 +++ .../cql3/selection/TermSelectionTest.java | 386 +- .../cql3/validation/operations/SelectTest.java | 10 + 57 files changed, 4767 insertions(+), 320 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/8b3de2f4/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index 24641a6..1f1625c 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -1,3 +1,6 @@ +3.12 + * Add support for arithmetic operators (CASSANDRA-11935) + 3.11 * AnticompactionRequestSerializer serializedSize is incorrect (CASSANDRA-12934) http://git-wip-us.apache.org/repos/asf/cassandra/blob/8b3de2f4/doc/source/cql/changes.rst -- diff --git 
a/doc/source/cql/changes.rst b/doc/source/cql/changes.rst index 913bdb4..a33bb63 100644 --- a/doc/source/cql/changes.rst +++ b/doc/source/cql/changes.rst @@ -27,8 +27,8 @@ The following describes the changes in eac
[3/4] cassandra git commit: Add support for arithmetic operators
http://git-wip-us.apache.org/repos/asf/cassandra/blob/8b3de2f4/src/java/org/apache/cassandra/cql3/functions/OperationFcts.java -- diff --git a/src/java/org/apache/cassandra/cql3/functions/OperationFcts.java b/src/java/org/apache/cassandra/cql3/functions/OperationFcts.java new file mode 100644 index 000..1f115a9 --- /dev/null +++ b/src/java/org/apache/cassandra/cql3/functions/OperationFcts.java @@ -0,0 +1,380 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.cassandra.cql3.functions; + +import java.nio.ByteBuffer; +import java.util.ArrayList; +import java.util.Collection; +import java.util.List; + +import org.apache.cassandra.config.SchemaConstants; +import org.apache.cassandra.db.marshal.*; +import org.apache.cassandra.exceptions.OperationExecutionException; +import org.apache.cassandra.transport.ProtocolVersion; + +/** + * Operation functions (Mathematics). 
+ * + */ +public final class OperationFcts +{ +private static enum OPERATION +{ +ADDITION('+', "_add") +{ +protected ByteBuffer execute(NumberType resultType, + NumberType leftType, + ByteBuffer left, + NumberType rightType, + ByteBuffer right) +{ +return resultType.add(leftType, left, rightType, right); +} +}, +SUBSTRACTION('-', "_substract") +{ +protected ByteBuffer execute(NumberType resultType, + NumberType leftType, + ByteBuffer left, + NumberType rightType, + ByteBuffer right) +{ +return resultType.substract(leftType, left, rightType, right); +} +}, +MULTIPLICATION('*', "_multiply") +{ +protected ByteBuffer execute(NumberType resultType, + NumberType leftType, + ByteBuffer left, + NumberType rightType, + ByteBuffer right) +{ +return resultType.multiply(leftType, left, rightType, right); +} +}, +DIVISION('/', "_divide") +{ +protected ByteBuffer execute(NumberType resultType, + NumberType leftType, + ByteBuffer left, + NumberType rightType, + ByteBuffer right) +{ +return resultType.divide(leftType, left, rightType, right); +} +}, +MODULO('%', "_modulo") +{ +protected ByteBuffer execute(NumberType resultType, + NumberType leftType, + ByteBuffer left, + NumberType rightType, + ByteBuffer right) +{ +return resultType.mod(leftType, left, rightType, right); +} +}; + +/** + * The operator symbol. + */ +private final char symbol; + +/** + * The name of the function associated to this operation + */ +private final String functionName; + +private OPERATION(char symbol, String functionName) +{ +this.symbol = symbol; +this.functionName = functionName; +} + +/** + * Executes the operation between the specified operand. + * + * @param resultType the result ype of the operation + * @param leftType the type of the left operand + * @param left the left operand + * @param rightType the type of the right operand + * @param right the right operand + * @return the operat
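The enum-with-per-constant-body dispatch used by {{OperationFcts.OPERATION}} above can be sketched outside Java. A minimal Python analogue (illustrative only — the real code dispatches on Cassandra's {{NumberType}} hierarchy and operates on {{ByteBuffer}}s, and all names here are hypothetical):

```python
# Minimal analogue of OperationFcts.OPERATION: each operator symbol maps to
# its associated function name and an implementation, mirroring the
# per-constant execute() bodies in the Java enum shown in the diff.
OPERATIONS = {
    '+': ('_add',       lambda a, b: a + b),
    '-': ('_substract', lambda a, b: a - b),   # "sic": the patch spells it _substract
    '*': ('_multiply',  lambda a, b: a * b),
    # CQL integer division truncates, so use // for ints:
    '/': ('_divide',    lambda a, b: a // b if isinstance(a, int) and isinstance(b, int) else a / b),
    '%': ('_modulo',    lambda a, b: a % b),
}

def execute(symbol, left, right):
    """Look up the operation by its symbol and apply it, turning arithmetic
    failures (e.g. division by zero) into a generic error, roughly analogous
    to the patch's OperationExecutionException."""
    name, fn = OPERATIONS[symbol]
    try:
        return fn(left, right)
    except ArithmeticError as exc:
        raise RuntimeError(f"the operation '{left} {symbol} {right}' failed: {exc}")
```

In CQL terms this is what lets a query such as `SELECT v1 + v2 FROM t` resolve `+` to a hidden `_add` function and evaluate it per row.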
[1/4] cassandra git commit: Add support for arithmetic operators
Repository: cassandra Updated Branches: refs/heads/cassandra-3.X 075539a5b -> 8b3de2f49 http://git-wip-us.apache.org/repos/asf/cassandra/blob/8b3de2f4/test/unit/org/apache/cassandra/cql3/selection/SelectionColumnMappingTest.java -- diff --git a/test/unit/org/apache/cassandra/cql3/selection/SelectionColumnMappingTest.java b/test/unit/org/apache/cassandra/cql3/selection/SelectionColumnMappingTest.java index ece2d1d..975eb8e 100644 --- a/test/unit/org/apache/cassandra/cql3/selection/SelectionColumnMappingTest.java +++ b/test/unit/org/apache/cassandra/cql3/selection/SelectionColumnMappingTest.java @@ -39,6 +39,7 @@ import org.apache.cassandra.service.ClientState; import org.apache.cassandra.service.QueryState; import org.apache.cassandra.utils.ByteBufferUtil; +import static java.util.Arrays.asList; import static org.junit.Assert.assertEquals; import static org.junit.Assert.assertTrue; @@ -102,6 +103,14 @@ public class SelectionColumnMappingTest extends CQLTester testMixedColumnTypes(); testMultipleUnaliasedSelectionOfSameColumn(); testUserDefinedAggregate(); +testListLitteral(); +testEmptyListLitteral(); +testSetLitteral(); +testEmptySetLitteral(); +testMapLitteral(); +testEmptyMapLitteral(); +testUDTLitteral(); +testTupleLitteral(); } @Test @@ -407,6 +416,91 @@ public class SelectionColumnMappingTest extends CQLTester verify(expected, "SELECT v1, v1 FROM %s"); } +private void testListLitteral() throws Throwable +{ +ColumnSpecification listSpec = columnSpecification("[k, v1]", ListType.getInstance(Int32Type.instance, false)); +SelectionColumnMapping expected = SelectionColumnMapping.newMapping() + .addMapping(listSpec, asList(columnDefinition("k"), + columnDefinition("v1"))); + +verify(expected, "SELECT [k, v1] FROM %s"); +} + +private void testEmptyListLitteral() throws Throwable +{ +ColumnSpecification listSpec = columnSpecification("(list)[]", ListType.getInstance(Int32Type.instance, false)); +SelectionColumnMapping expected = SelectionColumnMapping.newMapping() + 
.addMapping(listSpec, (ColumnDefinition) null); + +verify(expected, "SELECT (list)[] FROM %s"); +} + +private void testSetLitteral() throws Throwable +{ +ColumnSpecification setSpec = columnSpecification("{k, v1}", SetType.getInstance(Int32Type.instance, false)); +SelectionColumnMapping expected = SelectionColumnMapping.newMapping() + .addMapping(setSpec, asList(columnDefinition("k"), + columnDefinition("v1"))); + +verify(expected, "SELECT {k, v1} FROM %s"); +} + +private void testEmptySetLitteral() throws Throwable +{ +ColumnSpecification setSpec = columnSpecification("(set){}", SetType.getInstance(Int32Type.instance, false)); +SelectionColumnMapping expected = SelectionColumnMapping.newMapping() + .addMapping(setSpec, (ColumnDefinition) null); + +verify(expected, "SELECT (set){} FROM %s"); +} + +private void testMapLitteral() throws Throwable +{ +ColumnSpecification mapSpec = columnSpecification("(map){'min': system.min(v1), 'max': system.max(v1)}", MapType.getInstance(UTF8Type.instance, Int32Type.instance, false)); +SelectionColumnMapping expected = SelectionColumnMapping.newMapping() + .addMapping(mapSpec, asList(columnDefinition("v1"))); + +verify(expected, "SELECT (map){'min': min(v1), 'max': max(v1)} FROM %s"); +} + +private void testEmptyMapLitteral() throws Throwable +{ +ColumnSpecification mapSpec = columnSpecification("(map){}", MapType.getInstance(UTF8Type.instance, Int32Type.instance, false)); +SelectionColumnMapping expected = SelectionColumnMapping.newMapping() + .addMapping(mapSpec, (ColumnDefinition) null); + +verify(expected, "SELECT (map){} FROM %s"); +} + +private void testUDTLitteral() throws Throwable +{ +UserType type = new UserType(KEYSPACE, ByteBufferUtil.bytes(typeName), + asList(FieldIdentifier.forUnquoted("f1"), + FieldIdentifier.forUnquoted("f2")), + asList(Int32Type.instance, +
[jira] [Assigned] (CASSANDRA-12938) cassandra-stress hangs on error
[ https://issues.apache.org/jira/browse/CASSANDRA-12938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eduard Tudenhoefner reassigned CASSANDRA-12938: --- Assignee: Eduard Tudenhoefner > cassandra-stress hangs on error > --- > > Key: CASSANDRA-12938 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12938 > Project: Cassandra > Issue Type: Bug > Components: Tools >Reporter: James Falcon >Assignee: Eduard Tudenhoefner > > After encountering a fatal error, cassandra-stress hangs. Having not run a > previous stress write, can be reproduced with: > {code} > cassandra-stress read n=1000 -rate threads=2 > {code} > Here's the full output > {code} > Stress Settings > Command: > Type: read > Count: 1,000 > No Warmup: false > Consistency Level: LOCAL_ONE > Target Uncertainty: not applicable > Key Size (bytes): 10 > Counter Increment Distibution: add=fixed(1) > Rate: > Auto: false > Thread Count: 2 > OpsPer Sec: 0 > Population: > Distribution: Gaussian: min=1,max=1000,mean=500.50,stdev=166.50 > Order: ARBITRARY > Wrap: false > Insert: > Revisits: Uniform: min=1,max=100 > Visits: Fixed: key=1 > Row Population Ratio: Ratio: divisor=1.00;delegate=Fixed: key=1 > Batch Type: not batching > Columns: > Max Columns Per Key: 5 > Column Names: [C0, C1, C2, C3, C4] > Comparator: AsciiType > Timestamp: null > Variable Column Count: false > Slice: false > Size Distribution: Fixed: key=34 > Count Distribution: Fixed: key=5 > Errors: > Ignore: false > Tries: 10 > Log: > No Summary: false > No Settings: false > File: null > Interval Millis: 1000 > Level: NORMAL > Mode: > API: JAVA_DRIVER_NATIVE > Connection Style: CQL_PREPARED > CQL Version: CQL3 > Protocol Version: V4 > Username: null > Password: null > Auth Provide Class: null > Max Pending Per Connection: 128 > Connections Per Host: 8 > Compression: NONE > Node: > Nodes: [localhost] > Is White List: false > Datacenter: null > Schema: > Keyspace: keyspace1 > Replication Strategy: 
org.apache.cassandra.locator.SimpleStrategy > Replication Strategy Pptions: {replication_factor=1} > Table Compression: null > Table Compaction Strategy: null > Table Compaction Strategy Options: {} > Transport: > factory=org.apache.cassandra.thrift.TFramedTransportFactory; > truststore=null; truststore-password=null; keystore=null; > keystore-password=null; ssl-protocol=TLS; ssl-alg=SunX509; store-type=JKS; > ssl-ciphers=TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA; > Port: > Native Port: 9042 > Thrift Port: 9160 > JMX Port: 9042 > Send To Daemon: > *not set* > Graph: > File: null > Revision: unknown > Title: null > Operation: READ > TokenRange: > Wrap: false > Split Factor: 1 > Sleeping 2s... > Warming up READ with 250 iterations... > Connected to cluster: falcon-test2, max pending requests per connection 128, > max connections per host 8 > Datatacenter: Cassandra; Host: localhost/127.0.0.1; Rack: rack1 > Failed to connect over JMX; not collecting these stats > Connected to cluster: falcon-test2, max pending requests per connection 128, > max connections per host 8 > Datatacenter: Cassandra; Host: localhost/127.0.0.1; Rack: rack1 > com.datastax.driver.core.exceptions.InvalidQueryException: Keyspace > 'keyspace1' does not exist > Connected to cluster: falcon-test2, max pending requests per connection 128, > max connections per host 8 > Datatacenter: Cassandra; Host: localhost/127.0.0.1; Rack: rack1 > com.datastax.driver.core.exceptions.InvalidQueryException: Keyspace > 'keyspace1' does not exist > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
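The failure mode reported here — worker threads die on a fatal error while the coordinator keeps waiting — is a classic shutdown-signalling bug. A minimal sketch of the defensive pattern, in Python rather than the stress tool's Java, with entirely hypothetical names (this is not cassandra-stress's actual structure):

```python
import threading
import queue

def run_ops(ops, n_threads=2):
    """Run ops on worker threads; surface the first error instead of hanging.
    The key is signalling completion in `finally`, so a fatal error in a
    worker can never leave the coordinator blocked forever."""
    errors = queue.Queue()
    done = threading.Barrier(n_threads + 1)

    def worker(chunk):
        try:
            for op in chunk:
                op()
        except Exception as exc:
            errors.put(exc)          # record the fatal error for the coordinator
        finally:
            done.wait(timeout=5)     # always signal, even on failure

    for i in range(n_threads):
        threading.Thread(target=worker, args=(ops[i::n_threads],), daemon=True).start()

    done.wait(timeout=5)             # returns (or times out) instead of hanging
    return None if errors.empty() else errors.get()

# One op succeeds, one raises — the coordinator still returns promptly:
err = run_ops([lambda: 1,
               lambda: (_ for _ in ()).throw(RuntimeError("keyspace missing"))])
```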
[jira] [Updated] (CASSANDRA-8911) Consider Mutation-based Repairs
[ https://issues.apache.org/jira/browse/CASSANDRA-8911?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Marcus Eriksson updated CASSANDRA-8911: --- Assignee: (was: Marcus Eriksson) > Consider Mutation-based Repairs > --- > > Key: CASSANDRA-8911 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8911 > Project: Cassandra > Issue Type: Improvement >Reporter: Tyler Hobbs > Fix For: 3.x > > > We should consider a mutation-based repair to replace the existing streaming > repair. While we're at it, we could do away with a lot of the complexity > around merkle trees. > I have not planned this out in detail, but here's roughly what I'm thinking: > * Instead of building an entire merkle tree up front, just send the "leaves" > one-by-one. Instead of dealing with token ranges, make the leaves primary > key ranges. The PK ranges would need to be contiguous, so that the start of > each range would match the end of the previous range. (The first and last > leaves would need to be open-ended on one end of the PK range.) This would be > similar to doing a read with paging. > * Once one page of data is read, compute a hash of it and send it to the > other replicas along with the PK range that it covers and a row count. > * When the replicas receive the hash, they perform a read over the same PK > range (using a LIMIT of the row count + 1) and compare hashes (unless the row > counts don't match, in which case this can be skipped). > * If there is a mismatch, the replica will send a mutation covering that > page's worth of data (ignoring the row count this time) to the source node. > Here are the advantages that I can think of: > * With the current repair behavior of streaming, vnode-enabled clusters may > need to stream hundreds of small SSTables. This results in increased > compaction load on the receiving node. With the mutation-based approach, memtables > would naturally merge these. > * It's simple to throttle. 
For example, you could give a number of rows/sec > that should be repaired. > * It's easy to see what PK range has been repaired so far. This could make > it simpler to resume a repair that fails midway. > * Inconsistencies start to be repaired almost right away. > * Less special code \(?\) > * Wide partitions are no longer a problem. > There are a few problems I can think of: > * Counters. I don't know if this can be made safe, or if they need to be > skipped. > * To support incremental repair, we need to be able to read from only > repaired sstables. Probably not too difficult to do. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
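The page-by-page scheme proposed above can be made concrete with a toy model. Assumptions (all simplifications, none of this is Cassandra's actual repair code): tables are plain dicts of primary key to value, a "page" is a fixed number of sorted keys, and the "mutation" the replica sends back is simply its copy of the page, which the source merges:

```python
import hashlib

def page_hash(rows):
    # Hash one page of (pk, value) rows, as the source would before sending.
    h = hashlib.sha256()
    for pk, value in rows:
        h.update(repr((pk, value)).encode())
    return h.hexdigest()

def repair(source, replica, page_size=2):
    """Toy mutation-based repair: page through the source's PK space,
    compare per-page hashes with the replica, and on mismatch apply the
    replica's page to the source as a plain mutation (memtables would
    merge it naturally). Returns the list of repaired PK ranges."""
    keys = sorted(source)
    repaired = []
    for i in range(0, len(keys), page_size):
        page_keys = keys[i:i + page_size]
        src_rows = [(k, source[k]) for k in page_keys]
        lo, hi = page_keys[0], page_keys[-1]
        # Replica reads the same PK range, with LIMIT = row count + 1:
        rep_rows = [(k, replica[k]) for k in sorted(replica)
                    if lo <= k <= hi][:len(src_rows) + 1]
        if len(rep_rows) != len(src_rows) or page_hash(rep_rows) != page_hash(src_rows):
            source.update(dict(rep_rows))   # mutation from replica to source
            repaired.append((lo, hi))
    return repaired

source = {1: 'a', 2: 'b', 3: 'c', 4: 'd'}
replica = {1: 'a', 2: 'X', 3: 'c', 4: 'd'}
ranges = repair(source, replica)
```

Note how resumability falls out for free: `repaired` (and the loop index) tell you exactly which PK range was last processed.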
[jira] [Created] (CASSANDRA-12939) dtest failure in cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest.test_bulk_round_trip_with_backoff
Sean McCarthy created CASSANDRA-12939: - Summary: dtest failure in cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest.test_bulk_round_trip_with_backoff Key: CASSANDRA-12939 URL: https://issues.apache.org/jira/browse/CASSANDRA-12939 Project: Cassandra Issue Type: Test Reporter: Sean McCarthy Assignee: DS Test Eng Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, node2_debug.log, node2_gc.log, node3.log, node3_debug.log, node3_gc.log example failure: http://cassci.datastax.com/job/cassandra-3.X_dtest/40/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_bulk_round_trip_with_backoff {code} Error Message 25 != 244475 ... dtest: DEBUG: Errors: Using CQL driver: Using connect timeout: 5 seconds Using 'utf-8' encoding Using ssl: False :3:Error for (2730718820402670492, 3207787379576163567): OperationTimedOut - errors={'127.0.0.1': 'Client request timeout. See Session.execute[_async](timeout)'}, last_host=127.0.0.1 (permanently given up after 1000 rows and 1 attempts) :3:Exported 96 ranges out of 97 total ranges, some records might be missing {code}{code} Stacktrace File "/usr/lib/python2.7/unittest/case.py", line 329, in run testMethod() File "/home/automaton/cassandra-dtest/dtest.py", line 1099, in wrapped f(obj) File "/home/automaton/cassandra-dtest/tools/decorators.py", line 46, in wrapped f(obj) File "/home/automaton/cassandra-dtest/cqlsh_tests/cqlsh_copy_tests.py", line 2613, in test_bulk_round_trip_with_backoff copy_from_options={'MAXINFLIGHTMESSAGES': 64, 'MAXPENDINGCHUNKS': 1}) File "/home/automaton/cassandra-dtest/cqlsh_tests/cqlsh_copy_tests.py", line 2508, in _test_bulk_round_trip sum(1 for _ in open(tempfile2.name))) File "/usr/lib/python2.7/unittest/case.py", line 513, in assertEqual assertion_func(first, second, msg=msg) File "/usr/lib/python2.7/unittest/case.py", line 506, in _baseAssertEqual raise self.failureException(msg) {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-12938) cassandra-stress hangs on error
[ https://issues.apache.org/jira/browse/CASSANDRA-12938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] James Falcon updated CASSANDRA-12938: - Description: After encountering a fatal error, cassandra-stress hangs. Having not run a previous stress write, can be reproduced with: {code} cassandra-stress read n=1000 -rate threads=2 {code} Here's the full output {code} Stress Settings Command: Type: read Count: 1,000 No Warmup: false Consistency Level: LOCAL_ONE Target Uncertainty: not applicable Key Size (bytes): 10 Counter Increment Distibution: add=fixed(1) Rate: Auto: false Thread Count: 2 OpsPer Sec: 0 Population: Distribution: Gaussian: min=1,max=1000,mean=500.50,stdev=166.50 Order: ARBITRARY Wrap: false Insert: Revisits: Uniform: min=1,max=100 Visits: Fixed: key=1 Row Population Ratio: Ratio: divisor=1.00;delegate=Fixed: key=1 Batch Type: not batching Columns: Max Columns Per Key: 5 Column Names: [C0, C1, C2, C3, C4] Comparator: AsciiType Timestamp: null Variable Column Count: false Slice: false Size Distribution: Fixed: key=34 Count Distribution: Fixed: key=5 Errors: Ignore: false Tries: 10 Log: No Summary: false No Settings: false File: null Interval Millis: 1000 Level: NORMAL Mode: API: JAVA_DRIVER_NATIVE Connection Style: CQL_PREPARED CQL Version: CQL3 Protocol Version: V4 Username: null Password: null Auth Provide Class: null Max Pending Per Connection: 128 Connections Per Host: 8 Compression: NONE Node: Nodes: [localhost] Is White List: false Datacenter: null Schema: Keyspace: keyspace1 Replication Strategy: org.apache.cassandra.locator.SimpleStrategy Replication Strategy Pptions: {replication_factor=1} Table Compression: null Table Compaction Strategy: null Table Compaction Strategy Options: {} Transport: factory=org.apache.cassandra.thrift.TFramedTransportFactory; truststore=null; truststore-password=null; keystore=null; keystore-password=null; ssl-protocol=TLS; ssl-alg=SunX509; store-type=JKS; 
ssl-ciphers=TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA; Port: Native Port: 9042 Thrift Port: 9160 JMX Port: 9042 Send To Daemon: *not set* Graph: File: null Revision: unknown Title: null Operation: READ TokenRange: Wrap: false Split Factor: 1 Sleeping 2s... Warming up READ with 250 iterations... Connected to cluster: falcon-test2, max pending requests per connection 128, max connections per host 8 Datatacenter: Cassandra; Host: localhost/127.0.0.1; Rack: rack1 Failed to connect over JMX; not collecting these stats Connected to cluster: falcon-test2, max pending requests per connection 128, max connections per host 8 Datatacenter: Cassandra; Host: localhost/127.0.0.1; Rack: rack1 com.datastax.driver.core.exceptions.InvalidQueryException: Keyspace 'keyspace1' does not exist Connected to cluster: falcon-test2, max pending requests per connection 128, max connections per host 8 Datatacenter: Cassandra; Host: localhost/127.0.0.1; Rack: rack1 com.datastax.driver.core.exceptions.InvalidQueryException: Keyspace 'keyspace1' does not exist {code} was: After encountering a fatal error, cassandra-stress hangs. 
Can be reproduced with: {code} cassandra-stress read n=1000 -rate threads=2 {code} Here's the full output {code} Stress Settings Command: Type: read Count: 1,000 No Warmup: false Consistency Level: LOCAL_ONE Target Uncertainty: not applicable Key Size (bytes): 10 Counter Increment Distibution: add=fixed(1) Rate: Auto: false Thread Count: 2 OpsPer Sec: 0 Population: Distribution: Gaussian: min=1,max=1000,mean=500.50,stdev=166.50 Order: ARBITRARY Wrap: false Insert: Revisits: Uniform: min=1,max=100 Visits: Fixed: key=1 Row Population Ratio: Ratio: divisor=1.00;delegate=Fixed: key=1 Batch Type: not batching Columns: Max Columns Per Key: 5 Column Names: [C0, C1, C2, C3, C4] Comparator: AsciiType Timestamp: null Variable Column Count: false Slice: false Size Distribution: Fixed: key=34 Count Distribution: Fixed: key=5 Errors: Ignore: false Tries: 10 Log: No Summary: false No Settings: false File: null Interval Millis: 1000 Level: NORMAL Mode: API: JAVA_DRIVER_NATIVE Connection Style: CQL_PREPARED CQL Version: CQL3 Protocol Version: V4 Username: null Password: null Auth Provide Class: null Max Pending Per Connection: 128 Connections Per Host: 8 Compression: NONE Node: Nodes: [localhost] Is White List: false Datacenter: null Schema: Keyspace: keyspace1 Replication Strategy: org.apache.cassandra.locator.SimpleStrategy Replication Strategy Pptions: {replication_factor=1} Table Compression: null Table Compaction Strategy: null Table Compaction Strategy Options: {} Transport
[jira] [Created] (CASSANDRA-12938) cassandra-stress hangs on error
James Falcon created CASSANDRA-12938: Summary: cassandra-stress hangs on error Key: CASSANDRA-12938 URL: https://issues.apache.org/jira/browse/CASSANDRA-12938 Project: Cassandra Issue Type: Bug Components: Tools Reporter: James Falcon After encountering a fatal error, cassandra-stress hangs. Can be reproduced with: {code} cassandra-stress read n=1000 -rate threads=2 {code} Here's the full output {code} Stress Settings Command: Type: read Count: 1,000 No Warmup: false Consistency Level: LOCAL_ONE Target Uncertainty: not applicable Key Size (bytes): 10 Counter Increment Distibution: add=fixed(1) Rate: Auto: false Thread Count: 2 OpsPer Sec: 0 Population: Distribution: Gaussian: min=1,max=1000,mean=500.50,stdev=166.50 Order: ARBITRARY Wrap: false Insert: Revisits: Uniform: min=1,max=100 Visits: Fixed: key=1 Row Population Ratio: Ratio: divisor=1.00;delegate=Fixed: key=1 Batch Type: not batching Columns: Max Columns Per Key: 5 Column Names: [C0, C1, C2, C3, C4] Comparator: AsciiType Timestamp: null Variable Column Count: false Slice: false Size Distribution: Fixed: key=34 Count Distribution: Fixed: key=5 Errors: Ignore: false Tries: 10 Log: No Summary: false No Settings: false File: null Interval Millis: 1000 Level: NORMAL Mode: API: JAVA_DRIVER_NATIVE Connection Style: CQL_PREPARED CQL Version: CQL3 Protocol Version: V4 Username: null Password: null Auth Provide Class: null Max Pending Per Connection: 128 Connections Per Host: 8 Compression: NONE Node: Nodes: [localhost] Is White List: false Datacenter: null Schema: Keyspace: keyspace1 Replication Strategy: org.apache.cassandra.locator.SimpleStrategy Replication Strategy Pptions: {replication_factor=1} Table Compression: null Table Compaction Strategy: null Table Compaction Strategy Options: {} Transport: factory=org.apache.cassandra.thrift.TFramedTransportFactory; truststore=null; truststore-password=null; keystore=null; keystore-password=null; ssl-protocol=TLS; ssl-alg=SunX509; store-type=JKS; 
ssl-ciphers=TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA; Port: Native Port: 9042 Thrift Port: 9160 JMX Port: 9042 Send To Daemon: *not set* Graph: File: null Revision: unknown Title: null Operation: READ TokenRange: Wrap: false Split Factor: 1 Sleeping 2s... Warming up READ with 250 iterations... Connected to cluster: falcon-test2, max pending requests per connection 128, max connections per host 8 Datatacenter: Cassandra; Host: localhost/127.0.0.1; Rack: rack1 Failed to connect over JMX; not collecting these stats Connected to cluster: falcon-test2, max pending requests per connection 128, max connections per host 8 Datatacenter: Cassandra; Host: localhost/127.0.0.1; Rack: rack1 com.datastax.driver.core.exceptions.InvalidQueryException: Keyspace 'keyspace1' does not exist Connected to cluster: falcon-test2, max pending requests per connection 128, max connections per host 8 Datatacenter: Cassandra; Host: localhost/127.0.0.1; Rack: rack1 com.datastax.driver.core.exceptions.InvalidQueryException: Keyspace 'keyspace1' does not exist {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-12915) SASI: Index intersection can be very inefficient
[ https://issues.apache.org/jira/browse/CASSANDRA-12915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15684006#comment-15684006 ] Corentin Chary commented on CASSANDRA-12915: Attempt at making it better: https://github.com/iksaif/biggraphite/commit/bc54f7ae176e9314190c49d1780fb87e26b62728 > SASI: Index intersection can be very inefficient > > > Key: CASSANDRA-12915 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12915 > Project: Cassandra > Issue Type: Bug > Components: sasi >Reporter: Corentin Chary > > It looks like RangeIntersectionIterator.java can be pretty inefficient in > some cases. Let's take the following query: > SELECT data FROM table WHERE index1 = 'foo' AND index2 = 'bar'; > In this case: > * index1 = 'foo' will match 2 items > * index2 = 'bar' will match ~300k items > On my setup, the query will take ~1 sec, most of the time being spent in > disk.TokenTree.getTokenAt(). > If I patch RangeIntersectionIterator so that it doesn't try to do the > intersection (and effectively only use 'index1') the query will run in a few > tenths of a millisecond. > I see multiple solutions for that: > * Add a static threshold to avoid the use of the index for the intersection > when we know it will be slow. Probably when the range size factor is very > small and the range size is big. > * CASSANDRA-10765 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
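The static-threshold idea suggested in the report can be sketched as: when the smallest matching range is tiny relative to the largest, drive the query from the small range and probe the others for membership, instead of running the full merge intersection. A hypothetical Python model, with plain lists of tokens standing in for SASI token ranges:

```python
def intersect(ranges, merge_threshold=0.01):
    """Intersect several token ranges. Below the threshold (e.g. 2 hits vs
    ~300k), probing the tiny range against the others avoids walking the
    large range entirely; otherwise fall back to a set-merge intersection.
    Sizes and API are illustrative, not SASI's RangeIntersectionIterator."""
    ranges = sorted(ranges, key=len)
    smallest, rest = set(ranges[0]), ranges[1:]
    if not rest:
        return sorted(smallest)
    if len(smallest) / len(ranges[-1]) < merge_threshold:
        # Drive from the small range: O(small * lookups) instead of O(large).
        others = [set(r) for r in rest]
        return sorted(t for t in smallest if all(t in o for o in others))
    out = smallest
    for r in rest:
        out &= set(r)
    return sorted(out)
```

With `index1` matching 2 tokens and `index2` matching 300k, the ratio is ~0.000007, so the probe path fires and only the 2-token range is walked.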
[cassandra] Git Push Summary
Repository: cassandra Updated Branches: refs/heads/cassandra-3.11 [created] 075539a5b
[cassandra] Git Push Summary
Repository: cassandra Updated Branches: refs/heads/cassandra-3.11 [deleted] 075539a5b
[cassandra] Git Push Summary
Repository: cassandra Updated Branches: refs/heads/cassandra-3.11 [created] 075539a5b
[jira] [Commented] (CASSANDRA-12868) Reject default_time_to_live option when creating or altering MVs
[ https://issues.apache.org/jira/browse/CASSANDRA-12868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15683898#comment-15683898 ] Sylvain Lebresne commented on CASSANDRA-12868: -- Thanks for the patch, which lgtm, though there is no reason not to commit this to 3.0 onwards so took the liberty to rebase and run the test below (slightly amended the error message fyi): | [12868-3.0|https://github.com/pcmanus/cassandra/commits/12868-3.0] | [utests|http://cassci.datastax.com/job/pcmanus-12868-3.0-testall] | [dtests|http://cassci.datastax.com/job/pcmanus-12868-3.0-dtest] | | [12868-3.X|https://github.com/pcmanus/cassandra/commits/12868-3.X] | [utests|http://cassci.datastax.com/job/pcmanus-12868-3.X-testall] | [dtests|http://cassci.datastax.com/job/pcmanus-12868-3.X-dtest] | I'll commit once the test result are in (unless they show a problem obviously). > Reject default_time_to_live option when creating or altering MVs > > > Key: CASSANDRA-12868 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12868 > Project: Cassandra > Issue Type: Bug >Reporter: Srinivasarao Daruna >Assignee: Sundar Srinivasan >Priority: Minor > Labels: lhf > Attachments: 12868-trunk.txt > > > Hi, > By default, materialized views are using the TTL of primary table, > irrespective of the configured value provided in materialized view creation. > For eg: > table: > CREATE TABLE test2(id text, date text, col1 text,col2 text, PRIMARY > KEY(id,date)) WITH default_time_to_live = 60 AND CLUSTERING ORDER BY (date > DESC); > CREATE MATERIALIZED VIEW test3_view AS > SELECT id, date, col1 > FROM test3 > WHERE id IS NOT NULL AND date IS NOT NULL > PRIMARY KEY(id,date) WITH default_time_to_live = 30; > The queries are accepted in CQL. As per the detail, it should use 30 seconds > for Materialized view and 60 seconds for parent table. 
> But, it is always 60 seconds (as the parent table) > case 1: > parent table and materialized view with different TTL > MV will always have the TTL of parent. > case 2: > Parent table without TTL but materialized view with TTL > MV does not have the TTL even though the configuration has been accepted in > the table creation. > Expected: > Either the TTL configuration should not be accepted in the materialized view > creation, if it is of no value. > Or > TTL has to be applied differently for both Materialized View and Table if the > configuration is added. > If no configuration, TTL has to be taken from the parent table. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
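The fix direction agreed above — reject the option instead of silently ignoring it — amounts to a validation step when view options are parsed. A sketch of that check; the function name and option shape are hypothetical, not Cassandra's actual schema code:

```python
def validate_view_options(options):
    """Reject default_time_to_live on a materialized view rather than
    accepting and ignoring it; the view inherits expiration from rows of
    its base table, so a view-level TTL would never take effect."""
    ttl = options.get("default_time_to_live", 0)
    if ttl != 0:
        raise ValueError(
            "Cannot set default_time_to_live for a materialized view; "
            "data expiration is determined by the base table")
    return options
```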
[jira] [Updated] (CASSANDRA-12868) Reject default_time_to_live option when creating or altering MVs
[ https://issues.apache.org/jira/browse/CASSANDRA-12868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sylvain Lebresne updated CASSANDRA-12868: - Summary: Reject default_time_to_live option when creating or altering MVs (was: MV creation allows a 'default_time_to_live' option, but ignores it) > Reject default_time_to_live option when creating or altering MVs > > > Key: CASSANDRA-12868 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12868 > Project: Cassandra > Issue Type: Bug >Reporter: Srinivasarao Daruna >Priority: Minor > Labels: lhf > Attachments: 12868-trunk.txt > > > Hi, > By default, materialized views are using the TTL of primary table, > irrespective of the configured value provided in materialized view creation. > For eg: > table: > CREATE TABLE test2(id text, date text, col1 text,col2 text, PRIMARY > KEY(id,date)) WITH default_time_to_live = 60 AND CLUSTERING ORDER BY (date > DESC); > CREATE MATERIALIZED VIEW test3_view AS > SELECT id, date, col1 > FROM test3 > WHERE id IS NOT NULL AND date IS NOT NULL > PRIMARY KEY(id,date) WITH default_time_to_live = 30; > The queries are accepted in CQL. As per the detail, it should use 30 seconds > for Materialized view and 60 seconds for parent table. > But, it is always 60 seconds (as the parent table) > case 1: > parent table and materialized view with different TTL > MV will always have the TTL of parent. > case 2: > Parent table without TTL but materialized view with TTL > MV does not have the TTL even though the configuration has been accepted in > the table creation. > Expected: > Either the TTL configuration should not be accepted in the materialized view > creation, if it is of no value. > Or > TTL has to be applied differently for both Materialized View and Table if the > configuration is added. > If no configuration, TTL has to be taken from the parent table. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-12868) Reject default_time_to_live option when creating or altering MVs
[ https://issues.apache.org/jira/browse/CASSANDRA-12868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sylvain Lebresne updated CASSANDRA-12868: - Assignee: Sundar Srinivasan > Reject default_time_to_live option when creating or altering MVs > > > Key: CASSANDRA-12868 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12868 > Project: Cassandra > Issue Type: Bug > Reporter: Srinivasarao Daruna > Assignee: Sundar Srinivasan > Priority: Minor > Labels: lhf > Attachments: 12868-trunk.txt > > > Hi, > By default, materialized views use the TTL of the primary table, irrespective of the value configured at materialized view creation. > For example: > table: > CREATE TABLE test2(id text, date text, col1 text, col2 text, PRIMARY KEY(id, date)) WITH default_time_to_live = 60 AND CLUSTERING ORDER BY (date DESC); > CREATE MATERIALIZED VIEW test3_view AS > SELECT id, date, col1 > FROM test3 > WHERE id IS NOT NULL AND date IS NOT NULL > PRIMARY KEY(id, date) WITH default_time_to_live = 30; > Both statements are accepted by CQL. As specified, the materialized view should use a TTL of 30 seconds and the parent table 60 seconds, but it is always 60 seconds (the parent table's value). > Case 1: parent table and materialized view with different TTLs: the MV will always have the TTL of the parent. > Case 2: parent table without a TTL but materialized view with a TTL: the MV does not get the TTL even though the option was accepted at creation. > Expected: either the TTL option should be rejected at materialized view creation if it has no effect, or the TTL has to be applied separately to the materialized view and the table when configured; with no configuration, the TTL should be taken from the parent table. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-12768) CQL often queries static columns unnecessarily
[ https://issues.apache.org/jira/browse/CASSANDRA-12768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15683815#comment-15683815 ] Sylvain Lebresne commented on CASSANDRA-12768: -- I rebased the branches (no change), but consider this a gentle ping for review, as this kind of blocks CASSANDRA-12694 right now (we could do without it, but it's better with it, and it's easy enough, I feel). > CQL often queries static columns unnecessarily > -- > > Key: CASSANDRA-12768 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12768 > Project: Cassandra > Issue Type: Bug > Reporter: Sylvain Lebresne > Assignee: Sylvain Lebresne > Fix For: 3.0.x, 3.x > > > While looking at CASSANDRA-12694 (which isn't directly related, but some of the results in this ticket are explained by this), I realized that CQL was always querying static columns even in cases where this is unnecessary. > More precisely, for reasons long described elsewhere, we have to query all the columns for a row (we have optimizations, see CASSANDRA-10657, but they don't change that general fact) to be able to distinguish the case where a row doesn't exist from the case where it exists but has no values for the columns selected by the query. *However*, this really only extends to "regular" columns (static columns play no role in deciding whether a particular row exists or not), yet the implementation in 3.x, which is in {{ColumnFilter}}, still always queries all static columns. > We shouldn't do that, and it's arguably a performance regression from 2.x, which is why I'm tentatively marking this a bug and targeting the 3.0 line. It's a tiny bit scary for 3.0 though, so this is really more a request for other opinions, and I'd be happy to stick to 3.x. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-6538) Provide a read-time CQL function to display the data size of columns and rows
[ https://issues.apache.org/jira/browse/CASSANDRA-6538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15683790#comment-15683790 ] Sylvain Lebresne commented on CASSANDRA-6538: - Feeling bad about having dropped the review on what's basically an OK patch, I took it on myself to rebase it below: | [6538-3.X|https://github.com/pcmanus/cassandra/commits/6538-3.X] | [utests|http://cassci.datastax.com/job/pcmanus-6538-3.X-testall] | [dtests|http://cassci.datastax.com/job/pcmanus-6538-3.X-dtest] | I added a simple unit test, and I put the function in {{BytesConversionFcts}}, which I renamed to {{BytesFcts}}, to keep function declarations somewhat grouped. [~snazy], since you commented somewhat recently, would you mind having a quick double-check of that rebase? > Provide a read-time CQL function to display the data size of columns and rows > - > > Key: CASSANDRA-6538 > URL: https://issues.apache.org/jira/browse/CASSANDRA-6538 > Project: Cassandra > Issue Type: Improvement > Reporter: Johnny Miller > Priority: Minor > Labels: cql > Attachments: 6538-v2.patch, 6538.patch, CodeSnippet.txt, sizeFzt.PNG > > > It would be extremely useful to be able to work out the size of rows and columns via CQL. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-12934) AnticompactionRequestSerializer serializedSize is incorrect
[ https://issues.apache.org/jira/browse/CASSANDRA-12934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jason Brown updated CASSANDRA-12934: Resolution: Fixed Fix Version/s: (was: 3.0.x) 4.0 3.0.11 Status: Resolved (was: Patch Available) Addressed [~slebresne]'s nit and committed as 3fd4c68803ddf0d20d23b37d4b936258f8420209 > AnticompactionRequestSerializer serializedSize is incorrect > --- > > Key: CASSANDRA-12934 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12934 > Project: Cassandra > Issue Type: Bug > Components: Core > Reporter: Jason Brown > Assignee: Jason Brown > Fix For: 3.0.11, 4.0, 3.x > > > {{AnticompactionRequestSerializer#serializedSize}} does not add the size of the {{#successfulRanges}} list to the total byte count. > This incorrectly calculated size shouldn't affect anything in the current release AFAICT, but it is a blocker for CASSANDRA-8457. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[3/6] cassandra git commit: AnticompactionRequestSerializer serializedSize is incorrect
AnticompactionRequestSerializer serializedSize is incorrect patch by jasobrown; reviewed by pcmanus for CASSANDRA-12934 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3fd4c688 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3fd4c688 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3fd4c688 Branch: refs/heads/trunk Commit: 3fd4c68803ddf0d20d23b37d4b936258f8420209 Parents: 59b40b3 Author: Jason Brown Authored: Fri Nov 18 18:13:45 2016 -0800 Committer: Jason Brown Committed: Mon Nov 21 06:37:56 2016 -0800 -- CHANGES.txt | 1 + .../repair/messages/AnticompactionRequest.java | 19 ++ .../repair/messages/CleanupMessage.java | 17 ++ .../repair/messages/PrepareMessage.java | 22 +++ .../repair/messages/SnapshotMessage.java| 16 ++ .../cassandra/repair/messages/SyncComplete.java | 19 ++ .../cassandra/repair/messages/SyncRequest.java | 21 +++ .../repair/messages/ValidationComplete.java | 18 ++ .../RepairMessageSerializationsTest.java| 187 +++ 9 files changed, 320 insertions(+) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/3fd4c688/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index bcd0b5c..e613d7c 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -1,4 +1,5 @@ 3.0.11 + * AnticompactionRequestSerializer serializedSize is incorrect (CASSANDRA-12934) * Prevent reloading of logback.xml from UDF sandbox (CASSANDRA-12535) Merged from 2.2: * Avoid blocking gossip during pending range calculation (CASSANDRA-12281) http://git-wip-us.apache.org/repos/asf/cassandra/blob/3fd4c688/src/java/org/apache/cassandra/repair/messages/AnticompactionRequest.java -- diff --git a/src/java/org/apache/cassandra/repair/messages/AnticompactionRequest.java b/src/java/org/apache/cassandra/repair/messages/AnticompactionRequest.java index 3e47374..a29cc87 100644 --- a/src/java/org/apache/cassandra/repair/messages/AnticompactionRequest.java +++ 
b/src/java/org/apache/cassandra/repair/messages/AnticompactionRequest.java @@ -21,6 +21,7 @@ import java.io.IOException; import java.util.ArrayList; import java.util.Collection; import java.util.List; +import java.util.Objects; import java.util.UUID; import org.apache.cassandra.dht.Range; @@ -46,6 +47,23 @@ public class AnticompactionRequest extends RepairMessage this.successfulRanges = ranges; } +@Override +public boolean equals(Object o) +{ +if (!(o instanceof AnticompactionRequest)) +return false; +AnticompactionRequest other = (AnticompactionRequest)o; +return messageType == other.messageType && + parentRepairSession.equals(other.parentRepairSession) && + successfulRanges.equals(other.successfulRanges); +} + +@Override +public int hashCode() +{ +return Objects.hash(messageType, parentRepairSession, successfulRanges); +} + public static class AnticompactionRequestSerializer implements MessageSerializer { public void serialize(AnticompactionRequest message, DataOutputPlus out, int version) throws IOException @@ -72,6 +90,7 @@ public class AnticompactionRequest extends RepairMessage public long serializedSize(AnticompactionRequest message, int version) { long size = UUIDSerializer.serializer.serializedSize(message.parentRepairSession, version); +size += Integer.BYTES; // count of items in successfulRanges for (Range r : message.successfulRanges) size += Range.tokenSerializer.serializedSize(r, version); return size; http://git-wip-us.apache.org/repos/asf/cassandra/blob/3fd4c688/src/java/org/apache/cassandra/repair/messages/CleanupMessage.java -- diff --git a/src/java/org/apache/cassandra/repair/messages/CleanupMessage.java b/src/java/org/apache/cassandra/repair/messages/CleanupMessage.java index 43a8f02..69d147a 100644 --- a/src/java/org/apache/cassandra/repair/messages/CleanupMessage.java +++ b/src/java/org/apache/cassandra/repair/messages/CleanupMessage.java @@ -18,6 +18,7 @@ package org.apache.cassandra.repair.messages; import java.io.IOException; +import 
java.util.Objects; import java.util.UUID; import org.apache.cassandra.io.util.DataInputPlus; @@ -40,6 +41,22 @@ public class CleanupMessage extends RepairMessage this.parentRepairSession = parentRepairSession; } +@Override +public boolean equals(Object o) +{ +if (!(o instanceof CleanupMessage)) +
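The one-character-class of bug fixed above ({{size += Integer.BYTES}}) comes from breaking the invariant that {{serializedSize}} must count every byte {{serialize}} actually writes, including the element-count prefix. A minimal standalone sketch of that invariant, using hypothetical classes rather than Cassandra's actual serializers:

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.util.Arrays;
import java.util.List;

// Hypothetical stand-in for a range serializer: serializedSize(msg) must
// equal the byte count serialize(msg) produces. The bug fixed in
// CASSANDRA-12934 was omitting the Integer.BYTES spent on the item count.
public class SerializedSizeDemo
{
    static void serialize(List<byte[]> ranges, DataOutputStream out) throws IOException
    {
        out.writeInt(ranges.size());            // count of items in the list
        for (byte[] r : ranges)
        {
            out.writeInt(r.length);             // per-item length prefix
            out.write(r);                       // item payload
        }
    }

    static long serializedSize(List<byte[]> ranges)
    {
        long size = Integer.BYTES;              // count of items -- the fixed line
        for (byte[] r : ranges)
            size += Integer.BYTES + r.length;   // mirrors serialize() exactly
        return size;
    }

    public static void main(String[] args) throws IOException
    {
        List<byte[]> ranges = Arrays.asList(new byte[]{1, 2}, new byte[]{3, 4, 5});
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        serialize(ranges, new DataOutputStream(buf));
        if (buf.size() != serializedSize(ranges))
            throw new AssertionError("serializedSize disagrees with serialize");
        System.out.println("sizes match: " + buf.size()); // prints "sizes match: 17"
    }
}
```

A round-trip test of each message type, like the {{RepairMessageSerializationsTest}} added in the patch, catches exactly this kind of mismatch.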
[1/6] cassandra git commit: AnticompactionRequestSerializer serializedSize is incorrect
Repository: cassandra Updated Branches: refs/heads/cassandra-3.0 59b40b317 -> 3fd4c6880 refs/heads/cassandra-3.X 96d67b109 -> 075539a5b refs/heads/trunk f1c3aac76 -> 58cf4c907 AnticompactionRequestSerializer serializedSize is incorrect patch by jasobrown; reviewed by pcmanus for CASSANDRA-12934 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3fd4c688 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3fd4c688 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3fd4c688 Branch: refs/heads/cassandra-3.0 Commit: 3fd4c68803ddf0d20d23b37d4b936258f8420209 Parents: 59b40b3 Author: Jason Brown Authored: Fri Nov 18 18:13:45 2016 -0800 Committer: Jason Brown Committed: Mon Nov 21 06:37:56 2016 -0800 -- CHANGES.txt | 1 + .../repair/messages/AnticompactionRequest.java | 19 ++ .../repair/messages/CleanupMessage.java | 17 ++ .../repair/messages/PrepareMessage.java | 22 +++ .../repair/messages/SnapshotMessage.java| 16 ++ .../cassandra/repair/messages/SyncComplete.java | 19 ++ .../cassandra/repair/messages/SyncRequest.java | 21 +++ .../repair/messages/ValidationComplete.java | 18 ++ .../RepairMessageSerializationsTest.java| 187 +++ 9 files changed, 320 insertions(+) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/3fd4c688/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index bcd0b5c..e613d7c 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -1,4 +1,5 @@ 3.0.11 + * AnticompactionRequestSerializer serializedSize is incorrect (CASSANDRA-12934) * Prevent reloading of logback.xml from UDF sandbox (CASSANDRA-12535) Merged from 2.2: * Avoid blocking gossip during pending range calculation (CASSANDRA-12281) http://git-wip-us.apache.org/repos/asf/cassandra/blob/3fd4c688/src/java/org/apache/cassandra/repair/messages/AnticompactionRequest.java -- diff --git a/src/java/org/apache/cassandra/repair/messages/AnticompactionRequest.java 
b/src/java/org/apache/cassandra/repair/messages/AnticompactionRequest.java index 3e47374..a29cc87 100644 --- a/src/java/org/apache/cassandra/repair/messages/AnticompactionRequest.java +++ b/src/java/org/apache/cassandra/repair/messages/AnticompactionRequest.java @@ -21,6 +21,7 @@ import java.io.IOException; import java.util.ArrayList; import java.util.Collection; import java.util.List; +import java.util.Objects; import java.util.UUID; import org.apache.cassandra.dht.Range; @@ -46,6 +47,23 @@ public class AnticompactionRequest extends RepairMessage this.successfulRanges = ranges; } +@Override +public boolean equals(Object o) +{ +if (!(o instanceof AnticompactionRequest)) +return false; +AnticompactionRequest other = (AnticompactionRequest)o; +return messageType == other.messageType && + parentRepairSession.equals(other.parentRepairSession) && + successfulRanges.equals(other.successfulRanges); +} + +@Override +public int hashCode() +{ +return Objects.hash(messageType, parentRepairSession, successfulRanges); +} + public static class AnticompactionRequestSerializer implements MessageSerializer { public void serialize(AnticompactionRequest message, DataOutputPlus out, int version) throws IOException @@ -72,6 +90,7 @@ public class AnticompactionRequest extends RepairMessage public long serializedSize(AnticompactionRequest message, int version) { long size = UUIDSerializer.serializer.serializedSize(message.parentRepairSession, version); +size += Integer.BYTES; // count of items in successfulRanges for (Range r : message.successfulRanges) size += Range.tokenSerializer.serializedSize(r, version); return size; http://git-wip-us.apache.org/repos/asf/cassandra/blob/3fd4c688/src/java/org/apache/cassandra/repair/messages/CleanupMessage.java -- diff --git a/src/java/org/apache/cassandra/repair/messages/CleanupMessage.java b/src/java/org/apache/cassandra/repair/messages/CleanupMessage.java index 43a8f02..69d147a 100644 --- 
a/src/java/org/apache/cassandra/repair/messages/CleanupMessage.java +++ b/src/java/org/apache/cassandra/repair/messages/CleanupMessage.java @@ -18,6 +18,7 @@ package org.apache.cassandra.repair.messages; import java.io.IOException; +import java.util.Objects; import java.util.UUID; import org.apache.cassandra.io.util.DataInputPlus; @@ -40,6 +41,22 @@ public class CleanupMessage extends Repai
[2/6] cassandra git commit: AnticompactionRequestSerializer serializedSize is incorrect
AnticompactionRequestSerializer serializedSize is incorrect patch by jasobrown; reviewed by pcmanus for CASSANDRA-12934 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3fd4c688 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3fd4c688 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3fd4c688 Branch: refs/heads/cassandra-3.X Commit: 3fd4c68803ddf0d20d23b37d4b936258f8420209 Parents: 59b40b3 Author: Jason Brown Authored: Fri Nov 18 18:13:45 2016 -0800 Committer: Jason Brown Committed: Mon Nov 21 06:37:56 2016 -0800 -- CHANGES.txt | 1 + .../repair/messages/AnticompactionRequest.java | 19 ++ .../repair/messages/CleanupMessage.java | 17 ++ .../repair/messages/PrepareMessage.java | 22 +++ .../repair/messages/SnapshotMessage.java| 16 ++ .../cassandra/repair/messages/SyncComplete.java | 19 ++ .../cassandra/repair/messages/SyncRequest.java | 21 +++ .../repair/messages/ValidationComplete.java | 18 ++ .../RepairMessageSerializationsTest.java| 187 +++ 9 files changed, 320 insertions(+) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/3fd4c688/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index bcd0b5c..e613d7c 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -1,4 +1,5 @@ 3.0.11 + * AnticompactionRequestSerializer serializedSize is incorrect (CASSANDRA-12934) * Prevent reloading of logback.xml from UDF sandbox (CASSANDRA-12535) Merged from 2.2: * Avoid blocking gossip during pending range calculation (CASSANDRA-12281) http://git-wip-us.apache.org/repos/asf/cassandra/blob/3fd4c688/src/java/org/apache/cassandra/repair/messages/AnticompactionRequest.java -- diff --git a/src/java/org/apache/cassandra/repair/messages/AnticompactionRequest.java b/src/java/org/apache/cassandra/repair/messages/AnticompactionRequest.java index 3e47374..a29cc87 100644 --- a/src/java/org/apache/cassandra/repair/messages/AnticompactionRequest.java +++ 
b/src/java/org/apache/cassandra/repair/messages/AnticompactionRequest.java @@ -21,6 +21,7 @@ import java.io.IOException; import java.util.ArrayList; import java.util.Collection; import java.util.List; +import java.util.Objects; import java.util.UUID; import org.apache.cassandra.dht.Range; @@ -46,6 +47,23 @@ public class AnticompactionRequest extends RepairMessage this.successfulRanges = ranges; } +@Override +public boolean equals(Object o) +{ +if (!(o instanceof AnticompactionRequest)) +return false; +AnticompactionRequest other = (AnticompactionRequest)o; +return messageType == other.messageType && + parentRepairSession.equals(other.parentRepairSession) && + successfulRanges.equals(other.successfulRanges); +} + +@Override +public int hashCode() +{ +return Objects.hash(messageType, parentRepairSession, successfulRanges); +} + public static class AnticompactionRequestSerializer implements MessageSerializer { public void serialize(AnticompactionRequest message, DataOutputPlus out, int version) throws IOException @@ -72,6 +90,7 @@ public class AnticompactionRequest extends RepairMessage public long serializedSize(AnticompactionRequest message, int version) { long size = UUIDSerializer.serializer.serializedSize(message.parentRepairSession, version); +size += Integer.BYTES; // count of items in successfulRanges for (Range r : message.successfulRanges) size += Range.tokenSerializer.serializedSize(r, version); return size; http://git-wip-us.apache.org/repos/asf/cassandra/blob/3fd4c688/src/java/org/apache/cassandra/repair/messages/CleanupMessage.java -- diff --git a/src/java/org/apache/cassandra/repair/messages/CleanupMessage.java b/src/java/org/apache/cassandra/repair/messages/CleanupMessage.java index 43a8f02..69d147a 100644 --- a/src/java/org/apache/cassandra/repair/messages/CleanupMessage.java +++ b/src/java/org/apache/cassandra/repair/messages/CleanupMessage.java @@ -18,6 +18,7 @@ package org.apache.cassandra.repair.messages; import java.io.IOException; +import 
java.util.Objects; import java.util.UUID; import org.apache.cassandra.io.util.DataInputPlus; @@ -40,6 +41,22 @@ public class CleanupMessage extends RepairMessage this.parentRepairSession = parentRepairSession; } +@Override +public boolean equals(Object o) +{ +if (!(o instanceof CleanupMessage)) +
[4/6] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.X
Merge branch 'cassandra-3.0' into cassandra-3.X Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/075539a5 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/075539a5 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/075539a5 Branch: refs/heads/trunk Commit: 075539a5b0fee385457b8b580650e1557297e0d0 Parents: 96d67b1 3fd4c68 Author: Jason Brown Authored: Mon Nov 21 06:45:41 2016 -0800 Committer: Jason Brown Committed: Mon Nov 21 06:47:17 2016 -0800 -- CHANGES.txt | 3 + .../repair/messages/AnticompactionRequest.java | 19 ++ .../repair/messages/CleanupMessage.java | 17 ++ .../repair/messages/PrepareMessage.java | 22 +++ .../repair/messages/SnapshotMessage.java| 16 ++ .../cassandra/repair/messages/SyncComplete.java | 19 ++ .../cassandra/repair/messages/SyncRequest.java | 21 +++ .../repair/messages/ValidationComplete.java | 18 ++ .../RepairMessageSerializationsTest.java| 188 +++ 9 files changed, 323 insertions(+) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/075539a5/CHANGES.txt -- diff --cc CHANGES.txt index 2826011,e613d7c..24641a6 --- a/CHANGES.txt +++ b/CHANGES.txt @@@ -1,115 -1,20 +1,118 @@@ -3.0.11 ++3.11 + * AnticompactionRequestSerializer serializedSize is incorrect (CASSANDRA-12934) - * Prevent reloading of logback.xml from UDF sandbox (CASSANDRA-12535) -Merged from 2.2: - * Avoid blocking gossip during pending range calculation (CASSANDRA-12281) + - -3.0.10 - * Disallow offheap_buffers memtable allocation (CASSANDRA-11039) +3.10 + * Don't shut down socket input/output on StreamSession (CASSANDRA-12903) + * Fix Murmur3PartitionerTest (CASSANDRA-12858) + * Move cqlsh syntax rules into separate module and allow easier customization (CASSANDRA-12897) * Fix CommitLogSegmentManagerTest (CASSANDRA-12283) + * Fix cassandra-stress truncate option (CASSANDRA-12695) + * Fix crossNode value when receiving messages (CASSANDRA-12791) + * Don't load MX4J beans twice 
(CASSANDRA-12869) + * Extend native protocol request flags, add versions to SUPPORTED, and introduce ProtocolVersion enum (CASSANDRA-12838) + * Set JOINING mode when running pre-join tasks (CASSANDRA-12836) + * remove net.mintern.primitive library due to license issue (CASSANDRA-12845) + * Properly format IPv6 addresses when logging JMX service URL (CASSANDRA-12454) + * Optimize the vnode allocation for single replica per DC (CASSANDRA-12777) + * Use non-token restrictions for bounds when token restrictions are overridden (CASSANDRA-12419) + * Fix CQLSH auto completion for PER PARTITION LIMIT (CASSANDRA-12803) + * Use different build directories for Eclipse and Ant (CASSANDRA-12466) + * Avoid potential AttributeError in cqlsh due to no table metadata (CASSANDRA-12815) + * Fix RandomReplicationAwareTokenAllocatorTest.testExistingCluster (CASSANDRA-12812) + * Upgrade commons-codec to 1.9 (CASSANDRA-12790) + * Make the fanout size for LeveledCompactionStrategy to be configurable (CASSANDRA-11550) + * Add duration data type (CASSANDRA-11873) + * Fix timeout in ReplicationAwareTokenAllocatorTest (CASSANDRA-12784) + * Improve sum aggregate functions (CASSANDRA-12417) + * Make cassandra.yaml docs for batch_size_*_threshold_in_kb reflect changes in CASSANDRA-10876 (CASSANDRA-12761) + * cqlsh fails to format collections when using aliases (CASSANDRA-11534) + * Check for hash conflicts in prepared statements (CASSANDRA-12733) + * Exit query parsing upon first error (CASSANDRA-12598) + * Fix cassandra-stress to use single seed in UUID generation (CASSANDRA-12729) + * CQLSSTableWriter does not allow Update statement (CASSANDRA-12450) + * Config class uses boxed types but DD exposes primitive types (CASSANDRA-12199) + * Add pre- and post-shutdown hooks to Storage Service (CASSANDRA-12461) + * Add hint delivery metrics (CASSANDRA-12693) + * Remove IndexInfo cache from FileIndexInfoRetriever (CASSANDRA-12731) + * ColumnIndex does not reuse buffer (CASSANDRA-12502) + * cdc column 
addition still breaks schema migration tasks (CASSANDRA-12697) + * Upgrade metrics-reporter dependencies (CASSANDRA-12089) + * Tune compaction thread count via nodetool (CASSANDRA-12248) + * Add +=/-= shortcut syntax for update queries (CASSANDRA-12232) + * Include repair session IDs in repair start message (CASSANDRA-12532) + * Add a blocking task to Index, run before joining the ring (CASSANDRA-12039) + * Fix NPE when using CQLSSTableWriter (CASSANDRA-12667) + * Support optional backpressure strategies at the coordinator (CASSANDRA-9318) + * Make randompartitioner work with new vnode allocation (CASSANDRA-126
[6/6] cassandra git commit: Merge branch 'cassandra-3.X' into trunk
Merge branch 'cassandra-3.X' into trunk Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/58cf4c90 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/58cf4c90 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/58cf4c90 Branch: refs/heads/trunk Commit: 58cf4c90742a628853132aacf2333ccc94eb480e Parents: f1c3aac 075539a Author: Jason Brown Authored: Mon Nov 21 06:47:44 2016 -0800 Committer: Jason Brown Committed: Mon Nov 21 06:48:52 2016 -0800 -- CHANGES.txt | 1 + .../repair/messages/AnticompactionRequest.java | 19 ++ .../repair/messages/CleanupMessage.java | 17 ++ .../repair/messages/PrepareMessage.java | 22 +++ .../repair/messages/SnapshotMessage.java| 16 ++ .../cassandra/repair/messages/SyncComplete.java | 19 ++ .../cassandra/repair/messages/SyncRequest.java | 21 +++ .../repair/messages/ValidationComplete.java | 18 ++ .../RepairMessageSerializationsTest.java| 188 +++ 9 files changed, 321 insertions(+) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/58cf4c90/CHANGES.txt -- diff --cc CHANGES.txt index 9ad67d1,24641a6..fa9233a --- a/CHANGES.txt +++ b/CHANGES.txt @@@ -109,8 -104,8 +109,9 @@@ * Remove pre-startup check for open JMX port (CASSANDRA-12074) * Remove compaction Severity from DynamicEndpointSnitch (CASSANDRA-11738) * Restore resumable hints delivery (CASSANDRA-11960) - * Properly report LWT contention (CASSANDRA-12626) + * Properly record CAS contention (CASSANDRA-12626) Merged from 3.0: ++ * AnticompactionRequestSerializer serializedSize is incorrect (CASSANDRA-12934) * Prevent reloading of logback.xml from UDF sandbox (CASSANDRA-12535) * Pass root cause to CorruptBlockException when uncompression failed (CASSANDRA-12889) * Batch with multiple conditional updates for the same partition causes AssertionError (CASSANDRA-12867)
[5/6] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.X
Merge branch 'cassandra-3.0' into cassandra-3.X Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/075539a5 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/075539a5 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/075539a5 Branch: refs/heads/cassandra-3.X Commit: 075539a5b0fee385457b8b580650e1557297e0d0 Parents: 96d67b1 3fd4c68 Author: Jason Brown Authored: Mon Nov 21 06:45:41 2016 -0800 Committer: Jason Brown Committed: Mon Nov 21 06:47:17 2016 -0800 -- CHANGES.txt | 3 + .../repair/messages/AnticompactionRequest.java | 19 ++ .../repair/messages/CleanupMessage.java | 17 ++ .../repair/messages/PrepareMessage.java | 22 +++ .../repair/messages/SnapshotMessage.java| 16 ++ .../cassandra/repair/messages/SyncComplete.java | 19 ++ .../cassandra/repair/messages/SyncRequest.java | 21 +++ .../repair/messages/ValidationComplete.java | 18 ++ .../RepairMessageSerializationsTest.java| 188 +++ 9 files changed, 323 insertions(+) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/075539a5/CHANGES.txt -- diff --cc CHANGES.txt index 2826011,e613d7c..24641a6 --- a/CHANGES.txt +++ b/CHANGES.txt @@@ -1,115 -1,20 +1,118 @@@ -3.0.11 ++3.11 + * AnticompactionRequestSerializer serializedSize is incorrect (CASSANDRA-12934) - * Prevent reloading of logback.xml from UDF sandbox (CASSANDRA-12535) -Merged from 2.2: - * Avoid blocking gossip during pending range calculation (CASSANDRA-12281) + - -3.0.10 - * Disallow offheap_buffers memtable allocation (CASSANDRA-11039) +3.10 + * Don't shut down socket input/output on StreamSession (CASSANDRA-12903) + * Fix Murmur3PartitionerTest (CASSANDRA-12858) + * Move cqlsh syntax rules into separate module and allow easier customization (CASSANDRA-12897) * Fix CommitLogSegmentManagerTest (CASSANDRA-12283) + * Fix cassandra-stress truncate option (CASSANDRA-12695) + * Fix crossNode value when receiving messages (CASSANDRA-12791) + * Don't load MX4J beans 
twice (CASSANDRA-12869) + * Extend native protocol request flags, add versions to SUPPORTED, and introduce ProtocolVersion enum (CASSANDRA-12838) + * Set JOINING mode when running pre-join tasks (CASSANDRA-12836) + * remove net.mintern.primitive library due to license issue (CASSANDRA-12845) + * Properly format IPv6 addresses when logging JMX service URL (CASSANDRA-12454) + * Optimize the vnode allocation for single replica per DC (CASSANDRA-12777) + * Use non-token restrictions for bounds when token restrictions are overridden (CASSANDRA-12419) + * Fix CQLSH auto completion for PER PARTITION LIMIT (CASSANDRA-12803) + * Use different build directories for Eclipse and Ant (CASSANDRA-12466) + * Avoid potential AttributeError in cqlsh due to no table metadata (CASSANDRA-12815) + * Fix RandomReplicationAwareTokenAllocatorTest.testExistingCluster (CASSANDRA-12812) + * Upgrade commons-codec to 1.9 (CASSANDRA-12790) + * Make the fanout size for LeveledCompactionStrategy to be configurable (CASSANDRA-11550) + * Add duration data type (CASSANDRA-11873) + * Fix timeout in ReplicationAwareTokenAllocatorTest (CASSANDRA-12784) + * Improve sum aggregate functions (CASSANDRA-12417) + * Make cassandra.yaml docs for batch_size_*_threshold_in_kb reflect changes in CASSANDRA-10876 (CASSANDRA-12761) + * cqlsh fails to format collections when using aliases (CASSANDRA-11534) + * Check for hash conflicts in prepared statements (CASSANDRA-12733) + * Exit query parsing upon first error (CASSANDRA-12598) + * Fix cassandra-stress to use single seed in UUID generation (CASSANDRA-12729) + * CQLSSTableWriter does not allow Update statement (CASSANDRA-12450) + * Config class uses boxed types but DD exposes primitive types (CASSANDRA-12199) + * Add pre- and post-shutdown hooks to Storage Service (CASSANDRA-12461) + * Add hint delivery metrics (CASSANDRA-12693) + * Remove IndexInfo cache from FileIndexInfoRetriever (CASSANDRA-12731) + * ColumnIndex does not reuse buffer (CASSANDRA-12502) + * cdc 
column addition still breaks schema migration tasks (CASSANDRA-12697) + * Upgrade metrics-reporter dependencies (CASSANDRA-12089) + * Tune compaction thread count via nodetool (CASSANDRA-12248) + * Add +=/-= shortcut syntax for update queries (CASSANDRA-12232) + * Include repair session IDs in repair start message (CASSANDRA-12532) + * Add a blocking task to Index, run before joining the ring (CASSANDRA-12039) + * Fix NPE when using CQLSSTableWriter (CASSANDRA-12667) + * Support optional backpressure strategies at the coordinator (CASSANDRA-9318) + * Make randompartitioner work with new vnode allocation (CASSA
[jira] [Updated] (CASSANDRA-10358) Allow CQLSSTableWriter.Builder to use custom AbstractSSTableSimpleWriter
[ https://issues.apache.org/jira/browse/CASSANDRA-10358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sylvain Lebresne updated CASSANDRA-10358: - Resolution: Not A Problem Status: Resolved (was: Patch Available) I have to apologize for letting this drop completely. Looking at this again, I admit I'm not in love with the {{SSTableWriterCreationStrategy}}, because it somewhat exposes the {{SSTableWriter}} API, which is fairly low-level and internal, while {{CQLSSTableWriter}} is meant to be a high-level user API. In practice, the {{SSTableWriter}} class changes often enough that any code relying on it is going to be broken all the time. Plus, that feels like overkill for the needs expressed: * controlling the name could likely be dealt with in a simpler way, or really externally by some simple renaming. * controlling the level actually doesn't feel like something that should be dealt with at the {{CQLSSTableWriter}} level: it's a standalone tool, but proper leveling depends on the concrete set of sstables on a node, and messing with the level is dangerous. If there is a real need here, it would make more sense to me to provide a (hopefully safer) tool to change the level of an sstable. Now, it's been a year since the last comment, so I'd assume you found another way to deal with this, and no one else really came up with a similar need. So I'm going to close this, largely for the lack of activity and because, as said above, I'm not sure this is the right approach and it can largely be worked around externally. Feel free to re-open if you strongly disagree, or if someone else has additional motivation for this.
> Allow CQLSSTableWriter.Builder to use custom AbstractSSTableSimpleWriter > - > > Key: CASSANDRA-10358 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10358 > Project: Cassandra > Issue Type: Improvement >Reporter: Andre Turgeon >Priority: Minor > Attachments: SSTableWriterCreationStrategy.patch, patch.txt > > > I've created a patch for your consideration. > This change to CQLSSTableWriter allows for a custom > AbstractSSTableSimpleWriter to be specified. > I needed this for a bulkload process I wrote. I believe the change would be > beneficial for other people as well. > Below are the reasons I needed a custom implementation of > AbstractSSTableSimpleWriter: > 1) The available implementations of AbstractSSTableSimpleWriter do not > provide a way to specify the filename (or rather revision) of the sstable. I > needed to control the name because my bulkload process write sstables in > parallel (on multiple machines) and I wish to avoid name collisions. > 2) I discovered a problem with SSTableSimpleUnsortedWriter where it creates > invalid level-compaction-style sstables; It allows a partition to span 2 > sstables which violates the "no overlap of token ranges" constraint of level > compaction. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-12934) AnticompactionRequestSerializer serializedSize is incorrect
[ https://issues.apache.org/jira/browse/CASSANDRA-12934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15683619#comment-15683619 ] Sylvain Lebresne commented on CASSANDRA-12934: -- +1 (for the record, it does indeed not seem to affect anything currently, because the bug only changes the computed message payload size, and said size is only used for serializers that extend {{MessagingService.CallbackDeterminedSerializer}}, which is not the case here). Nit: in the {{equals}} methods, there is no need to test for {{null}} at the top of the methods, since {{null instanceof X}} never returns {{true}}. > AnticompactionRequestSerializer serializedSize is incorrect > --- > > Key: CASSANDRA-12934 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12934 > Project: Cassandra > Issue Type: Bug > Components: Core >Reporter: Jason Brown >Assignee: Jason Brown > Fix For: 3.0.x, 3.x > > > {{AnticompactionRequestSerializer#serializedSize}} does not add the size of > the {{#successfulRanges}} list to the total byte count. > This incorrectly calculated size shouldn't affect anything in the current > release AFAICT, but it is a blocker for CASSANDRA-8457. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
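The nit above relies on a guarantee of the Java language: {{instanceof}} evaluates to {{false}} whenever its left-hand operand is {{null}}, so a separate null check at the top of an {{equals}} method is redundant. A minimal self-contained sketch (the class is illustrative, not from the actual patch):

```java
// Demonstrates why "if (o == null) return false;" is redundant before an
// instanceof test: per the JLS, null instanceof X always evaluates to false.
public class NullInstanceofDemo {
    private final int id;

    public NullInstanceofDemo(int id) { this.id = id; }

    @Override
    public boolean equals(Object o) {
        // No explicit null check needed: a null argument fails instanceof.
        if (!(o instanceof NullInstanceofDemo))
            return false;
        return this.id == ((NullInstanceofDemo) o).id;
    }

    @Override
    public int hashCode() { return id; }

    public static void main(String[] args) {
        NullInstanceofDemo a = new NullInstanceofDemo(1);
        System.out.println(a.equals(null));                      // false, without any null check
        System.out.println(a.equals(new NullInstanceofDemo(1))); // true
    }
}
```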
[jira] [Commented] (CASSANDRA-12929) dtest failure in bootstrap_test.TestBootstrap.simple_bootstrap_test_small_keepalive_period
[ https://issues.apache.org/jira/browse/CASSANDRA-12929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15683418#comment-15683418 ] Paulo Motta commented on CASSANDRA-12929: - This is probably a good chance to modify this test to use byteman, so it will be faster since we will not need to load 50k rows. > dtest failure in > bootstrap_test.TestBootstrap.simple_bootstrap_test_small_keepalive_period > -- > > Key: CASSANDRA-12929 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12929 > Project: Cassandra > Issue Type: Test >Reporter: Michael Shuler >Assignee: DS Test Eng > Labels: dtest, test-failure > > example failure: > http://cassci.datastax.com/job/trunk_novnode_dtest/494/testReport/bootstrap_test/TestBootstrap/simple_bootstrap_test_small_keepalive_period > {noformat} > Error Message > Expected [['COMPLETED']] from SELECT bootstrapped FROM system.local WHERE > key='local', but got [[u'IN_PROGRESS']] > >> begin captured logging << > dtest: DEBUG: cluster ccm directory: /tmp/dtest-YmnyEI > dtest: DEBUG: Done setting configuration options: > { 'num_tokens': None, 'phi_convict_threshold': 5, 'start_rpc': 'true'} > cassandra.cluster: INFO: New Cassandra host > discovered > - >> end captured logging << - > Stacktrace > File "/usr/lib/python2.7/unittest/case.py", line 329, in run > testMethod() > File "/home/automaton/cassandra-dtest/tools/decorators.py", line 46, in > wrapped > f(obj) > File "/home/automaton/cassandra-dtest/bootstrap_test.py", line 163, in > simple_bootstrap_test_small_keepalive_period > assert_bootstrap_state(self, node2, 'COMPLETED') > File "/home/automaton/cassandra-dtest/tools/assertions.py", line 297, in > assert_bootstrap_state > assert_one(session, "SELECT bootstrapped FROM system.local WHERE > key='local'", [expected_bootstrap_state]) > File "/home/automaton/cassandra-dtest/tools/assertions.py", line 130, in > assert_one > assert list_res == [expected], "Expected {} from {}, but got > 
{}".format([expected], query, list_res) > "Expected [['COMPLETED']] from SELECT bootstrapped FROM system.local WHERE > key='local', but got [[u'IN_PROGRESS']]\n >> begin > captured logging << \ndtest: DEBUG: cluster ccm > directory: /tmp/dtest-YmnyEI\ndtest: DEBUG: Done setting configuration > options:\n{ 'num_tokens': None, 'phi_convict_threshold': 5, 'start_rpc': > 'true'}\ncassandra.cluster: INFO: New Cassandra host datacenter1> discovered\n- >> end captured logging << > -" > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-12929) dtest failure in bootstrap_test.TestBootstrap.simple_bootstrap_test_small_keepalive_period
[ https://issues.apache.org/jira/browse/CASSANDRA-12929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15683404#comment-15683404 ] Paulo Motta commented on CASSANDRA-12929: - I think the max keep-alive period of 2s is too small and may cause flakiness on unstable test clusters. We should probably increase this to 10s or 30s. > dtest failure in > bootstrap_test.TestBootstrap.simple_bootstrap_test_small_keepalive_period > -- > > Key: CASSANDRA-12929 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12929 > Project: Cassandra > Issue Type: Test >Reporter: Michael Shuler >Assignee: DS Test Eng > Labels: dtest, test-failure > > example failure: > http://cassci.datastax.com/job/trunk_novnode_dtest/494/testReport/bootstrap_test/TestBootstrap/simple_bootstrap_test_small_keepalive_period > {noformat} > Error Message > Expected [['COMPLETED']] from SELECT bootstrapped FROM system.local WHERE > key='local', but got [[u'IN_PROGRESS']] > >> begin captured logging << > dtest: DEBUG: cluster ccm directory: /tmp/dtest-YmnyEI > dtest: DEBUG: Done setting configuration options: > { 'num_tokens': None, 'phi_convict_threshold': 5, 'start_rpc': 'true'} > cassandra.cluster: INFO: New Cassandra host > discovered > - >> end captured logging << - > Stacktrace > File "/usr/lib/python2.7/unittest/case.py", line 329, in run > testMethod() > File "/home/automaton/cassandra-dtest/tools/decorators.py", line 46, in > wrapped > f(obj) > File "/home/automaton/cassandra-dtest/bootstrap_test.py", line 163, in > simple_bootstrap_test_small_keepalive_period > assert_bootstrap_state(self, node2, 'COMPLETED') > File "/home/automaton/cassandra-dtest/tools/assertions.py", line 297, in > assert_bootstrap_state > assert_one(session, "SELECT bootstrapped FROM system.local WHERE > key='local'", [expected_bootstrap_state]) > File "/home/automaton/cassandra-dtest/tools/assertions.py", line 130, in > assert_one > assert list_res == [expected], "Expected {} from {}, but 
got > {}".format([expected], query, list_res) > "Expected [['COMPLETED']] from SELECT bootstrapped FROM system.local WHERE > key='local', but got [[u'IN_PROGRESS']]\n >> begin > captured logging << \ndtest: DEBUG: cluster ccm > directory: /tmp/dtest-YmnyEI\ndtest: DEBUG: Done setting configuration > options:\n{ 'num_tokens': None, 'phi_convict_threshold': 5, 'start_rpc': > 'true'}\ncassandra.cluster: INFO: New Cassandra host datacenter1> discovered\n- >> end captured logging << > -" > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-12781) Disable RPC_READY gossip flag when shutting down client servers
[ https://issues.apache.org/jira/browse/CASSANDRA-12781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Paulo Motta updated CASSANDRA-12781: Summary: Disable RPC_READY gossip flag when shutting down client servers (was: dtest failure in pushed_notifications_test.TestPushedNotifications.restart_node_test) > Disable RPC_READY gossip flag when shutting down client servers > --- > > Key: CASSANDRA-12781 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12781 > Project: Cassandra > Issue Type: Bug > Components: Distributed Metadata >Reporter: Sean McCarthy >Assignee: Stefania > Labels: dtest > Fix For: 3.0.x, 3.x > > Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, > node2_debug.log, node2_gc.log > > > example failure: > http://cassci.datastax.com/job/cassandra-3.X_dtest/4/testReport/pushed_notifications_test/TestPushedNotifications/restart_node_test > {code} > Error Message > [{'change_type': u'DOWN', 'address': ('127.0.0.2', 9042)}, {'change_type': > u'UP', 'address': ('127.0.0.2', 9042)}, {'change_type': u'DOWN', 'address': > ('127.0.0.2', 9042)}] > {code} > {code} > Stacktrace > File "/usr/lib/python2.7/unittest/case.py", line 329, in run > testMethod() > File "/home/automaton/cassandra-dtest/pushed_notifications_test.py", line > 181, in restart_node_test > self.assertEquals(expected_notifications, len(notifications), > notifications) > File "/usr/lib/python2.7/unittest/case.py", line 513, in assertEqual > assertion_func(first, second, msg=msg) > File "/usr/lib/python2.7/unittest/case.py", line 506, in _baseAssertEqual > raise self.failureException(msg) > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-12186) anticompaction log message doesn't include the parent repair session id
[ https://issues.apache.org/jira/browse/CASSANDRA-12186?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Paulo Motta updated CASSANDRA-12186: Status: Patch Available (was: Open) > anticompaction log message doesn't include the parent repair session id > --- > > Key: CASSANDRA-12186 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12186 > Project: Cassandra > Issue Type: Improvement > Components: Observability >Reporter: Wei Deng >Assignee: Tommy Stendahl >Priority: Minor > Labels: lhf > Fix For: 3.x > > Attachments: 12186.txt > > > It appears that even though incremental repair is now enabled by default post > C*-3.0 (which means at the end of each repair session, there is an > anti-compaction step that needs to be executed), we don't include the parent > repair session UUID in the log message of the anti-compaction log entries. > This makes observing all activities related to an incremental repair session > to be more difficult. See the following: > {noformat} > DEBUG [AntiEntropyStage:1] 2016-07-13 01:57:30,956 > RepairMessageVerbHandler.java:149 - Got anticompaction request > AnticompactionRequest{parentRepairSession=27103de0-489d-11e6-a6d6-cd06faa0aaa2} > org.apache.cassandra.repair.messages.AnticompactionRequest@34449ff4 > <...> > > <...> > INFO [CompactionExecutor:5] 2016-07-13 02:07:47,512 > CompactionManager.java:511 - Starting anticompaction for trivial_ks.weitest > on > 1/[BigTableReader(path='/var/lib/cassandra/data/trivial_ks/weitest-538b07d1489b11e6a9ef61c6ff848952/mb-1-big-Data.db')] > sstables > INFO [CompactionExecutor:5] 2016-07-13 02:07:47,513 > CompactionManager.java:540 - SSTable > BigTableReader(path='/var/lib/cassandra/data/trivial_ks/weitest-538b07d1489b11e6a9ef61c6ff848952/mb-1-big-Data.db') > fully contained in range (-9223372036854775808,-9223372036854775808], > mutating repairedAt instead of anticompacting > INFO [CompactionExecutor:5] 2016-07-13 02:07:47,570 > CompactionManager.java:578 - Completed anticompaction 
successfully > {noformat} > The initial submission of the anti-compaction task to the CompactionManager > still has reference to the parent repair session UUID, but subsequent > anti-compaction log entries are missing this parent repair session UUID. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
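The improvement requested in CASSANDRA-12186 boils down to carrying the parent repair session UUID into every anticompaction log line, so all entries belonging to one incremental repair can be correlated by grepping for that id. A hedged sketch of what such a log line could look like (the helper method and message format are hypothetical, not the actual Cassandra patch):

```java
import java.util.UUID;

public class AnticompactionLogFormat {
    // Hypothetical helper: prefix anticompaction messages with the parent
    // repair session id, similar to the "[repair #<uuid>]" style Cassandra
    // uses elsewhere in its repair logging.
    static String startingAnticompaction(UUID parentSession, String keyspace,
                                         String table, int sstableCount) {
        return String.format("[repair #%s] Starting anticompaction for %s.%s on %d sstable(s)",
                             parentSession, keyspace, table, sstableCount);
    }

    public static void main(String[] args) {
        UUID parent = UUID.fromString("27103de0-489d-11e6-a6d6-cd06faa0aaa2");
        // Prints a line whose session id matches the earlier AntiEntropyStage entry,
        // making the whole repair traceable by one grep.
        System.out.println(startingAnticompaction(parent, "trivial_ks", "weitest", 1));
    }
}
```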
[jira] [Commented] (CASSANDRA-12781) dtest failure in pushed_notifications_test.TestPushedNotifications.restart_node_test
[ https://issues.apache.org/jira/browse/CASSANDRA-12781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15683352#comment-15683352 ] Paulo Motta commented on CASSANDRA-12781: - Sorry for the delay here, this review fell through the cracks. Your analysis seems correct and I agree setting {{RPC_READY}} to false should fix this hard-to-reproduce race. Will mark as ready to commit after CI results look good: * [testall|http://cassci.datastax.com/job/stef1927-12781-3.0-testall/1/] * [dtest|http://cassci.datastax.com/job/stef1927-12781-3.0-dtest/1/] > dtest failure in > pushed_notifications_test.TestPushedNotifications.restart_node_test > > > Key: CASSANDRA-12781 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12781 > Project: Cassandra > Issue Type: Bug > Components: Distributed Metadata >Reporter: Sean McCarthy >Assignee: Stefania > Labels: dtest > Fix For: 3.0.x, 3.x > > Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, > node2_debug.log, node2_gc.log > > > example failure: > http://cassci.datastax.com/job/cassandra-3.X_dtest/4/testReport/pushed_notifications_test/TestPushedNotifications/restart_node_test > {code} > Error Message > [{'change_type': u'DOWN', 'address': ('127.0.0.2', 9042)}, {'change_type': > u'UP', 'address': ('127.0.0.2', 9042)}, {'change_type': u'DOWN', 'address': > ('127.0.0.2', 9042)}] > {code} > {code} > Stacktrace > File "/usr/lib/python2.7/unittest/case.py", line 329, in run > testMethod() > File "/home/automaton/cassandra-dtest/pushed_notifications_test.py", line > 181, in restart_node_test > self.assertEquals(expected_notifications, len(notifications), > notifications) > File "/usr/lib/python2.7/unittest/case.py", line 513, in assertEqual > assertion_func(first, second, msg=msg) > File "/usr/lib/python2.7/unittest/case.py", line 506, in _baseAssertEqual > raise self.failureException(msg) > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-12905) streaming issues with 3.9 (repair)
[ https://issues.apache.org/jira/browse/CASSANDRA-12905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15683210#comment-15683210 ] Paulo Motta commented on CASSANDRA-12905: - Could you post full stack traces of the streaming errors you have noticed? In particular, please post the errors from both the source and destination nodes of the same stream session. > streaming issues with 3.9 (repair) > -- > > Key: CASSANDRA-12905 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12905 > Project: Cassandra > Issue Type: Bug > Components: Streaming and Messaging > Environment: centos 6.7 x86_64 >Reporter: Nir Zilka > Fix For: 3.9 > > > Hello, > I performed two upgrades to the current cluster (currently 15 nodes, 1 DC, > private VLAN): > the first was from 2.2.5.1, where repair worked flawlessly, > the second upgrade was to 3.0.9 (with upgradesstables), and repair also worked > well, > then I upgraded 2 weeks ago to 3.9 - and the repair problems started. > There are several error types in the system.log (different nodes): > - Sync failed between /xxx.xxx.xxx.xxx and /xxx.xxx.xxx.xxx > - Streaming error occurred on session with peer xxx.xxx.xxx.xxx Operation > timed out - received only 0 responses > - Remote peer xxx.xxx.xxx.xxx failed stream session > - Session completed with the following error > org.apache.cassandra.streaming.StreamException: Stream failed > > I use the 3.9 default configuration with the cluster settings adjustments (3 > seeds, GossipingPropertyFileSnitch). > streaming_socket_timeout_in_ms is the default (8640). > I'm worried about consistency problems while I'm unable to perform repair. > Any ideas? > Thanks, > Nir. -- This message was sent by Atlassian JIRA (v6.3.4#6332)