[jira] [Updated] (CASSANDRA-13066) Fast streaming with materialized views
[ https://issues.apache.org/jira/browse/CASSANDRA-13066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ZhaoYang updated CASSANDRA-13066:
---------------------------------
    Component/s: Streaming and Messaging
                 Materialized Views

> Fast streaming with materialized views
> ---------------------------------------
>
>                 Key: CASSANDRA-13066
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-13066
>             Project: Cassandra
>          Issue Type: Improvement
>          Components: Materialized Views, Streaming and Messaging
>            Reporter: Benjamin Roth
>            Assignee: Benjamin Roth
>             Fix For: 4.0
>
> I propose adding a configuration option to send streams of tables with MVs not through the regular write path. This may be either a global option or, better, a CF option.
> Background:
> A repair of a CF with an MV that is far out of sync creates many streams. These streams all go through the regular write path to assert local consistency of the MV. This causes a read-before-write for every single mutation, which puts much more pressure on the node than simply streaming the SSTable down.
> In some cases this can be avoided. Instead of repairing only the base table, all base + MV tables would have to be repaired. But this can break eventual consistency between the base table and the MV. The proposed behaviour is always safe when having append-only MVs. It also works when using CL_QUORUM writes, but it cannot be absolutely guaranteed that a quorum write is applied atomically, so this can also lead to inconsistencies if a quorum write is started but one node dies in the middle of a request.
> So, this proposal can help a lot in some situations but can also break consistency in others. That's why it should be left to the operator whether that behaviour is appropriate for individual use cases.
> This issue came up here:
> https://issues.apache.org/jira/browse/CASSANDRA-12888?focusedCommentId=15736599&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15736599

--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Commented] (CASSANDRA-13592) Null Pointer exception at SELECT JSON statement
[ https://issues.apache.org/jira/browse/CASSANDRA-13592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16065895#comment-16065895 ]

ZhaoYang commented on CASSANDRA-13592:
--------------------------------------

|| source || junit-result || dtest-result ||
| [trunk|https://github.com/jasonstack/cassandra/commits/CASSANDRA-13592] | [https://circleci.com/gh/jasonstack/cassandra/65] | |

Let the caller of {{type.toJSONString()}} handle ByteBuffer position changes.

> Null Pointer exception at SELECT JSON statement
> ------------------------------------------------
>
>                 Key: CASSANDRA-13592
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-13592
>             Project: Cassandra
>          Issue Type: Bug
>          Components: CQL
>         Environment: Debian Linux
>            Reporter: Wyss Philipp
>            Assignee: ZhaoYang
>              Labels: beginner
>         Attachments: system.log
>
> A NullPointerException appears when the command
> {code}
> SELECT JSON * FROM examples.basic;
> ---MORE---
> message="java.lang.NullPointerException">
> {code}
> is executed. examples.basic has the following description (DESC examples.basic;):
> {code}
> CREATE TABLE examples.basic (
>     key frozen> PRIMARY KEY,
>     wert text
> ) WITH bloom_filter_fp_chance = 0.01
>     AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
>     AND comment = ''
>     AND compaction = {'class': 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 'max_threshold': '32', 'min_threshold': '4'}
>     AND compression = {'chunk_length_in_kb': '64', 'class': 'org.apache.cassandra.io.compress.LZ4Compressor'}
>     AND crc_check_chance = 1.0
>     AND dclocal_read_repair_chance = 0.1
>     AND default_time_to_live = 0
>     AND gc_grace_seconds = 864000
>     AND max_index_interval = 2048
>     AND memtable_flush_period_in_ms = 0
>     AND min_index_interval = 128
>     AND read_repair_chance = 0.0
>     AND speculative_retry = '99PERCENTILE';
> {code}
> The error appears after the ---MORE--- line.
> The field "wert" has a JSON formatted string.
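The fix ZhaoYang describes above (making the caller of {{type.toJSONString()}} responsible for ByteBuffer position changes) hinges on a common pitfall: relative reads advance a ByteBuffer's position, so a second serialization of the same cell value sees an empty buffer. A minimal sketch of the usual defensive pattern, reading through a duplicate so the original position is untouched (class and method names here are illustrative, not Cassandra's):

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class BufferPositionDemo {
    // Reading via duplicate() leaves the original buffer's position untouched,
    // so the same value can be serialized (e.g. to JSON) more than once.
    static String readWithoutConsuming(ByteBuffer bb) {
        ByteBuffer dup = bb.duplicate();      // shares content, independent position
        byte[] bytes = new byte[dup.remaining()];
        dup.get(bytes);                       // advances only the duplicate
        return new String(bytes, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        ByteBuffer bb = ByteBuffer.wrap("wert".getBytes(StandardCharsets.UTF_8));
        String first = readWithoutConsuming(bb);
        String second = readWithoutConsuming(bb); // still sees all four bytes
        if (!first.equals("wert") || !second.equals("wert") || bb.remaining() != 4)
            throw new AssertionError();
        System.out.println(first + " " + second);
    }
}
```

Without the duplicate, the second read would return an empty string, which is the kind of state a later consumer can trip over with an NPE.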
[jira] [Commented] (CASSANDRA-13565) Materialized view usage of commit logs requires large mutation but commitlog_segment_size_in_mb=2048 causes exception
[ https://issues.apache.org/jira/browse/CASSANDRA-13565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16065884#comment-16065884 ]

Kurt Greaves commented on CASSANDRA-13565:
------------------------------------------

Certainly covered under CASSANDRA-13622. If anyone would like to fix it in this case, fixing the overflow should be enough. Under either ticket is reasonable.

> Materialized view usage of commit logs requires large mutation but commitlog_segment_size_in_mb=2048 causes exception
> ----------------------------------------------------------------------------------------------------------------------
>
>                 Key: CASSANDRA-13565
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-13565
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Configuration, Materialized Views, Streaming and Messaging
>         Environment: Cassandra 3.9.0, Windows
>            Reporter: Tania S Engel
>         Attachments: CQLforTable.png
>
> We will be upgrading to 3.10 for CASSANDRA-11670. However, there is another scenario (not applyunsafe during JOIN) which leads to:
> {code}
> java.lang.IllegalArgumentException: Mutation of 525.847MiB is too large for the maximum size of 512.000MiB
>     at org.apache.cassandra.db.commitlog.CommitLog.add(CommitLog.java:262) ~[apache-cassandra-3.9.0.jar:3.9.0]
>     at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:493) ~[apache-cassandra-3.9.0.jar:3.9.0]
>     at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:396) ~[apache-cassandra-3.9.0.jar:3.9.0]
>     at org.apache.cassandra.db.Mutation.applyFuture(Mutation.java:215) ~[apache-cassandra-3.9.0.jar:3.9.0]
>     at org.apache.cassandra.db.Mutation.apply(Mutation.java:227) ~[apache-cassandra-3.9.0.jar:3.9.0]
>     at org.apache.cassandra.batchlog.BatchlogManager.store(BatchlogManager.java:147) ~[apache-cassandra-3.9.0.jar:3.9.0]
>     at org.apache.cassandra.service.StorageProxy.mutateMV(StorageProxy.java:797) ~[apache-cassandra-3.9.0.jar:3.9.0]
>     at org.apache.cassandra.db.view.ViewBuilder.buildKey(ViewBuilder.java:96) ~[apache-cassandra-3.9.0.jar:3.9.0]
>     at org.apache.cassandra.db.view.ViewBuilder.run(ViewBuilder.java:165) ~[apache-cassandra-3.9.0.jar:3.9.0]
>     at org.apache.cassandra.db.compaction.CompactionManager$14.run(CompactionManager.java:1591) [apache-cassandra-3.9.0.jar:3.9.0]
>     at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_66]
>     at java.util.concurrent.FutureTask.run(FutureTask.java:266) [na:1.8.0_66]
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_66]
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_66]
>     at java.lang.Thread.run(Thread.java:745) [na:1.8.0_66]
> {code}
> Due to the relationship between max_mutation_size_in_kb and commitlog_segment_size_in_mb, we increased commitlog_segment_size_in_mb and let Cassandra calculate max_mutation_size_in_kb as half of commitlog_segment_size_in_mb * 1024.
> However, we have found that if we set commitlog_segment_size_in_mb=2048 we get an exception upon starting Cassandra, when it is creating a new commit log:
> {code}
> ERROR [COMMIT-LOG-ALLOCATOR] 2017-05-31 17:01:48,005 JVMStabilityInspector.java:82 - Exiting due to error while processing commit log during initialization.
> org.apache.cassandra.io.FSWriteError: java.io.IOException: An attempt was made to move the file pointer before the beginning of the file
> {code}
> Perhaps the index being used is not big enough and it goes negative.
> Is the relationship between max_mutation_size_in_kb and commitlog_segment_size_in_mb important to preserve? In our limited stress test we are already finding mutation sizes over 512 MB, and we expect more data in our sstables and associated materialized views.
[jira] [Commented] (CASSANDRA-13565) Materialized view usage of commit logs requires large mutation but commitlog_segment_size_in_mb=2048 causes exception
[ https://issues.apache.org/jira/browse/CASSANDRA-13565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16065875#comment-16065875 ]

Krishna Dattu Koneru commented on CASSANDRA-13565:
--------------------------------------------------

This can be closed as a duplicate of https://issues.apache.org/jira/browse/CASSANDRA-13622, which is open to fix similar possible overflows. Maybe [~KurtG] can comment.

> Materialized view usage of commit logs requires large mutation but commitlog_segment_size_in_mb=2048 causes exception
[jira] [Commented] (CASSANDRA-13565) Materialized view usage of commit logs requires large mutation but commitlog_segment_size_in_mb=2048 causes exception
[ https://issues.apache.org/jira/browse/CASSANDRA-13565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16065841#comment-16065841 ]

ZhaoYang commented on CASSANDRA-13565:
--------------------------------------

That is likely due to a 32-bit integer overflow; a clearer error message should be thrown.

{code}
    /**
     * size of commitlog segments to allocate
     */
    public static int getCommitLogSegmentSize()
    {
        return conf.commitlog_segment_size_in_mb * 1024 * 1024;
    }
{code}

> Materialized view usage of commit logs requires large mutation but commitlog_segment_size_in_mb=2048 causes exception
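ZhaoYang's diagnosis is easy to reproduce in isolation: the segment size is computed with 32-bit int arithmetic, and 2048 * 1024 * 1024 equals 2^31, which wraps to a negative value, matching the "move the file pointer before the beginning of the file" error above. A self-contained sketch of that arithmetic (class and method names are illustrative, mirroring the snippet in the comment, not the actual Cassandra code path):

```java
public class SegmentSizeOverflow {
    // Mirrors the arithmetic in the quoted getCommitLogSegmentSize(): int * 1024 * 1024.
    static int segmentSizeBytes(int segmentSizeInMb) {
        return segmentSizeInMb * 1024 * 1024; // wraps for values >= 2048
    }

    // Widening to long before multiplying avoids the wrap.
    static long segmentSizeBytesSafe(int segmentSizeInMb) {
        return (long) segmentSizeInMb * 1024 * 1024;
    }

    public static void main(String[] args) {
        System.out.println(segmentSizeBytes(1024));     // 1073741824 (2^30), still fits in int
        System.out.println(segmentSizeBytes(2048));     // -2147483648: 2^31 wrapped to Integer.MIN_VALUE
        System.out.println(segmentSizeBytesSafe(2048)); // 2147483648
    }
}
```

A negative segment size handed to the allocator would explain the attempt to seek before the start of the file; validating the config value (or using `Math.multiplyExact`) would turn this into the clearer error message ZhaoYang asks for.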
[jira] [Updated] (CASSANDRA-13480) nodetool repair can hang forever if we lose the notification for the repair completing/failing
[ https://issues.apache.org/jira/browse/CASSANDRA-13480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Matt Byrd updated CASSANDRA-13480:
----------------------------------
    Reviewer: Chris Lohfink  (was: Blake Eggleston)

> nodetool repair can hang forever if we lose the notification for the repair completing/failing
> -----------------------------------------------------------------------------------------------
>
>                 Key: CASSANDRA-13480
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-13480
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Tools
>            Reporter: Matt Byrd
>            Assignee: Matt Byrd
>            Priority: Minor
>              Labels: repair
>             Fix For: 4.x
>
> When a JMX lost notification occurs, sometimes the lost notification in question is the one that lets RepairRunner know that the repair is finished (ProgressEventType.COMPLETE, or even ERROR for that matter). This results in the nodetool process running the repair hanging forever.
> I have a test which reproduces the issue here: https://github.com/Jollyplum/cassandra-dtest/tree/repair_hang_test
> To fix this: if we receive a notification that notifications have been lost (JMXConnectionNotification.NOTIFS_LOST), we instead query a new endpoint via JMX to receive all the relevant notifications we're interested in, so we can replay those we missed and avoid this scenario.
> It's possible that the JMXConnectionNotification.NOTIFS_LOST itself might be lost, so for good measure I have made RepairRunner poll periodically to see if there were any notifications that had been sent but not received (scoped to the particular tag for the given repair).
> Users who don't use nodetool but go via JMX directly can still use this new endpoint and implement similar behaviour in their clients as desired. I'm also expiring the notifications which have been kept on the server side.
> Please let me know if you have any questions or can think of a different approach. I also tried setting:
> JVM_OPTS="$JVM_OPTS -Djmx.remote.x.notification.buffer.size=5000"
> but this didn't fix the test. I suppose it might help under certain scenarios, but in this test we don't even send that many notifications, so I'm not surprised it doesn't fix it. It seems that lost notifications are always a potential problem with JMX, as far as I can tell.
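The client-side half of the fix described above can be sketched with the standard JMX API: the connector emits a `JMXConnectionNotification` of type `NOTIFS_LOST` when its notification buffer overflows, and that is the hook where a client such as RepairRunner would fall back to re-querying the server. The replay endpoint itself is the new, ticket-specific part and is only referenced in a comment here; everything else below is standard `javax.management`:

```java
import javax.management.Notification;
import javax.management.NotificationListener;
import javax.management.remote.JMXConnectionNotification;

public class LostNotificationListener implements NotificationListener {
    volatile boolean sawLoss = false;

    @Override
    public void handleNotification(Notification n, Object handback) {
        // The JMX connector emits this type when server-side notifications were
        // dropped; userData carries the (approximate) number of lost notifications.
        if (JMXConnectionNotification.NOTIFS_LOST.equals(n.getType())) {
            sawLoss = true;
            // Here a client like RepairRunner would call the proposed replay
            // endpoint (scoped to the repair's tag) instead of waiting for a
            // COMPLETE/ERROR progress event that may never arrive.
        }
    }
}
```

The listener is registered on the `MBeanServerConnection` alongside the normal progress listener, so a single dropped COMPLETE event no longer leaves nodetool blocked forever.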
[jira] [Updated] (CASSANDRA-13464) Failed to create Materialized view with a specific token range
[ https://issues.apache.org/jira/browse/CASSANDRA-13464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Krishna Dattu Koneru updated CASSANDRA-13464:
---------------------------------------------
    Status: Open  (was: Patch Available)

> Failed to create Materialized view with a specific token range
> ---------------------------------------------------------------
>
>                 Key: CASSANDRA-13464
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-13464
>             Project: Cassandra
>          Issue Type: Improvement
>            Reporter: Natsumi Kojima
>            Assignee: Krishna Dattu Koneru
>            Priority: Minor
>              Labels: materializedviews
>
> Failed to create Materialized view with a specific token range.
> Example:
> {code:java}
> $ ccm create "MaterializedView" -v 3.0.13
> $ ccm populate -n 3
> $ ccm start
> $ ccm status
> Cluster: 'MaterializedView'
> ---------------------------
> node1: UP
> node3: UP
> node2: UP
> $ ccm node1 cqlsh
> Connected to MaterializedView at 127.0.0.1:9042.
> [cqlsh 5.0.1 | Cassandra 3.0.13 | CQL spec 3.4.0 | Native protocol v4]
> Use HELP for help.
> cqlsh> CREATE KEYSPACE test WITH replication = {'class':'SimpleStrategy', 'replication_factor':3};
> cqlsh> CREATE TABLE test.test ( id text PRIMARY KEY , value1 text , value2 text, value3 text);
> $ ccm node1 ring test
> Datacenter: datacenter1
> ==========
> Address     Rack    Status  State   Load       Owns      Token
>                                                          3074457345618258602
> 127.0.0.1   rack1   Up      Normal  64.86 KB   100.00%   -9223372036854775808
> 127.0.0.2   rack1   Up      Normal  86.49 KB   100.00%   -3074457345618258603
> 127.0.0.3   rack1   Up      Normal  89.04 KB   100.00%   3074457345618258602
> $ ccm node1 cqlsh
> cqlsh> INSERT INTO test.test (id, value1 , value2, value3 ) VALUES ('aaa', 'aaa', 'aaa' ,'aaa');
> cqlsh> INSERT INTO test.test (id, value1 , value2, value3 ) VALUES ('bbb', 'bbb', 'bbb' ,'bbb');
> cqlsh> SELECT token(id),id,value1 FROM test.test;
>  system.token(id)     | id  | value1
> ----------------------+-----+--------
>  -4737872923231490581 | aaa |    aaa
>  -3071845237020185195 | bbb |    bbb
> (2 rows)
> cqlsh> CREATE MATERIALIZED VIEW test.test_view AS SELECT value1, id FROM test.test WHERE id IS NOT NULL AND value1 IS NOT NULL AND TOKEN(id) > -9223372036854775808 AND TOKEN(id) < -3074457345618258603 PRIMARY KEY(value1, id) WITH CLUSTERING ORDER BY (id ASC);
> ServerError: java.lang.ClassCastException: org.apache.cassandra.cql3.TokenRelation cannot be cast to org.apache.cassandra.cql3.SingleColumnRelation
> {code}
> Stacktrace:
> {code:java}
> INFO  [MigrationStage:1] 2017-04-19 18:32:48,131 ColumnFamilyStore.java:389 - Initializing test.test
> WARN  [SharedPool-Worker-1] 2017-04-19 18:44:07,263 FBUtilities.java:337 - Trigger directory doesn't exist, please create it and try again.
> ERROR [SharedPool-Worker-1] 2017-04-19 18:46:10,072 QueryMessage.java:128 - Unexpected error during query
> java.lang.ClassCastException: org.apache.cassandra.cql3.TokenRelation cannot be cast to org.apache.cassandra.cql3.SingleColumnRelation
>     at org.apache.cassandra.db.view.View.relationsToWhereClause(View.java:275) ~[apache-cassandra-3.0.13.jar:3.0.13]
>     at org.apache.cassandra.cql3.statements.CreateViewStatement.announceMigration(CreateViewStatement.java:219) ~[apache-cassandra-3.0.13.jar:3.0.13]
>     at org.apache.cassandra.cql3.statements.SchemaAlteringStatement.execute(SchemaAlteringStatement.java:93) ~[apache-cassandra-3.0.13.jar:3.0.13]
>     at org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:206) ~[apache-cassandra-3.0.13.jar:3.0.13]
>     at org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:237) ~[apache-cassandra-3.0.13.jar:3.0.13]
>     at org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:222) ~[apache-cassandra-3.0.13.jar:3.0.13]
>     at org.apache.cassandra.transport.messages.QueryMessage.execute(QueryMessage.java:115) ~[apache-cassandra-3.0.13.jar:3.0.13]
>     at org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:513) [apache-cassandra-3.0.13.jar:3.0.13]
>     at org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:407) [apache-cassandra-3.0.13.jar:3.0.13]
>     at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105) [netty-all-4.0.44.Final.jar:4.0.44.Final]
>     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357) [netty-all-4.0.44.Final.jar:4.0.44.Final]
>     at
[jira] [Commented] (CASSANDRA-13643) converting expired ttl cells to tombstones causing unnecessary digest mismatches
[ https://issues.apache.org/jira/browse/CASSANDRA-13643?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16065637#comment-16065637 ]

Blake Eggleston commented on CASSANDRA-13643:
---------------------------------------------

Unit test run here: https://circleci.com/gh/bdeggleston/cassandra/55

> converting expired ttl cells to tombstones causing unnecessary digest mismatches
> ----------------------------------------------------------------------------------
>
>                 Key: CASSANDRA-13643
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-13643
>             Project: Cassandra
>          Issue Type: Bug
>            Reporter: Blake Eggleston
>            Assignee: Blake Eggleston
>            Priority: Minor
>
> In [{{AbstractCell#purge}}|https://github.com/apache/cassandra/blob/26e025804c6777a0d124dbc257747cba85b18f37/src/java/org/apache/cassandra/db/rows/AbstractCell.java#L77], we convert expired ttl'd cells to tombstones, and set the local deletion time to the cell's expiration time, less the ttl time. Depending on the timing of the purge, this can cause purge to generate tombstones that are otherwise purgeable. If compaction for a row with ttls isn't at the same state between replicas, this will then cause digest mismatches between logically identical rows, leading to unnecessary repair streaming and read repairs.
[jira] [Updated] (CASSANDRA-13643) converting expired ttl cells to tombstones causing unnecessary digest mismatches
[ https://issues.apache.org/jira/browse/CASSANDRA-13643?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Blake Eggleston updated CASSANDRA-13643:
----------------------------------------
    Status: Patch Available  (was: Open)

Patch here: https://github.com/bdeggleston/cassandra/tree/13643

Could you take a look at this [~slebresne]? I've just called purge on the created tombstone, which I wouldn't think would cause any problems.

> converting expired ttl cells to tombstones causing unnecessary digest mismatches
[jira] [Created] (CASSANDRA-13643) converting expired ttl cells to tombstones causing unnecessary digest mismatches
Blake Eggleston created CASSANDRA-13643:
--------------------------------------------

             Summary: converting expired ttl cells to tombstones causing unnecessary digest mismatches
                 Key: CASSANDRA-13643
                 URL: https://issues.apache.org/jira/browse/CASSANDRA-13643
             Project: Cassandra
          Issue Type: Bug
            Reporter: Blake Eggleston
            Assignee: Blake Eggleston
            Priority: Minor
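The timing problem in the 13643 thread can be seen with plain arithmetic: when an expired TTL cell is converted, the tombstone's local deletion time is backdated to expiration time minus TTL, i.e. roughly the cell's original write time, which may already fall before the gcBefore cutoff. A sketch of that arithmetic (method and variable names are illustrative, times in seconds since epoch, not Cassandra's actual code):

```java
public class TtlPurgeTiming {
    // On conversion, the tombstone's local deletion time is backdated to
    // (localExpirationTime - ttl), roughly when the cell was written.
    static int tombstoneDeletionTime(int localExpirationTime, int ttlSeconds) {
        return localExpirationTime - ttlSeconds;
    }

    // A tombstone is droppable once its deletion time falls before gcBefore.
    static boolean purgeable(int deletionTime, int gcBefore) {
        return deletionTime < gcBefore;
    }

    public static void main(String[] args) {
        int writeTime = 1_000_000;   // illustrative write time
        int ttl = 600;               // 10-minute TTL
        int expiration = writeTime + ttl;
        int gcGrace = 0;             // e.g. gc_grace_seconds = 0
        int now = expiration + 1;    // compaction runs just after expiry
        int gcBefore = now - gcGrace;

        int dt = tombstoneDeletionTime(expiration, ttl); // == writeTime
        // The freshly created tombstone is already purgeable. A replica whose
        // compaction runs slightly later drops it while another keeps it,
        // producing digest mismatches between logically identical rows.
        System.out.println(purgeable(dt, gcBefore)); // prints true
    }
}
```

This is why the patch's approach of calling purge on the newly created tombstone removes the mismatch: both replicas converge on "no tombstone" regardless of when their compactions ran.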
[jira] [Updated] (CASSANDRA-13004) Corruption while adding/removing a column to/from the table
[ https://issues.apache.org/jira/browse/CASSANDRA-13004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Michael Shuler updated CASSANDRA-13004:
---------------------------------------
    Fix Version/s:     (was: 3.11.x)
                       (was: 4.x)
                       (was: 3.0.x)
                   3.0.14
                   3.11.0
                   4.0

> Corruption while adding/removing a column to/from the table
> ------------------------------------------------------------
>
>                 Key: CASSANDRA-13004
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-13004
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Distributed Metadata
>            Reporter: Stanislav Vishnevskiy
>            Assignee: Alex Petrov
>            Priority: Blocker
>             Fix For: 3.0.14, 3.11.0, 4.0
>
> We had the following schema in production.
> {code:none}
> CREATE TYPE IF NOT EXISTS discord_channels.channel_recipient (
>     nick text
> );
> CREATE TYPE IF NOT EXISTS discord_channels.channel_permission_overwrite (
>     id bigint,
>     type int,
>     allow_ int,
>     deny int
> );
> CREATE TABLE IF NOT EXISTS discord_channels.channels (
>     id bigint,
>     guild_id bigint,
>     type tinyint,
>     name text,
>     topic text,
>     position int,
>     owner_id bigint,
>     icon_hash text,
>     recipients map,
>     permission_overwrites map,
>     bitrate int,
>     user_limit int,
>     last_pin_timestamp timestamp,
>     last_message_id bigint,
>     PRIMARY KEY (id)
> );
> {code}
> And then we executed the following alter.
> {code:none}
> ALTER TABLE discord_channels.channels ADD application_id bigint;
> {code}
> And one row (that we can tell) got corrupted at the same time and could no longer be read from the Python driver.
> {code:none}
> [E 161206 01:56:58 geventreactor:141] Error decoding response from Cassandra.
> ver(4); flags(); stream(27); op(8); offset(9); len(887); buffer: > '\x84\x00\x00\x1b\x08\x00\x00\x03w\x00\x00\x00\x02\x00\x00\x00\x01\x00\x00\x00\x0f\x00\x10discord_channels\x00\x08channels\x00\x02id\x00\x02\x00\x0eapplication_id\x00\x02\x00\x07bitrate\x00\t\x00\x08guild_id\x00\x02\x00\ticon_hash\x00\r\x00\x0flast_message_id\x00\x02\x00\x12last_pin_timestamp\x00\x0b\x00\x04name\x00\r\x00\x08owner_id\x00\x02\x00\x15permission_overwrites\x00!\x00\x02\x000\x00\x10discord_channels\x00\x1cchannel_permission_overwrite\x00\x04\x00\x02id\x00\x02\x00\x04type\x00\t\x00\x06allow_\x00\t\x00\x04deny\x00\t\x00\x08position\x00\t\x00\nrecipients\x00!\x00\x02\x000\x00\x10discord_channels\x00\x11channel_recipient\x00\x01\x00\x04nick\x00\r\x00\x05topic\x00\r\x00\x04type\x00\x14\x00\nuser_limit\x00\t\x00\x00\x00\x01\x00\x00\x00\x08\x03\x8a\x19\x8e\xf8\x82\x00\x01\xff\xff\xff\xff\x00\x00\x00\x04\x00\x00\xfa\x00\x00\x00\x00\x08\x00\x00\xfa\x00\x00\xf8G\xc5\x00\x00\x00\x00\x00\x00\x00\x08\x03\x8b\xc0\xb5nB\x00\x02\x00\x00\x00\x08G\xc5\xffI\x98\xc4\xb4(\x00\x00\x00\x03\x8b\xc0\xa8\xff\xff\xff\xff\x00\x00\x01<\x00\x00\x00\x06\x00\x00\x00\x08\x03\x81L\xea\xfc\x82\x00\n\x00\x00\x00$\x00\x00\x00\x08\x03\x81L\xea\xfc\x82\x00\n\x00\x00\x00\x04\x00\x00\x00\x01\x00\x00\x00\x04\x00\x00\x08\x00\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x08\x03\x8a\x1e\xe6\x8b\x80\x00\n\x00\x00\x00$\x00\x00\x00\x08\x03\x8a\x1e\xe6\x8b\x80\x00\n\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x040\x07\xf8Q\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x08\x03\x8a\x1f\x1b{\x82\x00\x00\x00\x00\x00$\x00\x00\x00\x08\x03\x8a\x1f\x1b{\x82\x00\x00\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x04\x00\x07\xf8Q\x00\x00\x00\x04\x10\x00\x00\x00\x00\x00\x00\x08\x03\x8a\x1fH6\x82\x00\x01\x00\x00\x00$\x00\x00\x00\x08\x03\x8a\x1fH6\x82\x00\x01\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x04\x00\x05\xe8A\x00\x00\x00\x04\x10\x02\x00\x00\x00\x00\x00\x08\x03\x8a+=\xca\xc0\x00\n\x00\x00\x00$\x00\x00\x00\x08\x03\x8a+=\xca\xc0\x00\n\x00\x00\x0
0\x04\x00\x00\x00\x00\x00\x00\x00\x04\x00\x00\x08\x00\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x08\x03\x8a\x8f\x979\x80\x00\n\x00\x00\x00$\x00\x00\x00\x08\x03\x8a\x8f\x979\x80\x00\n\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x04\x00 > > \x08\x01\x00\x00\x00\x04\xc4\xb4(\x00\xff\xff\xff\xff\x00\x00\x00O[f\x80Q\x07general\x05\xf8G\xc5\xffI\x98\xc4\xb4(\x00\xf8O[f\x80Q\x00\x00\x00\x02\x04\xf8O[f\x80Q\x00\xf8G\xc5\xffI\x98\x01\x00\x00\xf8O[f\x80Q\x00\x00\x00\x00\xf8G\xc5\xffI\x97\xc4\xb4(\x06\x00\xf8O\x7fe\x1fm\x08\x03\x00\x00\x00\x01\x00\x00\x00\x00\x04\x00\x00\x00\x00' > {code} > And then in cqlsh when trying to read the row we got this. > {code:none} > /usr/bin/cqlsh.py:632: DateOverFlowWarning: Some timestamps are larger than > Python datetime can represent. Timestamps are displayed in milliseconds from > epoch. > Traceback (most recent call last): > File "/usr/bin/cqlsh.py", line 1301, in perform_simple_statement > result = future.result() >
[jira] [Updated] (CASSANDRA-13642) Expose recent histograms in JmxHistograms
[ https://issues.apache.org/jira/browse/CASSANDRA-13642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Lohfink updated CASSANDRA-13642: -- Status: Patch Available (was: Open) > Expose recent histograms in JmxHistograms > - > > Key: CASSANDRA-13642 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13642 > Project: Cassandra > Issue Type: Improvement > Components: Observability >Reporter: Chris Lohfink >Assignee: Chris Lohfink > > For monitoring tools that were written for the recent*Histograms, the current > decaying and all-time values exposed by the JmxHistograms are not consumable. > We can add a new attribute (an attribute is easier for tooling to read than > an operation) to expose it just like storage service did previously. > We should additionally make this attribute store previous values only when read, > so that it does not add any additional memory cost to C* unless used, > since we are already storing 2 versions of the histogram for the decaying/all-time > views.
[jira] [Commented] (CASSANDRA-13642) Expose recent histograms in JmxHistograms
[ https://issues.apache.org/jira/browse/CASSANDRA-13642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16065324#comment-16065324 ] ASF GitHub Bot commented on CASSANDRA-13642: GitHub user clohfink opened a pull request: https://github.com/apache/cassandra/pull/126 Expose recent histograms for CASSANDRA-13642 You can merge this pull request into a Git repository by running: $ git pull https://github.com/clohfink/cassandra 13642 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/cassandra/pull/126.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #126 commit cc429b062d7e30d4e38f3e12c1cf7a35f1fdc119 Author: Chris Lohfink Date: 2017-06-27T19:10:50Z Expose recent histograms for CASSANDRA-13642 > Expose recent histograms in JmxHistograms > - > > Key: CASSANDRA-13642 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13642 > Project: Cassandra > Issue Type: Improvement > Components: Observability >Reporter: Chris Lohfink >Assignee: Chris Lohfink > > For monitoring tools that were written for the recent*Histograms, the current > decaying and all-time values exposed by the JmxHistograms are not consumable. > We can add a new attribute (an attribute is easier for tooling to read than > an operation) to expose it just like storage service did previously. > We should additionally make this attribute store previous values only when read, > so that it does not add any additional memory cost to C* unless used, > since we are already storing 2 versions of the histogram for the decaying/all-time > views.
[jira] [Created] (CASSANDRA-13642) Expose recent histograms in JmxHistograms
Chris Lohfink created CASSANDRA-13642: - Summary: Expose recent histograms in JmxHistograms Key: CASSANDRA-13642 URL: https://issues.apache.org/jira/browse/CASSANDRA-13642 Project: Cassandra Issue Type: Improvement Components: Observability Reporter: Chris Lohfink Assignee: Chris Lohfink For monitoring tools that were written for the recent*Histograms, the current decaying and all-time values exposed by the JmxHistograms are not consumable. We can add a new attribute (an attribute is easier for tooling to read than an operation) to expose it just like storage service did previously. We should additionally make this attribute store previous values only when read, so that it does not add any additional memory cost to C* unless used, since we are already storing 2 versions of the histogram for the decaying/all-time views.
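The delta-since-last-read idea described above can be sketched in miniature (a hypothetical Python model, not Cassandra's actual implementation; all names are illustrative). The point of allocating the snapshot lazily is that histograms nobody reads pay no extra memory:

```python
class RecentHistogram:
    """Cumulative histogram that can also report counts since the last read."""

    def __init__(self, bucket_count):
        self.buckets = [0] * bucket_count   # cumulative all-time counts
        self._last_read = None              # allocated only on first read

    def record(self, bucket, n=1):
        self.buckets[bucket] += n

    def recent(self):
        """Return per-bucket counts accumulated since the previous call."""
        current = list(self.buckets)
        if self._last_read is None:
            delta = current                 # first read: everything is recent
        else:
            delta = [c - p for c, p in zip(current, self._last_read)]
        self._last_read = current
        return delta
```

On the first read everything counts as "recent", which roughly matches the "since the last time somebody asked" semantics the old recent*Histograms exposed.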
[jira] [Commented] (CASSANDRA-13640) CQLSH error when using 'login' to switch users
[ https://issues.apache.org/jira/browse/CASSANDRA-13640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16065249#comment-16065249 ] Andrés de la Peña commented on CASSANDRA-13640: --- First draft of the patch [here|https://github.com/adelapena/cassandra/commit/95207a99402b15e32d75ad8e339e9f14d91a1b6e]. > CQLSH error when using 'login' to switch users > -- > > Key: CASSANDRA-13640 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13640 > Project: Cassandra > Issue Type: Bug > Components: CQL >Reporter: Andrés de la Peña >Assignee: Andrés de la Peña >Priority: Minor > Fix For: 3.0.x > > > Using {{PasswordAuthenticator}} and {{CassandraAuthorizer}}: > {code} > bin/cqlsh -u cassandra -p cassandra > Connected to Test Cluster at 127.0.0.1:9042. > [cqlsh 5.0.1 | Cassandra 3.0.14-SNAPSHOT | CQL spec 3.4.0 | Native protocol > v4] > Use HELP for help. > cassandra@cqlsh> create role super with superuser = true and password = 'p' > and login = true; > cassandra@cqlsh> login super; > Password: > super@cqlsh> list roles; > 'Row' object has no attribute 'values' > {code} > When we initialize the Shell, we configure certain settings on the session > object such as > {code} > self.session.default_timeout = request_timeout > self.session.row_factory = ordered_dict_factory > self.session.default_consistency_level = cassandra.ConsistencyLevel.ONE > {code} > However, once we perform a LOGIN cmd, which calls do_login(..), we create a > new cluster/session object but actually never set those settings on the new > session. > It isn't failing on 3.x. > As a workaround, it is possible to logout and log back in and things work > correctly. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Assigned] (CASSANDRA-13640) CQLSH error when using 'login' to switch users
[ https://issues.apache.org/jira/browse/CASSANDRA-13640?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrés de la Peña reassigned CASSANDRA-13640: - Assignee: Andrés de la Peña > CQLSH error when using 'login' to switch users > -- > > Key: CASSANDRA-13640 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13640 > Project: Cassandra > Issue Type: Bug > Components: CQL >Reporter: Andrés de la Peña >Assignee: Andrés de la Peña >Priority: Minor > Fix For: 3.0.x > > > Using {{PasswordAuthenticator}} and {{CassandraAuthorizer}}: > {code} > bin/cqlsh -u cassandra -p cassandra > Connected to Test Cluster at 127.0.0.1:9042. > [cqlsh 5.0.1 | Cassandra 3.0.14-SNAPSHOT | CQL spec 3.4.0 | Native protocol > v4] > Use HELP for help. > cassandra@cqlsh> create role super with superuser = true and password = 'p' > and login = true; > cassandra@cqlsh> login super; > Password: > super@cqlsh> list roles; > 'Row' object has no attribute 'values' > {code} > When we initialize the Shell, we configure certain settings on the session > object such as > {code} > self.session.default_timeout = request_timeout > self.session.row_factory = ordered_dict_factory > self.session.default_consistency_level = cassandra.ConsistencyLevel.ONE > {code} > However, once we perform a LOGIN cmd, which calls do_login(..), we create a > new cluster/session object but actually never set those settings on the new > session. > It isn't failing on 3.x. > As a workaround, it is possible to logout and log back in and things work > correctly. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Updated] (CASSANDRA-13641) Properly evict pstmts from prepared statements cache
[ https://issues.apache.org/jira/browse/CASSANDRA-13641?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Stupp updated CASSANDRA-13641: - Reviewer: Benjamin Lerer Status: Patch Available (was: Open) ||cassandra-3.11|[branch|https://github.com/apache/cassandra/compare/cassandra-3.11...snazy:13641-evict-pstmt-3.11]|[testall|http://cassci.datastax.com/view/Dev/view/snazy/job/snazy-13641-evict-pstmt-3.11-testall/lastSuccessfulBuild/]|[dtest|http://cassci.datastax.com/view/Dev/view/snazy/job/snazy-13641-evict-pstmt-3.11-dtest/lastSuccessfulBuild/] ||trunk|[branch|https://github.com/apache/cassandra/compare/trunk...snazy:13641-evict-pstmt-trunk]|[testall|http://cassci.datastax.com/view/Dev/view/snazy/job/snazy-13641-evict-pstmt-trunk-testall/lastSuccessfulBuild/]|[dtest|http://cassci.datastax.com/view/Dev/view/snazy/job/snazy-13641-evict-pstmt-trunk-dtest/lastSuccessfulBuild/] > Properly evict pstmts from prepared statements cache > > > Key: CASSANDRA-13641 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13641 > Project: Cassandra > Issue Type: Bug >Reporter: Robert Stupp >Assignee: Robert Stupp >Priority: Minor > Fix For: 3.11.x > > > Prepared statements that are evicted from the prepared statements cache are > not removed from the underlying table {{system.prepared_statements}}. This > can lead to issues during startup. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Created] (CASSANDRA-13641) Properly evict pstmts from prepared statements cache
Robert Stupp created CASSANDRA-13641: Summary: Properly evict pstmts from prepared statements cache Key: CASSANDRA-13641 URL: https://issues.apache.org/jira/browse/CASSANDRA-13641 Project: Cassandra Issue Type: Bug Reporter: Robert Stupp Assignee: Robert Stupp Priority: Minor Fix For: 3.11.x Prepared statements that are evicted from the prepared statements cache are not removed from the underlying table {{system.prepared_statements}}. This can lead to issues during startup.
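A minimal sketch of the intended behaviour (a Python toy model; `PreparedStatementCache` and the dict standing in for {{system.prepared_statements}} are illustrative, not Cassandra's classes): evicting a statement from the in-memory cache must also delete its persisted row, so startup never replays statements the cache has already dropped.

```python
from collections import OrderedDict

class PreparedStatementCache:
    """LRU cache whose evictions also delete the persisted copy."""

    def __init__(self, capacity, persisted):
        self.capacity = capacity
        self.cache = OrderedDict()
        self.persisted = persisted   # stands in for system.prepared_statements

    def put(self, stmt_id, stmt):
        self.cache[stmt_id] = stmt
        self.cache.move_to_end(stmt_id)
        self.persisted[stmt_id] = stmt
        while len(self.cache) > self.capacity:
            evicted_id, _ = self.cache.popitem(last=False)
            # The fix described above: eviction must also remove the
            # on-disk row, or stale statements get replayed at startup.
            del self.persisted[evicted_id]

table = {}
cache = PreparedStatementCache(2, table)
cache.put("a", "SELECT 1")
cache.put("b", "SELECT 2")
cache.put("c", "SELECT 3")   # evicts "a" from both cache and table
```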
[jira] [Created] (CASSANDRA-13640) CQLSH error when using 'login' to switch users
Andrés de la Peña created CASSANDRA-13640: - Summary: CQLSH error when using 'login' to switch users Key: CASSANDRA-13640 URL: https://issues.apache.org/jira/browse/CASSANDRA-13640 Project: Cassandra Issue Type: Bug Components: CQL Reporter: Andrés de la Peña Priority: Minor Fix For: 3.0.x Using {{PasswordAuthenticator}} and {{CassandraAuthorizer}}: {code} bin/cqlsh -u cassandra -p cassandra Connected to Test Cluster at 127.0.0.1:9042. [cqlsh 5.0.1 | Cassandra 3.0.14-SNAPSHOT | CQL spec 3.4.0 | Native protocol v4] Use HELP for help. cassandra@cqlsh> create role super with superuser = true and password = 'p' and login = true; cassandra@cqlsh> login super; Password: super@cqlsh> list roles; 'Row' object has no attribute 'values' {code} When we initialize the Shell, we configure certain settings on the session object such as {code} self.session.default_timeout = request_timeout self.session.row_factory = ordered_dict_factory self.session.default_consistency_level = cassandra.ConsistencyLevel.ONE {code} However, once we perform a LOGIN cmd, which calls do_login(..), we create a new cluster/session object but actually never set those settings on the new session. It isn't failing on 3.x. As a workaround, it is possible to logout and log back in and things work correctly. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
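The missing step can be sketched like this (a Python toy model; `Session` and `apply_session_settings` are hypothetical stand-ins for cqlsh's real objects, not its actual API):

```python
class Session:
    """Stand-in for the driver session cqlsh configures at start-up."""
    def __init__(self):
        self.default_timeout = None
        self.row_factory = None
        self.default_consistency_level = None

def apply_session_settings(session, request_timeout,
                           row_factory="ordered_dict_factory",
                           consistency="ONE"):
    # Without these, row_factory silently falls back to namedtuple rows,
    # which is what makes `list roles` fail with
    # "'Row' object has no attribute 'values'".
    session.default_timeout = request_timeout
    session.row_factory = row_factory
    session.default_consistency_level = consistency

original = Session()
apply_session_settings(original, 10.0)

# do_login(..) builds a fresh cluster/session; carrying the settings over
# to it is the step the draft patch adds.
relogin = Session()
apply_session_settings(relogin, original.default_timeout)
```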
[jira] [Commented] (CASSANDRA-13565) Materialized view usage of commit logs requires large mutation but commitlog_segment_size_in_mb=2048 causes exception
[ https://issues.apache.org/jira/browse/CASSANDRA-13565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16064975#comment-16064975 ] Tania S Engel commented on CASSANDRA-13565: --- Fabulous. Thanks for the explanation, I understand enough to fix it. We actually switched away from using MVs from these particular tables due to their heavy inserts, and the fact we kept running into memory issues when joining. How about my other question ...setting commitlog_segment_size_in_mb=2048 we get : ERROR [COMMIT-LOG-ALLOCATOR] 2017-05-31 17:01:48,005 JVMStabilityInspector.java:82 - Exiting due to error while processing commit log during initialization. org.apache.cassandra.io.FSWriteError: java.io.IOException: An attempt was made to move the file pointer before the beginning of the file > Materialized view usage of commit logs requires large mutation but > commitlog_segment_size_in_mb=2048 causes exception > - > > Key: CASSANDRA-13565 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13565 > Project: Cassandra > Issue Type: Bug > Components: Configuration, Materialized Views, Streaming and > Messaging > Environment: Cassandra 3.9.0, Windows >Reporter: Tania S Engel > Attachments: CQLforTable.png > > > We will be upgrading to 3.10 for CASSANDRA-11670. 
However, there is another > scenario (not applyunsafe during JOIN) which leads to : > java.lang.IllegalArgumentException: Mutation of 525.847MiB is too large > for the maximum size of 512.000MiB > at > org.apache.cassandra.db.commitlog.CommitLog.add(CommitLog.java:262) > ~[apache-cassandra-3.9.0.jar:3.9.0] > at > org.apache.cassandra.db.Keyspace.apply(Keyspace.java:493) > ~[apache-cassandra-3.9.0.jar:3.9.0] > at > org.apache.cassandra.db.Keyspace.apply(Keyspace.java:396) > ~[apache-cassandra-3.9.0.jar:3.9.0] > at > org.apache.cassandra.db.Mutation.applyFuture(Mutation.java:215) > ~[apache-cassandra-3.9.0.jar:3.9.0] > at > org.apache.cassandra.db.Mutation.apply(Mutation.java:227) > ~[apache-cassandra-3.9.0.jar:3.9.0] > at > org.apache.cassandra.batchlog.BatchlogManager.store(BatchlogManager.java:147) > ~[apache-cassandra-3.9.0.jar:3.9.0] > at > org.apache.cassandra.service.StorageProxy.mutateMV(StorageProxy.java:797) > ~[apache-cassandra-3.9.0.jar:3.9.0] > at > org.apache.cassandra.db.view.ViewBuilder.buildKey(ViewBuilder.java:96) > ~[apache-cassandra-3.9.0.jar:3.9.0] > at > org.apache.cassandra.db.view.ViewBuilder.run(ViewBuilder.java:165) > ~[apache-cassandra-3.9.0.jar:3.9.0] > at > org.apache.cassandra.db.compaction.CompactionManager$14.run(CompactionManager.java:1591) > [apache-cassandra-3.9.0.jar:3.9.0] > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > [na:1.8.0_66] > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > [na:1.8.0_66] > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > [na:1.8.0_66] > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > [na:1.8.0_66] > at java.lang.Thread.run(Thread.java:745) [na:1.8.0_66] > Due to the relationship of max_mutation_size_in_kb and > commitlog_segment_size_in_mb, we increased commitlog_segment_size_in_mb and > left Cassandra to calculate max_mutation_size_in_kb as half the size > 
commitlog_segment_size_in_mb * 1024. > However, we have found that if we set commitlog_segment_size_in_mb=2048 we > get an exception upon starting Cassandra, when it is creating a new commit > log. > ERROR [COMMIT-LOG-ALLOCATOR] 2017-05-31 17:01:48,005 > JVMStabilityInspector.java:82 - Exiting due to error while processing commit > log during initialization. > org.apache.cassandra.io.FSWriteError: java.io.IOException: An attempt was > made to move the file pointer before the beginning of the file > Perhaps the index you are using is not big enough and it goes negative. > Is the relationship between max_mutation_size_in_kb and > commitlog_segment_size_in_mb important to preserve? In our limited stress > test we are finding mutation size already over 512mb and we expect more data > in our sstables and associated materialized views. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
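One plausible explanation for the negative file pointer, shown here as arithmetic only (an assumption, not a diagnosis confirmed against the Cassandra source): 2048 MiB is exactly one byte past `Integer.MAX_VALUE`, so any signed 32-bit offset computed from it wraps negative.

```python
def to_java_int(n):
    """Wrap n the way Java's signed 32-bit int arithmetic would."""
    n &= 0xFFFFFFFF
    return n - 0x100000000 if n >= 0x80000000 else n

INTEGER_MAX_VALUE = 2**31 - 1

segment_size_bytes = 2048 * 1024 * 1024   # commitlog_segment_size_in_mb=2048
assert segment_size_bytes == INTEGER_MAX_VALUE + 1

# As a Java int, the segment size wraps to a negative offset, which would
# explain "move the file pointer before the beginning of the file".
wrapped = to_java_int(segment_size_bytes)
```

Any segment size up to 2047 MB still fits in a signed int, which is consistent with the failure appearing only at 2048.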
[jira] [Commented] (CASSANDRA-13440) Sign RPM artifacts
[ https://issues.apache.org/jira/browse/CASSANDRA-13440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16064909#comment-16064909 ] Michael Shuler commented on CASSANDRA-13440: Basically, yes, it's the same method. It differs in that I'm using a Debian machine and rpmsign is ignorant of how to deal with multiple gpg keys for the same named ID (KeyID works), but essentially, yes, they are the same steps, so adding those to the readme is fine and can be adapted. Docker has barfed on me in a few interesting ways, so I have not pushed the changes I made locally to {{prepare_release.sh}}, since I'm completely clearing all docker instances to start fresh and I hardcoded some paths. I don't want someone to rm all their stuff, and I would also like to add some conditionals to fail early, or not sign and attempt to upload things, if there is no proper gpg key defined. I'll keep cleaning those changes up and get them pushed once I have some free time. > Sign RPM artifacts > -- > > Key: CASSANDRA-13440 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13440 > Project: Cassandra > Issue Type: Sub-task > Components: Packaging >Reporter: Stefan Podkowinski >Assignee: Michael Shuler > Fix For: 2.1.18, 2.2.10, 3.0.14, 3.11.0 > > > RPMs should be gpg signed just as the deb packages are. Also add documentation on > how to verify them to the download page.
[jira] [Commented] (CASSANDRA-13535) Error decoding JSON for timestamp smaller than Integer.MAX_VALUE
[ https://issues.apache.org/jira/browse/CASSANDRA-13535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16064850#comment-16064850 ] Sven Diedrichsen commented on CASSANDRA-13535: -- This has also been reproduced in version 3.11. > Error decoding JSON for timestamp smaller than Integer.MAX_VALUE > > > Key: CASSANDRA-13535 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13535 > Project: Cassandra > Issue Type: Bug > Components: Core >Reporter: Jeremy Nguyen Xuan > > When trying to insert JSON with a field of type timestamp, the field is > decoded as an Integer instead of as a Long. > {code} > CREATE TABLE foo.bar ( > myfield timestamp, > PRIMARY KEY (myfield) > ); > cqlsh:foo> INSERT INTO bar JSON '{"myfield":0}'; > InvalidRequest: Error from server: code=2200 [Invalid query] message="Error > decoding JSON value for myfield: Expected a long or a datestring > representation of a timestamp value, but got a Integer: 0" > cqlsh:foo> INSERT INTO bar JSON '{"myfield":2147483647}'; > InvalidRequest: Error from server: code=2200 [Invalid query] message="Error > decoding JSON value for myfield: Expected a long or a datestring > representation of a timestamp value, but got a Integer: 2147483647" > cqlsh:foo> INSERT INTO bar JSON '{"myfield":2147483648}'; > cqlsh:foo> > {code}
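The shape of the bug can be modelled in a few lines (illustrative Python, not Cassandra's JSON decoder; the premise that small integral JSON numbers deserialize as `Integer` and larger ones as `Long` is an assumption about the underlying parser that would explain why only values above `Integer.MAX_VALUE` succeed):

```python
INT_MIN, INT_MAX = -2**31, 2**31 - 1

def json_number_type(n):
    """Mimic a parser that boxes 32-bit-sized integrals as Integer."""
    return "Integer" if INT_MIN <= n <= INT_MAX else "Long"

def decode_timestamp_strict(n):
    # The buggy shape: only Long (or a date string) is accepted.
    if json_number_type(n) != "Long":
        raise ValueError("Expected a long ... but got a %s: %d"
                         % (json_number_type(n), n))
    return n

def decode_timestamp_lenient(n):
    # The obvious fix: accept any integral JSON number as a timestamp.
    if json_number_type(n) not in ("Integer", "Long"):
        raise ValueError("not an integral timestamp")
    return int(n)
```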
[jira] [Updated] (CASSANDRA-13535) Error decoding JSON for timestamp smaller than Integer.MAX_VALUE
[ https://issues.apache.org/jira/browse/CASSANDRA-13535?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anonymous updated CASSANDRA-13535: -- Reproduced In: 3.10, 3.8, 3.7 (was: 3.7, 3.8, 3.10) Status: Open (was: Awaiting Feedback) > Error decoding JSON for timestamp smaller than Integer.MAX_VALUE > > > Key: CASSANDRA-13535 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13535 > Project: Cassandra > Issue Type: Bug > Components: Core >Reporter: Jeremy Nguyen Xuan > > When trying to insert JSON with a field of type timestamp, the field is > decoded as an Integer instead of as a Long. > {code} > CREATE TABLE foo.bar ( > myfield timestamp, > PRIMARY KEY (myfield) > ); > cqlsh:foo> INSERT INTO bar JSON '{"myfield":0}'; > InvalidRequest: Error from server: code=2200 [Invalid query] message="Error > decoding JSON value for myfield: Expected a long or a datestring > representation of a timestamp value, but got a Integer: 0" > cqlsh:foo> INSERT INTO bar JSON '{"myfield":2147483647}'; > InvalidRequest: Error from server: code=2200 [Invalid query] message="Error > decoding JSON value for myfield: Expected a long or a datestring > representation of a timestamp value, but got a Integer: 2147483647" > cqlsh:foo> INSERT INTO bar JSON '{"myfield":2147483648}'; > cqlsh:foo> > {code}
[jira] [Updated] (CASSANDRA-13535) Error decoding JSON for timestamp smaller than Integer.MAX_VALUE
[ https://issues.apache.org/jira/browse/CASSANDRA-13535?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anonymous updated CASSANDRA-13535: -- Reproduced In: 3.10, 3.8, 3.7 (was: 3.7, 3.8, 3.10) Status: Awaiting Feedback (was: Open) > Error decoding JSON for timestamp smaller than Integer.MAX_VALUE > > > Key: CASSANDRA-13535 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13535 > Project: Cassandra > Issue Type: Bug > Components: Core >Reporter: Jeremy Nguyen Xuan > > When trying to insert JSON with a field of type timestamp, the field is > decoded as an Integer instead of as a Long. > {code} > CREATE TABLE foo.bar ( > myfield timestamp, > PRIMARY KEY (myfield) > ); > cqlsh:foo> INSERT INTO bar JSON '{"myfield":0}'; > InvalidRequest: Error from server: code=2200 [Invalid query] message="Error > decoding JSON value for myfield: Expected a long or a datestring > representation of a timestamp value, but got a Integer: 0" > cqlsh:foo> INSERT INTO bar JSON '{"myfield":2147483647}'; > InvalidRequest: Error from server: code=2200 [Invalid query] message="Error > decoding JSON value for myfield: Expected a long or a datestring > representation of a timestamp value, but got a Integer: 2147483647" > cqlsh:foo> INSERT INTO bar JSON '{"myfield":2147483648}'; > cqlsh:foo> > {code}
[jira] [Commented] (CASSANDRA-13440) Sign RPM artifacts
[ https://issues.apache.org/jira/browse/CASSANDRA-13440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16064804#comment-16064804 ] Stefan Podkowinski commented on CASSANDRA-13440: Did you end up signing each rpm manually as described above? Does it make sense to just copy my comment on this to the cassandra-builds/README in that case, so this won't get lost? > Sign RPM artifacts > -- > > Key: CASSANDRA-13440 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13440 > Project: Cassandra > Issue Type: Sub-task > Components: Packaging >Reporter: Stefan Podkowinski >Assignee: Michael Shuler > Fix For: 2.1.18, 2.2.10, 3.0.14, 3.11.0 > > > RPMs should be gpg signed just as the deb packages are. Also add documentation on > how to verify them to the download page.
[jira] [Updated] (CASSANDRA-13592) Null Pointer exception at SELECT JSON statement
[ https://issues.apache.org/jira/browse/CASSANDRA-13592?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Benjamin Lerer updated CASSANDRA-13592: --- Reviewer: Benjamin Lerer > Null Pointer exception at SELECT JSON statement > --- > > Key: CASSANDRA-13592 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13592 > Project: Cassandra > Issue Type: Bug > Components: CQL > Environment: Debian Linux >Reporter: Wyss Philipp >Assignee: ZhaoYang > Labels: beginner > Attachments: system.log > > > A Null pointer exception appears when running the command > {code} > SELECT JSON * FROM examples.basic; > ---MORE--- > message="java.lang.NullPointerException"> > Examples.basic has the following description (DESC examples.basic;): > CREATE TABLE examples.basic ( > key frozen> PRIMARY KEY, > wert text > ) WITH bloom_filter_fp_chance = 0.01 > AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'} > AND comment = '' > AND compaction = {'class': > 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', > 'max_threshold': '32', 'min_threshold': '4'} > AND compression = {'chunk_length_in_kb': '64', 'class': > 'org.apache.cassandra.io.compress.LZ4Compressor'} > AND crc_check_chance = 1.0 > AND dclocal_read_repair_chance = 0.1 > AND default_time_to_live = 0 > AND gc_grace_seconds = 864000 > AND max_index_interval = 2048 > AND memtable_flush_period_in_ms = 0 > AND min_index_interval = 128 > AND read_repair_chance = 0.0 > AND speculative_retry = '99PERCENTILE'; > {code} > The error appears after the ---MORE--- line. > The field "wert" contains a JSON-formatted string.
[jira] [Commented] (CASSANDRA-13142) Upgradesstables cancels compactions unnecessarily
[ https://issues.apache.org/jira/browse/CASSANDRA-13142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16064774#comment-16064774 ] Kurt Greaves commented on CASSANDRA-13142: -- Well, I've seriously broken SSTableRewriterTest. Haven't found out why yet though. > Upgradesstables cancels compactions unnecessarily > - > > Key: CASSANDRA-13142 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13142 > Project: Cassandra > Issue Type: Bug >Reporter: Kurt Greaves >Assignee: Kurt Greaves > Attachments: 13142-v1.patch > > > Since at least 1.2, upgradesstables will cancel any compactions bar > validations when run. This was originally determined to be a non-issue in > CASSANDRA-3430; however, it can be quite annoying (especially with STCS) as a > compaction will output the new version anyway. Furthermore, as per > CASSANDRA-12243 it also stops things like view builds and I assume secondary > index builds as well, which is not ideal. > We should avoid cancelling compactions unnecessarily.
[jira] [Updated] (CASSANDRA-13639) SSTableLoader always uses hostname to stream files from
[ https://issues.apache.org/jira/browse/CASSANDRA-13639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jan Karlsson updated CASSANDRA-13639: - Fix Version/s: 4.x > SSTableLoader always uses hostname to stream files from > --- > > Key: CASSANDRA-13639 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13639 > Project: Cassandra > Issue Type: Bug > Components: Tools >Reporter: Jan Karlsson >Assignee: Jan Karlsson > Fix For: 4.x > > > I stumbled upon an issue where SSTableLoader was ignoring our routing by > using the wrong interface to send the SSTables to the other nodes. Looking at > the code, it seems that we are using FBUtilities.getLocalAddress() to fetch > the hostname, even if the yaml file specifies a different host. I am not > sure why we call this function instead of using the routing by leaving it > blank; perhaps someone could enlighten me. > This behaviour comes from the fact that we use a default-constructed > DatabaseDescriptor which does not set the values for listenAddress and > listenInterface. This causes the aforementioned function to retrieve the > hostname at all times, even if it is not the interface used in the yaml file. > I propose we break out the function that handles listenAddress and > listenInterface and call it, so that listenAddress or listenInterface > gets populated in the DatabaseDescriptor.
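The proposed resolution order can be sketched as follows (hypothetical Python; `resolve_stream_address` and its parameters are illustrative, not SSTableLoader's API): prefer `listen_address` from the yaml, then the address bound to `listen_interface`, and fall back to the local hostname only when neither is configured.

```python
def resolve_stream_address(config, interface_addrs, local_hostname_addr):
    """Pick the source address for streaming.

    config            -- parsed yaml settings (dict)
    interface_addrs   -- interface name -> bound address (stand-in for an
                         actual interface lookup)
    local_hostname_addr -- what FBUtilities.getLocalAddress() would return;
                         today this wins unconditionally, which is the bug.
    """
    if config.get("listen_address"):
        return config["listen_address"]
    iface = config.get("listen_interface")
    if iface:
        return interface_addrs[iface]
    return local_hostname_addr

# With a listen_address configured, the hostname is never consulted:
addr = resolve_stream_address({"listen_address": "10.0.0.5"},
                              {"eth1": "10.0.0.5"}, "192.168.1.20")
```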
[jira] [Commented] (CASSANDRA-12484) Unknown exception caught while attempting to update MaterializedView! findkita.kitas java.lang.AssertionErro
[ https://issues.apache.org/jira/browse/CASSANDRA-12484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16064708#comment-16064708 ] ZhaoYang commented on CASSANDRA-12484: -- [~cordlesswool] could you share your table schemas and typical queries? In which version is it fixed? > Unknown exception caught while attempting to update MaterializedView! > findkita.kitas java.lang.AssertionErro > > > Key: CASSANDRA-12484 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12484 > Project: Cassandra > Issue Type: Bug > Environment: Docker Container with Cassandra version 3.7 running on > local pc >Reporter: cordlessWool >Priority: Critical > > After a restart, my cassandra node does not start anymore. It ends with the following > error message. > ERROR 18:39:37 Unknown exception caught while attempting to update > MaterializedView! findkita.kitas > java.lang.AssertionError: We shouldn't have got there is the base row had no > associated entry > Cassandra has heavy cpu usage and uses 2.1 gb of memory; there is 1gb more > available. I ran nodetool cleanup and repair, but it did not help. > I have 5 materialized views on this table, but the number of rows in the table is > under 2000, which is not much. > The cassandra node runs in a docker container. The container is accessible, but > I cannot call cqlsh and my website could not connect either
[jira] [Updated] (CASSANDRA-13547) Filtered materialized views missing data
[ https://issues.apache.org/jira/browse/CASSANDRA-13547?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergio Bossa updated CASSANDRA-13547: - Reviewer: ZhaoYang > Filtered materialized views missing data > > > Key: CASSANDRA-13547 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13547 > Project: Cassandra > Issue Type: Bug > Components: Materialized Views > Environment: Official Cassandra 3.10 Docker image (ID 154b919bf8ce). >Reporter: Craig Nicholson >Assignee: Krishna Dattu Koneru >Priority: Blocker > Labels: materializedviews > Fix For: 3.11.x > > > When creating a materialized view against a base table the materialized view > does not always reflect the correct data. > Using the following test schema: > {code:title=Schema|language=sql} > DROP KEYSPACE IF EXISTS test; > CREATE KEYSPACE test > WITH REPLICATION = { >'class' : 'SimpleStrategy', >'replication_factor' : 1 > }; > CREATE TABLE test.table1 ( > id int, > name text, > enabled boolean, > foo text, > PRIMARY KEY (id, name)); > CREATE MATERIALIZED VIEW test.table1_mv1 AS SELECT id, name, foo > FROM test.table1 > WHERE id IS NOT NULL > AND name IS NOT NULL > AND enabled = TRUE > PRIMARY KEY ((name), id); > CREATE MATERIALIZED VIEW test.table1_mv2 AS SELECT id, name, foo, enabled > FROM test.table1 > WHERE id IS NOT NULL > AND name IS NOT NULL > AND enabled = TRUE > PRIMARY KEY ((name), id); > {code} > When I insert a row into the base table the materialized views are updated > appropriately. 
(+) > {code:title=Insert row|language=sql} > cqlsh> INSERT INTO test.table1 (id, name, enabled, foo) VALUES (1, 'One', > TRUE, 'Bar'); > cqlsh> SELECT * FROM test.table1; > id | name | enabled | foo > ----+------+---------+----- > 1 | One | True | Bar > (1 rows) > cqlsh> SELECT * FROM test.table1_mv1; > name | id | foo > ------+----+----- > One | 1 | Bar > (1 rows) > cqlsh> SELECT * FROM test.table1_mv2; > name | id | enabled | foo > ------+----+---------+----- > One | 1 | True | Bar > (1 rows) > {code} > Updating the record in the base table and setting enabled to FALSE filters > the record from both materialized views. (+) > {code:title=Disable the row|language=sql} > cqlsh> UPDATE test.table1 SET enabled = FALSE WHERE id = 1 AND name = 'One'; > cqlsh> SELECT * FROM test.table1; > id | name | enabled | foo > ----+------+---------+----- > 1 | One | False | Bar > (1 rows) > cqlsh> SELECT * FROM test.table1_mv1; > name | id | foo > ------+----+----- > (0 rows) > cqlsh> SELECT * FROM test.table1_mv2; > name | id | enabled | foo > ------+----+---------+----- > (0 rows) > {code} > However, a further update to the base table setting enabled back to TRUE > should include the record in both materialized views, yet only one view > (table1_mv2) gets updated. (-) > It appears that only the view (table1_mv2) that returns the filtered column > (enabled) is updated. (-) > Additionally, columns that are not part of the partition or clustering key > are not updated. You can see that the foo column has a null value in > table1_mv2.
> (-) > {code:title=Enable the row|language=sql} > cqlsh> UPDATE test.table1 SET enabled = TRUE WHERE id = 1 AND name = 'One'; > cqlsh> SELECT * FROM test.table1; > id | name | enabled | foo > ----+------+---------+----- > 1 | One | True | Bar > (1 rows) > cqlsh> SELECT * FROM test.table1_mv1; > name | id | foo > ------+----+----- > (0 rows) > cqlsh> SELECT * FROM test.table1_mv2; > name | id | enabled | foo > ------+----+---------+------ > One | 1 | True | null > (1 rows) > {code}
[jira] [Commented] (CASSANDRA-13547) Filtered materialized views missing data
[ https://issues.apache.org/jira/browse/CASSANDRA-13547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16064692#comment-16064692 ] ZhaoYang commented on CASSANDRA-13547: -- I will look into it. I need some time to recall how _MV Tombstones_ work. > Filtered materialized views missing data > > > Key: CASSANDRA-13547 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13547 > Project: Cassandra > Issue Type: Bug > Components: Materialized Views > Environment: Official Cassandra 3.10 Docker image (ID 154b919bf8ce). >Reporter: Craig Nicholson >Assignee: Krishna Dattu Koneru >Priority: Blocker > Labels: materializedviews > Fix For: 3.11.x > > > When creating a materialized view against a base table, the materialized view > does not always reflect the correct data. > Using the following test schema: > {code:title=Schema|language=sql} > DROP KEYSPACE IF EXISTS test; > CREATE KEYSPACE test > WITH REPLICATION = { >'class' : 'SimpleStrategy', >'replication_factor' : 1 > }; > CREATE TABLE test.table1 ( > id int, > name text, > enabled boolean, > foo text, > PRIMARY KEY (id, name)); > CREATE MATERIALIZED VIEW test.table1_mv1 AS SELECT id, name, foo > FROM test.table1 > WHERE id IS NOT NULL > AND name IS NOT NULL > AND enabled = TRUE > PRIMARY KEY ((name), id); > CREATE MATERIALIZED VIEW test.table1_mv2 AS SELECT id, name, foo, enabled > FROM test.table1 > WHERE id IS NOT NULL > AND name IS NOT NULL > AND enabled = TRUE > PRIMARY KEY ((name), id); > {code} > When I insert a row into the base table the materialized views are updated > appropriately.
(+) > {code:title=Insert row|language=sql} > cqlsh> INSERT INTO test.table1 (id, name, enabled, foo) VALUES (1, 'One', > TRUE, 'Bar'); > cqlsh> SELECT * FROM test.table1; > id | name | enabled | foo > ----+------+---------+----- > 1 | One | True | Bar > (1 rows) > cqlsh> SELECT * FROM test.table1_mv1; > name | id | foo > ------+----+----- > One | 1 | Bar > (1 rows) > cqlsh> SELECT * FROM test.table1_mv2; > name | id | enabled | foo > ------+----+---------+----- > One | 1 | True | Bar > (1 rows) > {code} > Updating the record in the base table and setting enabled to FALSE filters > the record from both materialized views. (+) > {code:title=Disable the row|language=sql} > cqlsh> UPDATE test.table1 SET enabled = FALSE WHERE id = 1 AND name = 'One'; > cqlsh> SELECT * FROM test.table1; > id | name | enabled | foo > ----+------+---------+----- > 1 | One | False | Bar > (1 rows) > cqlsh> SELECT * FROM test.table1_mv1; > name | id | foo > ------+----+----- > (0 rows) > cqlsh> SELECT * FROM test.table1_mv2; > name | id | enabled | foo > ------+----+---------+----- > (0 rows) > {code} > However, a further update to the base table setting enabled back to TRUE > should include the record in both materialized views, yet only one view > (table1_mv2) gets updated. (-) > It appears that only the view (table1_mv2) that returns the filtered column > (enabled) is updated. (-) > Additionally, columns that are not part of the partition or clustering key > are not updated. You can see that the foo column has a null value in > table1_mv2.
> (-) > {code:title=Enable the row|language=sql} > cqlsh> UPDATE test.table1 SET enabled = TRUE WHERE id = 1 AND name = 'One'; > cqlsh> SELECT * FROM test.table1; > id | name | enabled | foo > ----+------+---------+----- > 1 | One | True | Bar > (1 rows) > cqlsh> SELECT * FROM test.table1_mv1; > name | id | foo > ------+----+----- > (0 rows) > cqlsh> SELECT * FROM test.table1_mv2; > name | id | enabled | foo > ------+----+---------+------ > One | 1 | True | null > (1 rows) > {code}
[jira] [Updated] (CASSANDRA-13127) Materialized Views: View row expires too soon
[ https://issues.apache.org/jira/browse/CASSANDRA-13127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ZhaoYang updated CASSANDRA-13127: - Status: Patch Available (was: Awaiting Feedback) > Materialized Views: View row expires too soon > - > > Key: CASSANDRA-13127 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13127 > Project: Cassandra > Issue Type: Bug > Components: Local Write-Read Paths, Materialized Views >Reporter: Duarte Nunes >Assignee: ZhaoYang > > Consider the following commands, run against trunk: > {code} > echo "DROP MATERIALIZED VIEW ks.mv; DROP TABLE ks.base;" | bin/cqlsh > echo "CREATE TABLE ks.base (p int, c int, v int, PRIMARY KEY (p, c));" | > bin/cqlsh > echo "CREATE MATERIALIZED VIEW ks.mv AS SELECT p, c FROM base WHERE p IS NOT > NULL AND c IS NOT NULL PRIMARY KEY (c, p);" | bin/cqlsh > echo "INSERT INTO ks.base (p, c) VALUES (0, 0) USING TTL 10;" | bin/cqlsh > # wait for row liveness to get closer to expiration > sleep 6; > echo "UPDATE ks.base USING TTL 8 SET v = 0 WHERE p = 0 and c = 0;" | bin/cqlsh > echo "SELECT p, c, ttl(v) FROM ks.base; SELECT * FROM ks.mv;" | bin/cqlsh > p | c | ttl(v) > ---+---+-------- > 0 | 0 | 7 > (1 rows) > c | p > ---+--- > 0 | 0 > (1 rows) > # wait for row liveness to expire > sleep 4; > echo "SELECT p, c, ttl(v) FROM ks.base; SELECT * FROM ks.mv;" | bin/cqlsh > p | c | ttl(v) > ---+---+-------- > 0 | 0 | 3 > (1 rows) > c | p > ---+--- > (0 rows) > {code} > Notice how the view row is removed even though the base row is still live. I > would say this is because in ViewUpdateGenerator#computeLivenessInfoForEntry > the TTLs are compared instead of the expiration times, but I'm not sure I'm > getting that far ahead in the code when updating a column that's not in the > view.
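The reporter's suspicion can be illustrated in isolation. The sketch below is hypothetical and uses no Cassandra classes; it only shows why picking liveness info by the larger raw TTL differs from picking it by the later absolute expiration time, using the timeline from the repro above:

```java
public class TtlVsExpirationSketch {
    public static void main(String[] args) {
        // Timeline from the repro: INSERT ... USING TTL 10 at t=0,
        // then UPDATE ... USING TTL 8 at t=6.
        int insertTime = 0, insertTtl = 10;
        int updateTime = 6, updateTtl = 8;

        // Comparing raw TTLs picks the original insert, since 10 > 8 ...
        int expirationByTtl = (insertTtl >= updateTtl) ? insertTime + insertTtl
                                                       : updateTime + updateTtl;
        // ... but comparing absolute expiration times picks the update.
        int expirationByTime = Math.max(insertTime + insertTtl,
                                        updateTime + updateTtl);

        System.out.println(expirationByTtl);  // 10: view row expires at t=10
        System.out.println(expirationByTime); // 14: base row lives until t=14
    }
}
```

The four-second gap between t=10 and t=14 is exactly the window in which the repro observes the view row gone while the base row is still live.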
[jira] [Comment Edited] (CASSANDRA-13592) Null Pointer exception at SELECT JSON statement
[ https://issues.apache.org/jira/browse/CASSANDRA-13592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16064405#comment-16064405 ] ZhaoYang edited comment on CASSANDRA-13592 at 6/27/17 8:43 AM: --- In the JSON path, the ByteBuffer is consumed, leaving position == capacity, so no partition key bytes are written into the paging state. Using buffer.duplicate() would fix this issue. {code}
public static List<ByteBuffer> rowToJson(List<ByteBuffer> row, ProtocolVersion protocolVersion, ResultSet.ResultMetadata metadata)
{
    StringBuilder sb = new StringBuilder("{");
    for (int i = 0; i < metadata.names.size(); i++)
    {
        if (i > 0)
            sb.append(", ");
        ColumnSpecification spec = metadata.names.get(i);
        String columnName = spec.name.toString();
        if (!columnName.equals(columnName.toLowerCase(Locale.US)))
            columnName = "\"" + columnName + "\"";
        ByteBuffer buffer = row.get(i);
        sb.append('"');
        sb.append(Json.quoteAsJsonString(columnName));
        sb.append("\": ");
        if (buffer == null)
            sb.append("null");
        else
            // use duplicate() to avoid the buffer being consumed
            sb.append(spec.type.toJSONString(buffer.duplicate(), protocolVersion));
    }
    sb.append("}");
    return Collections.singletonList(UTF8Type.instance.getSerializer().serialize(sb.toString()));
}
{code} was (Author: jasonstack): it seems like driver issue.. it swallowed the keys paging state.. 
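The duplicate() fix relies on standard java.nio behaviour: reading through a duplicate advances the duplicate's position while the original buffer keeps its own. A minimal, self-contained sketch (unrelated to Cassandra's classes) of that behaviour:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class DuplicateSketch {
    public static void main(String[] args) {
        ByteBuffer key = ByteBuffer.wrap("pk-value".getBytes(StandardCharsets.UTF_8));

        // Reading through a duplicate consumes the duplicate, not the original ...
        ByteBuffer copy = key.duplicate();
        copy.get(new byte[copy.remaining()]);
        System.out.println(key.remaining());  // 8: original still holds the key bytes

        // ... whereas reading the buffer directly leaves position == limit,
        // which is how the partition key bytes were lost from the paging state.
        key.get(new byte[key.remaining()]);
        System.out.println(key.remaining());  // 0: nothing left to write
    }
}
```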
> Null Pointer exception at SELECT JSON statement > --- > > Key: CASSANDRA-13592 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13592 > Project: Cassandra > Issue Type: Bug > Components: CQL > Environment: Debian Linux >Reporter: Wyss Philipp > Labels: beginner > Attachments: system.log > > > A Null pointer exception appears when the command > {code} > SELECT JSON * FROM examples.basic; > ---MORE--- > message="java.lang.NullPointerException"> > Examples.basic has the following description (DESC examples.basic;): > CREATE TABLE examples.basic ( > key frozen> PRIMARY KEY, > wert text > ) WITH bloom_filter_fp_chance = 0.01 > AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'} > AND comment = '' > AND compaction = {'class': > 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', > 'max_threshold': '32', 'min_threshold': '4'} > AND compression = {'chunk_length_in_kb': '64', 'class': > 'org.apache.cassandra.io.compress.LZ4Compressor'} > AND crc_check_chance = 1.0 > AND dclocal_read_repair_chance = 0.1 > AND default_time_to_live = 0 > AND gc_grace_seconds = 864000 > AND max_index_interval = 2048 > AND memtable_flush_period_in_ms = 0 > AND min_index_interval = 128 > AND read_repair_chance = 0.0 > AND speculative_retry = '99PERCENTILE'; > {code} > The error appears after the ---MORE--- line. > The field "wert" has a JSON formatted string.
[jira] [Assigned] (CASSANDRA-13592) Null Pointer exception at SELECT JSON statement
[ https://issues.apache.org/jira/browse/CASSANDRA-13592?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ZhaoYang reassigned CASSANDRA-13592: Assignee: ZhaoYang > Null Pointer exception at SELECT JSON statement > --- > > Key: CASSANDRA-13592 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13592 > Project: Cassandra > Issue Type: Bug > Components: CQL > Environment: Debian Linux >Reporter: Wyss Philipp >Assignee: ZhaoYang > Labels: beginner > Attachments: system.log > > > A Null pointer exception appears when the command > {code} > SELECT JSON * FROM examples.basic; > ---MORE--- > message="java.lang.NullPointerException"> > Examples.basic has the following description (DESC examples.basic;): > CREATE TABLE examples.basic ( > key frozen> PRIMARY KEY, > wert text > ) WITH bloom_filter_fp_chance = 0.01 > AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'} > AND comment = '' > AND compaction = {'class': > 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', > 'max_threshold': '32', 'min_threshold': '4'} > AND compression = {'chunk_length_in_kb': '64', 'class': > 'org.apache.cassandra.io.compress.LZ4Compressor'} > AND crc_check_chance = 1.0 > AND dclocal_read_repair_chance = 0.1 > AND default_time_to_live = 0 > AND gc_grace_seconds = 864000 > AND max_index_interval = 2048 > AND memtable_flush_period_in_ms = 0 > AND min_index_interval = 128 > AND read_repair_chance = 0.0 > AND speculative_retry = '99PERCENTILE'; > {code} > The error appears after the ---MORE--- line. > The field "wert" has a JSON formatted string.
[jira] [Updated] (CASSANDRA-13639) SSTableLoader always uses hostname to stream files from
[ https://issues.apache.org/jira/browse/CASSANDRA-13639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jan Karlsson updated CASSANDRA-13639: - Summary: SSTableLoader always uses hostname to stream files from (was: SSTableLoader always uses hostname to stream files) > SSTableLoader always uses hostname to stream files from > --- > > Key: CASSANDRA-13639 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13639 > Project: Cassandra > Issue Type: Bug > Components: Tools >Reporter: Jan Karlsson >Assignee: Jan Karlsson > > I stumbled upon an issue where SSTableLoader was ignoring our routing by > using the wrong interface to send the SSTables to the other nodes. Looking at > the code, it seems that we are using FBUtilities.getLocalAddress() to fetch > the hostname, even if the yaml file specifies a different host. I am not > sure why we call this function instead of using the routing by leaving it > blank; perhaps someone could enlighten me. > This behaviour comes from the fact that we use a default-created > DatabaseDescriptor which does not set the values for listenAddress and > listenInterface. This causes the aforementioned function to retrieve the > hostname at all times, even if it is not the interface used in the yaml file. > I propose we break out the function that handles listenAddress and > listenInterface and call it so that listenAddress or listenInterface gets > populated in the DatabaseDescriptor.
[jira] [Created] (CASSANDRA-13639) SSTableLoader always uses hostname to stream files
Jan Karlsson created CASSANDRA-13639: Summary: SSTableLoader always uses hostname to stream files Key: CASSANDRA-13639 URL: https://issues.apache.org/jira/browse/CASSANDRA-13639 Project: Cassandra Issue Type: Bug Components: Tools Reporter: Jan Karlsson Assignee: Jan Karlsson I stumbled upon an issue where SSTableLoader was ignoring our routing by using the wrong interface to send the SSTables to the other nodes. Looking at the code, it seems that we are using FBUtilities.getLocalAddress() to fetch the hostname, even if the yaml file specifies a different host. I am not sure why we call this function instead of using the routing by leaving it blank; perhaps someone could enlighten me. This behaviour comes from the fact that we use a default-created DatabaseDescriptor which does not set the values for listenAddress and listenInterface. This causes the aforementioned function to retrieve the hostname at all times, even if it is not the interface used in the yaml file. I propose we break out the function that handles listenAddress and listenInterface and call it so that listenAddress or listenInterface gets populated in the DatabaseDescriptor.
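The fallback described in the report can be sketched as follows. This is a hypothetical mirror of the behaviour, not Cassandra code: with no listenAddress populated in the descriptor, the address resolution falls back to a local hostname lookup rather than honouring the yaml-configured interface.

```java
import java.net.InetAddress;

public class StreamSourceSketch {
    // Hypothetical mirror of the reported fallback: if the configured
    // listen address is unset (null), resolve the local hostname instead
    // of using the interface from the yaml file.
    static InetAddress resolveStreamSource(InetAddress configured) throws Exception {
        if (configured != null)
            return configured;               // yaml-configured interface wins
        return InetAddress.getLocalHost();   // fallback: hostname lookup
    }

    public static void main(String[] args) throws Exception {
        // With the configured address populated, streaming binds to it ...
        System.out.println(resolveStreamSource(InetAddress.getByName("127.0.0.2"))
                               .getHostAddress()); // 127.0.0.2
        // ... with it left unset (the reported case), the hostname is used,
        // regardless of what the yaml file says.
        System.out.println(resolveStreamSource(null).getHostAddress());
    }
}
```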
[jira] [Commented] (CASSANDRA-13592) Null Pointer exception at SELECT JSON statement
[ https://issues.apache.org/jira/browse/CASSANDRA-13592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16064405#comment-16064405 ] ZhaoYang commented on CASSANDRA-13592: -- It seems like a driver issue: it swallowed the keys in the paging state. > Null Pointer exception at SELECT JSON statement > --- > > Key: CASSANDRA-13592 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13592 > Project: Cassandra > Issue Type: Bug > Components: CQL > Environment: Debian Linux >Reporter: Wyss Philipp > Labels: beginner > Attachments: system.log > > > A Null pointer exception appears when the command > {code} > SELECT JSON * FROM examples.basic; > ---MORE--- > message="java.lang.NullPointerException"> > Examples.basic has the following description (DESC examples.basic;): > CREATE TABLE examples.basic ( > key frozen> PRIMARY KEY, > wert text > ) WITH bloom_filter_fp_chance = 0.01 > AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'} > AND comment = '' > AND compaction = {'class': > 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', > 'max_threshold': '32', 'min_threshold': '4'} > AND compression = {'chunk_length_in_kb': '64', 'class': > 'org.apache.cassandra.io.compress.LZ4Compressor'} > AND crc_check_chance = 1.0 > AND dclocal_read_repair_chance = 0.1 > AND default_time_to_live = 0 > AND gc_grace_seconds = 864000 > AND max_index_interval = 2048 > AND memtable_flush_period_in_ms = 0 > AND min_index_interval = 128 > AND read_repair_chance = 0.0 > AND speculative_retry = '99PERCENTILE'; > {code} > The error appears after the ---MORE--- line. > The field "wert" has a JSON formatted string.
[jira] [Commented] (CASSANDRA-13638) Add the JMX metrics about the total number of hints we have delivered per host
[ https://issues.apache.org/jira/browse/CASSANDRA-13638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16064347#comment-16064347 ] Petrit G commented on CASSANDRA-13638: -- Is this the ticket [here|https://issues.apache.org/jira/browse/CASSANDRA-13234?jql=text%20~%20%22Hints_delays%22] that solves it? If so, could it be back-ported to a 3.x version of Cassandra? > Add the JMX metrics about the total number of hints we have delivered per host > -- > > Key: CASSANDRA-13638 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13638 > Project: Cassandra > Issue Type: Improvement >Reporter: Petrit G >Priority: Minor > > Recently, metrics were added regarding how many hints have been successfully > delivered in total. > See [here | > http://cassandra.apache.org/doc/latest/operating/metrics.html?highlight=metrics#hintsservice-metrics], > for more specific info. > However, I think it would be beneficial to add a metric which shows how many > hints have been delivered per host. > More or less a corresponding metric for: > Hints_created- > Could be named: > Hints_succeeded- > This will allow users to actually see how many hints they have towards a > node, by calculating the difference between the two aforementioned metrics.
[jira] [Commented] (CASSANDRA-13565) Materialized view usage of commit logs requires large mutation but commitlog_segment_size_in_mb=2048 causes exception
[ https://issues.apache.org/jira/browse/CASSANDRA-13565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16064346#comment-16064346 ] ZhaoYang commented on CASSANDRA-13565: -- Could you also share the view table schema? "Wide" partition means "too many rows * bytes_per_row". Changing it to `LogHour` will reduce the # of rows in that partition. > Materialized view usage of commit logs requires large mutation but > commitlog_segment_size_in_mb=2048 causes exception > - > > Key: CASSANDRA-13565 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13565 > Project: Cassandra > Issue Type: Bug > Components: Configuration, Materialized Views, Streaming and > Messaging > Environment: Cassandra 3.9.0, Windows >Reporter: Tania S Engel > Attachments: CQLforTable.png > > > We will be upgrading to 3.10 for CASSANDRA-11670. However, there is another > scenario (not applyunsafe during JOIN) which leads to : > java.lang.IllegalArgumentException: Mutation of 525.847MiB is too large > for the maximum size of 512.000MiB > at > org.apache.cassandra.db.commitlog.CommitLog.add(CommitLog.java:262) > ~[apache-cassandra-3.9.0.jar:3.9.0] > at > org.apache.cassandra.db.Keyspace.apply(Keyspace.java:493) > ~[apache-cassandra-3.9.0.jar:3.9.0] > at > org.apache.cassandra.db.Keyspace.apply(Keyspace.java:396) > ~[apache-cassandra-3.9.0.jar:3.9.0] > at > org.apache.cassandra.db.Mutation.applyFuture(Mutation.java:215) > ~[apache-cassandra-3.9.0.jar:3.9.0] > at > org.apache.cassandra.db.Mutation.apply(Mutation.java:227) > ~[apache-cassandra-3.9.0.jar:3.9.0] > at > org.apache.cassandra.batchlog.BatchlogManager.store(BatchlogManager.java:147) > ~[apache-cassandra-3.9.0.jar:3.9.0] > at > org.apache.cassandra.service.StorageProxy.mutateMV(StorageProxy.java:797) > ~[apache-cassandra-3.9.0.jar:3.9.0] > at > org.apache.cassandra.db.view.ViewBuilder.buildKey(ViewBuilder.java:96) > ~[apache-cassandra-3.9.0.jar:3.9.0] > at > 
org.apache.cassandra.db.view.ViewBuilder.run(ViewBuilder.java:165) > ~[apache-cassandra-3.9.0.jar:3.9.0] > at > org.apache.cassandra.db.compaction.CompactionManager$14.run(CompactionManager.java:1591) > [apache-cassandra-3.9.0.jar:3.9.0] > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > [na:1.8.0_66] > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > [na:1.8.0_66] > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > [na:1.8.0_66] > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > [na:1.8.0_66] > at java.lang.Thread.run(Thread.java:745) [na:1.8.0_66] > Due to the relationship of max_mutation_size_in_kb and > commitlog_segment_size_in_mb, we increased commitlog_segment_size_in_mb and > left Cassandra to calculate max_mutation_size_in_kb as half of > commitlog_segment_size_in_mb * 1024. > However, we have found that if we set commitlog_segment_size_in_mb=2048 we > get an exception upon starting Cassandra, when it is creating a new commit > log. > ERROR [COMMIT-LOG-ALLOCATOR] 2017-05-31 17:01:48,005 > JVMStabilityInspector.java:82 - Exiting due to error while processing commit > log during initialization. > org.apache.cassandra.io.FSWriteError: java.io.IOException: An attempt was > made to move the file pointer before the beginning of the file > Perhaps the index you are using is not big enough and it goes negative. > Is the relationship between max_mutation_size_in_kb and > commitlog_segment_size_in_mb important to preserve? In our limited stress > test we are finding mutation sizes already over 512 MB, and we expect more > data in our sstables and associated materialized views.
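The reporter's guess that "the index goes negative" has a plausible arithmetic explanation, offered here as an assumption rather than a confirmed diagnosis: with commitlog_segment_size_in_mb=2048, the segment size in bytes is exactly 2^31, which no longer fits in a signed 32-bit int.

```java
public class SegmentSizeOverflowSketch {
    public static void main(String[] args) {
        int segmentSizeInMb = 2048;

        // Computed as a 32-bit int, 2048 MiB in bytes is 2^31 and wraps
        // negative, which would drive a file pointer "before the beginning
        // of the file" exactly as the FSWriteError describes:
        int sizeInBytes = segmentSizeInMb * 1024 * 1024;
        System.out.println(sizeInBytes);       // -2147483648

        // Widening to long before multiplying keeps the value intact:
        long safeSizeInBytes = (long) segmentSizeInMb * 1024 * 1024;
        System.out.println(safeSizeInBytes);   // 2147483648
    }
}
```

Under this assumption, any segment size of 2048 MB or more would misbehave, while 2047 MB (just under 2^31 bytes) would still work.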