[jira] [Commented] (CASSANDRA-13756) StreamingHistogram is not thread safe
[ https://issues.apache.org/jira/browse/CASSANDRA-13756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16131721#comment-16131721 ] Jeff Jirsa commented on CASSANDRA-13756: utests are clean, waiting on dtests. Apparently ~8 of the jenkins slaves are offline, so it's a little bit delayed. > StreamingHistogram is not thread safe > - > > Key: CASSANDRA-13756 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13756 > Project: Cassandra > Issue Type: Bug >Reporter: xiangzhou xia >Assignee: Jeff Jirsa > Fix For: 3.0.x, 3.11.x > > > When we tested C* 3 in a shadow cluster, we noticed that after a period of time several > data nodes suddenly ran into 100% CPU and stopped processing queries. > After investigation, we found that threads were stuck in sum() in the > StreamingHistogram class. Those are JMX threads exposing the > getTombStoneRatio metric (since JMX polls every 3 seconds, there is > a chance that multiple JMX threads access the StreamingHistogram at the same > time). > On further investigation, we found that the optimization in CASSANDRA-13038 > introduced a spool flush every time sum() is called. Since TreeMap is not > thread safe, threads get stuck when multiple threads call sum() at the > same time. > There are two approaches to solving this issue. > The first is to add a lock around the flush in sum(), which introduces > some extra overhead to StreamingHistogram. > The second is to prevent StreamingHistogram from being accessed by multiple > threads; in our specific case, that means removing the metrics we added. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
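The first approach described in the ticket (locking the flush in sum()) can be sketched in isolation as follows. This is an illustrative, simplified model, not the actual Cassandra patch: the class and field names are hypothetical, and the histogram is reduced to exact bins with no bin merging or interpolation.

```java
import java.util.Map;
import java.util.TreeMap;

// Sketch of the "add a lock to the flush in sum()" approach. Without the
// synchronization, two JMX threads flushing the spool into the same TreeMap
// concurrently can throw ConcurrentModificationException or spin forever,
// which matches the 100% CPU symptom described in the ticket.
public class GuardedHistogram {
    private final TreeMap<Double, Long> bin = new TreeMap<>();
    private final TreeMap<Double, Long> spool = new TreeMap<>();

    public synchronized void update(double point, long count) {
        spool.merge(point, count, Long::sum);
    }

    // sum() must flush the spool into the main bins first; the lock makes
    // that flush safe for concurrent readers, at the cost of serializing them.
    public synchronized double sum(double upTo) {
        for (Map.Entry<Double, Long> e : spool.entrySet())
            bin.merge(e.getKey(), e.getValue(), Long::sum);
        spool.clear();
        double total = 0;
        for (long v : bin.headMap(upTo, true).values())
            total += v;
        return total;
    }
}
```

Making both methods {{synchronized}} serializes every reader through the spool flush, which is exactly the extra overhead the first approach trades for correctness.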
[jira] [Assigned] (CASSANDRA-12783) Break up large MV mutations to prevent OOMs
[ https://issues.apache.org/jira/browse/CASSANDRA-12783?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kurt Greaves reassigned CASSANDRA-12783: Assignee: Kurt Greaves > Break up large MV mutations to prevent OOMs > --- > > Key: CASSANDRA-12783 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12783 > Project: Cassandra > Issue Type: Bug > Components: Local Write-Read Paths, Materialized Views >Reporter: Carl Yeksigian >Assignee: Kurt Greaves > Fix For: 4.x > > > We only use the code path added in CASSANDRA-12268 for the view builder > because otherwise we would break the contract of the batchlog, where some > mutations may be written and pushed out before the whole batch log has been > saved. > We would need to ensure that all of the updates make it to the batchlog > before allowing the batchlog manager to try to replay them, but also before > we start pushing out updates to the paired replicas.
[jira] [Updated] (CASSANDRA-12938) cassandra-stress hangs on error
[ https://issues.apache.org/jira/browse/CASSANDRA-12938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Stefania updated CASSANDRA-12938: - Reviewer: Stefania (was: T Jake Luciani) > cassandra-stress hangs on error > --- > > Key: CASSANDRA-12938 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12938 > Project: Cassandra > Issue Type: Bug > Components: Tools >Reporter: James Falcon >Assignee: Eduard Tudenhoefner > Fix For: 3.11.x > > > After encountering a fatal error, cassandra-stress hangs. If no previous > stress write has been run, this can be reproduced with: > {code} > cassandra-stress read n=1000 -rate threads=2 > {code} > Here's the full output: > {code} > Stress Settings > Command: > Type: read > Count: 1,000 > No Warmup: false > Consistency Level: LOCAL_ONE > Target Uncertainty: not applicable > Key Size (bytes): 10 > Counter Increment Distibution: add=fixed(1) > Rate: > Auto: false > Thread Count: 2 > OpsPer Sec: 0 > Population: > Distribution: Gaussian: min=1,max=1000,mean=500.50,stdev=166.50 > Order: ARBITRARY > Wrap: false > Insert: > Revisits: Uniform: min=1,max=100 > Visits: Fixed: key=1 > Row Population Ratio: Ratio: divisor=1.00;delegate=Fixed: key=1 > Batch Type: not batching > Columns: > Max Columns Per Key: 5 > Column Names: [C0, C1, C2, C3, C4] > Comparator: AsciiType > Timestamp: null > Variable Column Count: false > Slice: false > Size Distribution: Fixed: key=34 > Count Distribution: Fixed: key=5 > Errors: > Ignore: false > Tries: 10 > Log: > No Summary: false > No Settings: false > File: null > Interval Millis: 1000 > Level: NORMAL > Mode: > API: JAVA_DRIVER_NATIVE > Connection Style: CQL_PREPARED > CQL Version: CQL3 > Protocol Version: V4 > Username: null > Password: null > Auth Provide Class: null > Max Pending Per Connection: 128 > Connections Per Host: 8 > Compression: NONE > Node: > Nodes: [localhost] > Is White List: false > Datacenter: null > Schema: > Keyspace: keyspace1 > Replication Strategy: 
org.apache.cassandra.locator.SimpleStrategy > Replication Strategy Options: {replication_factor=1} > Table Compression: null > Table Compaction Strategy: null > Table Compaction Strategy Options: {} > Transport: > factory=org.apache.cassandra.thrift.TFramedTransportFactory; > truststore=null; truststore-password=null; keystore=null; > keystore-password=null; ssl-protocol=TLS; ssl-alg=SunX509; store-type=JKS; > ssl-ciphers=TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA; > Port: > Native Port: 9042 > Thrift Port: 9160 > JMX Port: 9042 > Send To Daemon: > *not set* > Graph: > File: null > Revision: unknown > Title: null > Operation: READ > TokenRange: > Wrap: false > Split Factor: 1 > Sleeping 2s... > Warming up READ with 250 iterations... > Connected to cluster: falcon-test2, max pending requests per connection 128, > max connections per host 8 > Datatacenter: Cassandra; Host: localhost/127.0.0.1; Rack: rack1 > Failed to connect over JMX; not collecting these stats > Connected to cluster: falcon-test2, max pending requests per connection 128, > max connections per host 8 > Datatacenter: Cassandra; Host: localhost/127.0.0.1; Rack: rack1 > com.datastax.driver.core.exceptions.InvalidQueryException: Keyspace > 'keyspace1' does not exist > Connected to cluster: falcon-test2, max pending requests per connection 128, > max connections per host 8 > Datatacenter: Cassandra; Host: localhost/127.0.0.1; Rack: rack1 > com.datastax.driver.core.exceptions.InvalidQueryException: Keyspace > 'keyspace1' does not exist > {code}
[jira] [Commented] (CASSANDRA-13363) java.lang.ArrayIndexOutOfBoundsException: null
[ https://issues.apache.org/jira/browse/CASSANDRA-13363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16131564#comment-16131564 ] zhaoyan commented on CASSANDRA-13363: - Hi @Sam Tunnicliffe Thank you for your patient explanation. I have one more question about: "This would partially undo CASSANDRA-10215 and require each replica to figure out which index to use, " Does the lookup to determine which index to use take a lot of time? 1 => send the command to the other replicas only after the local replica has figured out which index to use. 2 => send to all replicas, and each replica figures it out itself. Which looks better? > java.lang.ArrayIndexOutOfBoundsException: null > -- > > Key: CASSANDRA-13363 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13363 > Project: Cassandra > Issue Type: Bug > Environment: CentOS 6, Cassandra 3.10 >Reporter: Artem Rokhin >Assignee: zhaoyan >Priority: Critical > Fix For: 3.0.x, 3.11.x, 4.x > > > We constantly see this error in the log without any additional information or a > stack trace. > {code} > Exception in thread Thread[MessagingService-Incoming-/10.0.1.26,5,main] > {code} > {code} > java.lang.ArrayIndexOutOfBoundsException: null > {code} > Logger: org.apache.cassandra.service.CassandraDaemon > Thread: MessagingService-Incoming-/10.0.1.12 > Method: uncaughtException > File: CassandraDaemon.java > Line: 229
[jira] [Commented] (CASSANDRA-13756) StreamingHistogram is not thread safe
[ https://issues.apache.org/jira/browse/CASSANDRA-13756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16131534#comment-16131534 ] Dikang Gu commented on CASSANDRA-13756: --- [~jjirsa], thanks for fixing it! > StreamingHistogram is not thread safe > - > > Key: CASSANDRA-13756 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13756 > Project: Cassandra > Issue Type: Bug >Reporter: xiangzhou xia >Assignee: Jeff Jirsa > Fix For: 3.0.x, 3.11.x
[jira] [Updated] (CASSANDRA-13756) StreamingHistogram is not thread safe
[ https://issues.apache.org/jira/browse/CASSANDRA-13756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jason Brown updated CASSANDRA-13756: Status: Ready to Commit (was: Patch Available) > StreamingHistogram is not thread safe > - > > Key: CASSANDRA-13756 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13756 > Project: Cassandra > Issue Type: Bug >Reporter: xiangzhou xia >Assignee: Jeff Jirsa > Fix For: 3.0.x, 3.11.x
[jira] [Updated] (CASSANDRA-13756) StreamingHistogram is not thread safe
[ https://issues.apache.org/jira/browse/CASSANDRA-13756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jason Brown updated CASSANDRA-13756: Status: Patch Available (was: Open) > StreamingHistogram is not thread safe > - > > Key: CASSANDRA-13756 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13756 > Project: Cassandra > Issue Type: Bug >Reporter: xiangzhou xia >Assignee: Jeff Jirsa > Fix For: 3.0.x, 3.11.x
[jira] [Updated] (CASSANDRA-13728) Provide max hint window as part of nodetool
[ https://issues.apache.org/jira/browse/CASSANDRA-13728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jeff Jirsa updated CASSANDRA-13728: --- Fix Version/s: (was: 3.11.1) (was: 3.0.15) 4.x 3.11.x 3.0.x > Provide max hint window as part of nodetool > --- > > Key: CASSANDRA-13728 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13728 > Project: Cassandra > Issue Type: Improvement > Components: Tools >Reporter: Milan Milosevic >Assignee: Varun Gupta >Priority: Minor > Labels: lhf > Fix For: 3.0.x, 3.11.x, 4.x > > Attachments: display-max-hint-handoff-period.patch > > > Currently it is not possible to get max_hint_window over nodetool. The > information is available through StorageProxyMBean, though. Since the max hint > window is needed in order to assess what kind of failure recovery > should be performed for a node that goes down (bootstrap or just restart), it > would be handy if the max hint window were easily accessible using nodetool. > The current nodetool statushandoff output is: > {code} > [centos@cassandra-node]$ nodetool statushandoff > Hinted handoff is running > {code} > The output could be improved to look like this: > {code} > [centos@cassandra-node]$ nodetool statushandoff > Hinted handoff is running with max hint window (ms): 1080 > {code} > Implementation is quite trivial (fetch the info from the StorageProxyMBean > in the StatusHandoff class). I can provide the patch for this, if it is > agreed that this is the right approach.
[jira] [Commented] (CASSANDRA-13756) StreamingHistogram is not thread safe
[ https://issues.apache.org/jira/browse/CASSANDRA-13756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16131480#comment-16131480 ] Jason Brown commented on CASSANDRA-13756: - +1 on the changes, and please commit if/when the tests pass. wrt trunk, yes, I think it should be safe due to the snapshots we create and access from {{StatsMetadata}}. > StreamingHistogram is not thread safe > - > > Key: CASSANDRA-13756 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13756 > Project: Cassandra > Issue Type: Bug >Reporter: xiangzhou xia >Assignee: Jeff Jirsa > Fix For: 3.0.x, 3.11.x
[jira] [Commented] (CASSANDRA-13728) Provide max hint window as part of nodetool
[ https://issues.apache.org/jira/browse/CASSANDRA-13728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16131479#comment-16131479 ] Jeff Jirsa commented on CASSANDRA-13728: If you change that, and you undo the changes to imports that your IDE probably made on your behalf, I'll be happy to +1 it for you (talked to Jason and he says he doesn't mind) > Provide max hint window as part of nodetool > --- > > Key: CASSANDRA-13728 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13728 > Project: Cassandra > Issue Type: Improvement > Components: Tools >Reporter: Milan Milosevic >Assignee: Varun Gupta >Priority: Minor > Labels: lhf > Fix For: 3.0.15, 3.11.1
[jira] [Commented] (CASSANDRA-13728) Provide max hint window as part of nodetool
[ https://issues.apache.org/jira/browse/CASSANDRA-13728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16131474#comment-16131474 ] Jeff Jirsa commented on CASSANDRA-13728: I'm not [~jasobrown] , but I do have an opinion that we shouldn't change output of existing JMX endpoints in minor versions, as people may be parsing it with tools and relying on its output not changing. We should treat it as a public API, and not break it on a minor. Would be better (in my opinion) to use {{spProxy.getMaxHintWindow()}} to add a new nodetool command (such as {{nodetool handoffwindow}} or similar) instead. > Provide max hint window as part of nodetool > --- > > Key: CASSANDRA-13728 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13728 > Project: Cassandra > Issue Type: Improvement > Components: Tools >Reporter: Milan Milosevic >Assignee: Varun Gupta >Priority: Minor > Labels: lhf > Fix For: 3.0.15, 3.11.1
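Jeff's suggestion above, a separate nodetool command backed by {{spProxy.getMaxHintWindow()}}, can be sketched as follows. This is a self-contained illustration, not the committed patch: the {{StorageProxyMBean}} stand-in is reduced to the one method named in the ticket, and the command name and output wording are assumptions.

```java
// Sketch of a new "nodetool handoffwindow" command, per the suggestion
// above. The real NodeTool/NodeProbe plumbing is replaced by a minimal
// stand-in so the example compiles on its own; only getMaxHintWindow()
// mirrors the actual StorageProxyMBean method referenced in the ticket.
public class HandoffWindow {
    /** Minimal stand-in for org.apache.cassandra.service.StorageProxyMBean. */
    interface StorageProxyMBean {
        int getMaxHintWindow(); // max_hint_window_in_ms from cassandra.yaml
    }

    /** Formats the command output; kept separate from I/O for testability. */
    static String render(StorageProxyMBean spProxy) {
        return "Max hint window (ms): " + spProxy.getMaxHintWindow();
    }

    public static void main(String[] args) {
        // 10800000 ms (3 hours) is the cassandra.yaml default for max_hint_window_in_ms.
        StorageProxyMBean spProxy = () -> 10800000;
        System.out.println(render(spProxy));
    }
}
```

Keeping this as a new command leaves the existing {{statushandoff}} output byte-for-byte unchanged, which is the compatibility concern raised in the comment.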
[jira] [Assigned] (CASSANDRA-13728) Provide max hint window as part of nodetool
[ https://issues.apache.org/jira/browse/CASSANDRA-13728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jeff Jirsa reassigned CASSANDRA-13728: -- Assignee: Varun Gupta > Provide max hint window as part of nodetool > --- > > Key: CASSANDRA-13728 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13728 > Project: Cassandra > Issue Type: Improvement > Components: Tools >Reporter: Milan Milosevic >Assignee: Varun Gupta >Priority: Minor > Labels: lhf > Fix For: 3.0.15, 3.11.1
[jira] [Commented] (CASSANDRA-13633) Data not loading to keyspace using sstable create via CQLSSTableWriter
[ https://issues.apache.org/jira/browse/CASSANDRA-13633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16131464#comment-16131464 ] Varun Gupta commented on CASSANDRA-13633: - [~arpanps] 3.11 supports loading UDTs. > Data not loading to keyspace using sstable created via CQLSSTableWriter > -- > > Key: CASSANDRA-13633 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13633 > Project: Cassandra > Issue Type: Bug > Components: Tools > Environment: Linux >Reporter: Arpan Khandelwal > Fix For: 3.11.x > > Attachments: dataloading_result.png > > > Scenario: read a CSV, write an SSTable using CQLSSTableWriter, and load it > into the keyspace. [Explained > here|https://stackoverflow.com/questions/44713777/json-cassandra-field-value-parsing-using-antlr4]. > That was not working, so I tried the simple test case available > here: [https://github.com/apache/cassandra/blob/cassandra-3.11/test/unit/org/apache/cassandra/io/sstable/CQLSSTableWriterTest.java#L378], > which did not work either. Following is what I tried. > Created cql_keyspace3 with the table and types below. > {code:java} > CREATE TYPE cql_keyspace3.tuple2 (a int, b int); > CREATE TYPE cql_keyspace3.tuple3 (a int, b int, c int); > CREATE TABLE cql_keyspace3.table3 ( k int, v1 list<frozen<tuple2>>, v2 > frozen<tuple3>, PRIMARY KEY (k)); > {code} > Ran this code: > {code:java} > final String KS = "cql_keyspace3"; > final String TABLE = "table3"; > final String schema = "CREATE TABLE " + KS + "." + TABLE + " (" + " > k int," + " v1 list<frozen<tuple2>> ," > + " v2 frozen<tuple3>," + " PRIMARY KEY (k)" + ")"; > File tempdir = Files.createTempDir(); > File dataDir = new File(tempdir.getAbsolutePath() + File.separator + > KS + File.separator + TABLE); > System.out.println(dataDir); > assert dataDir.mkdirs(); > CQLSSTableWriter writer = > CQLSSTableWriter.builder().inDirectory(dataDir) > .withType("CREATE TYPE " + KS + ".tuple2 (a int, b int)") > .withType("CREATE TYPE " + KS + ".tuple3 (a int, b int, c > int)").forTable(schema) > .using("INSERT INTO " + KS + "." + TABLE + " (k, v1, v2) " + > "VALUES (?, ?, ?)").build(); > > > UserType tuple2Type = writer.getUDType("tuple2"); > UserType tuple3Type = writer.getUDType("tuple3"); > for (int i = 0; i < 100; i++) { > writer.addRow(i, > > ImmutableList.builder().add(tuple2Type.newValue().setInt("a", i * > 10).setInt("b", i * 20)) > .add(tuple2Type.newValue().setInt("a", i * > 30).setInt("b", i * 40)).build(), > tuple3Type.newValue().setInt("a", i * 100).setInt("b", i > * 200).setInt("c", i * 300)); > } > writer.close(); > {code} > It generated an sstable in the "/tmp/1498224996687-0/cql_keyspace3" dir. > Loaded the data using the following command: > {code:java} > /tmp/1498224996687-0/cql_keyspace3 $ sstableloader -d localhost > table3-e6e0fa61581911e78be6a72ebce4c745/ > Established connection to initial hosts > Opening sstables and calculating sections to stream > Streaming relevant part of > /tmp/1498224996687-0/cql_keyspace3/table3-e6e0fa61581911e78be6a72ebce4c745/mc-2-big-Data.db > to [localhost/127.0.0.1] > progress: [localhost/127.0.0.1]0:1/1 100% total: 100% 1.060KiB/s (avg: > 1.060KiB/s) > progress: [localhost/127.0.0.1]0:1/1 100% total: 100% 0.000KiB/s (avg: > 0.984KiB/s) > Summary statistics: >Connections per host: 1 >Total files transferred : 1 >Total bytes transferred : 5.572KiB >Total duration : 5668 ms >Average transfer rate : 0.982KiB/s >Peak transfer rate : 1.060KiB/s > {code} > ||k||v1||v2|| > |92|[{a:920,b:1840}, {a:2760,b:3680}]| > find the full result in the attached snapshot. > Please let me know which version of Cassandra will allow me to > load reserved types, collections, and UDTs from an sstable created using > CQLSSTableWriter.
[jira] [Resolved] (CASSANDRA-13718) ConcurrentModificationException in nodetool upgradesstables
[ https://issues.apache.org/jira/browse/CASSANDRA-13718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jeff Jirsa resolved CASSANDRA-13718. Resolution: Fixed Assignee: Jeff Jirsa Resolving this as a dupe of CASSANDRA-13756; you reported it first, but I arbitrarily chose that one for my github branch name, so going with that. Thanks for the report! > ConcurrentModificationException in nodetool upgradesstables > --- > > Key: CASSANDRA-13718 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13718 > Project: Cassandra > Issue Type: Bug > Environment: Cassandra 3.11 on Linux >Reporter: Hannu Kröger >Assignee: Jeff Jirsa > > When upgrading from 2.2.8 to Cassandra 3.11, we were able to upgrade all other > sstables except 1 file on 3 nodes (out of 4). Those are related to 2 > different tables. > Upgrading sstables fails with ConcurrentModificationException. > {code} > $ nodetool upgradesstables > error: null > -- StackTrace -- > java.util.ConcurrentModificationException > at java.util.TreeMap$PrivateEntryIterator.nextEntry(TreeMap.java:1211) > at java.util.TreeMap$KeyIterator.next(TreeMap.java:1265) > at > org.apache.cassandra.utils.StreamingHistogram.flushHistogram(StreamingHistogram.java:168) > at > org.apache.cassandra.utils.StreamingHistogram.update(StreamingHistogram.java:124) > at > org.apache.cassandra.utils.StreamingHistogram.update(StreamingHistogram.java:96) > at > org.apache.cassandra.io.sstable.metadata.MetadataCollector.updateLocalDeletionTime(MetadataCollector.java:209) > at > org.apache.cassandra.io.sstable.metadata.MetadataCollector.update(MetadataCollector.java:182) > at org.apache.cassandra.db.rows.Cells.collectStats(Cells.java:44) > at > org.apache.cassandra.db.rows.Rows.lambda$collectStats$0(Rows.java:102) > at org.apache.cassandra.utils.btree.BTree.applyForwards(BTree.java:1242) > at org.apache.cassandra.utils.btree.BTree.apply(BTree.java:1197) > at org.apache.cassandra.db.rows.BTreeRow.apply(BTreeRow.java:172) > at org.apache.cassandra.db.rows.Rows.collectStats(Rows.java:97) > at > org.apache.cassandra.io.sstable.format.big.BigTableWriter$StatsCollector.applyToRow(BigTableWriter.java:237) > at org.apache.cassandra.db.transform.BaseRows.hasNext(BaseRows.java:141) > at > org.apache.cassandra.db.ColumnIndex.buildRowIndex(ColumnIndex.java:110) > at > org.apache.cassandra.io.sstable.format.big.BigTableWriter.append(BigTableWriter.java:173) > at > org.apache.cassandra.io.sstable.SSTableRewriter.append(SSTableRewriter.java:135) > at > org.apache.cassandra.db.compaction.writers.DefaultCompactionWriter.realAppend(DefaultCompactionWriter.java:65) > at > org.apache.cassandra.db.compaction.writers.CompactionAwareWriter.append(CompactionAwareWriter.java:141) > at > org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:201) > at > org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) > at > org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:85) > at > org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:61) > at > org.apache.cassandra.db.compaction.CompactionManager$5.execute(CompactionManager.java:428) > at > org.apache.cassandra.db.compaction.CompactionManager$2.call(CompactionManager.java:315) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > at > org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:81) > at java.lang.Thread.run(Thread.java:745) > {code}
[jira] [Commented] (CASSANDRA-13756) StreamingHistogram is not thread safe
[ https://issues.apache.org/jira/browse/CASSANDRA-13756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16131453#comment-16131453 ] Jeff Jirsa commented on CASSANDRA-13756: Shouldn't need a version for trunk, but [~jasobrown] if you can double-check me there to be sure, that'd be nice (I think in the faster rewrite for trunk, we now [build|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/utils/streamhist/StreamingTombstoneHistogramBuilder.java#L182-L186] a snapshot that is no longer modified on read). || branch || utest || dtest || | [3.0|https://github.com/jeffjirsa/cassandra/tree/cassandra-3.0-13756] | [3.0 circle|https://circleci.com/gh/jeffjirsa/cassandra/tree/cassandra-3.0-13756] | [3.0 dtest|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/189/] | | [3.11|https://github.com/jeffjirsa/cassandra/tree/cassandra-3.11-13756] | [3.11 circle|https://circleci.com/gh/jeffjirsa/cassandra/tree/cassandra-3.11-13756] | [3.11 dtest|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/190/] |
> StreamingHistogram is not thread safe
> -
>
> Key: CASSANDRA-13756
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13756
> Project: Cassandra
> Issue Type: Bug
> Reporter: xiangzhou xia
> Assignee: Jeff Jirsa
> Fix For: 3.0.x, 3.11.x
>
> When we tested C* 3 in a shadow cluster, we noticed that after a period of time several data nodes suddenly ran into 100% CPU and stopped processing queries.
> After investigation, we found that threads were stuck in sum() in the StreamingHistogram class. Those are JMX threads exposing the getTombStoneRatio metric (since JMX polls every 3 seconds, there is a chance that multiple JMX threads access StreamingHistogram at the same time).
> After further investigation, we found that the optimization in CASSANDRA-13038 introduced a spool flush every time sum() is called. Since TreeMap is not thread safe, threads get stuck when multiple threads call sum() at the same time.
> There are two approaches to solve this issue.
> The first is to add a lock around the flush in sum(), which introduces some extra overhead to StreamingHistogram.
> The second is to prevent StreamingHistogram from being accessed by multiple threads; for our specific case, that means removing the metrics we added.
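The two fixes the reporter outlines (lock the flush vs. avoid concurrent access) and the read-only snapshot approach mentioned for trunk can be sketched roughly as follows. All class and method names here are hypothetical illustrations, not Cassandra's actual StreamingHistogram API:

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.TreeMap;

// Illustrative sketch only -- names and shapes are hypothetical, not
// Cassandra's real StreamingHistogram.
class SafeHistogramSketch {
    private final TreeMap<Double, Long> bin = new TreeMap<>();
    private final Map<Double, Long> spool = new HashMap<>();

    public synchronized void update(double point, long count) {
        spool.merge(point, count, Long::sum);
    }

    // Approach 1 (the lock the ticket describes): serialize the spool flush
    // so concurrent readers cannot corrupt the non-thread-safe TreeMap. The
    // lock itself is the extra overhead the reporter mentions.
    public synchronized double sum(double upTo) {
        flushSpool();
        double total = 0;
        for (long count : bin.headMap(upTo, true).values())
            total += count;
        return total;
    }

    // Approach sketched for trunk in the comment above: build an immutable
    // snapshot once, so readers never mutate shared state.
    public synchronized Map<Double, Long> snapshot() {
        flushSpool();
        return Collections.unmodifiableMap(new TreeMap<>(bin));
    }

    private void flushSpool() {
        // This mutation of the TreeMap is what races in the original code
        // when sum() is invoked from multiple JMX threads without a lock.
        for (Map.Entry<Double, Long> e : spool.entrySet())
            bin.merge(e.getKey(), e.getValue(), Long::sum);
        spool.clear();
    }
}
```

With the snapshot variant, JMX readers work against the immutable map and never touch the TreeMap, which is why trunk should not need the fix.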
[jira] [Assigned] (CASSANDRA-13756) StreamingHistogram is not thread safe
[ https://issues.apache.org/jira/browse/CASSANDRA-13756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jeff Jirsa reassigned CASSANDRA-13756: -- Assignee: Jeff Jirsa
[jira] [Updated] (CASSANDRA-13756) StreamingHistogram is not thread safe
[ https://issues.apache.org/jira/browse/CASSANDRA-13756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jeff Jirsa updated CASSANDRA-13756: --- Reviewer: Jason Brown
[jira] [Updated] (CASSANDRA-13756) StreamingHistogram is not thread safe
[ https://issues.apache.org/jira/browse/CASSANDRA-13756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jeff Jirsa updated CASSANDRA-13756: --- Fix Version/s: 3.11.x 3.0.x
[jira] [Updated] (CASSANDRA-13728) Provide max hint window as part of nodetool
[ https://issues.apache.org/jira/browse/CASSANDRA-13728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Varun Gupta updated CASSANDRA-13728: Attachment: display-max-hint-handoff-period.patch
> Provide max hint window as part of nodetool
> ---
>
> Key: CASSANDRA-13728
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13728
> Project: Cassandra
> Issue Type: Improvement
> Components: Tools
> Reporter: Milan Milosevic
> Priority: Minor
> Labels: lhf
> Fix For: 3.0.15, 3.11.1
>
> Attachments: display-max-hint-handoff-period.patch
>
> Currently it is not possible to get max_hint_window over nodetool. The information is available through StorageProxyMBean, though. Since the max hint window is needed in order to assess what kind of failure recovery should be performed for a node that goes down (bootstrap or just restart), it would be handy if the max hint window were easily accessible using nodetool.
> Currently nodetool statushandoff output is:
> {code}
> [centos@cassandra-node]$ nodetool statushandoff
> Hinted handoff is running
> {code}
> The output could be improved to look like this:
> {code}
> [centos@cassandra-node]$ nodetool statushandoff
> Hinted handoff is running with max hint window (ms): 1080
> {code}
> Implementation is quite trivial (fetch the info from the StorageProxyMBean in the StatusHandoff class). I can provide the patch for this, if it is agreed that this is the right approach.
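The improved statushandoff output described above could be produced by a small helper along these lines. The class and method are hypothetical; in the real patch the window value would be read over JMX from StorageProxyMBean rather than passed in:

```java
// Hypothetical helper for the improved `nodetool statushandoff` output shown
// in the ticket; in the actual patch the value would come from
// StorageProxyMBean over JMX rather than a method parameter.
class StatusHandoffSketch {
    static String render(boolean handoffRunning, int maxHintWindowMs) {
        if (!handoffRunning)
            return "Hinted handoff is not running";
        return "Hinted handoff is running with max hint window (ms): " + maxHintWindowMs;
    }
}
```

The point of the change is only presentation: the MBean already exposes the value, so the command just needs to fetch and print it.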
[jira] [Commented] (CASSANDRA-13728) Provide max hint window as part of nodetool
[ https://issues.apache.org/jira/browse/CASSANDRA-13728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16131358#comment-16131358 ] Varun Gupta commented on CASSANDRA-13728: - [~jasobrown] can you please review the patch?
[jira] [Assigned] (CASSANDRA-12390) Make SASI work with partitioners that have variable-size tokens
[ https://issues.apache.org/jira/browse/CASSANDRA-12390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jeff Jirsa reassigned CASSANDRA-12390: -- Assignee: Abhish Agarwal
> Make SASI work with partitioners that have variable-size tokens
> ---
>
> Key: CASSANDRA-12390
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12390
> Project: Cassandra
> Issue Type: Improvement
> Components: sasi
> Reporter: Alex Petrov
> Assignee: Abhish Agarwal
>
> At the moment, SASI indexes can work only with Murmur3Partitioner. [CASSANDRA-12389] was created to enable support of one more partitioner with fixed-size tokens, but enabling variable-size tokens will need more work, namely skipping tokens, since we can no longer rely on fixed-size multiplication for calculating offsets in that case.
> This change won't require bytecode format changes, although supporting ByteOrderedPartitioner is not a very high priority, and performance will be worse because of "manual" skipping.
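The offset problem the ticket describes can be illustrated with a toy sketch (hypothetical helpers, not SASI's real reader): fixed-size tokens allow O(1) offset arithmetic, while variable-size tokens force sequential skipping through the preceding entries.

```java
// Toy illustration of why variable-size tokens need "manual" skipping; these
// helpers are hypothetical and not part of SASI's actual on-disk reader.
class TokenOffsetSketch {
    // Fixed-size tokens (e.g. Murmur3's 8-byte longs): the i-th token's
    // offset is a single multiplication.
    static long fixedOffset(long index, int tokenSize) {
        return index * tokenSize;
    }

    // Variable-size tokens (e.g. under ByteOrderedPartitioner): each entry
    // carries its own length, so reaching the i-th token means walking and
    // skipping the i entries before it.
    static long variableOffset(int index, int[] lengths) {
        long offset = 0;
        for (int i = 0; i < index; i++)
            offset += lengths[i];
        return offset;
    }
}
```

The linear walk in the variable-size case is the "manual" skipping the ticket expects to cost performance.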
[jira] [Updated] (CASSANDRA-13775) CircleCI tests fail because *stress-test* isn't a valid target
[ https://issues.apache.org/jira/browse/CASSANDRA-13775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Shuler updated CASSANDRA-13775: --- Resolution: Fixed Reproduced In: 3.0.14, 2.2.10, 2.1.18 (was: 2.1.18, 2.2.10, 3.0.14) Status: Resolved (was: Ready to Commit) Commit {{3c0c4620f2}} on cassandra-2.1 and merged up. Thanks, Ed. > CircleCI tests fail because *stress-test* isn't a valid target > -- > > Key: CASSANDRA-13775 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13775 > Project: Cassandra > Issue Type: Bug > Components: Build >Reporter: Eduard Tudenhoefner >Assignee: Eduard Tudenhoefner > Labels: CI > Fix For: 2.2.11, 3.0.15, 2.1.19 > > > *stress-test* was added to CircleCI in CASSANDRA-13413 (2.1+) but the target > itself got introduced in CASSANDRA-11638 (3.10). -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[06/15] cassandra git commit: Merge branch 'cassandra-2.1' into cassandra-2.2
Merge branch 'cassandra-2.1' into cassandra-2.2 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/830b0127 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/830b0127 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/830b0127 Branch: refs/heads/trunk Commit: 830b01272c227503f74b5fe5a5340331ee3ce685 Parents: 270f690 3c0c462 Author: Michael ShulerAuthored: Thu Aug 17 14:24:09 2017 -0500 Committer: Michael Shuler Committed: Thu Aug 17 14:24:09 2017 -0500 -- CHANGES.txt | 1 + circle.yml | 2 +- 2 files changed, 2 insertions(+), 1 deletion(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/830b0127/CHANGES.txt -- diff --cc CHANGES.txt index f712333,4f8f65f..5c1d1e5 --- a/CHANGES.txt +++ b/CHANGES.txt @@@ -1,11 -1,5 +1,12 @@@ -2.1.19 - * Remove stress-test target in CircleCI as it's not existing (CASSANDRA-13775) +2.2.11 + * Prevent integer overflow on exabyte filesystems (CASSANDRA-13067) + * Fix queries with LIMIT and filtering on clustering columns (CASSANDRA-11223) + * Fix potential NPE when resume bootstrap fails (CASSANDRA-13272) + * Fix toJSONString for the UDT, tuple and collection types (CASSANDRA-13592) + * Fix nested Tuples/UDTs validation (CASSANDRA-13646) + * Remove unused max_value_size_in_mb config setting from yaml (CASSANDRA-13625 +Merged from 2.1: ++ * Remove stress-test target in CircleCI as it's not existing (CASSANDRA-13775) * Clone HeartBeatState when building gossip messages. Make its generation/version volatile (CASSANDRA-13700) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[15/15] cassandra git commit: Merge branch 'cassandra-3.11' into trunk
Merge branch 'cassandra-3.11' into trunk Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/22c86f7b Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/22c86f7b Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/22c86f7b Branch: refs/heads/trunk Commit: 22c86f7bca1ded50f22d79babe6b44c9880877f2 Parents: c0dc77e 76e9e63 Author: Michael ShulerAuthored: Thu Aug 17 14:27:35 2017 -0500 Committer: Michael Shuler Committed: Thu Aug 17 14:27:35 2017 -0500 -- -- - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[14/15] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.11
Merge branch 'cassandra-3.0' into cassandra-3.11 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/76e9e632 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/76e9e632 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/76e9e632 Branch: refs/heads/trunk Commit: 76e9e632e60490dc37d01426b9d33171af604dfa Parents: 2795d72 0614b27 Author: Michael ShulerAuthored: Thu Aug 17 14:27:25 2017 -0500 Committer: Michael Shuler Committed: Thu Aug 17 14:27:25 2017 -0500 -- -- - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[09/15] cassandra git commit: Merge branch 'cassandra-2.1' into cassandra-2.2
Merge branch 'cassandra-2.1' into cassandra-2.2 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/830b0127 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/830b0127 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/830b0127 Branch: refs/heads/cassandra-3.11 Commit: 830b01272c227503f74b5fe5a5340331ee3ce685 Parents: 270f690 3c0c462 Author: Michael ShulerAuthored: Thu Aug 17 14:24:09 2017 -0500 Committer: Michael Shuler Committed: Thu Aug 17 14:24:09 2017 -0500 -- CHANGES.txt | 1 + circle.yml | 2 +- 2 files changed, 2 insertions(+), 1 deletion(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/830b0127/CHANGES.txt -- diff --cc CHANGES.txt index f712333,4f8f65f..5c1d1e5 --- a/CHANGES.txt +++ b/CHANGES.txt @@@ -1,11 -1,5 +1,12 @@@ -2.1.19 - * Remove stress-test target in CircleCI as it's not existing (CASSANDRA-13775) +2.2.11 + * Prevent integer overflow on exabyte filesystems (CASSANDRA-13067) + * Fix queries with LIMIT and filtering on clustering columns (CASSANDRA-11223) + * Fix potential NPE when resume bootstrap fails (CASSANDRA-13272) + * Fix toJSONString for the UDT, tuple and collection types (CASSANDRA-13592) + * Fix nested Tuples/UDTs validation (CASSANDRA-13646) + * Remove unused max_value_size_in_mb config setting from yaml (CASSANDRA-13625 +Merged from 2.1: ++ * Remove stress-test target in CircleCI as it's not existing (CASSANDRA-13775) * Clone HeartBeatState when building gossip messages. Make its generation/version volatile (CASSANDRA-13700) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[03/15] cassandra git commit: CASSANDRA-13775: Remove stress-test target in CircleCI as it's not existing
CASSANDRA-13775: Remove stress-test target in CircleCI as it's not existing Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3c0c4620 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3c0c4620 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3c0c4620 Branch: refs/heads/trunk Commit: 3c0c4620f2eb64a10d9e12fdea4c8a6b167f7165 Parents: 2290c0d Author: Eduard TudenhoefnerAuthored: Thu Aug 17 11:49:50 2017 -0700 Committer: Michael Shuler Committed: Thu Aug 17 14:22:20 2017 -0500 -- CHANGES.txt | 1 + circle.yml | 2 +- 2 files changed, 2 insertions(+), 1 deletion(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/3c0c4620/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index 4dbd984..4f8f65f 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -1,4 +1,5 @@ 2.1.19 + * Remove stress-test target in CircleCI as it's not existing (CASSANDRA-13775) * Clone HeartBeatState when building gossip messages. Make its generation/version volatile (CASSANDRA-13700) http://git-wip-us.apache.org/repos/asf/cassandra/blob/3c0c4620/circle.yml -- diff --git a/circle.yml b/circle.yml index 9d31277..5b4c72d 100644 --- a/circle.yml +++ b/circle.yml @@ -7,7 +7,7 @@ test: - sudo apt-get update; sudo apt-get install wamerican: parallel: true override: -- case $CIRCLE_NODE_INDEX in 0) ant eclipse-warnings; ant test ;; 1) ant long-test ;; 2) ant test-compression ;; 3) ant stress-test ;;esac: +- case $CIRCLE_NODE_INDEX in 0) ant eclipse-warnings; ant test ;; 1) ant long-test ;; 2) ant test-compression ;;esac: parallel: true post: - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[01/15] cassandra git commit: CASSANDRA-13775: Remove stress-test target in CircleCI as it's not existing
Repository: cassandra Updated Branches: refs/heads/cassandra-2.1 2290c0d4b -> 3c0c4620f refs/heads/cassandra-2.2 270f690ff -> 830b01272 refs/heads/cassandra-3.0 c2b635ac2 -> 0614b274f refs/heads/cassandra-3.11 2795d72b4 -> 76e9e632e refs/heads/trunk c0dc77ed4 -> 22c86f7bc CASSANDRA-13775: Remove stress-test target in CircleCI as it's not existing Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3c0c4620 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3c0c4620 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3c0c4620 Branch: refs/heads/cassandra-2.1 Commit: 3c0c4620f2eb64a10d9e12fdea4c8a6b167f7165 Parents: 2290c0d Author: Eduard TudenhoefnerAuthored: Thu Aug 17 11:49:50 2017 -0700 Committer: Michael Shuler Committed: Thu Aug 17 14:22:20 2017 -0500 -- CHANGES.txt | 1 + circle.yml | 2 +- 2 files changed, 2 insertions(+), 1 deletion(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/3c0c4620/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index 4dbd984..4f8f65f 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -1,4 +1,5 @@ 2.1.19 + * Remove stress-test target in CircleCI as it's not existing (CASSANDRA-13775) * Clone HeartBeatState when building gossip messages. 
Make its generation/version volatile (CASSANDRA-13700) http://git-wip-us.apache.org/repos/asf/cassandra/blob/3c0c4620/circle.yml -- diff --git a/circle.yml b/circle.yml index 9d31277..5b4c72d 100644 --- a/circle.yml +++ b/circle.yml @@ -7,7 +7,7 @@ test: - sudo apt-get update; sudo apt-get install wamerican: parallel: true override: -- case $CIRCLE_NODE_INDEX in 0) ant eclipse-warnings; ant test ;; 1) ant long-test ;; 2) ant test-compression ;; 3) ant stress-test ;;esac: +- case $CIRCLE_NODE_INDEX in 0) ant eclipse-warnings; ant test ;; 1) ant long-test ;; 2) ant test-compression ;;esac: parallel: true post: - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[12/15] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0
Merge branch 'cassandra-2.2' into cassandra-3.0 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/0614b274 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/0614b274 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/0614b274 Branch: refs/heads/cassandra-3.0 Commit: 0614b274faa6410904d24b81d7dd62df1dfae4c6 Parents: c2b635a 830b012 Author: Michael ShulerAuthored: Thu Aug 17 14:27:03 2017 -0500 Committer: Michael Shuler Committed: Thu Aug 17 14:27:03 2017 -0500 -- CHANGES.txt | 3 ++- circle.yml | 2 +- 2 files changed, 3 insertions(+), 2 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/0614b274/CHANGES.txt -- diff --cc CHANGES.txt index 358dd04,5c1d1e5..6faaa48 --- a/CHANGES.txt +++ b/CHANGES.txt @@@ -1,31 -1,12 +1,32 @@@ -2.2.11 - * Prevent integer overflow on exabyte filesystems (CASSANDRA-13067) +3.0.15 + * Randomize batchlog endpoint selection with only 1 or 2 racks (CASSANDRA-12884) + * Fix digest calculation for counter cells (CASSANDRA-13750) + * Fix ColumnDefinition.cellValueType() for non-frozen collection and change SSTabledump to use type.toJSONString() (CASSANDRA-13573) + * Skip materialized view addition if the base table doesn't exist (CASSANDRA-13737) + * Drop table should remove corresponding entries in dropped_columns table (CASSANDRA-13730) + * Log warn message until legacy auth tables have been migrated (CASSANDRA-13371) + * Fix incorrect [2.1 <- 3.0] serialization of counter cells created in 2.0 (CASSANDRA-13691) + * Fix invalid writetime for null cells (CASSANDRA-13711) + * Fix ALTER TABLE statement to atomically propagate changes to the table and its MVs (CASSANDRA-12952) + * Fixed ambiguous output of nodetool tablestats command (CASSANDRA-13722) + * JMXEnabledThreadPoolExecutor with corePoolSize equal to maxPoolSize (Backport CASSANDRA-13329) + * Fix Digest mismatch Exception if hints file has UnknownColumnFamily (CASSANDRA-13696) 
+ * Purge tombstones created by expired cells (CASSANDRA-13643) + * Make concat work with iterators that have different subsets of columns (CASSANDRA-13482) + * Set test.runners based on cores and memory size (CASSANDRA-13078) + * Allow different NUMACTL_ARGS to be passed in (CASSANDRA-13557) + * Allow native function calls in CQLSSTableWriter (CASSANDRA-12606) + * Fix secondary index queries on COMPACT tables (CASSANDRA-13627) + * Nodetool listsnapshots output is missing a newline, if there are no snapshots (CASSANDRA-13568) + * sstabledump reports incorrect usage for argument order (CASSANDRA-13532) - Merged from 2.2: ++Merged from 2.2: + * Prevent integer overflow on exabyte filesystems (CASSANDRA-13067) * Fix queries with LIMIT and filtering on clustering columns (CASSANDRA-11223) * Fix potential NPE when resume bootstrap fails (CASSANDRA-13272) * Fix toJSONString for the UDT, tuple and collection types (CASSANDRA-13592) * Fix nested Tuples/UDTs validation (CASSANDRA-13646) - * Remove unused max_value_size_in_mb config setting from yaml (CASSANDRA-13625 Merged from 2.1: + * Remove stress-test target in CircleCI as it's not existing (CASSANDRA-13775) * Clone HeartBeatState when building gossip messages. 
Make its generation/version volatile (CASSANDRA-13700) http://git-wip-us.apache.org/repos/asf/cassandra/blob/0614b274/circle.yml -- diff --cc circle.yml index f4801b7,5b4c72d..a51cf25 --- a/circle.yml +++ b/circle.yml @@@ -7,7 -7,7 +7,7 @@@ test - sudo apt-get update; sudo apt-get install wamerican: parallel: true override: - - case $CIRCLE_NODE_INDEX in 0) ant eclipse-warnings; ant test -Dtest.runners=1;; 1) ant long-test ;; 2) ant test-compression ;; 3) ant stress-test ;;esac: -- case $CIRCLE_NODE_INDEX in 0) ant eclipse-warnings; ant test ;; 1) ant long-test ;; 2) ant test-compression ;;esac: ++- case $CIRCLE_NODE_INDEX in 0) ant eclipse-warnings; ant test -Dtest.runners=1;; 1) ant long-test ;; 2) ant test-compression ;;esac: parallel: true post: - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[13/15] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.11
Merge branch 'cassandra-3.0' into cassandra-3.11 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/76e9e632 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/76e9e632 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/76e9e632 Branch: refs/heads/cassandra-3.11 Commit: 76e9e632e60490dc37d01426b9d33171af604dfa Parents: 2795d72 0614b27 Author: Michael ShulerAuthored: Thu Aug 17 14:27:25 2017 -0500 Committer: Michael Shuler Committed: Thu Aug 17 14:27:25 2017 -0500 -- -- - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[02/15] cassandra git commit: CASSANDRA-13775: Remove stress-test target in CircleCI as it's not existing
CASSANDRA-13775: Remove stress-test target in CircleCI as it's not existing Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3c0c4620 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3c0c4620 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3c0c4620 Branch: refs/heads/cassandra-2.2 Commit: 3c0c4620f2eb64a10d9e12fdea4c8a6b167f7165 Parents: 2290c0d Author: Eduard TudenhoefnerAuthored: Thu Aug 17 11:49:50 2017 -0700 Committer: Michael Shuler Committed: Thu Aug 17 14:22:20 2017 -0500 -- CHANGES.txt | 1 + circle.yml | 2 +- 2 files changed, 2 insertions(+), 1 deletion(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/3c0c4620/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index 4dbd984..4f8f65f 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -1,4 +1,5 @@ 2.1.19 + * Remove stress-test target in CircleCI as it's not existing (CASSANDRA-13775) * Clone HeartBeatState when building gossip messages. Make its generation/version volatile (CASSANDRA-13700) http://git-wip-us.apache.org/repos/asf/cassandra/blob/3c0c4620/circle.yml -- diff --git a/circle.yml b/circle.yml index 9d31277..5b4c72d 100644 --- a/circle.yml +++ b/circle.yml @@ -7,7 +7,7 @@ test: - sudo apt-get update; sudo apt-get install wamerican: parallel: true override: -- case $CIRCLE_NODE_INDEX in 0) ant eclipse-warnings; ant test ;; 1) ant long-test ;; 2) ant test-compression ;; 3) ant stress-test ;;esac: +- case $CIRCLE_NODE_INDEX in 0) ant eclipse-warnings; ant test ;; 1) ant long-test ;; 2) ant test-compression ;;esac: parallel: true post: - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[04/15] cassandra git commit: CASSANDRA-13775: Remove stress-test target in CircleCI as it's not existing
CASSANDRA-13775: Remove stress-test target in CircleCI as it's not existing Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3c0c4620 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3c0c4620 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3c0c4620 Branch: refs/heads/cassandra-3.0 Commit: 3c0c4620f2eb64a10d9e12fdea4c8a6b167f7165 Parents: 2290c0d Author: Eduard TudenhoefnerAuthored: Thu Aug 17 11:49:50 2017 -0700 Committer: Michael Shuler Committed: Thu Aug 17 14:22:20 2017 -0500 -- CHANGES.txt | 1 + circle.yml | 2 +- 2 files changed, 2 insertions(+), 1 deletion(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/3c0c4620/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index 4dbd984..4f8f65f 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -1,4 +1,5 @@ 2.1.19 + * Remove stress-test target in CircleCI as it's not existing (CASSANDRA-13775) * Clone HeartBeatState when building gossip messages. Make its generation/version volatile (CASSANDRA-13700) http://git-wip-us.apache.org/repos/asf/cassandra/blob/3c0c4620/circle.yml -- diff --git a/circle.yml b/circle.yml index 9d31277..5b4c72d 100644 --- a/circle.yml +++ b/circle.yml @@ -7,7 +7,7 @@ test: - sudo apt-get update; sudo apt-get install wamerican: parallel: true override: -- case $CIRCLE_NODE_INDEX in 0) ant eclipse-warnings; ant test ;; 1) ant long-test ;; 2) ant test-compression ;; 3) ant stress-test ;;esac: +- case $CIRCLE_NODE_INDEX in 0) ant eclipse-warnings; ant test ;; 1) ant long-test ;; 2) ant test-compression ;;esac: parallel: true post: - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Assigned] (CASSANDRA-13775) CircleCI tests fail because *stress-test* isn't a valid target
[ https://issues.apache.org/jira/browse/CASSANDRA-13775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eduard Tudenhoefner reassigned CASSANDRA-13775: --- Assignee: Eduard Tudenhoefner (was: Michael Shuler) > CircleCI tests fail because *stress-test* isn't a valid target > -- > > Key: CASSANDRA-13775 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13775 > Project: Cassandra > Issue Type: Bug > Components: Build >Reporter: Eduard Tudenhoefner >Assignee: Eduard Tudenhoefner > Labels: CI > Fix For: 2.2.11, 3.0.15, 2.1.19 > > > *stress-test* was added to CircleCI in CASSANDRA-13413 (2.1+) but the target > itself got introduced in CASSANDRA-11638 (3.10). -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[05/15] cassandra git commit: CASSANDRA-13775: Remove stress-test target in CircleCI as it's not existing
CASSANDRA-13775: Remove stress-test target in CircleCI as it's not existing Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3c0c4620 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3c0c4620 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3c0c4620 Branch: refs/heads/cassandra-3.11 Commit: 3c0c4620f2eb64a10d9e12fdea4c8a6b167f7165 Parents: 2290c0d Author: Eduard TudenhoefnerAuthored: Thu Aug 17 11:49:50 2017 -0700 Committer: Michael Shuler Committed: Thu Aug 17 14:22:20 2017 -0500 -- CHANGES.txt | 1 + circle.yml | 2 +- 2 files changed, 2 insertions(+), 1 deletion(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/3c0c4620/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index 4dbd984..4f8f65f 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -1,4 +1,5 @@ 2.1.19 + * Remove stress-test target in CircleCI as it's not existing (CASSANDRA-13775) * Clone HeartBeatState when building gossip messages. Make its generation/version volatile (CASSANDRA-13700) http://git-wip-us.apache.org/repos/asf/cassandra/blob/3c0c4620/circle.yml -- diff --git a/circle.yml b/circle.yml index 9d31277..5b4c72d 100644 --- a/circle.yml +++ b/circle.yml @@ -7,7 +7,7 @@ test: - sudo apt-get update; sudo apt-get install wamerican: parallel: true override: -- case $CIRCLE_NODE_INDEX in 0) ant eclipse-warnings; ant test ;; 1) ant long-test ;; 2) ant test-compression ;; 3) ant stress-test ;;esac: +- case $CIRCLE_NODE_INDEX in 0) ant eclipse-warnings; ant test ;; 1) ant long-test ;; 2) ant test-compression ;;esac: parallel: true post: - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[10/15] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0
Merge branch 'cassandra-2.2' into cassandra-3.0 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/0614b274 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/0614b274 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/0614b274 Branch: refs/heads/trunk Commit: 0614b274faa6410904d24b81d7dd62df1dfae4c6 Parents: c2b635a 830b012 Author: Michael Shuler Authored: Thu Aug 17 14:27:03 2017 -0500 Committer: Michael Shuler Committed: Thu Aug 17 14:27:03 2017 -0500 -- CHANGES.txt | 3 ++- circle.yml | 2 +- 2 files changed, 3 insertions(+), 2 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/0614b274/CHANGES.txt -- diff --cc CHANGES.txt index 358dd04,5c1d1e5..6faaa48 --- a/CHANGES.txt +++ b/CHANGES.txt @@@ -1,31 -1,12 +1,32 @@@ -2.2.11 - * Prevent integer overflow on exabyte filesystems (CASSANDRA-13067) +3.0.15 + * Randomize batchlog endpoint selection with only 1 or 2 racks (CASSANDRA-12884) + * Fix digest calculation for counter cells (CASSANDRA-13750) + * Fix ColumnDefinition.cellValueType() for non-frozen collection and change SSTabledump to use type.toJSONString() (CASSANDRA-13573) + * Skip materialized view addition if the base table doesn't exist (CASSANDRA-13737) + * Drop table should remove corresponding entries in dropped_columns table (CASSANDRA-13730) + * Log warn message until legacy auth tables have been migrated (CASSANDRA-13371) + * Fix incorrect [2.1 <- 3.0] serialization of counter cells created in 2.0 (CASSANDRA-13691) + * Fix invalid writetime for null cells (CASSANDRA-13711) + * Fix ALTER TABLE statement to atomically propagate changes to the table and its MVs (CASSANDRA-12952) + * Fixed ambiguous output of nodetool tablestats command (CASSANDRA-13722) + * JMXEnabledThreadPoolExecutor with corePoolSize equal to maxPoolSize (Backport CASSANDRA-13329) + * Fix Digest mismatch Exception if hints file has UnknownColumnFamily (CASSANDRA-13696) + * Purge tombstones created by expired cells (CASSANDRA-13643) + * Make concat work with iterators that have different subsets of columns (CASSANDRA-13482) + * Set test.runners based on cores and memory size (CASSANDRA-13078) + * Allow different NUMACTL_ARGS to be passed in (CASSANDRA-13557) + * Allow native function calls in CQLSSTableWriter (CASSANDRA-12606) + * Fix secondary index queries on COMPACT tables (CASSANDRA-13627) + * Nodetool listsnapshots output is missing a newline, if there are no snapshots (CASSANDRA-13568) + * sstabledump reports incorrect usage for argument order (CASSANDRA-13532) - Merged from 2.2: ++Merged from 2.2: + * Prevent integer overflow on exabyte filesystems (CASSANDRA-13067) * Fix queries with LIMIT and filtering on clustering columns (CASSANDRA-11223) * Fix potential NPE when resume bootstrap fails (CASSANDRA-13272) * Fix toJSONString for the UDT, tuple and collection types (CASSANDRA-13592) * Fix nested Tuples/UDTs validation (CASSANDRA-13646) - * Remove unused max_value_size_in_mb config setting from yaml (CASSANDRA-13625 Merged from 2.1: + * Remove stress-test target in CircleCI as it's not existing (CASSANDRA-13775) * Clone HeartBeatState when building gossip messages. Make its generation/version volatile (CASSANDRA-13700) http://git-wip-us.apache.org/repos/asf/cassandra/blob/0614b274/circle.yml -- diff --cc circle.yml index f4801b7,5b4c72d..a51cf25 --- a/circle.yml +++ b/circle.yml @@@ -7,7 -7,7 +7,7 @@@ test - sudo apt-get update; sudo apt-get install wamerican: parallel: true override: - - case $CIRCLE_NODE_INDEX in 0) ant eclipse-warnings; ant test -Dtest.runners=1;; 1) ant long-test ;; 2) ant test-compression ;; 3) ant stress-test ;;esac: -- case $CIRCLE_NODE_INDEX in 0) ant eclipse-warnings; ant test ;; 1) ant long-test ;; 2) ant test-compression ;;esac: ++- case $CIRCLE_NODE_INDEX in 0) ant eclipse-warnings; ant test -Dtest.runners=1;; 1) ant long-test ;; 2) ant test-compression ;;esac: parallel: true post:
[08/15] cassandra git commit: Merge branch 'cassandra-2.1' into cassandra-2.2
Merge branch 'cassandra-2.1' into cassandra-2.2 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/830b0127 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/830b0127 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/830b0127 Branch: refs/heads/cassandra-3.0 Commit: 830b01272c227503f74b5fe5a5340331ee3ce685 Parents: 270f690 3c0c462 Author: Michael Shuler Authored: Thu Aug 17 14:24:09 2017 -0500 Committer: Michael Shuler Committed: Thu Aug 17 14:24:09 2017 -0500 -- CHANGES.txt | 1 + circle.yml | 2 +- 2 files changed, 2 insertions(+), 1 deletion(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/830b0127/CHANGES.txt -- diff --cc CHANGES.txt index f712333,4f8f65f..5c1d1e5 --- a/CHANGES.txt +++ b/CHANGES.txt @@@ -1,11 -1,5 +1,12 @@@ -2.1.19 - * Remove stress-test target in CircleCI as it's not existing (CASSANDRA-13775) +2.2.11 + * Prevent integer overflow on exabyte filesystems (CASSANDRA-13067) + * Fix queries with LIMIT and filtering on clustering columns (CASSANDRA-11223) + * Fix potential NPE when resume bootstrap fails (CASSANDRA-13272) + * Fix toJSONString for the UDT, tuple and collection types (CASSANDRA-13592) + * Fix nested Tuples/UDTs validation (CASSANDRA-13646) + * Remove unused max_value_size_in_mb config setting from yaml (CASSANDRA-13625 +Merged from 2.1: ++ * Remove stress-test target in CircleCI as it's not existing (CASSANDRA-13775) * Clone HeartBeatState when building gossip messages. Make its generation/version volatile (CASSANDRA-13700)
[jira] [Commented] (CASSANDRA-13775) CircleCI tests fail because *stress-test* isn't a valid target
[ https://issues.apache.org/jira/browse/CASSANDRA-13775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16131165#comment-16131165 ] Michael Shuler commented on CASSANDRA-13775: Looks good - will commit in a few! > CircleCI tests fail because *stress-test* isn't a valid target > -- > > Key: CASSANDRA-13775 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13775 > Project: Cassandra > Issue Type: Bug > Components: Build >Reporter: Eduard Tudenhoefner >Assignee: Michael Shuler > Labels: CI > Fix For: 2.2.11, 3.0.15, 2.1.19 > > > *stress-test* was added to CircleCI in CASSANDRA-13413 (2.1+) but the target > itself got introduced in CASSANDRA-11638 (3.10).
[jira] [Updated] (CASSANDRA-13775) CircleCI tests fail because *stress-test* isn't a valid target
[ https://issues.apache.org/jira/browse/CASSANDRA-13775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Shuler updated CASSANDRA-13775: --- Status: Ready to Commit (was: Patch Available) > CircleCI tests fail because *stress-test* isn't a valid target > -- > > Key: CASSANDRA-13775 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13775 > Project: Cassandra > Issue Type: Bug > Components: Build >Reporter: Eduard Tudenhoefner >Assignee: Michael Shuler > Labels: CI > Fix For: 2.2.11, 3.0.15, 2.1.19 > > > *stress-test* was added to CircleCI in CASSANDRA-13413 (2.1+) but the target > itself got introduced in CASSANDRA-11638 (3.10).
[jira] [Assigned] (CASSANDRA-13775) CircleCI tests fail because *stress-test* isn't a valid target
[ https://issues.apache.org/jira/browse/CASSANDRA-13775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Shuler reassigned CASSANDRA-13775: -- Assignee: Michael Shuler (was: Eduard Tudenhoefner) > CircleCI tests fail because *stress-test* isn't a valid target > -- > > Key: CASSANDRA-13775 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13775 > Project: Cassandra > Issue Type: Bug > Components: Build >Reporter: Eduard Tudenhoefner >Assignee: Michael Shuler > Labels: CI > Fix For: 2.2.11, 3.0.15, 2.1.19 > > > *stress-test* was added to CircleCI in CASSANDRA-13413 (2.1+) but the target > itself got introduced in CASSANDRA-11638 (3.10).
[jira] [Updated] (CASSANDRA-13775) CircleCI tests fail because *stress-test* isn't a valid target
[ https://issues.apache.org/jira/browse/CASSANDRA-13775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Shuler updated CASSANDRA-13775: --- Reviewer: Michael Shuler (was: Marcus Eriksson) > CircleCI tests fail because *stress-test* isn't a valid target > -- > > Key: CASSANDRA-13775 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13775 > Project: Cassandra > Issue Type: Bug > Components: Build >Reporter: Eduard Tudenhoefner >Assignee: Eduard Tudenhoefner > Labels: CI > Fix For: 2.2.11, 3.0.15, 2.1.19 > > > *stress-test* was added to CircleCI in CASSANDRA-13413 (2.1+) but the target > itself got introduced in CASSANDRA-11638 (3.10).
[jira] [Commented] (CASSANDRA-13774) add bytes repaired/unrepaired in nodetool tablestats
[ https://issues.apache.org/jira/browse/CASSANDRA-13774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16131014#comment-16131014 ] Chris Lohfink commented on CASSANDRA-13774: --- +1 with last set of changes, thanks! > add bytes repaired/unrepaired in nodetool tablestats > > > Key: CASSANDRA-13774 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13774 > Project: Cassandra > Issue Type: Improvement >Reporter: Blake Eggleston >Assignee: Blake Eggleston > > It would be helpful to have the actual bytes that are repaired/unrepaired, in > addition to the percentage
[jira] [Commented] (CASSANDRA-13743) CAPTURE not easily usable with PAGING
[ https://issues.apache.org/jira/browse/CASSANDRA-13743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16131011#comment-16131011 ] Jeff Jirsa commented on CASSANDRA-13743: Thanks [~rgerard] - I've updated the [CHANGES log|https://github.com/apache/cassandra/commit/c0dc77ed4fa3b16558ce6f92c4ff076b890afc49] appropriately (but I'm not going to back out the commit to fix it there). - Jeff > CAPTURE not easily usable with PAGING > -- > > Key: CASSANDRA-13743 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13743 > Project: Cassandra > Issue Type: Bug > Components: Tools >Reporter: Corentin Chary >Assignee: Corentin Chary > Fix For: 4.0 > > > See > https://github.com/iksaif/cassandra/commit/7ed56966a7150ced44c375af307685517d7e09a3 > for a patch fixing that.
cassandra git commit: Ninja: Fix jira number for CASSANDRA-13743
Repository: cassandra Updated Branches: refs/heads/trunk 4f5bf0b67 -> c0dc77ed4 Ninja: Fix jira number for CASSANDRA-13743 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c0dc77ed Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c0dc77ed Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c0dc77ed Branch: refs/heads/trunk Commit: c0dc77ed4fa3b16558ce6f92c4ff076b890afc49 Parents: 4f5bf0b Author: Jeff Jirsa Authored: Thu Aug 17 12:01:43 2017 -0700 Committer: Jeff Jirsa Committed: Thu Aug 17 12:01:43 2017 -0700 -- CHANGES.txt | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/c0dc77ed/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index c0a8067..38b9b17 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -112,7 +112,7 @@ * Nodetool repair can hang forever if we lose the notification for the repair completing/failing (CASSANDRA-13480) * Anticompaction can cause noisy log messages (CASSANDRA-13684) * Switch to client init for sstabledump (CASSANDRA-13683) - * CQLSH: Don't pause when capturing data (CASSANDRA-13473) + * CQLSH: Don't pause when capturing data (CASSANDRA-13743) 3.11.1
[jira] [Comment Edited] (CASSANDRA-13775) CircleCI tests fail because *stress-test* isn't a valid target
[ https://issues.apache.org/jira/browse/CASSANDRA-13775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16131001#comment-16131001 ] Eduard Tudenhoefner edited comment on CASSANDRA-13775 at 8/17/17 6:56 PM: -- Branch: https://github.com/nastra/cassandra/tree/13775-21 Test: https://circleci.com/gh/nastra/cassandra/7 was (Author: eduard.tudenhoefner): Branch: https://github.com/nastra/cassandra/tree/13775-21 Test: https://circleci.com/gh/nastra/cassandra/6 > CircleCI tests fail because *stress-test* isn't a valid target > -- > > Key: CASSANDRA-13775 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13775 > Project: Cassandra > Issue Type: Bug > Components: Build >Reporter: Eduard Tudenhoefner >Assignee: Eduard Tudenhoefner > Labels: CI > Fix For: 2.2.11, 3.0.15, 2.1.19 > > > *stress-test* was added to CircleCI in CASSANDRA-13413 (2.1+) but the target > itself got introduced in CASSANDRA-11638 (3.10).
[jira] [Updated] (CASSANDRA-13775) CircleCI tests fail because *stress-test* isn't a valid target
[ https://issues.apache.org/jira/browse/CASSANDRA-13775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eduard Tudenhoefner updated CASSANDRA-13775: Labels: CI (was: ) Reproduced In: 3.0.14, 2.2.10, 2.1.18 (was: 2.1.18, 2.2.10, 3.0.14) Status: Patch Available (was: In Progress) > CircleCI tests fail because *stress-test* isn't a valid target > -- > > Key: CASSANDRA-13775 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13775 > Project: Cassandra > Issue Type: Bug > Components: Build >Reporter: Eduard Tudenhoefner >Assignee: Eduard Tudenhoefner > Labels: CI > Fix For: 2.2.11, 3.0.15, 2.1.19 > > > *stress-test* was added to CircleCI in CASSANDRA-13413 (2.1+) but the target > itself got introduced in CASSANDRA-11638 (3.10).
[jira] [Commented] (CASSANDRA-13413) Run more test targets on CircleCI
[ https://issues.apache.org/jira/browse/CASSANDRA-13413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16131006#comment-16131006 ] Eduard Tudenhoefner commented on CASSANDRA-13413: - It looks like the *stress-test* target only got introduced with CASSANDRA-11638 (3.10) and so *stress-test* isn't valid in all previous versions. [~krummas] I created CASSANDRA-13775, could you review it please? > Run more test targets on CircleCI > - > > Key: CASSANDRA-13413 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13413 > Project: Cassandra > Issue Type: Bug >Reporter: Marcus Eriksson >Assignee: Marcus Eriksson > Fix For: 2.1.18, 2.2.10, 3.0.13, 3.11.0, 4.0 > > > Currently we only run {{ant test}} on circleci, we should use all the (free) > containers we have and run more targets in parallel.
[jira] [Commented] (CASSANDRA-13775) CircleCI tests fail because *stress-test* isn't a valid target
[ https://issues.apache.org/jira/browse/CASSANDRA-13775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16131001#comment-16131001 ] Eduard Tudenhoefner commented on CASSANDRA-13775: - Branch: https://github.com/nastra/cassandra/tree/13775-21 Test: https://circleci.com/gh/nastra/cassandra/6 > CircleCI tests fail because *stress-test* isn't a valid target > -- > > Key: CASSANDRA-13775 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13775 > Project: Cassandra > Issue Type: Bug > Components: Build >Reporter: Eduard Tudenhoefner >Assignee: Eduard Tudenhoefner > Fix For: 2.2.11, 3.0.15, 2.1.19 > > > *stress-test* was added to CircleCI in CASSANDRA-13413 (2.1+) but the target > itself got introduced in CASSANDRA-11638 (3.10).
[jira] [Comment Edited] (CASSANDRA-13773) cassandra-stress writes data even when n=0
[ https://issues.apache.org/jira/browse/CASSANDRA-13773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16130849#comment-16130849 ] Eduard Tudenhoefner edited comment on CASSANDRA-13773 at 8/17/17 6:46 PM: -- Branch: https://github.com/nastra/cassandra/tree/CASSANDRA-13773-30 Test: https://circleci.com/gh/nastra/cassandra/4 The above test is failing because of CASSANDRA-13775. Re-running the tests excluding the *stress-test* target: Branch with CASSANDRA-13775 applied: https://github.com/nastra/cassandra/tree/CASSANDRA-13773-30-with-13775 Test with CASSANDRA-13775 applied: https://circleci.com/gh/nastra/cassandra/5 was (Author: eduard.tudenhoefner): Branch: https://github.com/nastra/cassandra/tree/CASSANDRA-13773-30 test: https://circleci.com/gh/nastra/cassandra/4 > cassandra-stress writes data even when n=0 > -- > > Key: CASSANDRA-13773 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13773 > Project: Cassandra > Issue Type: Bug > Components: Stress >Reporter: Eduard Tudenhoefner >Assignee: Eduard Tudenhoefner >Priority: Minor > Fix For: 3.0.15 > > > This is very unintuitive, as > {code} > cassandra-stress write n=0 -rate threads=1 > {code} > will do inserts even with *n=0*. I guess most people won't ever run with > *n=0*, but this is a nice shortcut for creating some schema without using > *cqlsh*. > This is happening because we're writing *50k* rows of warmup data, as can be > seen below: > {code} > cqlsh> select count(*) from keyspace1.standard1 ; > count > --- > 50000 > (1 rows) > {code} > We can avoid writing warmup data using > {code} > cassandra-stress write n=0 no-warmup -rate threads=1 > {code} > but I would still expect to have *0* rows written when specifying *n=0*.
[jira] [Created] (CASSANDRA-13775) CircleCI tests fail because *stress-test* isn't a valid target
Eduard Tudenhoefner created CASSANDRA-13775: --- Summary: CircleCI tests fail because *stress-test* isn't a valid target Key: CASSANDRA-13775 URL: https://issues.apache.org/jira/browse/CASSANDRA-13775 Project: Cassandra Issue Type: Bug Components: Build Reporter: Eduard Tudenhoefner Assignee: Eduard Tudenhoefner Fix For: 2.2.11, 3.0.15, 2.1.19 *stress-test* was added to CircleCI in CASSANDRA-13413 (2.1+) but the target itself got introduced in CASSANDRA-11638 (3.10).
[jira] [Commented] (CASSANDRA-12390) Make SASI work with partitioners that have variable-size tokens
[ https://issues.apache.org/jira/browse/CASSANDRA-12390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16130913#comment-16130913 ] Abhish Agarwal commented on CASSANDRA-12390: I want to take it up. Can someone help me in assigning it to myself? > Make SASI work with partitioners that have variable-size tokens > --- > > Key: CASSANDRA-12390 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12390 > Project: Cassandra > Issue Type: Improvement > Components: sasi >Reporter: Alex Petrov > > At the moment, SASI indexes can work only with Murmur3Partitioner. > [CASSANDRA-12389] was created to enable support of one more partitioner with > fixed-size tokens, although enabling variable-size tokens will need more > work, namely skipping tokens, since we can't rely on fixed-size > multiplication for calculating offsets in that case anymore. > This change won't require bytecode format changes, although supporting > ByteOrderedPartitioner is not a very high priority, and performance will be > worse because of "manual" skipping.
[jira] [Updated] (CASSANDRA-13774) add bytes repaired/unrepaired in nodetool tablestats
[ https://issues.apache.org/jira/browse/CASSANDRA-13774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Blake Eggleston updated CASSANDRA-13774: Reviewer: Chris Lohfink > add bytes repaired/unrepaired in nodetool tablestats > > > Key: CASSANDRA-13774 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13774 > Project: Cassandra > Issue Type: Improvement >Reporter: Blake Eggleston >Assignee: Blake Eggleston > > It would be helpful to have the actual bytes that are repaired/unrepaired, in > addition to the percentage
[jira] [Updated] (CASSANDRA-13771) Emit metrics whenever we hit tombstone failures and warn thresholds
[ https://issues.apache.org/jira/browse/CASSANDRA-13771?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] sankalp kohli updated CASSANDRA-13771: -- Reviewer: Marcus Eriksson > Emit metrics whenever we hit tombstone failures and warn thresholds > --- > > Key: CASSANDRA-13771 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13771 > Project: Cassandra > Issue Type: Improvement >Reporter: TIRU ADDANKI >Assignee: TIRU ADDANKI >Priority: Minor > Attachments: 13771.patch > > > Many times we see Cassandra timeouts, but unless we check the logs we won’t > be able to tell if the timeouts are the result of too many tombstones or some > other issue. It would be easier if we have metrics published whenever we hit > tombstone failure/warning thresholds.
[jira] [Commented] (CASSANDRA-13771) Emit metrics whenever we hit tombstone failures and warn thresholds
[ https://issues.apache.org/jira/browse/CASSANDRA-13771?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16130907#comment-16130907 ] sankalp kohli commented on CASSANDRA-13771: --- It will give a different picture, I would say :). We also need to know about these metrics as well. > Emit metrics whenever we hit tombstone failures and warn thresholds > --- > > Key: CASSANDRA-13771 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13771 > Project: Cassandra > Issue Type: Improvement >Reporter: TIRU ADDANKI >Assignee: TIRU ADDANKI >Priority: Minor > Attachments: 13771.patch > > > Many times we see Cassandra timeouts, but unless we check the logs we won’t > be able to tell if the timeouts are the result of too many tombstones or some > other issue. It would be easier if we have metrics published whenever we hit > tombstone failure/warning thresholds.
[jira] [Updated] (CASSANDRA-13774) add bytes repaired/unrepaired in nodetool tablestats
[ https://issues.apache.org/jira/browse/CASSANDRA-13774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Blake Eggleston updated CASSANDRA-13774: Status: Patch Available (was: Open) [trunk|https://github.com/bdeggleston/cassandra/tree/13774] [utest|https://circleci.com/gh/bdeggleston/cassandra/89] > add bytes repaired/unrepaired in nodetool tablestats > > > Key: CASSANDRA-13774 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13774 > Project: Cassandra > Issue Type: Improvement >Reporter: Blake Eggleston >Assignee: Blake Eggleston > > It would be helpful to have the actual bytes that are repaired/unrepaired, in > addition to the percentage
[jira] [Created] (CASSANDRA-13774) add bytes repaired/unrepaired in nodetool tablestats
Blake Eggleston created CASSANDRA-13774: --- Summary: add bytes repaired/unrepaired in nodetool tablestats Key: CASSANDRA-13774 URL: https://issues.apache.org/jira/browse/CASSANDRA-13774 Project: Cassandra Issue Type: Improvement Reporter: Blake Eggleston Assignee: Blake Eggleston It would be helpful to have the actual bytes that are repaired/unrepaired, in addition to the percentage
[jira] [Updated] (CASSANDRA-13773) cassandra-stress writes even data when n=0
[ https://issues.apache.org/jira/browse/CASSANDRA-13773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eduard Tudenhoefner updated CASSANDRA-13773: Reviewer: Stefania Status: Patch Available (was: In Progress) > cassandra-stress writes even data when n=0 > -- > > Key: CASSANDRA-13773 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13773 > Project: Cassandra > Issue Type: Bug > Components: Stress >Reporter: Eduard Tudenhoefner >Assignee: Eduard Tudenhoefner >Priority: Minor > Fix For: 3.0.15 > > > This is very unintuitive as > {code} > cassandra-stress write n=0 -rate threads=1 > {code} > will do inserts even with *n=0*. I guess most people won't ever run with > *n=0* but this is a nice shortcut for creating some schema without using > *cqlsh* > This is happening because we're writing *50k* rows of warmup data as can be > seen below: > {code} > cqlsh> select count(*) from keyspace1.standard1 ; > count > --- > 5 > (1 rows) > {code} > We can avoid writing warmup data using > {code} > cassandra-stress write n=0 no-warmup -rate threads=1 > {code} > but I would still expect to have *0* rows written when specifying *n=0*. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Commented] (CASSANDRA-13773) cassandra-stress writes even data when n=0
[ https://issues.apache.org/jira/browse/CASSANDRA-13773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16130849#comment-16130849 ] Eduard Tudenhoefner commented on CASSANDRA-13773: - Branch: https://github.com/nastra/cassandra/tree/CASSANDRA-13773-30 test: https://circleci.com/gh/nastra/cassandra/4 > cassandra-stress writes even data when n=0 > -- > > Key: CASSANDRA-13773 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13773 > Project: Cassandra > Issue Type: Bug > Components: Stress >Reporter: Eduard Tudenhoefner >Assignee: Eduard Tudenhoefner >Priority: Minor > Fix For: 3.0.15 > > > This is very unintuitive as > {code} > cassandra-stress write n=0 -rate threads=1 > {code} > will do inserts even with *n=0*. I guess most people won't ever run with > *n=0* but this is a nice shortcut for creating some schema without using > *cqlsh* > This is happening because we're writing *50k* rows of warmup data as can be > seen below: > {code} > cqlsh> select count(*) from keyspace1.standard1 ; > count > --- > 5 > (1 rows) > {code} > We can avoid writing warmup data using > {code} > cassandra-stress write n=0 no-warmup -rate threads=1 > {code} > but I would still expect to have *0* rows written when specifying *n=0*. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Created] (CASSANDRA-13773) cassandra-stress writes even data when n=0
Eduard Tudenhoefner created CASSANDRA-13773: --- Summary: cassandra-stress writes even data when n=0 Key: CASSANDRA-13773 URL: https://issues.apache.org/jira/browse/CASSANDRA-13773 Project: Cassandra Issue Type: Bug Components: Stress Reporter: Eduard Tudenhoefner Assignee: Eduard Tudenhoefner Priority: Minor Fix For: 3.0.15 This is very unintuitive as {code} cassandra-stress write n=0 -rate threads=1 {code} will do inserts even with *n=0*. I guess most people won't ever run with *n=0* but this is a nice shortcut for creating some schema without using *cqlsh* This is happening because we're writing *50k* rows of warmup data as can be seen below: {code} cqlsh> select count(*) from keyspace1.standard1 ; count --- 5 (1 rows) {code} We can avoid writing warmup data using {code} cassandra-stress write n=0 no-warmup -rate threads=1 {code} but I would still expect to have *0* rows written when specifying *n=0*. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Commented] (CASSANDRA-13758) Incremental repair sessions shouldn't be deleted if they still have sstables
[ https://issues.apache.org/jira/browse/CASSANDRA-13758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16130687#comment-16130687 ] Marcus Eriksson commented on CASSANDRA-13758: - +1 - but it might make sense to log something if a session is kept because it contains data? Feel free to add that on commit if you agree > Incremental repair sessions shouldn't be deleted if they still have sstables > > > Key: CASSANDRA-13758 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13758 > Project: Cassandra > Issue Type: Bug >Reporter: Blake Eggleston >Assignee: Blake Eggleston > Labels: incremental_repair > Fix For: 4.0 > > > The incremental session cleanup doesn't verify that there are no remaining > sstables marked as part of the repair before deleting it. Deleting a > successful repair session which still has outstanding sstables will cause > those sstables to be demoted to unrepaired, creating an inconsistency. > This typically wouldn't be an issue, since we'd expect the sstables to long > since have been promoted / demoted. However, I've seen a few ref leak issues > which can cause sstables to get stuck. Those have been fixed, but we should > still protect against that edge case to prevent inconsistencies caused by > future (or currently unknown) bugs. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
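The guard being +1'd above can be sketched as follows; the names are illustrative, not the actual repair-session API, and the log line reflects the review suggestion to say why a session is kept:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

/**
 * Sketch: before an incremental repair session is purged, verify that no
 * sstables are still marked as part of that repair; otherwise keep the
 * session (deleting it would demote those sstables to unrepaired).
 */
public class SessionCleanupSketch {
    // session id -> number of sstables still carrying that pending-repair id
    static int sstablesReferencing(UUID session, Map<UUID, Integer> refs) {
        return refs.getOrDefault(session, 0);
    }

    static boolean canDelete(UUID session, Map<UUID, Integer> refs) {
        int remaining = sstablesReferencing(session, refs);
        if (remaining > 0) {
            System.out.println("Skipping delete of repair session " + session
                               + ": " + remaining + " sstables still reference it");
            return false;
        }
        return true;
    }

    public static void main(String[] args) {
        Map<UUID, Integer> refs = new HashMap<>();
        UUID busy = UUID.randomUUID(), idle = UUID.randomUUID();
        refs.put(busy, 3);
        System.out.println(canDelete(busy, refs));  // false: deleting would demote sstables
        System.out.println(canDelete(idle, refs));  // true: safe to purge
    }
}
```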
[jira] [Updated] (CASSANDRA-13758) Incremental repair sessions shouldn't be deleted if they still have sstables
[ https://issues.apache.org/jira/browse/CASSANDRA-13758?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Marcus Eriksson updated CASSANDRA-13758: Status: Ready to Commit (was: Patch Available) > Incremental repair sessions shouldn't be deleted if they still have sstables > > > Key: CASSANDRA-13758 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13758 > Project: Cassandra > Issue Type: Bug >Reporter: Blake Eggleston >Assignee: Blake Eggleston > Labels: incremental_repair > Fix For: 4.0 > > > The incremental session cleanup doesn't verify that there are no remaining > sstables marked as part of the repair before deleting it. Deleting a > successful repair session which still has outstanding sstables will cause > those sstables to be demoted to unrepaired, creating an inconsistency. > This typically wouldn't be an issue, since we'd expect the sstables to long > since have been promoted / demoted. However, I've seen a few ref leak issues > which can cause sstables to get stuck. Those have been fixed, but we should > still protect against that edge case to prevent inconsistencies caused by > future (or currently unknown) bugs. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Commented] (CASSANDRA-10726) Read repair inserts should not be blocking
[ https://issues.apache.org/jira/browse/CASSANDRA-10726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16130504#comment-16130504 ] Xiaolong Jiang commented on CASSANDRA-10726: [~iamaleksey] Because I don't know how to refactor to make my change clean unless we change the iterator style. If we do that, it will be a much bigger change, which we could put in a separate new JIRA. And I am not sure how safe that refactor is: the iterator style is the new engine style and it's used all over the place, so it would be a significant change to the direction taken after 3.0. If you have any suggestions on how to refactor, or a specific refactor for this JIRA, I would be more than happy to make the change. > Read repair inserts should not be blocking > -- > > Key: CASSANDRA-10726 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10726 > Project: Cassandra > Issue Type: Improvement > Components: Coordination >Reporter: Richard Low >Assignee: Xiaolong Jiang > Fix For: 4.x > > > Today, if there’s a digest mismatch in a foreground read repair, the insert > to update out of date replicas is blocking. This means, if it fails, the read > fails with a timeout. If a node is dropping writes (maybe it is overloaded or > the mutation stage is backed up for some other reason), all reads to a > replica set could fail. Further, replicas dropping writes get more out of > sync so will require more read repair. > The comment on the code for why the writes are blocking is: > {code} > // wait for the repair writes to be acknowledged, to minimize impact on any > replica that's > // behind on writes in case the out-of-sync row is read multiple times in > quick succession > {code} > but the bad side effect is that reads timeout. Either the writes should not > be blocking or we should return success for the read even if the write times > out.
[jira] [Resolved] (CASSANDRA-13576) test failure in bootstrap_test.TestBootstrap.consistent_range_movement_false_with_rf1_should_succeed_test
[ https://issues.apache.org/jira/browse/CASSANDRA-13576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Marcus Eriksson resolved CASSANDRA-13576. - Resolution: Fixed Fix Version/s: 4.0 CircleCI looks ok - {{testFixedSize - org.apache.cassandra.db.commitlog.CommitLogStressTest}} failed, but that is unrelated. Committed as 4f5bf0b67d2e0a93595cc8061018b20aa2309566 > test failure in > bootstrap_test.TestBootstrap.consistent_range_movement_false_with_rf1_should_succeed_test > - > > Key: CASSANDRA-13576 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13576 > Project: Cassandra > Issue Type: Bug >Reporter: Michael Hamm >Assignee: Marcus Eriksson > Labels: dtest, test-failure > Fix For: 4.0 > > Attachments: node1_debug.log, node1_gc.log, node1.log, > node2_debug.log, node2_gc.log, node2.log, node3_debug.log, node3_gc.log, > node3.log, Screen Shot 2017-08-17 at 14.46.00.png > > > example failure: > http://cassci.datastax.com/job/trunk_offheap_dtest/445/testReport/bootstrap_test/TestBootstrap/consistent_range_movement_false_with_rf1_should_succeed_test > {noformat} > Error Message > 31 May 2017 04:28:09 [node3] Missing: ['Starting listening for CQL clients']: > INFO [main] 2017-05-31 04:18:01,615 YamlConfigura. 
> See system.log for remainder > {noformat} > {noformat} > Stacktrace > File "/usr/lib/python2.7/unittest/case.py", line 329, in run > testMethod() > File "/home/automaton/cassandra-dtest/bootstrap_test.py", line 236, in > consistent_range_movement_false_with_rf1_should_succeed_test > self._bootstrap_test_with_replica_down(False, rf=1) > File "/home/automaton/cassandra-dtest/bootstrap_test.py", line 278, in > _bootstrap_test_with_replica_down > > jvm_args=["-Dcassandra.consistent.rangemovement={}".format(consistent_range_movement)]) > File > "/home/automaton/venv/local/lib/python2.7/site-packages/ccmlib/node.py", line > 696, in start > self.wait_for_binary_interface(from_mark=self.mark) > File > "/home/automaton/venv/local/lib/python2.7/site-packages/ccmlib/node.py", line > 514, in wait_for_binary_interface > self.watch_log_for("Starting listening for CQL clients", **kwargs) > File > "/home/automaton/venv/local/lib/python2.7/site-packages/ccmlib/node.py", line > 471, in watch_log_for > raise TimeoutError(time.strftime("%d %b %Y %H:%M:%S", time.gmtime()) + " > [" + self.name + "] Missing: " + str([e.pattern for e in tofind]) + ":\n" + > reads[:50] + ".\nSee {} for remainder".format(filename)) > "31 May 2017 04:28:09 [node3] Missing: ['Starting listening for CQL > clients']:\nINFO [main] 2017-05-31 04:18:01,615 YamlConfigura.\n > {noformat} > {noformat} > >> begin captured logging << > \ndtest: DEBUG: cluster ccm directory: > /tmp/dtest-PKphwD\ndtest: DEBUG: Done setting configuration options:\n{ > 'initial_token': None,\n'memtable_allocation_type': 'offheap_objects',\n > 'num_tokens': '32',\n'phi_convict_threshold': 5,\n > 'range_request_timeout_in_ms': 1,\n'read_request_timeout_in_ms': > 1,\n'request_timeout_in_ms': 1,\n > 'truncate_request_timeout_in_ms': 1,\n'write_request_timeout_in_ms': > 1}\ncassandra.policies: INFO: Using datacenter 'datacenter1' for > DCAwareRoundRobinPolicy (via host '127.0.0.1'); if incorrect, please specify > a local_dc to the 
constructor, or limit contact points to local cluster > nodes\ncassandra.cluster: INFO: New Cassandra host datacenter1> discovered\ncassandra.protocol: WARNING: Server warning: When > increasing replication factor you need to run a full (-full) repair to > distribute the data.\ncassandra.connection: WARNING: Heartbeat failed for > connection (139927174110160) to 127.0.0.2\ncassandra.cluster: WARNING: Host > 127.0.0.2 has been marked down\ncassandra.pool: WARNING: Error attempting to > reconnect to 127.0.0.2, scheduling retry in 2.0 seconds: [Errno 111] Tried > connecting to [('127.0.0.2', 9042)]. Last error: Connection > refused\ncassandra.pool: WARNING: Error attempting to reconnect to 127.0.0.2, > scheduling retry in 4.0 seconds: [Errno 111] Tried connecting to > [('127.0.0.2', 9042)]. Last error: Connection refused\ncassandra.pool: > WARNING: Error attempting to reconnect to 127.0.0.2, scheduling retry in 8.0 > seconds: [Errno 111] Tried connecting to [('127.0.0.2', 9042)]. Last error: > Connection refused\ncassandra.pool: WARNING: Error attempting to reconnect to > 127.0.0.2, scheduling retry in 16.0 seconds: [Errno 111] Tried connecting to > [('127.0.0.2', 9042)]. Last error: Connection refused\ncassandra.pool: > WARNING: Error attempting to reconnect to 127.0.0.2,
cassandra git commit: Don't use RangeFetchMapCalculator when RF=1
Repository: cassandra Updated Branches: refs/heads/trunk 22b2a82f7 -> 4f5bf0b67 Don't use RangeFetchMapCalculator when RF=1 Patch by marcuse; reviewed by Alex Petrov for CASSANDRA-13576 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/4f5bf0b6 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/4f5bf0b6 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/4f5bf0b6 Branch: refs/heads/trunk Commit: 4f5bf0b67d2e0a93595cc8061018b20aa2309566 Parents: 22b2a82 Author: Marcus ErikssonAuthored: Mon Aug 14 18:10:01 2017 +0200 Committer: Marcus Eriksson Committed: Thu Aug 17 16:39:17 2017 +0200 -- CHANGES.txt | 1 + src/java/org/apache/cassandra/dht/RangeStreamer.java | 7 --- 2 files changed, 5 insertions(+), 3 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/4f5bf0b6/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index 2961a1d..c0a8067 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -1,4 +1,5 @@ 4.0 + * Don't use RangeFetchMapCalculator when RF=1 (CASSANDRA-13576) * Don't optimise trivial ranges in RangeFetchMapCalculator (CASSANDRA-13664) * Use an ExecutorService for repair commands instead of new Thread(..).start() (CASSANDRA-13594) * Fix race / ref leak in anticompaction (CASSANDRA-13688) http://git-wip-us.apache.org/repos/asf/cassandra/blob/4f5bf0b6/src/java/org/apache/cassandra/dht/RangeStreamer.java -- diff --git a/src/java/org/apache/cassandra/dht/RangeStreamer.java b/src/java/org/apache/cassandra/dht/RangeStreamer.java index 134ed13..eabb212 100644 --- a/src/java/org/apache/cassandra/dht/RangeStreamer.java +++ b/src/java/org/apache/cassandra/dht/RangeStreamer.java @@ -192,9 +192,10 @@ public class RangeStreamer for (Map.Entry entry : rangesForKeyspace.entries()) logger.info("{}: range {} exists on {} for keyspace {}", description, entry.getKey(), entry.getValue(), keyspaceName); - -Multimap rangeFetchMap = useStrictSource ? 
getRangeFetchMap(rangesForKeyspace, sourceFilters, keyspaceName, useStrictConsistency) : -getOptimizedRangeFetchMap(rangesForKeyspace, sourceFilters, keyspaceName); +AbstractReplicationStrategy strat = Keyspace.open(keyspaceName).getReplicationStrategy(); +Multimap rangeFetchMap = useStrictSource || strat == null || strat.getReplicationFactor() == 1 +? getRangeFetchMap(rangesForKeyspace, sourceFilters, keyspaceName, useStrictConsistency) +: getOptimizedRangeFetchMap(rangesForKeyspace, sourceFilters, keyspaceName); for (Map.Entry > entry : rangeFetchMap.asMap().entrySet()) { - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
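The behavioral change in the commit above reduces to a small decision: the optimized RangeFetchMapCalculator tries to spread range fetches across replicas, which only helps when each range has more than one source, so with RF=1 (or strict sources) it falls back to the plain fetch map. A toy stand-in for that decision, not the actual {{RangeStreamer}} API:

```java
/**
 * Sketch of the strategy choice added for CASSANDRA-13576. The string
 * return values stand in for calls to getRangeFetchMap(...) and
 * getOptimizedRangeFetchMap(...); replicationFactor is nullable to mirror
 * the null-strategy check in the real patch.
 */
public class FetchMapChoiceSketch {
    static String chooseStrategy(boolean useStrictSource, Integer replicationFactor) {
        if (useStrictSource || replicationFactor == null || replicationFactor == 1)
            return "plain";      // getRangeFetchMap(...): one source per range
        return "optimized";      // getOptimizedRangeFetchMap(...): balance across replicas
    }

    public static void main(String[] args) {
        System.out.println(chooseStrategy(false, 1));   // plain: RF=1, nothing to balance
        System.out.println(chooseStrategy(false, 3));   // optimized: multiple sources
        System.out.println(chooseStrategy(true, 3));    // plain: strict sources requested
    }
}
```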
[jira] [Updated] (CASSANDRA-13576) test failure in bootstrap_test.TestBootstrap.consistent_range_movement_false_with_rf1_should_succeed_test
[ https://issues.apache.org/jira/browse/CASSANDRA-13576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Marcus Eriksson updated CASSANDRA-13576: Attachment: Screen Shot 2017-08-17 at 14.46.00.png dtests failures look flaky (and the ones that could be related fail on trunk as well): !Screen Shot 2017-08-17 at 14.46.00.png|thumbnail! waiting for a clean circleci build before committing: https://circleci.com/gh/krummas/cassandra/72 > test failure in > bootstrap_test.TestBootstrap.consistent_range_movement_false_with_rf1_should_succeed_test > - > > Key: CASSANDRA-13576 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13576 > Project: Cassandra > Issue Type: Bug >Reporter: Michael Hamm >Assignee: Marcus Eriksson > Labels: dtest, test-failure > Attachments: node1_debug.log, node1_gc.log, node1.log, > node2_debug.log, node2_gc.log, node2.log, node3_debug.log, node3_gc.log, > node3.log, Screen Shot 2017-08-17 at 14.46.00.png > > > example failure: > http://cassci.datastax.com/job/trunk_offheap_dtest/445/testReport/bootstrap_test/TestBootstrap/consistent_range_movement_false_with_rf1_should_succeed_test > {noformat} > Error Message > 31 May 2017 04:28:09 [node3] Missing: ['Starting listening for CQL clients']: > INFO [main] 2017-05-31 04:18:01,615 YamlConfigura. 
> See system.log for remainder > {noformat} > {noformat} > Stacktrace > File "/usr/lib/python2.7/unittest/case.py", line 329, in run > testMethod() > File "/home/automaton/cassandra-dtest/bootstrap_test.py", line 236, in > consistent_range_movement_false_with_rf1_should_succeed_test > self._bootstrap_test_with_replica_down(False, rf=1) > File "/home/automaton/cassandra-dtest/bootstrap_test.py", line 278, in > _bootstrap_test_with_replica_down > > jvm_args=["-Dcassandra.consistent.rangemovement={}".format(consistent_range_movement)]) > File > "/home/automaton/venv/local/lib/python2.7/site-packages/ccmlib/node.py", line > 696, in start > self.wait_for_binary_interface(from_mark=self.mark) > File > "/home/automaton/venv/local/lib/python2.7/site-packages/ccmlib/node.py", line > 514, in wait_for_binary_interface > self.watch_log_for("Starting listening for CQL clients", **kwargs) > File > "/home/automaton/venv/local/lib/python2.7/site-packages/ccmlib/node.py", line > 471, in watch_log_for > raise TimeoutError(time.strftime("%d %b %Y %H:%M:%S", time.gmtime()) + " > [" + self.name + "] Missing: " + str([e.pattern for e in tofind]) + ":\n" + > reads[:50] + ".\nSee {} for remainder".format(filename)) > "31 May 2017 04:28:09 [node3] Missing: ['Starting listening for CQL > clients']:\nINFO [main] 2017-05-31 04:18:01,615 YamlConfigura.\n > {noformat} > {noformat} > >> begin captured logging << > \ndtest: DEBUG: cluster ccm directory: > /tmp/dtest-PKphwD\ndtest: DEBUG: Done setting configuration options:\n{ > 'initial_token': None,\n'memtable_allocation_type': 'offheap_objects',\n > 'num_tokens': '32',\n'phi_convict_threshold': 5,\n > 'range_request_timeout_in_ms': 1,\n'read_request_timeout_in_ms': > 1,\n'request_timeout_in_ms': 1,\n > 'truncate_request_timeout_in_ms': 1,\n'write_request_timeout_in_ms': > 1}\ncassandra.policies: INFO: Using datacenter 'datacenter1' for > DCAwareRoundRobinPolicy (via host '127.0.0.1'); if incorrect, please specify > a local_dc to the 
constructor, or limit contact points to local cluster > nodes\ncassandra.cluster: INFO: New Cassandra host datacenter1> discovered\ncassandra.protocol: WARNING: Server warning: When > increasing replication factor you need to run a full (-full) repair to > distribute the data.\ncassandra.connection: WARNING: Heartbeat failed for > connection (139927174110160) to 127.0.0.2\ncassandra.cluster: WARNING: Host > 127.0.0.2 has been marked down\ncassandra.pool: WARNING: Error attempting to > reconnect to 127.0.0.2, scheduling retry in 2.0 seconds: [Errno 111] Tried > connecting to [('127.0.0.2', 9042)]. Last error: Connection > refused\ncassandra.pool: WARNING: Error attempting to reconnect to 127.0.0.2, > scheduling retry in 4.0 seconds: [Errno 111] Tried connecting to > [('127.0.0.2', 9042)]. Last error: Connection refused\ncassandra.pool: > WARNING: Error attempting to reconnect to 127.0.0.2, scheduling retry in 8.0 > seconds: [Errno 111] Tried connecting to [('127.0.0.2', 9042)]. Last error: > Connection refused\ncassandra.pool: WARNING: Error attempting to reconnect to > 127.0.0.2, scheduling retry in 16.0 seconds: [Errno 111] Tried connecting to > [('127.0.0.2', 9042)]. Last error: Connection refused\ncassandra.pool: > WARNING: Error attempting
[jira] [Comment Edited] (CASSANDRA-13576) test failure in bootstrap_test.TestBootstrap.consistent_range_movement_false_with_rf1_should_succeed_test
[ https://issues.apache.org/jira/browse/CASSANDRA-13576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16130451#comment-16130451 ] Marcus Eriksson edited comment on CASSANDRA-13576 at 8/17/17 2:09 PM: -- dtests failures look flaky (and the ones that could be related fail on trunk as well): [^Screen Shot 2017-08-17 at 14.46.00.png] waiting for a clean circleci build before committing: https://circleci.com/gh/krummas/cassandra/72 was (Author: krummas): dtests failures look flaky (and the ones that could be related fail on trunk as well): !Screen Shot 2017-08-17 at 14.46.00.png|thumbnail! waiting for a clean circleci build before committing: https://circleci.com/gh/krummas/cassandra/72 > test failure in > bootstrap_test.TestBootstrap.consistent_range_movement_false_with_rf1_should_succeed_test > - > > Key: CASSANDRA-13576 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13576 > Project: Cassandra > Issue Type: Bug >Reporter: Michael Hamm >Assignee: Marcus Eriksson > Labels: dtest, test-failure > Attachments: node1_debug.log, node1_gc.log, node1.log, > node2_debug.log, node2_gc.log, node2.log, node3_debug.log, node3_gc.log, > node3.log, Screen Shot 2017-08-17 at 14.46.00.png > > > example failure: > http://cassci.datastax.com/job/trunk_offheap_dtest/445/testReport/bootstrap_test/TestBootstrap/consistent_range_movement_false_with_rf1_should_succeed_test > {noformat} > Error Message > 31 May 2017 04:28:09 [node3] Missing: ['Starting listening for CQL clients']: > INFO [main] 2017-05-31 04:18:01,615 YamlConfigura. 
> See system.log for remainder > {noformat} > {noformat} > Stacktrace > File "/usr/lib/python2.7/unittest/case.py", line 329, in run > testMethod() > File "/home/automaton/cassandra-dtest/bootstrap_test.py", line 236, in > consistent_range_movement_false_with_rf1_should_succeed_test > self._bootstrap_test_with_replica_down(False, rf=1) > File "/home/automaton/cassandra-dtest/bootstrap_test.py", line 278, in > _bootstrap_test_with_replica_down > > jvm_args=["-Dcassandra.consistent.rangemovement={}".format(consistent_range_movement)]) > File > "/home/automaton/venv/local/lib/python2.7/site-packages/ccmlib/node.py", line > 696, in start > self.wait_for_binary_interface(from_mark=self.mark) > File > "/home/automaton/venv/local/lib/python2.7/site-packages/ccmlib/node.py", line > 514, in wait_for_binary_interface > self.watch_log_for("Starting listening for CQL clients", **kwargs) > File > "/home/automaton/venv/local/lib/python2.7/site-packages/ccmlib/node.py", line > 471, in watch_log_for > raise TimeoutError(time.strftime("%d %b %Y %H:%M:%S", time.gmtime()) + " > [" + self.name + "] Missing: " + str([e.pattern for e in tofind]) + ":\n" + > reads[:50] + ".\nSee {} for remainder".format(filename)) > "31 May 2017 04:28:09 [node3] Missing: ['Starting listening for CQL > clients']:\nINFO [main] 2017-05-31 04:18:01,615 YamlConfigura.\n > {noformat} > {noformat} > >> begin captured logging << > \ndtest: DEBUG: cluster ccm directory: > /tmp/dtest-PKphwD\ndtest: DEBUG: Done setting configuration options:\n{ > 'initial_token': None,\n'memtable_allocation_type': 'offheap_objects',\n > 'num_tokens': '32',\n'phi_convict_threshold': 5,\n > 'range_request_timeout_in_ms': 1,\n'read_request_timeout_in_ms': > 1,\n'request_timeout_in_ms': 1,\n > 'truncate_request_timeout_in_ms': 1,\n'write_request_timeout_in_ms': > 1}\ncassandra.policies: INFO: Using datacenter 'datacenter1' for > DCAwareRoundRobinPolicy (via host '127.0.0.1'); if incorrect, please specify > a local_dc to the 
constructor, or limit contact points to local cluster > nodes\ncassandra.cluster: INFO: New Cassandra host datacenter1> discovered\ncassandra.protocol: WARNING: Server warning: When > increasing replication factor you need to run a full (-full) repair to > distribute the data.\ncassandra.connection: WARNING: Heartbeat failed for > connection (139927174110160) to 127.0.0.2\ncassandra.cluster: WARNING: Host > 127.0.0.2 has been marked down\ncassandra.pool: WARNING: Error attempting to > reconnect to 127.0.0.2, scheduling retry in 2.0 seconds: [Errno 111] Tried > connecting to [('127.0.0.2', 9042)]. Last error: Connection > refused\ncassandra.pool: WARNING: Error attempting to reconnect to 127.0.0.2, > scheduling retry in 4.0 seconds: [Errno 111] Tried connecting to > [('127.0.0.2', 9042)]. Last error: Connection refused\ncassandra.pool: > WARNING: Error attempting to reconnect to 127.0.0.2, scheduling retry in 8.0 > seconds: [Errno 111] Tried connecting to
[jira] [Updated] (CASSANDRA-13747) Fix short read protection
[ https://issues.apache.org/jira/browse/CASSANDRA-13747?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Yeschenko updated CASSANDRA-13747: -- Status: Open (was: Patch Available) > Fix short read protection > - > > Key: CASSANDRA-13747 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13747 > Project: Cassandra > Issue Type: Bug > Components: Coordination >Reporter: Aleksey Yeschenko >Assignee: Aleksey Yeschenko > Fix For: 3.0.x, 3.11.x > > > {{ShortReadRowProtection.moreContents()}} expects that by the time we get to > that method, the global post-reconciliation counter was already applied to > the current partition. However, sometimes it won’t get applied, and the > global counter continues counting with {{rowInCurrentPartition}} value not > reset from previous partition, which in the most obvious case would trigger > the assertion we are observing - {{assert > !postReconciliationCounter.isDoneForPartition();}}. In other cases it’s > possible because of this lack of reset to query a node for too few extra > rows, causing unnecessary SRP data requests. > Why is the counter not always applied to the current partition? > The merged {{PartitionIterator}} returned from {{DataResolver.resolve()}} has > two transformations applied to it, in the following order: > {{Filter}} - to purge non-live data from partitions, and to discard empty > partitions altogether (except for Thrift) > {{Counter}}, to count and stop iteration > Problem is, {{Filter}} ’s {{applyToPartition()}} code that discards empty > partitions ({{closeIfEmpty()}} method) would sometimes consume the iterator, > triggering short read protection *before* {{Counter}} ’s > {{applyToPartition()}} gets called and resets its {{rowInCurrentPartition}} > sub-counter. > We should not be consuming iterators until all transformations are applied to > them. For transformations it means that they cannot consume iterators unless > they are the last transformation on the stack. 
> The linked branch fixes the problem by splitting {{Filter}} into two > transformations. The original - {{Filter}} - that does filtering within > partitions - and a separate {{EmptyPartitionsDiscarder}}, that discards empty > partitions from {{PartitionIterators}}. Thus {{DataResolve.resolve()}}, when > constructing its {{PartitionIterator}}, now does merge first, then applies > {{Filter}}, then {{Counter}}, and only then, as its last (third) > transformation - the {{EmptyPartitionsDiscarder}}. Being the last one > applied, it’s legal for it to consume the iterator, and triggering > {{moreContents()}} is now no longer a problem. > Fixes: [3.0|https://github.com/iamaleksey/cassandra/commits/13747-3.0], > [3.11|https://github.com/iamaleksey/cassandra/commits/13747-3.11], > [4.0|https://github.com/iamaleksey/cassandra/commits/13747-4.0]. dtest > [here|https://github.com/iamaleksey/cassandra-dtest/commits/13747]. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
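The ordering rule stated in this ticket ("only the last transformation on the stack may consume the iterator") can be illustrated with a toy model, heavily simplified from Cassandra's actual {{Transformation}} machinery: partitions are plain lists, and the empty-partition discarder runs only after the counter has seen the partition and reset its sub-counter:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

/**
 * Toy illustration of Filter -> Counter -> EmptyPartitionsDiscarder ordering.
 * Rows are ints: positive = live, negative = purgeable. Because the
 * discarder is applied last, it never inspects a partition before the
 * counter has reset rowsInCurrentPartition for it.
 */
public class TransformOrderSketch {
    /** Filter: purge non-live rows within a partition (may leave it empty). */
    static List<Integer> filter(List<Integer> partition) {
        List<Integer> live = new ArrayList<>();
        for (int row : partition) if (row > 0) live.add(row);
        return live;
    }

    static int rowsInCurrentPartition;  // the per-partition sub-counter

    /** Counter: reset the per-partition sub-counter, then count rows. */
    static List<Integer> count(List<Integer> partition) {
        rowsInCurrentPartition = 0;
        for (int ignored : partition) rowsInCurrentPartition++;
        return partition;
    }

    public static void main(String[] args) {
        List<List<Integer>> partitions = Arrays.asList(
            Arrays.asList(1, 2, -3),   // two live rows survive filtering
            Arrays.asList(-1, -2));    // empties out entirely

        List<List<Integer>> result = new ArrayList<>();
        for (List<Integer> p : partitions) {
            List<Integer> counted = count(filter(p));
            if (!counted.isEmpty())    // EmptyPartitionsDiscarder, applied last:
                result.add(counted);   // safe to drop here, the counter already ran
        }
        System.out.println(result);
    }
}
```

In the buggy ordering, the discarder's emptiness check ran inside {{Filter}}, before {{Counter.applyToPartition()}}, so the sub-counter carried over stale state from the previous partition.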
[jira] [Commented] (CASSANDRA-10726) Read repair inserts should not be blocking
[ https://issues.apache.org/jira/browse/CASSANDRA-10726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16130320#comment-16130320 ] Aleksey Yeschenko commented on CASSANDRA-10726: --- bq. I wound not refactor the whole read pipeline right now I guess even though I do agree the code becomes so complicated Can you elaborate on reasons why you wouldn't do that? > Read repair inserts should not be blocking > -- > > Key: CASSANDRA-10726 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10726 > Project: Cassandra > Issue Type: Improvement > Components: Coordination >Reporter: Richard Low >Assignee: Xiaolong Jiang > Fix For: 4.x > > > Today, if there’s a digest mismatch in a foreground read repair, the insert > to update out of date replicas is blocking. This means, if it fails, the read > fails with a timeout. If a node is dropping writes (maybe it is overloaded or > the mutation stage is backed up for some other reason), all reads to a > replica set could fail. Further, replicas dropping writes get more out of > sync so will require more read repair. > The comment on the code for why the writes are blocking is: > {code} > // wait for the repair writes to be acknowledged, to minimize impact on any > replica that's > // behind on writes in case the out-of-sync row is read multiple times in > quick succession > {code} > but the bad side effect is that reads timeout. Either the writes should not > be blocking or we should return success for the read even if the write times > out. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Commented] (CASSANDRA-12203) AssertionError on compaction after upgrade (2.1.9 -> 3.7)
[ https://issues.apache.org/jira/browse/CASSANDRA-12203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16130277#comment-16130277 ] R1J2 commented on CASSANDRA-12203: -- Can you list out the steps you took for this upgrade please ? > AssertionError on compaction after upgrade (2.1.9 -> 3.7) > - > > Key: CASSANDRA-12203 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12203 > Project: Cassandra > Issue Type: Bug > Components: Core > Environment: Cassandra 3.7 (upgrade from 2.1.9) > Java version "1.8.0_91" > Ubuntu 14.04.4 LTS (GNU/Linux 3.13.0-83-generic x86_64) >Reporter: Roman S. Borschel >Assignee: Yuki Morishita >Priority: Critical > Fix For: 3.0.11, 3.10 > > > After upgrading a Cassandra cluster from 2.1.9 to 3.7, one column family > (using SizeTieredCompaction) repeatedly and continuously failed compaction > (and thus also repair) across the cluster, with all nodes producing the > following errors in the logs: > {noformat} > 016-07-14T09:29:47.96855 |srv=cassandra|ERROR: Exception in thread > Thread[CompactionExecutor:3,1,main] > 2016-07-14T09:29:47.96858 |srv=cassandra|java.lang.AssertionError: null > 2016-07-14T09:29:47.96859 |srv=cassandra| at > org.apache.cassandra.db.UnfilteredDeserializer$OldFormatDeserializer$TombstoneTracker.openNew(UnfilteredDeserializer.java:650) > ~[apache-cassandra-3.7.jar:3.7] > 2016-07-14T09:29:47.96860 |srv=cassandra| at > org.apache.cassandra.db.UnfilteredDeserializer$OldFormatDeserializer$UnfilteredIterator.hasNext(UnfilteredDeserializer.java:423) > ~[apache-cassandra-3.7.jar:3.7] > 2016-07-14T09:29:47.96860 |srv=cassandra| at > org.apache.cassandra.db.UnfilteredDeserializer$OldFormatDeserializer.hasNext(UnfilteredDeserializer.java:298) > ~[apache-cassandra-3.7.jar:3.7] > 2016-07-14T09:29:47.96860 |srv=cassandra| at > org.apache.cassandra.io.sstable.SSTableSimpleIterator$OldFormatIterator.readStaticRow(SSTableSimpleIterator.java:133) > ~[apache-cassandra-3.7.jar:3.7] > 2016-07-14T09:29:47.96861 
|srv=cassandra| at > org.apache.cassandra.io.sstable.SSTableIdentityIterator.(SSTableIdentityIterator.java:57) > ~[apache-cassandra-3.7.jar:3.7] > 2016-07-14T09:29:47.96861 |srv=cassandra| at > org.apache.cassandra.io.sstable.format.big.BigTableScanner$KeyScanningIterator$1.initializeIterator(BigTableScanner.java:334) > ~[apache-cassandra-3.7.jar:3.7] > 2016-07-14T09:29:47.96862 |srv=cassandra| at > org.apache.cassandra.db.rows.LazilyInitializedUnfilteredRowIterator.maybeInit(LazilyInitializedUnfilteredRowIterator.java:48) > ~[apache-cassandra-3.7.jar:3.7] > 2016-07-14T09:29:47.96862 |srv=cassandra| at > org.apache.cassandra.db.rows.LazilyInitializedUnfilteredRowIterator.isReverseOrder(LazilyInitializedUnfilteredRowIterator.java:70) > ~[apache-cassandra-3.7.jar:3.7] > 2016-07-14T09:29:47.96863 |srv=cassandra| at > org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$1.reduce(UnfilteredPartitionIterators.java:109) > ~[apache-cassandra-3.7.jar:3.7] > 2016-07-14T09:29:47.96863 |srv=cassandra| at > org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$1.reduce(UnfilteredPartitionIterators.java:100) > ~[apache-cassandra-3.7.jar:3.7] > 2016-07-14T09:29:47.96864 |srv=cassandra| at > org.apache.cassandra.utils.MergeIterator$Candidate.consume(MergeIterator.java:408) > ~[apache-cassandra-3.7.jar:3.7] > 2016-07-14T09:29:47.96864 |srv=cassandra| at > org.apache.cassandra.utils.MergeIterator$ManyToOne.consume(MergeIterator.java:203) > ~[apache-cassandra-3.7.jar:3.7] > 2016-07-14T09:29:47.96865 |srv=cassandra| at > org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:156) > ~[apache-cassandra-3.7.jar:3.7] > 2016-07-14T09:29:47.96865 |srv=cassandra| at > org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) > ~[apache-cassandra-3.7.jar:3.7] > 2016-07-14T09:29:47.96866 |srv=cassandra| at > org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$2.hasNext(UnfilteredPartitionIterators.java:150) > 
~[apache-cassandra-3.7.jar:3.7] > 2016-07-14T09:29:47.96866 |srv=cassandra| at > org.apache.cassandra.db.transform.BasePartitions.hasNext(BasePartitions.java:72) > ~[apache-cassandra-3.7.jar:3.7] > 2016-07-14T09:29:47.96867 |srv=cassandra| at > org.apache.cassandra.db.compaction.CompactionIterator.hasNext(CompactionIterator.java:226) > ~[apache-cassandra-3.7.jar:3.7] > 2016-07-14T09:29:47.96867 |srv=cassandra| at > org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:182) > ~[apache-cassandra-3.7.jar:3.7] > 2016-07-14T09:29:47.96867 |srv=cassandra| at > org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) > ~[apache-cassandra-3.7.jar:3.7] >
[jira] [Commented] (CASSANDRA-13769) PendingRepairManager.getNextBackgroundTask throwing IndexOutOfBoundsException
[ https://issues.apache.org/jira/browse/CASSANDRA-13769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16130252#comment-16130252 ] Marcus Eriksson commented on CASSANDRA-13769: - +1 > PendingRepairManager.getNextBackgroundTask throwing IndexOutOfBoundsException > - > > Key: CASSANDRA-13769 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13769 > Project: Cassandra > Issue Type: Bug >Reporter: Blake Eggleston >Assignee: Blake Eggleston > Fix For: 4.0 > > > If all the repair sessions managed by a PendingRepairManager can be > cleaned up and we call getNextBackgroundTask, we'll try to pull an element > out of an empty list and throw an exception. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
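The bug described above is a plain empty-list indexing error: after cleanup removes every pending repair session, there is nothing left to pick a task from. A minimal sketch of the pattern and its guard (hypothetical simplified code, not the actual PendingRepairManager implementation; names are illustrative):

```java
import java.util.List;

// Hypothetical sketch: picking the next background task by index throws
// IndexOutOfBoundsException when every session was cleaned up and the
// candidate list is empty. The guard checks for emptiness before indexing.
class PendingTaskPicker
{
    static String getNextBackgroundTask(List<String> candidateSessions)
    {
        // Check after cleanup, before touching index 0.
        if (candidateSessions.isEmpty())
            return null; // nothing to compact right now
        return candidateSessions.get(0);
    }
}
```

Returning null (no task) mirrors how compaction strategies generally signal "nothing to do" rather than throwing.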
[jira] [Comment Edited] (CASSANDRA-13418) Allow TWCS to ignore overlaps when dropping fully expired sstables
[ https://issues.apache.org/jira/browse/CASSANDRA-13418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16130152#comment-16130152 ] Romain GERARD edited comment on CASSANDRA-13418 at 8/17/17 9:57 AM: Hi, I am back with a new proposal https://github.com/criteo-forks/cassandra/commit/0c4d342341340115d2c8d15f78b2cb3eab3c2f05 Major differences: + Used [~krummas]'s way of introducing the ignore overlaps option + I split out the function that does the overlap checks; in the previous patch I was wrongly checking for overlaps in memtables (even when the option was activated) https://github.com/criteo-forks/cassandra/commit/0c4d342341340115d2c8d15f78b2cb3eab3c2f05#diff-e8e282423dcbf34d30a3578c8dec15cdR170 + I enable uncheckedTombstoneCompaction when ignoreOverlaps is activated https://github.com/criteo-forks/cassandra/commit/0c4d342341340115d2c8d15f78b2cb3eab3c2f05#diff-e83635b2fb3079d9b91b039c605c15daR71 It seems a sane default to me: even if we drop fully expired sstables, we will still check for ones worth dropping, and we want to ignore the overlap check in that case as well. (This was the default behavior in the last patch.) + Added a simple test case; I will look to add more (feel free to suggest some) + Rebased upon trunk All tests pass (ant test) and I will deploy this patch internally to confirm that it works as expected. 
Let me know if you have any remarks in the meantime, [~krummas]. was (Author: rgerard): Hi, I am back with a new proposition https://github.com/criteo-forks/cassandra/commit/0c4d342341340115d2c8d15f78b2cb3eab3c2f05 Majors differences : + Used [~krummas] way for introducing the ignore Overlaps + I splitted the function that is doing the overlapingChecks as in the previous patch, I was wrongfully checking for overlaps in memtables (even if the option was activated) https://github.com/criteo-forks/cassandra/commit/0c4d342341340115d2c8d15f78b2cb3eab3c2f05#diff-e8e282423dcbf34d30a3578c8dec15cdR170 + I enable uncheckedTombstoneCompaction when ignoreOverlaps is activated https://github.com/criteo-forks/cassandra/commit/0c4d342341340115d2c8d15f78b2cb3eab3c2f05#diff-e83635b2fb3079d9b91b039c605c15daR71 It seems a sane default for me, as even if we drop fully expired sstables, we will still check for worth Dropping ones and we want to also ignore overlaps check in this case. (was the default behavior in the last patch) + Added a simple test case. I will look to add more (feel free to suggest some) + Rebased upon trunk Every tests passes and I will deploy this patch internally to confirm that it works as expected. If you have any remarks [~krummas] in the mean time > Allow TWCS to ignore overlaps when dropping fully expired sstables > -- > > Key: CASSANDRA-13418 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13418 > Project: Cassandra > Issue Type: Improvement > Components: Compaction >Reporter: Corentin Chary > Labels: twcs > Attachments: twcs-cleanup.png > > > http://thelastpickle.com/blog/2016/12/08/TWCS-part1.html explains it well. If > you really want read-repairs you're going to have sstables blocking the > expiration of other fully expired SSTables because they overlap. 
> You can set unchecked_tombstone_compaction = true or tombstone_threshold to a > very low value and that will purge the blockers of old data that should > already have expired, thus removing the overlaps and allowing the other > SSTables to expire. > The thing is that this is rather CPU intensive and not optimal. If you have > time series, you might not care if all your data doesn't exactly expire at > the right time, or if data re-appears for some time, as long as it gets > deleted as soon as it can. And in this situation I believe it would be really > beneficial to allow users to simply ignore overlapping SSTables when looking > for fully expired ones. > To the question: why would you need read-repairs? > - Full repairs basically take longer than the TTL of the data on my dataset, > so this isn't really effective. > - Even with a 10% chance of doing a repair, we found out that this would be > enough to greatly reduce entropy of the most used data (and if you have > timeseries, you're likely to have a dashboard doing the same important > queries over and over again). > - LOCAL_QUORUM is too expensive (need >3 replicas), QUORUM is too slow. > I'll try to come up with a patch demonstrating how this would work, try it on > our system and report the effects. > cc: [~adejanovski], [~rgerard] as I know you worked on similar issues already. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail:
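The decision the patch above changes can be sketched as follows. This is a hypothetical, heavily simplified illustration of the fully-expired check with an ignore-overlaps switch, not the actual TWCS code; the SSTable fields and method names are invented for the example:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: an sstable whose newest data is past its TTL is
// "fully expired", but by default it can only be dropped when no
// overlapping sstable still shadows it with newer live data. The
// proposed option skips that overlap check entirely.
class ExpiredSSTablePicker
{
    static class SSTable
    {
        final String name;
        final long maxLocalDeletionTime;  // seconds; everything is gone by then
        final boolean overlapsNewerData;  // would normally block the drop

        SSTable(String name, long maxLocalDeletionTime, boolean overlapsNewerData)
        {
            this.name = name;
            this.maxLocalDeletionTime = maxLocalDeletionTime;
            this.overlapsNewerData = overlapsNewerData;
        }
    }

    static List<String> fullyExpired(List<SSTable> tables, long nowInSec, boolean ignoreOverlaps)
    {
        List<String> expired = new ArrayList<>();
        for (SSTable t : tables)
            // Past TTL, and either we ignore overlaps or nothing blocks us.
            if (t.maxLocalDeletionTime < nowInSec && (ignoreOverlaps || !t.overlapsNewerData))
                expired.add(t.name);
        return expired;
    }
}
```

With ignoreOverlaps off, an expired sstable blocked by an overlap is retained (the situation the blog post describes); with it on, the same sstable is dropped immediately, trading strict consistency of resurrected-data guarantees for cheaper cleanup.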
[jira] [Comment Edited] (CASSANDRA-13743) CAPTURE not easilly usable with PAGING
[ https://issues.apache.org/jira/browse/CASSANDRA-13743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16130158#comment-16130158 ] Romain GERARD edited comment on CASSANDRA-13743 at 8/17/17 9:56 AM: As a side note, I was reading the commit logs and found that the commit message and changelog incorrectly reference this ticket: both use CASSANDRA-13473, but this ticket is CASSANDRA-13743 https://github.com/apache/cassandra/commit/ed0243954f9ab9c5c68a4516a836ab3710891d5b was (Author: rgerard): As a side note, I was reading the commit logs and found that the commit message and changelog badly reference this ticket. In both, CASSANDRA-13473 is used but this ticket is CASSANDRA-13743 > CAPTURE not easilly usable with PAGING > -- > > Key: CASSANDRA-13743 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13743 > Project: Cassandra > Issue Type: Bug > Components: Tools >Reporter: Corentin Chary >Assignee: Corentin Chary > Fix For: 4.0 > > > See > https://github.com/iksaif/cassandra/commit/7ed56966a7150ced44c375af307685517d7e09a3 > for a patch fixing that. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Commented] (CASSANDRA-13771) Emit metrics whenever we hit tombstone failures and warn thresholds
[ https://issues.apache.org/jira/browse/CASSANDRA-13771?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16130163#comment-16130163 ] Stefan Podkowinski commented on CASSANDRA-13771: Have you looked at the TombstoneScannedHistogram metric? I think this value should give you an even better picture of the number of read tombstones. > Emit metrics whenever we hit tombstone failures and warn thresholds > --- > > Key: CASSANDRA-13771 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13771 > Project: Cassandra > Issue Type: Improvement >Reporter: TIRU ADDANKI >Assignee: TIRU ADDANKI >Priority: Minor > Attachments: 13771.patch > > > Many times we see Cassandra timeouts, but unless we check the logs we won't > be able to tell if the timeouts are the result of too many tombstones or some > other issue. It would be easier if we had metrics published whenever we hit > the tombstone failure/warning thresholds. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
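The improvement requested above amounts to incrementing a counter at the same points where the warn and failure thresholds are already checked during a read. A hedged sketch of the idea (illustrative class and metric names only, not actual Cassandra metrics or the attached patch):

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch: alongside the existing log warning / aborted read,
// bump a counter whenever a query crosses the tombstone warn or failure
// threshold, so operators can alert on a metric instead of grepping logs.
class TombstoneThresholdMetrics
{
    final AtomicLong warnings = new AtomicLong();
    final AtomicLong failures = new AtomicLong();

    void onTombstonesScanned(int tombstones, int warnThreshold, int failThreshold)
    {
        if (tombstones > failThreshold)
            failures.incrementAndGet();   // the query would be aborted
        else if (tombstones > warnThreshold)
            warnings.incrementAndGet();   // the query succeeds but logs a warning
    }
}
```

As Stefan notes, the existing per-table TombstoneScannedHistogram already captures the distribution of tombstones read; a threshold counter like this would only add an explicit signal for the warn/fail events themselves.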
[jira] [Comment Edited] (CASSANDRA-13418) Allow TWCS to ignore overlaps when dropping fully expired sstables
[ https://issues.apache.org/jira/browse/CASSANDRA-13418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16130152#comment-16130152 ] Romain GERARD edited comment on CASSANDRA-13418 at 8/17/17 9:51 AM: Hi, I am back with a new proposition https://github.com/criteo-forks/cassandra/commit/0c4d342341340115d2c8d15f78b2cb3eab3c2f05 Majors differences : + Used [~krummas] way for introducing the ignore Overlaps + I splitted the function that is doing the overlapingChecks as in the previous patch, I was wrongfully checking for overlaps in memtables (even if the option was activated) https://github.com/criteo-forks/cassandra/commit/0c4d342341340115d2c8d15f78b2cb3eab3c2f05#diff-e8e282423dcbf34d30a3578c8dec15cdR170 + I enable uncheckedTombstoneCompaction when ignoreOverlaps is activated https://github.com/criteo-forks/cassandra/commit/0c4d342341340115d2c8d15f78b2cb3eab3c2f05#diff-e83635b2fb3079d9b91b039c605c15daR71 It seems a sane default for me, as even if we drop fully expired sstables, we will still check for worth Dropping ones and we want to also ignore overlaps check in this case. + Added a simple test case. 
I will look to add more (feel free to suggest some) + Rebased upon trunk Every tests passes and I will deploy this patch internally to confirm that it works as expected was (Author: rgerard): Hi, I am back with a new proposition https://github.com/criteo-forks/cassandra/commit/0c4d342341340115d2c8d15f78b2cb3eab3c2f05 Majors differences : + Used [~krummas] way for introducing the ignore Overlaps + I splitted the function that is doing the overlapingChecks as in the previous patch, I was wrongfully checking for overlaps in memtables (even if the option was activated) https://github.com/criteo-forks/cassandra/commit/0c4d342341340115d2c8d15f78b2cb3eab3c2f05#diff-e8e282423dcbf34d30a3578c8dec15cdR170 + I enable uncheckedTombstoneCompaction when ignoreOverlaps is activated https://github.com/criteo-forks/cassandra/commit/0c4d342341340115d2c8d15f78b2cb3eab3c2f05#diff-e83635b2fb3079d9b91b039c605c15daR71 It seems a sane default for me, as even if we drop fully expired sstables, we will still check for worth Dropping ones and we want to also ignore overlaps check in this case. + Added a simple test case. I will look to add more (feel free to suggest some) + Rebased upon trunk Every tests pass and I will deploy this patch internally to confirm that it works as expected > Allow TWCS to ignore overlaps when dropping fully expired sstables > -- > > Key: CASSANDRA-13418 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13418 > Project: Cassandra > Issue Type: Improvement > Components: Compaction >Reporter: Corentin Chary > Labels: twcs > Attachments: twcs-cleanup.png > > > http://thelastpickle.com/blog/2016/12/08/TWCS-part1.html explains it well. If > you really want read-repairs you're going to have sstables blocking the > expiration of other fully expired SSTables because they overlap. 
> You can set unchecked_tombstone_compaction = true or tombstone_threshold to a > very low value and that will purge the blockers of old data that should > already have expired, thus removing the overlaps and allowing the other > SSTables to expire. > The thing is that this is rather CPU intensive and not optimal. If you have > time series, you might not care if all your data doesn't exactly expire at > the right time, or if data re-appears for some time, as long as it gets > deleted as soon as it can. And in this situation I believe it would be really > beneficial to allow users to simply ignore overlapping SSTables when looking > for fully expired ones. > To the question: why would you need read-repairs ? > - Full repairs basically take longer than the TTL of the data on my dataset, > so this isn't really effective. > - Even with a 10% chances of doing a repair, we found out that this would be > enough to greatly reduce entropy of the most used data (and if you have > timeseries, you're likely to have a dashboard doing the same important > queries over and over again). > - LOCAL_QUORUM is too expensive (need >3 replicas), QUORUM is too slow. > I'll try to come up with a patch demonstrating how this would work, try it on > our system and report the effects. > cc: [~adejanovski], [~rgerard] as I know you worked on similar issues already. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Comment Edited] (CASSANDRA-13743) CAPTURE not easilly usable with PAGING
[ https://issues.apache.org/jira/browse/CASSANDRA-13743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16130158#comment-16130158 ] Romain GERARD edited comment on CASSANDRA-13743 at 8/17/17 9:50 AM: As a side note, I was reading the commit logs and found that the commit message and changelog badly reference this ticket. In both, CASSANDRA-13473 is used but this ticket is CASSANDRA-13743 was (Author: rgerard): As a side note, I was reading the commit logs and the commit message and changelog badly reference this ticket. In both, CASSANDRA-13473 is used but this ticket is CASSANDRA-13743 > CAPTURE not easilly usable with PAGING > -- > > Key: CASSANDRA-13743 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13743 > Project: Cassandra > Issue Type: Bug > Components: Tools >Reporter: Corentin Chary >Assignee: Corentin Chary > Fix For: 4.0 > > > See > https://github.com/iksaif/cassandra/commit/7ed56966a7150ced44c375af307685517d7e09a3 > for a patch fixing that. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Commented] (CASSANDRA-13743) CAPTURE not easilly usable with PAGING
[ https://issues.apache.org/jira/browse/CASSANDRA-13743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16130158#comment-16130158 ] Romain GERARD commented on CASSANDRA-13743: --- As a side note, I was reading the commit logs and the commit message and changelog badly reference this ticket. In both, CASSANDRA-13473 is used but this ticket is CASSANDRA-13743 > CAPTURE not easilly usable with PAGING > -- > > Key: CASSANDRA-13743 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13743 > Project: Cassandra > Issue Type: Bug > Components: Tools >Reporter: Corentin Chary >Assignee: Corentin Chary > Fix For: 4.0 > > > See > https://github.com/iksaif/cassandra/commit/7ed56966a7150ced44c375af307685517d7e09a3 > for a patch fixing that. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Comment Edited] (CASSANDRA-13418) Allow TWCS to ignore overlaps when dropping fully expired sstables
[ https://issues.apache.org/jira/browse/CASSANDRA-13418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16130152#comment-16130152 ] Romain GERARD edited comment on CASSANDRA-13418 at 8/17/17 9:43 AM: Hi, I am back with a new proposition https://github.com/criteo-forks/cassandra/commit/0c4d342341340115d2c8d15f78b2cb3eab3c2f05 Majors differences : + Used [~krummas] way for introducing the ignore Overlaps + I splitted the function that is doing the overlapingChecks as in the previous patch, I was wrongfully checking for overlaps in memtables (even if the option was activated) https://github.com/criteo-forks/cassandra/commit/0c4d342341340115d2c8d15f78b2cb3eab3c2f05#diff-e8e282423dcbf34d30a3578c8dec15cdR170 + I enable uncheckedTombstoneCompaction when ignoreOverlaps is activated https://github.com/criteo-forks/cassandra/commit/0c4d342341340115d2c8d15f78b2cb3eab3c2f05#diff-e83635b2fb3079d9b91b039c605c15daR71 It seems a sane default for me, as even if we drop fully expired sstables, we will still check for worth Dropping ones and we want to also ignore overlaps check in this case. + Added a simple test case. 
I will look to add more (feel free to suggest some) + Rebased upon trunk Every tests pass and I will deploy this patch internally to confirm that it works as expected was (Author: rgerard): Hi, I am back with a new proposition https://github.com/criteo-forks/cassandra/commit/0c4d342341340115d2c8d15f78b2cb3eab3c2f05 Majors differences : + Used [~krummas] way for introducing the ignore Overlaps + I splitted the function that is doing the overlapingChecks as in the previous patch, I was wrongfully checking for overlaps in memtables (even if the option was activated) https://github.com/criteo-forks/cassandra/commit/0c4d342341340115d2c8d15f78b2cb3eab3c2f05#diff-e8e282423dcbf34d30a3578c8dec15cdR170 + I enable uncheckedTombstoneCompaction when ignoreOverlaps is activated https://github.com/criteo-forks/cassandra/commit/0c4d342341340115d2c8d15f78b2cb3eab3c2f05#diff-e83635b2fb3079d9b91b039c605c15daR71 It seems a sane default for me, as even if we drop fully expired sstables, we will still check for worth Dropping ones and we want to also ignore overlaps check in this case. + Added a simple test case. I will look to add more (feel free to suggest somes) + Rebased upon trunk Every tests pass and I will deploy this patch internally to confirm that it works as expected > Allow TWCS to ignore overlaps when dropping fully expired sstables > -- > > Key: CASSANDRA-13418 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13418 > Project: Cassandra > Issue Type: Improvement > Components: Compaction >Reporter: Corentin Chary > Labels: twcs > Attachments: twcs-cleanup.png > > > http://thelastpickle.com/blog/2016/12/08/TWCS-part1.html explains it well. If > you really want read-repairs you're going to have sstables blocking the > expiration of other fully expired SSTables because they overlap. 
> You can set unchecked_tombstone_compaction = true or tombstone_threshold to a > very low value and that will purge the blockers of old data that should > already have expired, thus removing the overlaps and allowing the other > SSTables to expire. > The thing is that this is rather CPU intensive and not optimal. If you have > time series, you might not care if all your data doesn't exactly expire at > the right time, or if data re-appears for some time, as long as it gets > deleted as soon as it can. And in this situation I believe it would be really > beneficial to allow users to simply ignore overlapping SSTables when looking > for fully expired ones. > To the question: why would you need read-repairs ? > - Full repairs basically take longer than the TTL of the data on my dataset, > so this isn't really effective. > - Even with a 10% chances of doing a repair, we found out that this would be > enough to greatly reduce entropy of the most used data (and if you have > timeseries, you're likely to have a dashboard doing the same important > queries over and over again). > - LOCAL_QUORUM is too expensive (need >3 replicas), QUORUM is too slow. > I'll try to come up with a patch demonstrating how this would work, try it on > our system and report the effects. > cc: [~adejanovski], [~rgerard] as I know you worked on similar issues already. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Comment Edited] (CASSANDRA-13418) Allow TWCS to ignore overlaps when dropping fully expired sstables
[ https://issues.apache.org/jira/browse/CASSANDRA-13418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16130152#comment-16130152 ] Romain GERARD edited comment on CASSANDRA-13418 at 8/17/17 9:41 AM: Hi, I am back with a new proposition https://github.com/criteo-forks/cassandra/commit/0c4d342341340115d2c8d15f78b2cb3eab3c2f05 Major differences: + Used [~krummas]'s approach for introducing the ignoreOverlaps option + I split the function that does the overlapping checks, as in the previous patch I was wrongly checking for overlaps in memtables (even if the option was activated) https://github.com/criteo-forks/cassandra/commit/0c4d342341340115d2c8d15f78b2cb3eab3c2f05#diff-e8e282423dcbf34d30a3578c8dec15cdR170 + I enable uncheckedTombstoneCompaction when ignoreOverlaps is activated https://github.com/criteo-forks/cassandra/commit/0c4d342341340115d2c8d15f78b2cb3eab3c2f05#diff-e83635b2fb3079d9b91b039c605c15daR71 It seems a sane default to me, as even if we drop fully expired sstables, we will still check for ones worth dropping, and we want to ignore the overlaps check in that case as well. + Added a simple test case. 
I will look to add more (feel free to suggest some) + Rebased upon trunk All tests pass and I will deploy this patch internally to confirm that it works as expected was (Author: rgerard): Hi, I am back with a new proposition https://github.com/criteo-forks/cassandra/commit/0c4d342341340115d2c8d15f78b2cb3eab3c2f05 Major differences: + Used [~krummas]'s approach for introducing the ignoreOverlaps option + I split the function that does the overlapping checks, as in the previous patch I was wrongly checking for overlaps in memtables (even if the option was activated) https://github.com/criteo-forks/cassandra/commit/0c4d342341340115d2c8d15f78b2cb3eab3c2f05#diff-e8e282423dcbf34d30a3578c8dec15cdR170 + I enable uncheckedTombstoneCompaction when ignoreOverlaps is activated https://github.com/criteo-forks/cassandra/commit/0c4d342341340115d2c8d15f78b2cb3eab3c2f05#diff-e83635b2fb3079d9b91b039c605c15daR71 It seems a sane default to me, as even if we drop fully expired sstables, we will still check for ones worth dropping, and we want to ignore the overlaps check in that case as well. + Added a simple test case. I will look to add more (feel free to suggest some) + Rebased upon trunk All tests passed and I will deploy this patch internally to confirm that it works as expected > Allow TWCS to ignore overlaps when dropping fully expired sstables > -- > > Key: CASSANDRA-13418 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13418 > Project: Cassandra > Issue Type: Improvement > Components: Compaction >Reporter: Corentin Chary > Labels: twcs > Attachments: twcs-cleanup.png > > > http://thelastpickle.com/blog/2016/12/08/TWCS-part1.html explains it well. If > you really want read-repairs you're going to have sstables blocking the > expiration of other fully expired SSTables because they overlap. 
> You can set unchecked_tombstone_compaction = true or tombstone_threshold to a > very low value and that will purge the blockers of old data that should > already have expired, thus removing the overlaps and allowing the other > SSTables to expire. > The thing is that this is rather CPU intensive and not optimal. If you have > time series, you might not care if all your data expires at exactly > the right time, or if data re-appears for some time, as long as it gets > deleted as soon as it can. And in this situation I believe it would be really > beneficial to allow users to simply ignore overlapping SSTables when looking > for fully expired ones. > To the question: why would you need read-repairs? > - Full repairs basically take longer than the TTL of the data on my dataset, > so this isn't really effective. > - Even with a 10% chance of doing a repair, we found that this would be > enough to greatly reduce the entropy of the most used data (and if you have > timeseries, you're likely to have a dashboard doing the same important > queries over and over again). > - LOCAL_QUORUM is too expensive (need >3 replicas), QUORUM is too slow. > I'll try to come up with a patch demonstrating how this would work, try it on > our system and report the effects. > cc: [~adejanovski], [~rgerard] as I know you worked on similar issues already. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Commented] (CASSANDRA-13418) Allow TWCS to ignore overlaps when dropping fully expired sstables
[ https://issues.apache.org/jira/browse/CASSANDRA-13418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16130152#comment-16130152 ] Romain GERARD commented on CASSANDRA-13418: --- Hi, I am back with a new proposition https://github.com/criteo-forks/cassandra/commit/0c4d342341340115d2c8d15f78b2cb3eab3c2f05 Major differences: + Used [~krummas]'s approach for introducing the ignoreOverlaps option + I split the function that does the overlapping checks, as in the previous patch I was wrongly checking for overlaps in memtables (even if the option was activated) https://github.com/criteo-forks/cassandra/commit/0c4d342341340115d2c8d15f78b2cb3eab3c2f05#diff-e8e282423dcbf34d30a3578c8dec15cdR170 + I enable uncheckedTombstoneCompaction when ignoreOverlaps is activated https://github.com/criteo-forks/cassandra/commit/0c4d342341340115d2c8d15f78b2cb3eab3c2f05#diff-e83635b2fb3079d9b91b039c605c15daR71 It seems a sane default to me, as even if we drop fully expired sstables, we will still check for ones worth dropping, and we want to ignore the overlaps check in that case as well. + Added a simple test case. I will look to add more (feel free to suggest some) > Allow TWCS to ignore overlaps when dropping fully expired sstables > -- > > Key: CASSANDRA-13418 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13418 > Project: Cassandra > Issue Type: Improvement > Components: Compaction >Reporter: Corentin Chary > Labels: twcs > Attachments: twcs-cleanup.png > > > http://thelastpickle.com/blog/2016/12/08/TWCS-part1.html explains it well. If > you really want read-repairs you're going to have sstables blocking the > expiration of other fully expired SSTables because they overlap. > You can set unchecked_tombstone_compaction = true or tombstone_threshold to a > very low value and that will purge the blockers of old data that should > already have expired, thus removing the overlaps and allowing the other > SSTables to expire. 
> The thing is that this is rather CPU intensive and not optimal. If you have > time series, you might not care if all your data expires at exactly > the right time, or if data re-appears for some time, as long as it gets > deleted as soon as it can. And in this situation I believe it would be really > beneficial to allow users to simply ignore overlapping SSTables when looking > for fully expired ones. > To the question: why would you need read-repairs? > - Full repairs basically take longer than the TTL of the data on my dataset, > so this isn't really effective. > - Even with a 10% chance of doing a repair, we found that this would be > enough to greatly reduce the entropy of the most used data (and if you have > timeseries, you're likely to have a dashboard doing the same important > queries over and over again). > - LOCAL_QUORUM is too expensive (need >3 replicas), QUORUM is too slow. > I'll try to come up with a patch demonstrating how this would work, try it on > our system and report the effects. > cc: [~adejanovski], [~rgerard] as I know you worked on similar issues already. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
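The proposed behaviour can be reduced to a small sketch. This is purely illustrative and hypothetical — the class, field, and method names below do not match the actual Cassandra/TWCS code or the linked commit — but it captures the idea: when the new ignore-overlaps option is on, an sstable whose newest data has expired is droppable without consulting overlapping sstables.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the proposed TWCS option; names are illustrative
// and do not correspond to the real Cassandra classes.
class ExpiredSSTableFilter
{
    static class SSTable
    {
        final long maxDeletionTime;      // newest expiry timestamp in this sstable
        final boolean overlapsLiveData;  // result of the (expensive) overlap check

        SSTable(long maxDeletionTime, boolean overlapsLiveData)
        {
            this.maxDeletionTime = maxDeletionTime;
            this.overlapsLiveData = overlapsLiveData;
        }
    }

    // An sstable is fully expired once everything in it is older than gcBefore.
    // Normally an overlapping sstable blocks the drop; with ignoreOverlaps set,
    // that check is skipped, which is the whole point of the ticket.
    static List<SSTable> fullyExpired(List<SSTable> candidates, long gcBefore, boolean ignoreOverlaps)
    {
        List<SSTable> droppable = new ArrayList<>();
        for (SSTable s : candidates)
        {
            if (s.maxDeletionTime >= gcBefore)
                continue; // still contains data that has not expired yet
            if (ignoreOverlaps || !s.overlapsLiveData)
                droppable.add(s);
        }
        return droppable;
    }
}
```

With `ignoreOverlaps` off, an expired-but-overlapping sstable survives; with it on, the same sstable is dropped immediately, which is why the comment argues that enabling uncheckedTombstoneCompaction alongside it is a sane default.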
[jira] [Updated] (CASSANDRA-13772) Add a skip read validation flag to cassandra-stress
[ https://issues.apache.org/jira/browse/CASSANDRA-13772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Noam Hasson updated CASSANDRA-13772: Status: Patch Available (was: Open) Attached 13772-trunk.txt, which adds support for the "skip-read-validations" flag. > Add a skip read validation flag to cassandra-stress > --- > > Key: CASSANDRA-13772 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13772 > Project: Cassandra > Issue Type: Improvement > Components: Stress >Reporter: Noam Hasson >Priority: Minor > Fix For: 3.11.x > > Attachments: 13772-trunk.txt > > > When running cassandra-stress with read operations, you must make sure all > the data was populated beforehand, or else you will get the following errors: > java.io.IOException: Operation x0 on key(s) [4d31314e32314b395030]: Data > returned was not validated > at org.apache.cassandra.stress.Operation.error(Operation.java:127) > at > org.apache.cassandra.stress.Operation.timeWithRetry(Operation.java:105) > at > org.apache.cassandra.stress.operations.predefined.CqlOperation.run(CqlOperation.java:91) > at > org.apache.cassandra.stress.operations.predefined.CqlOperation.run(CqlOperation.java:99) > at > org.apache.cassandra.stress.operations.predefined.CqlOperation.run(CqlOperation.java:245) > at > org.apache.cassandra.stress.StressAction$Consumer.run(StressAction.java:453) > java.lang.RuntimeException: Failed to execute warmup > at > org.apache.cassandra.stress.StressAction.warmup(StressAction.java:117) > at org.apache.cassandra.stress.StressAction.run(StressAction.java:62) > at org.apache.cassandra.stress.Stress.run(Stress.java:143) > at org.apache.cassandra.stress.Stress.main(Stress.java:62) > Even if you use the "-errors ignore" flag, you'll get a lot of the following > messages, which will both slow the stress run and prevent it from displaying the > metrics: > Operation x0 on key(s) [4b3539393831374e3431]: Data returned was not validated > Operation x0 on key(s) [4f4b3936363233375030]: Data 
returned was not validated > Operation x0 on key(s) [4d304c4b32384f4b3031]: Data returned was not validated > What I propose is to add a flag to skip the read validation, such as: > -errors skip-read-validations > This way when needed you can run a mixed workload and ignore validation of > unpopulated data. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
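What such an error mode could look like internally can be sketched as follows. This is an illustrative sketch only — the class and method names are hypothetical and do not correspond to the actual cassandra-stress sources or the attached 13772-trunk.txt patch — showing a retry loop policy that counts validation failures and treats them as successes when the flag is set, instead of aborting or flooding the log.

```java
// Hypothetical sketch of a "skip-read-validations" error mode for a stress
// tool; not the real cassandra-stress code or the attached patch.
class ErrorPolicy
{
    enum Mode { FAIL, IGNORE, SKIP_READ_VALIDATIONS }

    private final Mode mode;
    private long skippedValidations = 0;

    ErrorPolicy(Mode mode) { this.mode = mode; }

    // Called when a read returns data that fails validation. Returns true if
    // the operation should nevertheless count as successful, so the run keeps
    // producing latency metrics instead of spamming
    // "Data returned was not validated" and failing the warmup.
    boolean tolerateValidationFailure(String key)
    {
        if (mode == Mode.SKIP_READ_VALIDATIONS)
        {
            skippedValidations++; // still tracked, just not fatal
            return true;
        }
        return false; // FAIL and IGNORE keep their existing behaviour
    }

    long skippedValidations() { return skippedValidations; }
}
```

Keeping a count of skipped validations (rather than discarding them silently) would let the run report at the end how much of the read workload hit unpopulated data.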
[jira] [Updated] (CASSANDRA-13772) Add a skip read validation flag to cassandra-stress
[ https://issues.apache.org/jira/browse/CASSANDRA-13772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Noam Hasson updated CASSANDRA-13772: Attachment: 13772-trunk.txt -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Updated] (CASSANDRA-13772) Add a skip read validation flag to cassandra-stress
[ https://issues.apache.org/jira/browse/CASSANDRA-13772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Noam Hasson updated CASSANDRA-13772: Status: Open (was: Patch Available) -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Updated] (CASSANDRA-13772) Add a skip read validation flag to cassandra-stress
[ https://issues.apache.org/jira/browse/CASSANDRA-13772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Noam Hasson updated CASSANDRA-13772: Fix Version/s: 3.11.x Status: Patch Available (was: Open) -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Created] (CASSANDRA-13772) Add a skip read validation flag to cassandra-stress
Noam Hasson created CASSANDRA-13772: --- Summary: Add a skip read validation flag to cassandra-stress Key: CASSANDRA-13772 URL: https://issues.apache.org/jira/browse/CASSANDRA-13772 Project: Cassandra Issue Type: Improvement Components: Stress Reporter: Noam Hasson Priority: Minor When running cassandra-stress with read operations, you must make sure all the data was populated beforehand or else you will get the following errors: java.io.IOException: Operation x0 on key(s) [4d31314e32314b395030]: Data returned was not validated at org.apache.cassandra.stress.Operation.error(Operation.java:127) at org.apache.cassandra.stress.Operation.timeWithRetry(Operation.java:105) at org.apache.cassandra.stress.operations.predefined.CqlOperation.run(CqlOperation.java:91) at org.apache.cassandra.stress.operations.predefined.CqlOperation.run(CqlOperation.java:99) at org.apache.cassandra.stress.operations.predefined.CqlOperation.run(CqlOperation.java:245) at org.apache.cassandra.stress.StressAction$Consumer.run(StressAction.java:453) java.lang.RuntimeException: Failed to execute warmup at org.apache.cassandra.stress.StressAction.warmup(StressAction.java:117) at org.apache.cassandra.stress.StressAction.run(StressAction.java:62) at org.apache.cassandra.stress.Stress.run(Stress.java:143) at org.apache.cassandra.stress.Stress.main(Stress.java:62) Even if you use the "-errors ignore" flag you'll get a lot of the following messages which will both slow the stress and prevent it from displaying the metrics: Operation x0 on key(s) [4b3539393831374e3431]: Data returned was not validated Operation x0 on key(s) [4f4b3936363233375030]: Data returned was not validated Operation x0 on key(s) [4d304c4b32384f4b3031]: Data returned was not validated What I propose is to add a flag to skip the read validation, such as: -errors skip-read-validations This way when needed you can run a mixed workload and ignore validation of unpopulated data. 
-- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Updated] (CASSANDRA-13771) Emit metrics whenever we hit tombstone failures and warn thresholds
[ https://issues.apache.org/jira/browse/CASSANDRA-13771?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] TIRU ADDANKI updated CASSANDRA-13771: - Attachment: 13771.patch > Emit metrics whenever we hit tombstone failures and warn thresholds > --- > > Key: CASSANDRA-13771 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13771 > Project: Cassandra > Issue Type: Improvement >Reporter: TIRU ADDANKI >Assignee: TIRU ADDANKI >Priority: Minor > Attachments: 13771.patch > > > Many times we see Cassandra timeouts, but unless we check the logs we won't > be able to tell whether the timeouts are the result of too many tombstones or some > other issue. It would be easier if we had metrics published whenever we hit > the tombstone failure/warning thresholds. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Created] (CASSANDRA-13771) Emit metrics whenever we hit tombstone failures and warn thresholds
TIRU ADDANKI created CASSANDRA-13771: Summary: Emit metrics whenever we hit tombstone failures and warn thresholds Key: CASSANDRA-13771 URL: https://issues.apache.org/jira/browse/CASSANDRA-13771 Project: Cassandra Issue Type: Improvement Reporter: TIRU ADDANKI Assignee: TIRU ADDANKI Priority: Minor Many times we see Cassandra timeouts, but unless we check the logs we won't be able to tell whether the timeouts are the result of too many tombstones or some other issue. It would be easier if we had metrics published whenever we hit the tombstone failure/warning thresholds. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
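The idea can be sketched as a pair of counters bumped alongside the existing threshold checks. This is a hedged sketch, not the attached 13771.patch: the class name and the threshold plumbing are illustrative, and a real implementation would register the counters with Cassandra's metrics registry rather than hold raw AtomicLongs.

```java
import java.util.concurrent.atomic.AtomicLong;

// Illustrative sketch of the proposal: whenever a read crosses the tombstone
// warn or failure threshold, increment a metric in addition to logging, so
// operators can see tombstone-driven timeouts without grepping the logs.
class TombstoneThresholdMetrics
{
    final AtomicLong warnings = new AtomicLong();
    final AtomicLong failures = new AtomicLong();

    // Mirrors the shape of the existing threshold logic: crossing the failure
    // threshold counts as a failure, not additionally as a warning.
    void onTombstonesScanned(int tombstones, int warnThreshold, int failThreshold)
    {
        if (tombstones > failThreshold)
            failures.incrementAndGet();
        else if (tombstones > warnThreshold)
            warnings.incrementAndGet();
    }
}
```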
[jira] [Comment Edited] (CASSANDRA-13747) Fix short read protection
[ https://issues.apache.org/jira/browse/CASSANDRA-13747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16130085#comment-16130085 ] Benedict edited comment on CASSANDRA-13747 at 8/17/17 8:26 AM: --- The patch looks good, but while reviewing I got a little suspicious of the modified line {{DataResolver:479}}, as it seemed that {{n}} and {{x}} were the wrong way around... and, reading the comment of intent directly above, and reproducing the calculation, they are indeed. -Assuming, now correctly defined, that {{n <= x}}, this also obviates the need for the {{Math.max(x, 1)}} you have introduced. This must be true, given that we can only have a short-read triggered in the case that we have yielded too few rows, so we must have fewer than we requested (even if other rows we didn't know about were introduced by other peers).- This is _probably_ a significant enough bug that it warrants its own ticket for record keeping, though I'm fairly agnostic on that decision. I'm a little concerned about our current short read behaviour, as right now it seems we should be requesting exactly one row, for any size of under-read, which could mean extremely poor performance in case of large under-reads. I would suggest that the outer unconditional {{Math.max}} is a bad idea, has been (poorly) insulating us from this error, and that we should first be asserting that the calculation yields a value {{>= 0}} before setting to 1. was (Author: benedict): The patch looks good, but while reviewing I got a little suspicious of the modified line {{DataResolver:479}}, as it seemed that {{n}} and {{x}} were the wrong way around... and, reading the comment of intent directly above, and reproducing the calculation, they are indeed. Assuming, now correctly defined, that {{n <= x}}, this also obviates the need for the {{Math.max(x, 1)}} you have introduced. 
This must be true, given that we can only have a short-read triggered in the case that we have yielded too few rows, so we must have fewer than we requested (even if other rows we didn't know about were introduced by other peers). This is _probably_ a significant enough bug that it warrants its own ticket for record keeping, though I'm fairly agnostic on that decision. I'm a little concerned about our current short read behaviour, as right now it seems we should be requesting exactly one row, for any size of under-read, which could mean extremely poor performance in case of large under-reads. I would suggest that the outer unconditional {{Math.max}} is a bad idea, has been (poorly) insulating us from this error, and that we should first be asserting that the calculation yields a value {{>= 0}} before setting to 1. > Fix short read protection > - > > Key: CASSANDRA-13747 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13747 > Project: Cassandra > Issue Type: Bug > Components: Coordination >Reporter: Aleksey Yeschenko >Assignee: Aleksey Yeschenko > Fix For: 3.0.x, 3.11.x > > > {{ShortReadRowProtection.moreContents()}} expects that by the time we get to > that method, the global post-reconciliation counter was already applied to > the current partition. However, sometimes it won’t get applied, and the > global counter continues counting with {{rowInCurrentPartition}} value not > reset from previous partition, which in the most obvious case would trigger > the assertion we are observing - {{assert > !postReconciliationCounter.isDoneForPartition();}}. In other cases it’s > possible because of this lack of reset to query a node for too few extra > rows, causing unnecessary SRP data requests. > Why is the counter not always applied to the current partition? 
> The merged {{PartitionIterator}} returned from {{DataResolver.resolve()}} has > two transformations applied to it, in the following order: > {{Filter}} - to purge non-live data from partitions, and to discard empty > partitions altogether (except for Thrift) > {{Counter}}, to count and stop iteration > Problem is, {{Filter}}'s {{applyToPartition()}} code that discards empty > partitions ({{closeIfEmpty()}} method) would sometimes consume the iterator, > triggering short read protection *before* {{Counter}}'s > {{applyToPartition()}} gets called and resets its {{rowInCurrentPartition}} > sub-counter. > We should not be consuming iterators until all transformations are applied to > them. For transformations it means that they cannot consume iterators unless > they are the last transformation on the stack. > The linked branch fixes the problem by splitting {{Filter}} into two > transformations. The original - {{Filter}} - that does filtering within > partitions - and a separate {{EmptyPartitionsDiscarder}}, that discards empty > partitions from {{PartitionIterators}}.
[jira] [Commented] (CASSANDRA-13747) Fix short read protection
[ https://issues.apache.org/jira/browse/CASSANDRA-13747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16130085#comment-16130085 ] Benedict commented on CASSANDRA-13747: -- The patch looks good, but while reviewing I got a little suspicious of the modified line {{DataResolver:479}}, as it seemed that {{n}} and {{x}} were the wrong way around... and, reading the comment of intent directly above, and reproducing the calculation, they are indeed. Assuming, now correctly defined, that {{n <= x}}, this also obviates the need for the {{Math.max(x, 1)}} you have introduced. This must be true, given that we can only have a short-read triggered in the case that we have yielded too few rows, so we must have fewer than we requested (even if other rows we didn't know about were introduced by other peers). This is _probably_ a significant enough bug that it warrants its own ticket for record keeping, though I'm fairly agnostic on that decision. I'm a little concerned about our current short read behaviour, as right now it seems we should be requesting exactly one row, for any size of under-read, which could mean extremely poor performance in case of large under-reads. I would suggest that the outer unconditional {{Math.max}} is a bad idea, has been (poorly) insulating us from this error, and that we should first be asserting that the calculation yields a value {{>= 0}} before setting to 1. > Fix short read protection > - > > Key: CASSANDRA-13747 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13747 > Project: Cassandra > Issue Type: Bug > Components: Coordination >Reporter: Aleksey Yeschenko >Assignee: Aleksey Yeschenko > Fix For: 3.0.x, 3.11.x > > > {{ShortReadRowProtection.moreContents()}} expects that by the time we get to > that method, the global post-reconciliation counter was already applied to > the current partition. 
However, sometimes it won’t get applied, and the > global counter continues counting with {{rowInCurrentPartition}} value not > reset from the previous partition, which in the most obvious case would trigger > the assertion we are observing - {{assert > !postReconciliationCounter.isDoneForPartition();}}. In other cases it’s > possible because of this lack of reset to query a node for too few extra > rows, causing unnecessary SRP data requests. > Why is the counter not always applied to the current partition? > The merged {{PartitionIterator}} returned from {{DataResolver.resolve()}} has > two transformations applied to it, in the following order: > {{Filter}} - to purge non-live data from partitions, and to discard empty > partitions altogether (except for Thrift) > {{Counter}}, to count and stop iteration > The problem is, {{Filter}}’s {{applyToPartition()}} code that discards empty > partitions (the {{closeIfEmpty()}} method) would sometimes consume the iterator, > triggering short read protection *before* {{Counter}}’s > {{applyToPartition()}} gets called and resets its {{rowInCurrentPartition}} > sub-counter. > We should not be consuming iterators until all transformations are applied to > them. For transformations, this means that they cannot consume iterators unless > they are the last transformation on the stack. > The linked branch fixes the problem by splitting {{Filter}} into two > transformations. The original - {{Filter}} - that does filtering within > partitions - and a separate {{EmptyPartitionsDiscarder}}, that discards empty > partitions from {{PartitionIterators}}. Thus {{DataResolver.resolve()}}, when > constructing its {{PartitionIterator}}, now does merge first, then applies > {{Filter}}, then {{Counter}}, and only then, as its last (third) > transformation - the {{EmptyPartitionsDiscarder}}. Being the last one > applied, it’s legal for it to consume the iterator, and triggering > {{moreContents()}} is now no longer a problem. 
> Fixes: [3.0|https://github.com/iamaleksey/cassandra/commits/13747-3.0], > [3.11|https://github.com/iamaleksey/cassandra/commits/13747-3.11], > [4.0|https://github.com/iamaleksey/cassandra/commits/13747-4.0]. dtest > [here|https://github.com/iamaleksey/cassandra-dtest/commits/13747]. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
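The ordering rule in the description above (a transformation may only consume its input iterator if it is the last one on the stack) can be illustrated with a heavily simplified sketch. The class, field, and method names below are hypothetical stand-ins for illustration only, not Cassandra's actual Transformation API:

```java
import java.util.Iterator;
import java.util.List;

// Hedged, much-simplified model of the CASSANDRA-13747 fix: a draining
// transformation is only safe as the LAST one applied, because draining
// can trigger moreContents() (short-read protection) along the way.
public class TransformationOrderSketch {
    // Mirrors Counter's per-partition sub-counter.
    static int rowsInCurrentPartition;

    // Stand-in for Counter.applyToPartition(): reset the sub-counter,
    // pass the iterator through without consuming it.
    static Iterator<String> counterApply(Iterator<String> partition) {
        rowsInCurrentPartition = 0;
        return partition;
    }

    // Stand-in for EmptyPartitionsDiscarder: drains the iterator to
    // decide emptiness, i.e. it consumes its input.
    static boolean discarderIsEmpty(Iterator<String> partition) {
        boolean empty = true;
        while (partition.hasNext()) { partition.next(); empty = false; }
        return empty;
    }

    public static void main(String[] args) {
        // Buggy order: the discarder consumes the iterator while the
        // counter still holds the previous partition's stale count.
        rowsInCurrentPartition = 42; // stale value from an earlier partition
        discarderIsEmpty(List.of("row").iterator());
        System.out.println("buggy order, stale counter: " + rowsInCurrentPartition);

        // Fixed order: counter resets first, discarder consumes last.
        rowsInCurrentPartition = 42;
        Iterator<String> it = counterApply(List.of("row").iterator());
        discarderIsEmpty(it);
        System.out.println("fixed order, reset counter: " + rowsInCurrentPartition);
    }
}
```

In the buggy ordering, the iterator is drained during the exact window in which short-read protection could fire with a stale {{rowInCurrentPartition}}; putting the consuming transformation last closes that window.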
[jira] [Created] (CASSANDRA-13770) AssertionError: Lower bound INCL_START_BOUND during select by index
Rok Doltar created CASSANDRA-13770:
--
Summary: AssertionError: Lower bound INCL_START_BOUND during select by index
Key: CASSANDRA-13770
URL: https://issues.apache.org/jira/browse/CASSANDRA-13770
Project: Cassandra
Issue Type: Bug
Environment: Cassandra 3.11 (cassandra.noarch 3.11.0-1), CentOS Linux release 7.3.1611 (Core)
Reporter: Rok Doltar

We are getting the following error:

DEBUG [Native-Transport-Requests-1] 2017-08-17 07:47:01,815 ReadCallback.java:132 - Failed; received 0 of 1 responses
WARN [ReadStage-2] 2017-08-17 07:47:01,816 AbstractLocalAwareExecutorService.java:167 - Uncaught exception on thread Thread[ReadStage-2,5,main]: {}
java.lang.AssertionError: Lower bound [INCL_START_BOUND(00283543383338354144333637373731373445443036424134424442463445393233453846394634283836453642373436354546423435334544363636443236344644313935333032363338314542363200, ab570080-831f-11e7-a81f-417b646547c3, , 1x)] is bigger than first returned value [Row: partition_key=00283543383338354144333637373731373445443036424134424442463445393233453846394634283836453642373436354546423435334544363636443236344644313935333032363338314542363200, version=null, file_path=null, file_name=null | ] for sstable /var/lib/cassandra/data/catalog/file-aa90a340831f11e7aca2ed895c1dab3f/.idx_file_path_hash/mc-51-big-Data.db
	at org.apache.cassandra.db.rows.UnfilteredRowIteratorWithLowerBound.computeNext(UnfilteredRowIteratorWithLowerBound.java:124) ~[apache-cassandra-3.11.0.jar:3.11.0]
	at org.apache.cassandra.db.rows.UnfilteredRowIteratorWithLowerBound.computeNext(UnfilteredRowIteratorWithLowerBound.java:47) ~[apache-cassandra-3.11.0.jar:3.11.0]
	at org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) ~[apache-cassandra-3.11.0.jar:3.11.0]
	at org.apache.cassandra.utils.MergeIterator$Candidate.advance(MergeIterator.java:374) ~[apache-cassandra-3.11.0.jar:3.11.0]
	at org.apache.cassandra.utils.MergeIterator$ManyToOne.advance(MergeIterator.java:186) ~[apache-cassandra-3.11.0.jar:3.11.0]
	at org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:155) ~[apache-cassandra-3.11.0.jar:3.11.0]
	at org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) ~[apache-cassandra-3.11.0.jar:3.11.0]
	at org.apache.cassandra.db.rows.UnfilteredRowIterators$UnfilteredRowMergeIterator.computeNext(UnfilteredRowIterators.java:500) ~[apache-cassandra-3.11.0.jar:3.11.0]
	at org.apache.cassandra.db.rows.UnfilteredRowIterators$UnfilteredRowMergeIterator.computeNext(UnfilteredRowIterators.java:360) ~[apache-cassandra-3.11.0.jar:3.11.0]
	at org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) ~[apache-cassandra-3.11.0.jar:3.11.0]
	at org.apache.cassandra.db.rows.UnfilteredRowIterator.isEmpty(UnfilteredRowIterator.java:67) ~[apache-cassandra-3.11.0.jar:3.11.0]
	at org.apache.cassandra.db.SinglePartitionReadCommand.withSSTablesIterated(SinglePartitionReadCommand.java:695) ~[apache-cassandra-3.11.0.jar:3.11.0]
	at org.apache.cassandra.db.SinglePartitionReadCommand.queryMemtableAndDiskInternal(SinglePartitionReadCommand.java:639) ~[apache-cassandra-3.11.0.jar:3.11.0]
	at org.apache.cassandra.db.SinglePartitionReadCommand.queryMemtableAndDisk(SinglePartitionReadCommand.java:514) ~[apache-cassandra-3.11.0.jar:3.11.0]
	at org.apache.cassandra.index.internal.CassandraIndexSearcher.queryIndex(CassandraIndexSearcher.java:81) ~[apache-cassandra-3.11.0.jar:3.11.0]
	at org.apache.cassandra.index.internal.CassandraIndexSearcher.search(CassandraIndexSearcher.java:63) ~[apache-cassandra-3.11.0.jar:3.11.0]
	at org.apache.cassandra.db.ReadCommand.executeLocally(ReadCommand.java:408) ~[apache-cassandra-3.11.0.jar:3.11.0]
	at org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1882) ~[apache-cassandra-3.11.0.jar:3.11.0]
	at org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2587) ~[apache-cassandra-3.11.0.jar:3.11.0]
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[na:1.8.0_141]
	at org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:162) ~[apache-cassandra-3.11.0.jar:3.11.0]
	at org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$LocalSessionFutureTask.run(AbstractLocalAwareExecutorService.java:134) [apache-cassandra-3.11.0.jar:3.11.0]
	at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:109) [apache-cassandra-3.11.0.jar:3.11.0]
	at java.lang.Thread.run(Thread.java:748) [na:1.8.0_141]

The related table is:

CREATE TABLE catalog.file (