[jira] [Commented] (CASSANDRA-13299) Potential OOMs and lock contention in write path streams
[ https://issues.apache.org/jira/browse/CASSANDRA-13299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16134749#comment-16134749 ]

Benjamin Roth commented on CASSANDRA-13299:
-------------------------------------------

Sorry for the late response, I was on vacation. No, I am not working on that ticket. But thanks a lot for your efforts (not only) on that ticket!

> Potential OOMs and lock contention in write path streams
> --------------------------------------------------------
>
>                 Key: CASSANDRA-13299
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-13299
>             Project: Cassandra
>          Issue Type: Improvement
>            Reporter: Benjamin Roth
>            Assignee: ZhaoYang
>
> I see a potential OOM when a stream (e.g. a repair) goes through the write
> path, as it does with MVs.
> StreamReceiveTask gets a bunch of SSTableReaders. These produce row
> iterators, which in turn produce mutations. So every partition creates a
> single mutation, which in the case of (very) big partitions can result in
> (very) big mutations. Those are created on heap and stay there until they
> have finished processing.
> I don't think it is necessary to create a single mutation for each
> partition. Why don't we implement a PartitionUpdateGeneratorIterator that
> takes an UnfilteredRowIterator and a max size and spits out PartitionUpdates
> to be used to create and apply mutations?
> The max size should be something like min(reasonable_absolute_max_size,
> max_mutation_size, commitlog_segment_size / 2); reasonable_absolute_max_size
> could be something like 16 MB.
> A mutation shouldn't be too large, as it also affects MV partition locking.
> The longer an MV partition is locked during a stream, the higher the chances
> are that WTEs (write timeouts) occur during streams.
> I could also imagine that a max number of updates per mutation, regardless
> of size in bytes, could make sense to avoid lock contention.
> Love to get feedback and suggestions, incl. naming suggestions.
-- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Assigned] (CASSANDRA-13299) Potential OOMs and lock contention in write path streams
[ https://issues.apache.org/jira/browse/CASSANDRA-13299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ZhaoYang reassigned CASSANDRA-13299:
------------------------------------

    Assignee: ZhaoYang
[jira] [Commented] (CASSANDRA-13299) Potential OOMs and lock contention in write path streams
[ https://issues.apache.org/jira/browse/CASSANDRA-13299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16134733#comment-16134733 ]

ZhaoYang commented on CASSANDRA-13299:
--------------------------------------

[trunk|https://github.com/jasonstack/cassandra/commits/CASSANDRA-13299-trunk]
[dtest|https://github.com/riptano/cassandra-dtest/commits/CASSANDRA-13299]

Changes:
1. Throttle by the number of base unfiltereds; the default is 100.
2. A pair of open/close range tombstone markers can have any number of unshadowed rows in between. In the patch, the range tombstones are simply cached so the limit is not exceeded, and the cached range tombstones are applied in the next batch.

Note: a partition deletion or a range deletion could cause a huge number of view rows to be removed, so the view mutation may still fail to apply due to a WTE or max_mutation_size, but that can be resolved separately in CASSANDRA-12783. Here I only address the issue of holding an entire partition in memory when repairing a base table with an MV.
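The throttling-with-carried-tombstones idea described above can be sketched outside Cassandra's internals. The following Python sketch is purely illustrative (the function name and the dict-based row representation are not Cassandra's API): it splits one partition's stream of unfiltereds into bounded batches, and any range tombstone still open at a batch boundary is re-emitted at the start of the next batch so each batch stays self-contained.

```python
def batch_unfiltereds(unfiltereds, max_rows=100):
    """Yield bounded batches of one partition's unfiltereds.

    Only rows count toward max_rows. Range tombstone markers are tracked,
    and markers still open when a batch closes are carried over (re-opened)
    at the start of the next batch, mirroring the caching described above.
    Each unfiltered is a dict with a 'kind' key: 'row', 'open', or 'close'.
    """
    open_markers = []   # range tombstones currently open in the stream
    carried = []        # markers already open when the current batch started
    batch, rows = [], 0
    for u in unfiltereds:
        batch.append(u)
        if u["kind"] == "open":
            open_markers.append(u)
        elif u["kind"] == "close":
            if open_markers:
                open_markers.pop()
        else:  # a regular row
            rows += 1
            if rows >= max_rows:
                yield carried + batch
                carried, batch, rows = list(open_markers), [], 0
    if batch:
        yield carried + batch
```

With `max_rows=2`, a stream of one range tombstone wrapping five rows comes out as three batches, the second and third re-opening the tombstone, so applying each batch as its own mutation loses no deletion information.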
[jira] [Updated] (CASSANDRA-13767) update a row which was inserted with 'IF NOT EXISTS' keyword will fail silently
[ https://issues.apache.org/jira/browse/CASSANDRA-13767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

mmh updated CASSANDRA-13767:
----------------------------
    Description: 
First, create a keyspace and a table using the following:
{code:java}
CREATE KEYSPACE scheduler WITH replication = {'class': 'SimpleStrategy', 'replication_factor': '1'} AND durable_writes = true;

CREATE TABLE scheduler.job_info (
    id timeuuid PRIMARY KEY,
    create_time int,
    cur_retry int,
    cur_run_times int,
    expire_time int,
    max_retry int,
    max_run_times int,
    payload text,
    period int,
    retry_interval int,
    status tinyint,
    topic text,
    type text,
    update_time int
) WITH caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'};
{code}
Then execute the following CQL:
{code:java}
insert into job_info (id, create_time) values (5be224c6-8231-11e7-9619-9801b2a97471, 0) IF NOT EXISTS;
insert into job_info (id, create_time) values (5be224c6-8231-11e7-9619-9801b2a97471, 1);
select * from job_info;
{code}
You will find that create_time is still 0; it is not updated. However, if you remove the IF NOT EXISTS keyword from the first statement, the update succeeds.

  was: the same description, with the id 5be224c6-8231-11e7-9619-9801a7a97471 in both inserts.

> update a row which was inserted with 'IF NOT EXISTS' keyword will fail
> silently
> ----------------------------------------------------------------------
>
>                 Key: CASSANDRA-13767
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-13767
>             Project: Cassandra
>          Issue Type: Bug
>          Components: CQL
>         Environment: [cqlsh 5.0.1 | Cassandra 3.11.0 | CQL spec 3.4.4 | Native protocol v4]
>                      cassandra python driver = 3.5.0
>                      Run in docker with the following:
>                      docker run --name cassandra -v /data/cassandra:/var/lib/cassandra -p9042:9042 -p9160:9160 -p7000:7000 -p7001:7001 cassandra
>            Reporter: mmh
>            Priority: Minor
>             Fix For: 3.11.0
>
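A way to see why the second write appears to be lost (an illustrative diagnostic, not part of the report) is to compare cell timestamps: LWT writes take their timestamp from the Paxos ballot, so a subsequent plain INSERT can carry a smaller timestamp and silently lose; mixing LWT and non-LWT writes on the same rows is generally unsupported for this reason.

```sql
-- Inspect the timestamp each insert actually wrote with:
SELECT create_time, WRITETIME(create_time)
FROM scheduler.job_info
WHERE id = 5be224c6-8231-11e7-9619-9801b2a97471;
```

If the WRITETIME after the first (LWT) insert is already ahead of the timestamp assigned to the second insert, the second write is shadowed rather than applied.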
[jira] [Comment Edited] (CASSANDRA-12284) Cqlsh not supporting Copy command
[ https://issues.apache.org/jira/browse/CASSANDRA-12284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16134679#comment-16134679 ]

allenbao edited comment on CASSANDRA-12284 at 8/21/17 4:03 AM:
---------------------------------------------------------------

Hello, I installed cqlsh (5.0.4) for my Cassandra server (3.11.0) using pip on CentOS 6.6. The "COPY FROM" command also prints the following error:

Traceback (most recent call last):
  File "/usr/local/bin/cqlsh", line 1118, in onecmd
    self.handle_statement(st, statementtext)
  File "/usr/local/bin/cqlsh", line 1155, in handle_statement
    return custom_handler(parsed)
  File "/usr/local/bin/cqlsh", line 1819, in do_copy
    rows = self.perform_csv_import(ks, cf, columns, fname, opts)
  File "/usr/local/bin/cqlsh", line 1831, in perform_csv_import
    csv_options, dialect_options, unrecognized_options = copyutil.parse_options(self, opts)
AttributeError: 'module' object has no attribute 'parse_options'

was (Author: allenbao): the same comment, but referring to the "COPY TO" command.

> Cqlsh not supporting Copy command
> ---------------------------------
>
>                 Key: CASSANDRA-12284
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-12284
>             Project: Cassandra
>          Issue Type: Bug
>          Components: CQL
>         Environment: Debian and OSX (Yosemite)
>            Reporter: Abhinav Johri
>             Fix For: 3.3
>
> I installed cqlsh for my cassandra server using pip command.
> I wanted to copy a table as CSV to my local system, so I used the COPY TO
> command, but it threw me the following error.
> Traceback (most recent call last):
>   File "/usr/local/bin/cqlsh", line 1133, in onecmd
>     self.handle_statement(st, statementtext)
>   File "/usr/local/bin/cqlsh", line 1170, in handle_statement
>     return custom_handler(parsed)
>   File "/usr/local/bin/cqlsh", line 1837, in do_copy
>     rows = self.perform_csv_export(ks, cf, columns, fname, opts)
>   File "/usr/local/bin/cqlsh", line 1956, in perform_csv_export
>     csv_options, dialect_options, unrecognized_options = copyutil.parse_options(self, opts)
> AttributeError: 'module' object has no attribute 'parse_options'
[jira] [Commented] (CASSANDRA-12284) Cqlsh not supporting Copy command
[ https://issues.apache.org/jira/browse/CASSANDRA-12284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16134683#comment-16134683 ]

Stefania commented on CASSANDRA-12284:
--------------------------------------

Please refer to the comments above; we are not responsible for the PyPI releases of cqlsh.
[jira] [Commented] (CASSANDRA-12284) Cqlsh not supporting Copy command
[ https://issues.apache.org/jira/browse/CASSANDRA-12284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16134679#comment-16134679 ]

allenbao commented on CASSANDRA-12284:
--------------------------------------

Hello, I installed cqlsh (5.0.4) for my Cassandra server (3.11.0) using pip on CentOS 6.6. The "COPY TO" command also prints the following error:

Traceback (most recent call last):
  File "/usr/local/bin/cqlsh", line 1118, in onecmd
    self.handle_statement(st, statementtext)
  File "/usr/local/bin/cqlsh", line 1155, in handle_statement
    return custom_handler(parsed)
  File "/usr/local/bin/cqlsh", line 1819, in do_copy
    rows = self.perform_csv_import(ks, cf, columns, fname, opts)
  File "/usr/local/bin/cqlsh", line 1831, in perform_csv_import
    csv_options, dialect_options, unrecognized_options = copyutil.parse_options(self, opts)
AttributeError: 'module' object has no attribute 'parse_options'
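The AttributeError above is a generic symptom of a script/library version mismatch: the pip-installed cqlsh script calls `copyutil.parse_options`, which the `cqlshlib` on its import path does not define. A defensive pattern for such cases (illustrative only, not cqlsh's actual code) turns the bare AttributeError into an explicit mismatch message:

```python
def safe_call(module, name, *args, **kwargs):
    """Call module.name if it exists; otherwise fail with a clear
    version-mismatch message instead of a bare AttributeError."""
    fn = getattr(module, name, None)
    if fn is None:
        raise RuntimeError(
            "%s has no '%s'; the installed library is probably older "
            "or newer than the script expects" % (module.__name__, name))
    return fn(*args, **kwargs)
```

For example, `safe_call(csv, "parse_options")` raises the explanatory RuntimeError, while `safe_call(csv, "reader", ...)` behaves like a normal call.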
[jira] [Comment Edited] (CASSANDRA-12938) cassandra-stress hangs on error
[ https://issues.apache.org/jira/browse/CASSANDRA-12938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16134675#comment-16134675 ]

Stefania edited comment on CASSANDRA-12938 at 8/21/17 3:49 AM:
---------------------------------------------------------------

\+1, running dtests on our internal CI, I will commit to 3.11+ if results are OK.

was (Author: stefania): +1, running dtests on our internal CI, I will commit to 3.11+ if results are OK.

> cassandra-stress hangs on error
> -------------------------------
>
>                 Key: CASSANDRA-12938
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-12938
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Tools
>            Reporter: James Falcon
>            Assignee: Eduard Tudenhoefner
>             Fix For: 3.11.x
>
> After encountering a fatal error, cassandra-stress hangs. Having not run a
> previous stress write, it can be reproduced with:
> {code}
> cassandra-stress read n=1000 -rate threads=2
> {code}
> Here's the full output:
> {code}
> Stress Settings
> Command:
>   Type: read
>   Count: 1,000
>   No Warmup: false
>   Consistency Level: LOCAL_ONE
>   Target Uncertainty: not applicable
>   Key Size (bytes): 10
>   Counter Increment Distibution: add=fixed(1)
> Rate:
>   Auto: false
>   Thread Count: 2
>   OpsPer Sec: 0
> Population:
>   Distribution: Gaussian: min=1,max=1000,mean=500.50,stdev=166.50
>   Order: ARBITRARY
>   Wrap: false
> Insert:
>   Revisits: Uniform: min=1,max=100
>   Visits: Fixed: key=1
>   Row Population Ratio: Ratio: divisor=1.00;delegate=Fixed: key=1
>   Batch Type: not batching
> Columns:
>   Max Columns Per Key: 5
>   Column Names: [C0, C1, C2, C3, C4]
>   Comparator: AsciiType
>   Timestamp: null
>   Variable Column Count: false
>   Slice: false
>   Size Distribution: Fixed: key=34
>   Count Distribution: Fixed: key=5
> Errors:
>   Ignore: false
>   Tries: 10
> Log:
>   No Summary: false
>   No Settings: false
>   File: null
>   Interval Millis: 1000
>   Level: NORMAL
> Mode:
>   API: JAVA_DRIVER_NATIVE
>   Connection Style: CQL_PREPARED
>   CQL Version: CQL3
>   Protocol Version: V4
>   Username: null
>   Password: null
>   Auth Provide Class: null
>   Max Pending Per Connection: 128
>   Connections Per Host: 8
>   Compression: NONE
> Node:
>   Nodes: [localhost]
>   Is White List: false
>   Datacenter: null
> Schema:
>   Keyspace: keyspace1
>   Replication Strategy: org.apache.cassandra.locator.SimpleStrategy
>   Replication Strategy Pptions: {replication_factor=1}
>   Table Compression: null
>   Table Compaction Strategy: null
>   Table Compaction Strategy Options: {}
> Transport:
>   factory=org.apache.cassandra.thrift.TFramedTransportFactory; truststore=null; truststore-password=null; keystore=null; keystore-password=null; ssl-protocol=TLS; ssl-alg=SunX509; store-type=JKS; ssl-ciphers=TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA;
> Port:
>   Native Port: 9042
>   Thrift Port: 9160
>   JMX Port: 9042
> Send To Daemon:
>   *not set*
> Graph:
>   File: null
>   Revision: unknown
>   Title: null
> Operation: READ
> TokenRange:
>   Wrap: false
>   Split Factor: 1
> Sleeping 2s...
> Warming up READ with 250 iterations...
> Connected to cluster: falcon-test2, max pending requests per connection 128, max connections per host 8
> Datatacenter: Cassandra; Host: localhost/127.0.0.1; Rack: rack1
> Failed to connect over JMX; not collecting these stats
> Connected to cluster: falcon-test2, max pending requests per connection 128, max connections per host 8
> Datatacenter: Cassandra; Host: localhost/127.0.0.1; Rack: rack1
> com.datastax.driver.core.exceptions.InvalidQueryException: Keyspace 'keyspace1' does not exist
> Connected to cluster: falcon-test2, max pending requests per connection 128, max connections per host 8
> Datatacenter: Cassandra; Host: localhost/127.0.0.1; Rack: rack1
> com.datastax.driver.core.exceptions.InvalidQueryException: Keyspace 'keyspace1' does not exist
> {code}
[jira] [Commented] (CASSANDRA-12938) cassandra-stress hangs on error
[ https://issues.apache.org/jira/browse/CASSANDRA-12938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16134675#comment-16134675 ]

Stefania commented on CASSANDRA-12938:
--------------------------------------

+1, running dtests on our internal CI, I will commit to 3.11+ if results are OK.
[jira] [Commented] (CASSANDRA-13773) cassandra-stress writes even data when n=0
[ https://issues.apache.org/jira/browse/CASSANDRA-13773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16134665#comment-16134665 ]

Stefania commented on CASSANDRA-13773:
--------------------------------------

I've started the dtests on our internal CI; if there is a way to run them on CircleCI, please launch them, as I haven't set it up yet.

I've tested a bit locally as well, and I think we are better off skipping the command entirely when {{n=0}}, not just the warm-up; otherwise {{cassandra-stress write n=0}} (with no rate specified) will loop over different rates and sleep for no good reason whatsoever. So I suggest that we create the schema and then exit, see [here|https://github.com/apache/cassandra/compare/trunk...stef1927:13773-3.0#diff-fd2f2d2364937fcb1c0d73c8314f1418R57].

> cassandra-stress writes even data when n=0
> ------------------------------------------
>
>                 Key: CASSANDRA-13773
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-13773
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Stress
>            Reporter: Eduard Tudenhoefner
>            Assignee: Eduard Tudenhoefner
>            Priority: Minor
>             Fix For: 3.0.15
>
> This is very unintuitive, as
> {code}
> cassandra-stress write n=0 -rate threads=1
> {code}
> will do inserts even with *n=0*. I guess most people won't ever run with
> *n=0*, but this is a nice shortcut for creating some schema without using
> *cqlsh*.
> This is happening because we're writing *50k* rows of warmup data, as can be
> seen below:
> {code}
> cqlsh> select count(*) from keyspace1.standard1 ;
>
>  count
> -------
>  50000
>
> (1 rows)
> {code}
> We can avoid writing warmup data using
> {code}
> cassandra-stress write n=0 no-warmup -rate threads=1
> {code}
> but I would still expect to have *0* rows written when specifying *n=0*.
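The proposed control flow can be sketched generically; the Python below is illustrative only (cassandra-stress is Java, and `create_schema`/`run_command` are hypothetical stand-ins): with n=0, create the schema and exit before any warm-up or rate loop runs.

```python
def run_stress(n, create_schema, run_command, warmup=True):
    """Sketch of the suggested behavior: n=0 only creates the schema.

    Returns the number of rows written by the command itself; warm-up
    (50k rows by default in cassandra-stress) is skipped entirely when
    the command is skipped.
    """
    create_schema()
    if n == 0:
        return 0  # schema created; no warm-up, no rate loop, no writes
    if warmup:
        run_command(min(n, 50000))  # warm-up phase
    return run_command(n)
```

A quick check of the two paths: `run_stress(0, ...)` touches only the schema callback, while a nonzero `n` with `warmup=False` runs the command exactly once.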
[jira] [Commented] (CASSANDRA-12783) Break up large MV mutations to prevent OOMs
[ https://issues.apache.org/jira/browse/CASSANDRA-12783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16134653#comment-16134653 ]

ZhaoYang commented on CASSANDRA-12783:
--------------------------------------

Hi [~KurtG], did you start on this issue? Do you mind sharing your approach here?

> Break up large MV mutations to prevent OOMs
> -------------------------------------------
>
>                 Key: CASSANDRA-12783
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-12783
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Local Write-Read Paths, Materialized Views
>            Reporter: Carl Yeksigian
>            Assignee: Kurt Greaves
>             Fix For: 4.x
>
> We only use the code path added in CASSANDRA-12268 for the view builder,
> because otherwise we would break the contract of the batchlog, where some
> mutations may be written and pushed out before the whole batch log has been
> saved.
> We would need to ensure that all of the updates make it to the batchlog
> before allowing the batchlog manager to try to replay them, but also before
> we start pushing out updates to the paired replicas.
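The batchlog contract quoted above (every chunk durable in the batchlog before any chunk is pushed out or replayed) amounts to two strictly separated phases. A minimal Python sketch, with `batchlog_write` and `push_to_replica` as hypothetical callbacks rather than Cassandra's actual batchlog code:

```python
def apply_large_view_update(updates, batchlog_write, push_to_replica, chunk=100):
    """Break a large view update into chunks, honoring the batchlog contract.

    Phase 1 persists every chunk to the batchlog; only after all chunks are
    durable does phase 2 push any of them to the paired replicas. Pushing a
    chunk before the whole batch is logged would break replayability.
    """
    chunks = [updates[i:i + chunk] for i in range(0, len(updates), chunk)]
    ids = [batchlog_write(c) for c in chunks]  # phase 1: make all durable
    for c in chunks:                           # phase 2: push out
        push_to_replica(c)
    return ids
```

The point of the sketch is the ordering, not the chunking: interleaving the two loops is exactly the behavior the ticket says must be avoided.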
[jira] [Commented] (CASSANDRA-13773) cassandra-stress writes even data when n=0
[ https://issues.apache.org/jira/browse/CASSANDRA-13773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16134651#comment-16134651 ]

Stefania commented on CASSANDRA-13773:
--------------------------------------

The patch LGTM. However, whilst it doesn't make sense to perform warmup when n=0, it still changes the behavior seen by the user; therefore, I am not entirely sure this patch should go into 3.0. Any thoughts, [~tjake]? Regarding CI, I think what we care about is that dtests using cassandra-stress still work; I don't think cassandra-stress impacts unit tests at all.
[jira] [Commented] (CASSANDRA-13418) Allow TWCS to ignore overlaps when dropping fully expired sstables
[ https://issues.apache.org/jira/browse/CASSANDRA-13418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16134638#comment-16134638 ] mck commented on CASSANDRA-13418: - [~rgerard]'s patch: || branch || testall || dtest || | [trunk_13418|https://github.com/criteo-forks/cassandra/tree/CASSANDRA-13418] | [testall|https://circleci.com/gh/thelastpickle/cassandra/21] | [dtest|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/194] | > Allow TWCS to ignore overlaps when dropping fully expired sstables > -- > > Key: CASSANDRA-13418 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13418 > Project: Cassandra > Issue Type: Improvement > Components: Compaction >Reporter: Corentin Chary > Labels: twcs > Attachments: twcs-cleanup.png > > > http://thelastpickle.com/blog/2016/12/08/TWCS-part1.html explains it well. If > you really want read-repairs you're going to have sstables blocking the > expiration of other fully expired SSTables because they overlap. > You can set unchecked_tombstone_compaction = true or tombstone_threshold to a > very low value and that will purge the blockers of old data that should > already have expired, thus removing the overlaps and allowing the other > SSTables to expire. > The thing is that this is rather CPU intensive and not optimal. If you have > time series, you might not care if all your data doesn't exactly expire at > the right time, or if data re-appears for some time, as long as it gets > deleted as soon as it can. And in this situation I believe it would be really > beneficial to allow users to simply ignore overlapping SSTables when looking > for fully expired ones. > To the question: why would you need read-repairs ? > - Full repairs basically take longer than the TTL of the data on my dataset, > so this isn't really effective. 
> - Even with a 10% chance of doing a repair, we found out that this would be > enough to greatly reduce entropy of the most used data (and if you have > time series, you're likely to have a dashboard doing the same important > queries over and over again). > - LOCAL_QUORUM is too expensive (need >3 replicas), QUORUM is too slow. > I'll try to come up with a patch demonstrating how this would work, try it on > our system, and report the effects. > cc: [~adejanovski], [~rgerard] as I know you worked on similar issues already. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
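The overlap check being debated above can be sketched in a few lines. This is a simplified, hypothetical Python analogy of the idea (the real check lives in Cassandra's Java compaction code and also reasons about tombstone and data timestamps, not just token ranges): a fully expired SSTable is normally blocked from being dropped while a live SSTable overlaps it, and the proposal is an opt-in flag to skip that check.

```python
# Hypothetical sketch of "drop fully expired SSTables, unless a live
# SSTable overlaps them" and the proposed ignore-overlaps escape hatch.
class SSTable:
    def __init__(self, min_token, max_token, max_deletion_time):
        self.min_token = min_token
        self.max_token = max_token
        # Time at which every cell in this SSTable has expired.
        self.max_deletion_time = max_deletion_time

    def overlaps(self, other):
        # Two token ranges overlap iff neither is entirely before the other.
        return self.min_token <= other.max_token and other.min_token <= self.max_token

def fully_expired(sstables, now, ignore_overlaps=False):
    """Return the SSTables that may be dropped outright."""
    expired = [s for s in sstables if s.max_deletion_time < now]
    if ignore_overlaps:
        return expired  # the proposed TWCS behaviour: drop regardless
    live = [s for s in sstables if s.max_deletion_time >= now]
    # Default behaviour: an expired SSTable overlapped by a live one is blocked.
    return [s for s in expired if not any(s.overlaps(l) for l in live)]

old = SSTable(0, 100, max_deletion_time=10)     # everything in it expired
blocker = SSTable(50, 150, max_deletion_time=1000)
print(fully_expired([old, blocker], now=500))                        # [] - blocked
print(len(fully_expired([old, blocker], now=500, ignore_overlaps=True)))  # 1
```

The trade-off the ticket describes falls out directly: with the flag on, deleted data in `old` could transiently resurrect if `blocker` holds older versions, which is acceptable for append-only time series but not in general.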
[jira] [Commented] (CASSANDRA-13717) INSERT statement fails when Tuple type is used as clustering column with default DESC order
[ https://issues.apache.org/jira/browse/CASSANDRA-13717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16134567#comment-16134567 ] Stavros Kontopoulos commented on CASSANDRA-13717: - [~jjirsa] Any update, or anything I should do? > INSERT statement fails when Tuple type is used as clustering column with > default DESC order > --- > > Key: CASSANDRA-13717 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13717 > Project: Cassandra > Issue Type: Bug > Environment: Cassandra 3.11 >Reporter: Anastasios Kichidis >Assignee: Stavros Kontopoulos >Priority: Critical > Attachments: example_queries.cql, fix_13717 > > > When a column family is created with a tuple as a clustering column and the > default clustering order is DESC, the INSERT statement fails. > For example, with the following table the INSERT statement fails with the > error message "Invalid tuple type literal for tdemo of type > frozen>", although the INSERT statement is correct > (it works as expected when the default order is ASC): > {noformat} > create table test_table ( > id int, > tdemo tuple, > primary key (id, tdemo) > ) with clustering order by (tdemo desc); > {noformat} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
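A likely shape for this class of bug (hedged: this is an analogy, not a reading of the actual patch): with DESC ordering, the clustering column's comparator type is wrapped in a "reversed" marker, and a literal-assignment check that tests the receiver against the raw tuple type fails on the wrapper unless it unwraps it first. A hypothetical Python analogy, with all names invented:

```python
# Hypothetical analogy for the reported bug: a DESC clustering column's type
# is wrapped (cf. ReversedType in Cassandra), and a naive "is this a tuple
# type?" check fails on the wrapper unless it is unwrapped first.
class TupleType:
    pass

class ReversedType:
    def __init__(self, base):
        self.base = base

def unwrap(t):
    """Strip the reversed-order wrapper, if present."""
    return t.base if isinstance(t, ReversedType) else t

def accepts_tuple_literal(receiver_type, unwrap_reversed=True):
    t = unwrap(receiver_type) if unwrap_reversed else receiver_type
    return isinstance(t, TupleType)

desc_column = ReversedType(TupleType())
print(accepts_tuple_literal(desc_column, unwrap_reversed=False))  # False: the bug
print(accepts_tuple_literal(desc_column))                          # True: after unwrapping
```

This would explain why the same INSERT succeeds with ASC ordering: there the receiver type is the bare tuple type and the check passes without any unwrapping.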
[jira] [Comment Edited] (CASSANDRA-13767) update a row which was inserted with 'IF NOT EXISTS' keyword will fail silently
[ https://issues.apache.org/jira/browse/CASSANDRA-13767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16134453#comment-16134453 ] mmh edited comment on CASSANDRA-13767 at 8/20/17 2:46 PM: -- Where can I find the details about the timestamps used in LWT/Paxos queries and in "normal" queries? The operation I posted in the issue is a simple way to reproduce the problem. In my real code, the insert operation and the update operation belong to two HTTP APIs; I first call the insert API, which takes about 20 ms to complete, and then the update API is called. This runs in a test environment, with both the client and the Cassandra server on the same machine. So the timestamp should not be a problem, and the insert operation should have a timestamp about 20 ms earlier than the update operation. So, what are the details of the timestamps used in LWT/Paxos queries and "normal" queries? was (Author: myrfy001): Where can I find the details about the timestamps used in LWT/Paxos queries and in "normal" queries? The operation I posted in the issue is a simple way to reproduce the problem. In my real code, the insert operation and the update operation belong to two HTTP APIs; I first call the insert API, which takes about 20 ms to complete, and then the update API is called. This runs in a test environment, with both the client and the Cassandra server on the same machine. So the timestamp should not be a problem. So, what are the details of the timestamps used in LWT/Paxos queries and "normal" queries? 
> update a row which was inserted with 'IF NOT EXISTS' keyword will fail > silently > --- > > Key: CASSANDRA-13767 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13767 > Project: Cassandra > Issue Type: Bug > Components: CQL > Environment: [cqlsh 5.0.1 | Cassandra 3.11.0 | CQL spec 3.4.4 | > Native protocol v4] > cassandra python driver = 3.5.0 > Run in docker with the following: > docker run --name cassandra -v /data/cassandra:/var/lib/cassandra > -p9042:9042 -p9160:9160 -p7000:7000 -p7001:7001 cassandra >Reporter: mmh >Priority: Minor > Fix For: 3.11.0 > > > First, create a keyspace and a table using the following: > {code:java} > CREATE KEYSPACE scheduler WITH replication = {'class': 'SimpleStrategy', > 'replication_factor': '1'} AND durable_writes = true; > CREATE TABLE scheduler.job_info ( > id timeuuid PRIMARY KEY, > create_time int, > cur_retry int, > cur_run_times int, > expire_time int, > max_retry int, > max_run_times int, > payload text, > period int, > retry_interval int, > status tinyint, > topic text, > type text, > update_time int > ) with caching = {'keys':'ALL', 'rows_per_partition':'NONE'}; > {code} > then execute the following CQL: > {code:java} > insert into job_info (id, create_time) values > (5be224c6-8231-11e7-9619-9801a7a97471, 0) IF NOT EXISTS; > insert into job_info (id, create_time) values > (5be224c6-8231-11e7-9619-9801a7a97471, 1); > select * from job_info; > {code} > You will find that create_time is still 0; it is not updated. > But if you remove the IF NOT EXISTS keyword from the first statement, the update > will succeed. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Commented] (CASSANDRA-13767) update a row which was inserted with 'IF NOT EXISTS' keyword will fail silently
[ https://issues.apache.org/jira/browse/CASSANDRA-13767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16134453#comment-16134453 ] mmh commented on CASSANDRA-13767: - Where can I find the details about the timestamps used in LWT/Paxos queries and in "normal" queries? The operation I posted in the issue is a simple way to reproduce the problem. In my real code, the insert operation and the update operation belong to two HTTP APIs; I first call the insert API, which takes about 20 ms to complete, and then the update API is called. This runs in a test environment, with both the client and the Cassandra server on the same machine. So the timestamp should not be a problem. So, what are the details of the timestamps used in LWT/Paxos queries and "normal" queries? > update a row which was inserted with 'IF NOT EXISTS' keyword will fail > silently > --- > > Key: CASSANDRA-13767 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13767 > Project: Cassandra > Issue Type: Bug > Components: CQL > Environment: [cqlsh 5.0.1 | Cassandra 3.11.0 | CQL spec 3.4.4 | > Native protocol v4] > cassandra python driver = 3.5.0 > Run in docker with the following: > docker run --name cassandra -v /data/cassandra:/var/lib/cassandra > -p9042:9042 -p9160:9160 -p7000:7000 -p7001:7001 cassandra >Reporter: mmh >Priority: Minor > Fix For: 3.11.0 > > > First, create a keyspace and a table using the following: > {code:java} > CREATE KEYSPACE scheduler WITH replication = {'class': 'SimpleStrategy', > 'replication_factor': '1'} AND durable_writes = true; > CREATE TABLE scheduler.job_info ( > id timeuuid PRIMARY KEY, > create_time int, > cur_retry int, > cur_run_times int, > expire_time int, > max_retry int, > max_run_times int, > payload text, > period int, > retry_interval int, > status tinyint, > topic text, > type text, > update_time int > ) with caching = {'keys':'ALL', 'rows_per_partition':'NONE'}; > {code} > then execute the following CQL: > {code:java} > insert into job_info 
(id, create_time) values > (5be224c6-8231-11e7-9619-9801a7a97471, 0) IF NOT EXISTS; > insert into job_info (id, create_time) values > (5be224c6-8231-11e7-9619-9801a7a97471, 1); > select * from job_info; > {code} > You will find that create_time is still 0; it is not updated. > But if you remove the IF NOT EXISTS keyword from the first statement, the update > will succeed. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
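As background for the timestamp question above: Paxos (LWT) writes are committed with a server-generated, ballot-derived timestamp, while plain writes use the coordinator's (or the driver's client-side) timestamp, and replicas reconcile cells by last-write-wins. If the plain UPDATE's timestamp is not strictly newer than the one the LWT INSERT was committed with, the update is silently shadowed even though it happened later in wall-clock time. A minimal Python sketch of that reconciliation rule (hypothetical, not Cassandra's actual Cell code, which also breaks timestamp ties by comparing values):

```python
# Minimal last-write-wins sketch of the behaviour reported above.
# A cell is modelled as (timestamp_micros, value).
def reconcile(existing, incoming):
    """The cell with the strictly newer timestamp wins; ties keep existing."""
    return incoming if incoming[0] > existing[0] else existing

# LWT insert: Paxos commits with a server-side ballot-derived timestamp,
# which may run ahead of timestamps later assigned to plain writes.
lwt_cell = (1_503_240_000_100_000, 0)    # create_time = 0 via IF NOT EXISTS
plain_cell = (1_503_240_000_050_000, 1)  # later in wall-clock time, older timestamp

print(reconcile(lwt_cell, plain_cell))   # keeps (..., 0): the update is shadowed
```

This matches the reported symptom: the SELECT still shows create_time = 0, and removing IF NOT EXISTS makes both writes use the same timestamp source, so the second write wins as expected.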