[jira] [Commented] (CASSANDRA-15187) Can't a table and materialized view's timestamp column ORDER BY function well?

2019-06-27 Thread Jeff Jirsa (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16874679#comment-16874679
 ] 

Jeff Jirsa commented on CASSANDRA-15187:


This JIRA is meant for bug reports. Please use the mailing list or Slack for 
operational questions.



> Can't a table and materialized view's timestamp column ORDER BY function well?
> --
>
> Key: CASSANDRA-15187
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15187
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL/Syntax
>Reporter: gloCalHelp.com
>Priority: Normal
>

[jira] [Commented] (CASSANDRA-15187) Can't a table and materialized view's timestamp column ORDER BY function well?

2019-06-27 Thread gloCalHelp.com (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16874678#comment-16874678
 ] 

gloCalHelp.com commented on CASSANDRA-15187:


But isn't there a way to get a fully ordered list of the data across all partitions?
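
For context, Cassandra only orders rows within a partition, so a fully ordered 
listing across every partition either has to be merged client side or the data 
has to be modelled so that the rows you want ordered share a partition. A 
minimal sketch, assuming a hypothetical day-bucketed table (the table name and 
column layout here are illustrative, not part of this issue):

{code}
-- Hypothetical design: bucket rows by day so that each day's GPS points live in
-- one partition and can be read back ordered by dwsj.
CREATE TABLE hygl_jcsj.gps_by_day (
    day   date,
    dwsj  timestamp,
    clcph text,
    PRIMARY KEY (day, dwsj, clcph)
) WITH CLUSTERING ORDER BY (dwsj ASC, clcph ASC);

-- A single partition (one day) comes back fully ordered by dwsj:
SELECT clcph, dwsj FROM hygl_jcsj.gps_by_day WHERE day = '2019-06-27';
{code}

Whether a one-day partition stays small enough depends on the write rate; the 
sketch only shows the modelling direction.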

 

And by the way, could someone help me with these three questions:

1. Is there a command or utility to check a running query's execution status?

2. Is a materialized view's data all linked to the base table? Otherwise, why 
does creating the view finish so quickly?

3. How can I check a running Cassandra node's runtime variables?

> Can't a table and materialized view's timestamp column ORDER BY function well?
> --
>
> Key: CASSANDRA-15187
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15187
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL/Syntax
>Reporter: gloCalHelp.com
>Priority: Normal
>

[jira] [Updated] (CASSANDRA-15187) Can't a table and materialized view's timestamp column ORDER BY function well?

2019-06-27 Thread Jeff Jirsa (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15187?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Jirsa updated CASSANDRA-15187:
---
Resolution: Not A Bug
Status: Resolved  (was: Triage Needed)

The clustering order by in the schema specifies the order of rows +within a 
partition+.

In your example, you're querying without a partition key in the WHERE clause, so 
+there's no ordering expected across partition keys+, and the results +ARE 
sorted as expected+ within the partitions that have multiple rows.
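
To make that concrete, a minimal illustration against the reporter's table (the 
partition key value is taken from the sample output in the description; this is 
a sketch of the expected behaviour, not verified output):

{code}
-- Within one partition, rows come back in the declared clustering order (dwsj ASC):
SELECT clcph, dwsj
FROM hygl_jcsj.hyjg_ods_yy_gps_novar4
WHERE clcph = 'a8516178847003';

-- A SELECT without a partition key in the WHERE clause walks partitions in token
-- order, so no cross-partition ordering of dwsj is implied:
SELECT clcph, dwsj
FROM hygl_jcsj.hyjg_ods_yy_gps_novar4
LIMIT 18;
{code}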


> Can't a table and materialized view's timestamp column ORDER BY function well?
> --
>
> Key: CASSANDRA-15187
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15187
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL/Syntax
>Reporter: gloCalHelp.com
>Priority: Normal
>

[jira] [Created] (CASSANDRA-15187) Can't a table and materialized view's timestamp column ORDER BY function well?

2019-06-27 Thread gloCalHelp.com (JIRA)
gloCalHelp.com created CASSANDRA-15187:
--

 Summary: Can't a table and materialized view's timestamp column 
ORDER BY function well?
 Key: CASSANDRA-15187
 URL: https://issues.apache.org/jira/browse/CASSANDRA-15187
 Project: Cassandra
  Issue Type: Bug
  Components: CQL/Syntax
Reporter: gloCalHelp.com


Can't a table and materialized view's timestamp column ORDER BY function well?

I am using Cassandra 3.11.3 on CentOS 6.9 with Python 2.7.13. I create the 
table below:

CREATE TABLE hygl_jcsj.hyjg_ods_yy_gps_novar4 (
    clcph text,    
    dwsj timestamp,    
    bc decimal,    
    blbs decimal,  
    cjbzh text,    
    ckryid decimal,    
    clid decimal,  
    clmc text,     
    ddfx decimal,  
    ddrq timestamp,    
    ddsj text,     
    dlzbs decimal,     
    dwrq timestamp,    
    dwsk text,     
    fcsxh decimal,     
    fwj decimal,   
    gd decimal,    
    gdjd decimal,  
    gdwd decimal,  
    jd decimal,    
    jsdlc decimal,     
    jszjl decimal,     
    jxzjl decimal,     
    kxbs decimal,  
    sfaxlxs decimal,   
    sfcs decimal,  
    sjgxsj timestamp,  
    sjid text,     
    sjlx decimal,  
    sjlyxt decimal,    
    sjsfzh text,   
    sjwtid text,   
    sjxm text,     
    sjzlfj decimal,    
    sssd decimal,  
    szzdid decimal,    
    szzdmc text,   
    szzdxh decimal,    
    wd decimal,    
    xlbm text,     
    xlid decimal,  
    xlmc text,     
    xslc decimal,  
    xxfssj timestamp,  
    xxjssj timestamp,  
    xxrksj timestamp,  
    xzzdid decimal,    
    xzzdmc text,   
    xzzdxh decimal,    
    yxfx decimal,  
    yygpsxxjlid decimal,   
    yyzt decimal,  
    PRIMARY KEY (clcph, dwsj)  
) WITH CLUSTERING ORDER BY (dwsj ASC)
    AND bloom_filter_fp_chance = 0.01
    AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
    AND comment = 'GPS数据'
    AND compaction = {'class': 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 'max_threshold': '32', 'min_threshold': '4'}
    AND compression = {'chunk_length_in_kb': '64', 'class': 'org.apache.cassandra.io.compress.LZ4Compressor'}
    AND crc_check_chance = 1.0
    AND dclocal_read_repair_chance = 0.0
    AND default_time_to_live = 0
    AND gc_grace_seconds = 864000
    AND max_index_interval = 2048
    AND memtable_flush_period_in_ms = 360
    AND min_index_interval = 128
    AND read_repair_chance = 0.0
    AND speculative_retry = '99PERCENTILE';

The column dwsj should come back in its natural ASC order, but when I use cqlsh -e 
"select clcph,dwsj from hygl_jcsj.hyjg_ods_yy_gps_novar4 limit 18",

the result is:
 clcph           | dwsj
-----------------+-------------------------
 a85161782800835 | 2019-06-27 11:39:42.00+
 a85161785390963 | 2019-06-27 13:06:54.00+
 a8516178847003  | 2019-06-25 10:51:18.00+
 a8516178847003  | 2019-06-27 10:06:56.00+
 a85161785095735 | 2019-06-27 12:55:55.00+
 a85161783068534 | 2019-06-27 11:48:24.00+
 a8516178475869  | 2019-06-25 10:51:18.00+
 a8516178475869  | 2019-06-27 09:53:04.00+
 a85161781283975 | 2019-06-27 10:23:22.00+
 a8516178463883  | 2019-06-25 10:51:18.00+
 a8516178463883  | 2019-06-27 09:52:35.00+
 a85161781409966 | 2019-06-27 10:28:02.00+
 a85161782554262 | 2019-06-27 

[jira] [Commented] (CASSANDRA-11626) cqlsh fails and exits on non-ascii chars

2019-06-27 Thread gloCalHelp.com (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-11626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16874618#comment-16874618
 ] 

gloCalHelp.com commented on CASSANDRA-11626:


I have found a solution on the CSDN site. To thank the author, and for newcomers 
who hit the same problem, I am putting the link here:

[https://blog.csdn.net/l1028386804/article/details/78976807] ; the first method 
works on CentOS 6.9 with Python 2.7.

> cqlsh fails and exits on non-ascii chars
> 
>
> Key: CASSANDRA-11626
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11626
> Project: Cassandra
>  Issue Type: Bug
>  Components: Legacy/Tools
>Reporter: Robert Stupp
>Assignee: Tyler Hobbs
>Priority: Low
>  Labels: cqlsh
> Fix For: 2.2.7, 3.0.7, 3.7
>
>
> Just seen on cqlsh on current trunk:
> To repro, copy {{ä}} (German umlaut) into cqlsh and press return.
> cqlsh errors out and immediately exits.
> {code}
> $ bin/cqlsh
> Connected to Test Cluster at 127.0.0.1:9042.
> [cqlsh 5.0.1 | Cassandra 2.1.13-SNAPSHOT | CQL spec 3.2.1 | Native protocol 
> v3]
> Use HELP for help.
> cqlsh> ä
> Invalid syntax at line 1, char 1
> Traceback (most recent call last):
>   File "/Users/snazy/devel/cassandra/trunk/bin/cqlsh.py", line 2636, in <module>
> main(*read_options(sys.argv[1:], os.environ))
>   File "/Users/snazy/devel/cassandra/trunk/bin/cqlsh.py", line 2625, in main
> shell.cmdloop()
>   File "/Users/snazy/devel/cassandra/trunk/bin/cqlsh.py", line 1114, in 
> cmdloop
> if self.onecmd(self.statement.getvalue()):
>   File "/Users/snazy/devel/cassandra/trunk/bin/cqlsh.py", line 1139, in onecmd
> self.printerr('  %s' % statementline)
>   File "/Users/snazy/devel/cassandra/trunk/bin/cqlsh.py", line 2314, in 
> printerr
> self.writeresult(text, color, newline=newline, out=sys.stderr)
>   File "/Users/snazy/devel/cassandra/trunk/bin/cqlsh.py", line 2303, in 
> writeresult
> out.write(self.applycolor(str(text), color) + ('\n' if newline else ''))
> UnicodeEncodeError: 'ascii' codec can't encode character u'\xe4' in position 
> 2: ordinal not in range(128)
> $ 
> {code}






[jira] [Commented] (CASSANDRA-15175) Evaluate 200 node, compression=on, encryption=all

2019-06-27 Thread Norman Maurer (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16874475#comment-16874475
 ] 

Norman Maurer commented on CASSANDRA-15175:
---

[~jolynch] So is it fair to say that there is nothing for me to investigate for 
now on the Netty side?

> Evaluate 200 node, compression=on, encryption=all
> -
>
> Key: CASSANDRA-15175
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15175
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Test/benchmark
>Reporter: Joseph Lynch
>Assignee: Joseph Lynch
>Priority: Normal
> Attachments: 30x_14400cRPS-14400cWPS.svg, 
> 30x_LQ_21600cRPS-14400cWPS.svg, ShortbufferExceptions.png, 
> odd_netty_jdk_tls_cpu_usage.png, trunk_14400cRPS-14400cWPS.svg, 
> trunk_187000cRPS-14400cWPS.svg, trunk_187kcRPS_14kcWPS.png, 
> trunk_22000cRPS-14400cWPS-jdk.svg, trunk_22000cRPS-14400cWPS-openssl.svg, 
> trunk_220kcRPS_14kcWPS.png, trunk_252kcRPS-14kcWPS.png, 
> trunk_93500cRPS-14400cWPS.svg, trunk_LQ_14400cRPS-14400cWPS.svg, 
> trunk_LQ_21600cRPS-14400cWPS.svg, trunk_vs_30x_125kcRPS_14kcWPS.png, 
> trunk_vs_30x_14kRPS_14kcWPS_load.png, trunk_vs_30x_14kcRPS_14kcWPS.png, 
> trunk_vs_30x_14kcRPS_14kcWPS_schedstat_delays.png, 
> trunk_vs_30x_156kcRPS_14kcWPS.png, trunk_vs_30x_24kcRPS_14kcWPS.png, 
> trunk_vs_30x_24kcRPS_14kcWPS_load.png, trunk_vs_30x_31kcRPS_14kcWPS.png, 
> trunk_vs_30x_62kcRPS_14kcWPS.png, trunk_vs_30x_93kcRPS_14kcWPS.png, 
> trunk_vs_30x_LQ_14kcRPS_14kcWPS.png, trunk_vs_30x_LQ_21kcRPS_14kcWPS.png, 
> trunk_vs_30x_LQ_64kcRPS_14kcWPS.png, trunk_vs_30x_LQ_jdk_summary.png, 
> trunk_vs_30x_LQ_openssl_21kcRPS_14kcWPS.png, 
> trunk_vs_30x_LQ_tcnative_summary.png, trunk_vs_30x_summary.png
>
>
> Tracks evaluating a 192 node cluster with compression and encryption on.
> Test setup at (reproduced below)
> [https://docs.google.com/spreadsheets/d/1Vq_wC2q-rcG7UWim-t2leZZ4GgcuAjSREMFbG0QGy20/edit#gid=1336583053]
>  
> |Test Setup| |
> |Baseline|3.0.19 @d7d00036|
> |Candidate|trunk @abb0e177|
> | | |
> |Workload| |
> |Write size|4kb random|
> |Read size|4kb random|
> |Per Node Data|110GiB|
> |Generator|ndbench|
> |Key Distribution|Uniform|
> |SSTable Compr|Off|
> |Internode TLS|On (jdk)|
> |Internode Compr|On|
> |Compaction|LCS (320 MiB)|
> |Repair|Off|
> | | |
> |Hardware| |
> |Instance Type|i3.xlarge|
> |Deployment|96 us-east-1, 96 eu-west-1|
> |Region node count|96|
> | | |
> |OS Settings| |
> |IO scheduler|kyber|
> |Net qdisc|tc-fq|
> |readahead|32kb|
> |Java Version|OpenJDK 1.8.0_202 (Zulu)|
> | | |






[jira] [Commented] (CASSANDRA-15175) Evaluate 200 node, compression=on, encryption=all

2019-06-27 Thread Joseph Lynch (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16874410#comment-16874410
 ] 

Joseph Lynch commented on CASSANDRA-15175:
--

Alright, I've run the scaling test with both (jdk) TLS and (openssl) TLS (I 
dropped the statically linked boringssl jar).

With jdk TLS, after switching the default cipher to 
{{TLS_RSA_WITH_AES_128_CBC_SHA}}:
 !trunk_vs_30x_LQ_jdk_summary.png! 
With openssl TLS, again with the default cipher the same as 30x:
 !trunk_vs_30x_LQ_tcnative_summary.png!  

So the summary is: we have a minor regression in average performance for 
LOCAL_QUORUM. Flamegraphs are attached for root-cause analysis. The good news is 
that the tail latency is significantly better.

Action items:
* Make sure the cipher we use isn't GCM by default
* Determine why writes got slower (still outstanding)

I am now moving on to a QUORUM test.

> Evaluate 200 node, compression=on, encryption=all
> -
>
> Key: CASSANDRA-15175
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15175
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Test/benchmark
>Reporter: Joseph Lynch
>Assignee: Joseph Lynch
>Priority: Normal






[jira] [Updated] (CASSANDRA-15175) Evaluate 200 node, compression=on, encryption=all

2019-06-27 Thread Joseph Lynch (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph Lynch updated CASSANDRA-15175:
-
Attachment: trunk_vs_30x_LQ_tcnative_summary.png

> Evaluate 200 node, compression=on, encryption=all
> -
>
> Key: CASSANDRA-15175
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15175
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Test/benchmark
>Reporter: Joseph Lynch
>Assignee: Joseph Lynch
>Priority: Normal






[jira] [Updated] (CASSANDRA-15175) Evaluate 200 node, compression=on, encryption=all

2019-06-27 Thread Joseph Lynch (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph Lynch updated CASSANDRA-15175:
-
Attachment: trunk_vs_30x_LQ_jdk_summary.png

> Evaluate 200 node, compression=on, encryption=all
> -
>
> Key: CASSANDRA-15175
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15175
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Test/benchmark
>Reporter: Joseph Lynch
>Assignee: Joseph Lynch
>Priority: Normal






[jira] [Updated] (CASSANDRA-15175) Evaluate 200 node, compression=on, encryption=all

2019-06-27 Thread Joseph Lynch (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph Lynch updated CASSANDRA-15175:
-
Attachment: trunk_vs_30x_LQ_openssl_21kcRPS_14kcWPS.png

> Evaluate 200 node, compression=on, encryption=all
> -
>
> Key: CASSANDRA-15175
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15175
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Test/benchmark
>Reporter: Joseph Lynch
>Assignee: Joseph Lynch
>Priority: Normal






[jira] [Commented] (CASSANDRA-15013) Message Flusher queue can grow unbounded, potentially running JVM out of memory

2019-06-27 Thread Sumanth Pasupuleti (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16874366#comment-16874366
 ] 

Sumanth Pasupuleti commented on CASSANDRA-15013:


[~benedict] Makes sense. I pushed a 
[change|https://github.com/sumanth-pasupuleti/cassandra/commit/0c75ecf7b6f0824786b840c6cba167eb393b92ce]
 to 
[this|https://github.com/sumanth-pasupuleti/cassandra/commits/15013_trunk_2] 
branch.

The per-node limit defaults to 1/10th of the heap size; the per-IP limit 
defaults to 1/40th of the heap size.

Similar test 
[results|https://circleci.com/workflow-run/c61d7df2-c77a-4eab-a954-a59f6165f372]
 as previous run.

> Message Flusher queue can grow unbounded, potentially running JVM out of 
> memory
> ---
>
> Key: CASSANDRA-15013
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15013
> Project: Cassandra
>  Issue Type: Bug
>  Components: Messaging/Client
>Reporter: Sumanth Pasupuleti
>Assignee: Sumanth Pasupuleti
>Priority: Normal
>  Labels: pull-request-available
> Fix For: 4.0, 3.0.x, 3.11.x
>
> Attachments: BlockedEpollEventLoopFromHeapDump.png, 
> BlockedEpollEventLoopFromThreadDump.png, RequestExecutorQueueFull.png, heap 
> dump showing each ImmediateFlusher taking upto 600MB.png
>
>
> This is a follow-up ticket out of CASSANDRA-14855, to make the Flusher queue 
> bounded, since, in the current state, items get added to the queue without 
> any check on queue size and without checking the isWritable state of the 
> Netty outbound buffer.
> We are seeing this issue hit our production 3.0 clusters quite often.






[jira] [Updated] (CASSANDRA-15182) cqlsh syntax error output fails with UnicodeEncodeError

2019-06-27 Thread Michael Shuler (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Shuler updated CASSANDRA-15182:
---
Fix Version/s: 3.11.x
   3.0.x
   2.2.x
   4.0

> cqlsh syntax error output fails with UnicodeEncodeError
> ---
>
> Key: CASSANDRA-15182
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15182
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL/Interpreter
>Reporter: gloCalHelp.com
>Priority: Normal
> Fix For: 4.0, 2.2.x, 3.0.x, 3.11.x
>
>
> I use cqlsh 5.0.1 with Cassandra 3.11.3 and Python 2.7.13 on CentOS 6.9.
> When I run this CQL command: bin/cqlsh hadoop4 -u dba -p ** --debug  
> -e "INSERT INTO HYGL_JCSJ.hyjg_ods_yy_gps_novar3 
> (clcph,dwsj,bc,blbs,cjbzh,ckryid,clid,clmc,ddfx,ddrq,fwj,gd,gdjd,gdwd,jsdlc,jszjl,jxzjl,sjid,sjsfzh,sjxm,sssd,xlmc)
>  VALUES 
> ('黑A00888D','2019-06-2509:57:19',0,,'',,,'379-7038',1434,'2019-06-25',275,0,126723690,45726990
>  ,796.0,2205,746,'null','null','null',0,'379');"
> I get the error message below:
> Using CQL driver: <module 'cassandra' from '/home/cassandra/cas3.11.3/bin/../lib/cassandra-driver-internal-only-3.11.0-bb96859b.zip/cassandra-driver-3.11.0-bb96859b/cassandra/__init__.py'>
> Using connect timeout: 5 seconds
> Using 'utf-8' encoding
> Using ssl: False
> Traceback (most recent call last):
>  File "/home/cassandra/cas3.11.3/bin/cqlsh.py", line 926, in onecmd
>  self.handle_statement(st, statementtext)
>  File "/home/cassandra/cas3.11.3/bin/cqlsh.py", line 966, in handle_statement
>  return self.perform_statement(cqlruleset.cql_extract_orig(tokens, srcstr))
>  File "/home/cassandra/cas3.11.3/bin/cqlsh.py", line 1000, in 
> perform_statement
>  success, future = self.perform_simple_statement(stmt)
>  File "/home/cassandra/cas3.11.3/bin/cqlsh.py", line 1053, in 
> perform_simple_statement
>  self.printerr(unicode(err.__class__.__name__) + u": " + 
> err.message.decode(encoding='utf-8'))
>  File "/usr/local/python27/lib/python2.7/encodings/utf_8.py", line 16, in 
> decode
>  return codecs.utf_8_decode(input, errors, True)
> UnicodeEncodeError: 'ascii' codec can't encode character u'\u9ed1' in 
> position 60: ordinal not in range(128)
>  
> This issue seems different from the SELECT command issue in 
> https://issues.apache.org/jira/browse/CASSANDRA-10875 
> and from the other method of adding "-*- coding: utf-8 -*- " at the head of 
> cqlsh.py. Can anyone advise me?
>  






[jira] [Updated] (CASSANDRA-15182) cqlsh syntax error output fails with UnicodeEncodeError

2019-06-27 Thread Michael Shuler (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Shuler updated CASSANDRA-15182:
---
 Severity: Low
   Complexity: Low Hanging Fruit
Discovered By: User Report
 Bug Category: Parent values: Correctness(12982)Level 1 values: Semantic 
Failure(12988)
   Status: Open  (was: Triage Needed)

Reproduced in cassandra-2.2, cassandra-3.0, cassandra-3.11, and trunk branch 
HEADs.

> cqlsh syntax error output fails with UnicodeEncodeError
> ---
>
> Key: CASSANDRA-15182
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15182
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL/Interpreter
>Reporter: gloCalHelp.com
>Priority: Normal
>






[jira] [Updated] (CASSANDRA-15182) cqlsh syntax error output fails with UnicodeEncodeError

2019-06-27 Thread Michael Shuler (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Shuler updated CASSANDRA-15182:
---
Summary: cqlsh syntax error output fails with UnicodeEncodeError  (was: 
cqlsh syntax error output fails to )

> cqlsh syntax error output fails with UnicodeEncodeError
> ---
>
> Key: CASSANDRA-15182
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15182
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL/Interpreter
>Reporter: gloCalHelp.com
>Priority: Normal
>






[jira] [Updated] (CASSANDRA-15182) cqlsh syntax error output fails to

2019-06-27 Thread Michael Shuler (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Shuler updated CASSANDRA-15182:
---
Summary: cqlsh syntax error output fails to   (was: cqlsh utf_8.py, line 
16, in decode return codecs.utf_8_decode(input, errors, 
True):1:'ascii' codec can't encode character u'\u9ed1' in position 60: 
ordinal not in range(128))

> cqlsh syntax error output fails to 
> ---
>
> Key: CASSANDRA-15182
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15182
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL/Interpreter
>Reporter: gloCalHelp.com
>Priority: Normal
>






[jira] [Commented] (CASSANDRA-15182) cqlsh utf_8.py, line 16, in decode return codecs.utf_8_decode(input, errors, True):1:'ascii' codec can't encode character u'\u9ed1' in position 60: ordi

2019-06-27 Thread Michael Shuler (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16874266#comment-16874266
 ] 

Michael Shuler commented on CASSANDRA-15182:


Thanks Yuki, that helps! Simplified reproduction - CQL syntax is wrong, but the 
syntax error output encoding is problematic:
{noformat}
ccm create --version=git:cassandra-3.11 --nodes=1 --start test
ccm node1 cqlsh --debug -e "CREATE KEYSPACE test WITH replication = {'class': 
'SimpleStrategy', 'replication_factor': 1};"
ccm node1 cqlsh --debug -e "CREATE TABLE test.scratch (a text, b decimal, 
PRIMARY KEY (a));"
# show syntax error with INSERT:
ccm node1 cqlsh --debug -e "INSERT INTO test.scratch (a, b) VALUES 
('A00888D',);"
# show unicode problem when syntax error is output:
ccm node1 cqlsh --debug -e "INSERT INTO test.scratch (a, b) VALUES 
('黑A00888D',);"
{noformat}

Full output of above:
{noformat}
mshuler@hana:~$ ccm create --version=git:cassandra-3.11 --nodes=1 --start test
https://gitbox.apache.org/repos/asf/cassandra.git git:cassandra-3.11
10:22:53,842 ccm INFO Cloning Cassandra...
10:23:20,346 ccm INFO Cloning Cassandra (from local cache)
10:23:21,129 ccm INFO Checking out requested branch (cassandra-3.11)
10:23:21,927 ccm INFO Compiling Cassandra cassandra-3.11 ...
Current cluster is now: test
mshuler@hana:~$ ccm node1 cqlsh --debug -e "CREATE KEYSPACE test WITH 
replication = {'class': 'SimpleStrategy', 'replication_factor': 1};"
Using CQL driver: 
Using connect timeout: 5 seconds
Using 'utf-8' encoding
Using ssl: False
mshuler@hana:~$ ccm node1 cqlsh --debug -e "CREATE TABLE test.scratch (a text, 
b decimal, PRIMARY KEY (a));"
Using CQL driver: 
Using connect timeout: 5 seconds
Using 'utf-8' encoding
Using ssl: False
mshuler@hana:~$ ccm node1 cqlsh --debug -e "INSERT INTO test.scratch (a, b) 
VALUES ('A00888D',);"
Using CQL driver: 
Using connect timeout: 5 seconds
Using 'utf-8' encoding
Using ssl: False
:1:SyntaxException: line 1:50 no viable alternative at input ')' (..., 
b) VALUES ('A00888D',[)]...)
mshuler@hana:~$ ccm node1 cqlsh --debug -e "INSERT INTO test.scratch (a, b) 
VALUES ('黑A00888D',);"
Using CQL driver: 
Using connect timeout: 5 seconds
Using 'utf-8' encoding
Using ssl: False
Traceback (most recent call last):
  File "/home/mshuler/.ccm/repository/gitCOLONcassandra-3.11/bin/cqlsh.py", 
line 925, in onecmd
self.handle_statement(st, statementtext)
  File "/home/mshuler/.ccm/repository/gitCOLONcassandra-3.11/bin/cqlsh.py", 
line 965, in handle_statement
return self.perform_statement(cqlruleset.cql_extract_orig(tokens, srcstr))
  File "/home/mshuler/.ccm/repository/gitCOLONcassandra-3.11/bin/cqlsh.py", 
line 999, in perform_statement
success, future = self.perform_simple_statement(stmt)
  File "/home/mshuler/.ccm/repository/gitCOLONcassandra-3.11/bin/cqlsh.py", 
line 1052, in perform_simple_statement
self.printerr(unicode(err.__class__.__name__) + u": " + 
err.message.decode(encoding='utf-8'))
  File "/usr/lib/python2.7/encodings/utf_8.py", line 16, in decode
return codecs.utf_8_decode(input, errors, True)
UnicodeEncodeError: 'ascii' codec can't encode character u'\u9ed1' in position 
63: ordinal not in range(128)
{noformat}

> cqlsh utf_8.py, line 16, in decode return codecs.utf_8_decode(input, 
> errors, True):1:'ascii' codec can't encode character u'\u9ed1' in 
> position 60: ordinal not in range(128)
> 
>
> Key: CASSANDRA-15182
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15182
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL/Interpreter
>Reporter: gloCalHelp.com
>Priority: Normal
>

[jira] [Updated] (CASSANDRA-15178) Skipping illegal legacy cells can break reverse iteration of indexed partitions

2019-06-27 Thread Sam Tunnicliffe (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam Tunnicliffe updated CASSANDRA-15178:

  Fix Version/s: 3.11.5
 3.0.19
Source Control Link: 
https://github.com/apache/cassandra/commit/f4b6e1d51f683e0c77c6ff7f199373052b082b9e
  Since Version: 3.0.0
 Status: Resolved  (was: Ready to Commit)
 Resolution: Fixed

Thanks, committed to 3.0 in {{f4b6e1d51f683e0c77c6ff7f199373052b082b9e}} and 
merged to 3.11 and trunk (with {{-s ours}}).

> Skipping illegal legacy cells can break reverse iteration of indexed 
> partitions
> ---
>
> Key: CASSANDRA-15178
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15178
> Project: Cassandra
>  Issue Type: Bug
>  Components: Legacy/Local Write-Read Paths
>Reporter: Sam Tunnicliffe
>Assignee: Sam Tunnicliffe
>Priority: Normal
> Fix For: 3.0.19, 3.11.5
>
>
> The fix for CASSANDRA-15086 interacts badly with the accounting of bytes read 
> from disk when indexed partitions are read in reverse. The skipped columns 
> can cause the tracking of where CQL rows span index block boundaries to be 
> incorrectly calculated, leading to rows being missing from read results.






[cassandra] branch cassandra-3.11 updated (75fde25 -> 83baa27)

2019-06-27 Thread samt
This is an automated email from the ASF dual-hosted git repository.

samt pushed a change to branch cassandra-3.11
in repository https://gitbox.apache.org/repos/asf/cassandra.git.


from 75fde25  Merge branch 'cassandra-3.0' into cassandra-3.11
 new f4b6e1d  Filter illegal legacy cells when collating rows
 new 83baa27  Merge branch 'cassandra-3.0' into cassandra-3.11

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 CHANGES.txt|   1 +
 .../cassandra/db/IllegalLegacyColumnException.java |  41 
 src/java/org/apache/cassandra/db/LegacyLayout.java |  76 +-
 src/java/org/apache/cassandra/db/ReadCommand.java  |   2 +-
 .../cassandra/db/UnfilteredDeserializer.java   |   7 --
 .../cassandra/db/UnknownColumnException.java   |  16 ++-
 .../apache/cassandra/thrift/CassandraServer.java   |  34 +++
 .../apache/cassandra/thrift/ThriftValidation.java  |  10 +-
 ..._ka_with_illegal_cell_names_indexed-ka-1-CRC.db | Bin 0 -> 8 bytes
 ...ka_with_illegal_cell_names_indexed-ka-1-Data.db | Bin 0 -> 6487 bytes
 ...ith_illegal_cell_names_indexed-ka-1-Digest.sha1 |   1 +
 ..._with_illegal_cell_names_indexed-ka-1-Filter.db | Bin 0 -> 16 bytes
 ...a_with_illegal_cell_names_indexed-ka-1-Index.db | Bin 0 -> 453 bytes
 ..._illegal_cell_names_indexed-ka-1-Statistics.db} | Bin 4450 -> 4472 bytes
 ...a_with_illegal_cell_names_indexed-ka-1-TOC.txt} |   8 +-
 .../cql3/validation/ThriftIllegalColumnsTest.java  | 100 +++
 .../apache/cassandra/db/LegacyCellNameTest.java|  18 +++-
 .../cassandra/io/sstable/LegacySSTableTest.java| 109 -
 18 files changed, 269 insertions(+), 154 deletions(-)
 delete mode 100644 
src/java/org/apache/cassandra/db/IllegalLegacyColumnException.java
 create mode 100644 
test/data/legacy-sstables/ka/legacy_tables/legacy_ka_with_illegal_cell_names_indexed/legacy_tables-legacy_ka_with_illegal_cell_names_indexed-ka-1-CRC.db
 create mode 100644 
test/data/legacy-sstables/ka/legacy_tables/legacy_ka_with_illegal_cell_names_indexed/legacy_tables-legacy_ka_with_illegal_cell_names_indexed-ka-1-Data.db
 create mode 100644 
test/data/legacy-sstables/ka/legacy_tables/legacy_ka_with_illegal_cell_names_indexed/legacy_tables-legacy_ka_with_illegal_cell_names_indexed-ka-1-Digest.sha1
 create mode 100644 
test/data/legacy-sstables/ka/legacy_tables/legacy_ka_with_illegal_cell_names_indexed/legacy_tables-legacy_ka_with_illegal_cell_names_indexed-ka-1-Filter.db
 create mode 100644 
test/data/legacy-sstables/ka/legacy_tables/legacy_ka_with_illegal_cell_names_indexed/legacy_tables-legacy_ka_with_illegal_cell_names_indexed-ka-1-Index.db
 copy 
test/data/legacy-sstables/ka/legacy_tables/{legacy_ka_14766/legacy_tables-legacy_ka_14766-ka-1-Statistics.db
 => 
legacy_ka_with_illegal_cell_names_indexed/legacy_tables-legacy_ka_with_illegal_cell_names_indexed-ka-1-Statistics.db}
 (89%)
 copy 
test/data/legacy-sstables/ka/legacy_tables/{legacy_ka_with_illegal_cell_names/legacy_tables-legacy_ka_with_illegal_cell_names-ka-1-TOC.txt
 => 
legacy_ka_with_illegal_cell_names_indexed/legacy_tables-legacy_ka_with_illegal_cell_names_indexed-ka-1-TOC.txt}
 (100%)
 create mode 100644 
test/unit/org/apache/cassandra/cql3/validation/ThriftIllegalColumnsTest.java





[cassandra] 01/01: Merge branch 'cassandra-3.0' into cassandra-3.11

2019-06-27 Thread samt
This is an automated email from the ASF dual-hosted git repository.

samt pushed a commit to branch cassandra-3.11
in repository https://gitbox.apache.org/repos/asf/cassandra.git

commit 83baa27104ba4f8e20ec030c905a3110556bc629
Merge: 75fde25 f4b6e1d
Author: Sam Tunnicliffe 
AuthorDate: Thu Jun 27 16:25:43 2019 +0100

Merge branch 'cassandra-3.0' into cassandra-3.11

 CHANGES.txt|   1 +
 .../cassandra/db/IllegalLegacyColumnException.java |  41 
 src/java/org/apache/cassandra/db/LegacyLayout.java |  76 +-
 src/java/org/apache/cassandra/db/ReadCommand.java  |   2 +-
 .../cassandra/db/UnfilteredDeserializer.java   |   7 --
 .../cassandra/db/UnknownColumnException.java   |  16 ++-
 .../apache/cassandra/thrift/CassandraServer.java   |  34 +++
 .../apache/cassandra/thrift/ThriftValidation.java  |  10 +-
 ..._ka_with_illegal_cell_names_indexed-ka-1-CRC.db | Bin 0 -> 8 bytes
 ...ka_with_illegal_cell_names_indexed-ka-1-Data.db | Bin 0 -> 6487 bytes
 ...ith_illegal_cell_names_indexed-ka-1-Digest.sha1 |   1 +
 ..._with_illegal_cell_names_indexed-ka-1-Filter.db | Bin 0 -> 16 bytes
 ...a_with_illegal_cell_names_indexed-ka-1-Index.db | Bin 0 -> 453 bytes
 ...h_illegal_cell_names_indexed-ka-1-Statistics.db | Bin 0 -> 4472 bytes
 ...ka_with_illegal_cell_names_indexed-ka-1-TOC.txt |   8 ++
 .../cql3/validation/ThriftIllegalColumnsTest.java  | 100 +++
 .../apache/cassandra/db/LegacyCellNameTest.java|  18 +++-
 .../cassandra/io/sstable/LegacySSTableTest.java| 109 -
 18 files changed, 273 insertions(+), 150 deletions(-)

diff --cc CHANGES.txt
index e2aa652,d34406b..e433f7d
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,8 -1,5 +1,9 @@@
 -3.0.19
 +3.11.5
 + * Fix cassandra-env.sh to use $CASSANDRA_CONF to find cassandra-jaas.config 
(CASSANDRA-14305)
 + * Fixed nodetool cfstats printing index name twice (CASSANDRA-14903)
 + * Add flag to disable SASI indexes, and warnings on creation 
(CASSANDRA-14866)
 +Merged from 3.0:
+  * Skipping illegal legacy cells can break reverse iteration of indexed 
partitions (CASSANDRA-15178)
   * Handle paging states serialized with a different version than the 
session's (CASSANDRA-15176)
   * Throw IOE instead of asserting on unsupporter peer versions 
(CASSANDRA-15066)
   * Update token metadata when handling MOVING/REMOVING_TOKEN events 
(CASSANDRA-15120)
diff --cc src/java/org/apache/cassandra/db/LegacyLayout.java
index 50fd945,b03f56e..765d39b
--- a/src/java/org/apache/cassandra/db/LegacyLayout.java
+++ b/src/java/org/apache/cassandra/db/LegacyLayout.java
@@@ -185,22 -184,16 +185,16 @@@ public abstract class LegacyLayou
  return new LegacyCellName(clustering, null, null);
  
  ColumnDefinition def = metadata.getColumnDefinition(column);
- if ((def == null) || def.isPrimaryKeyColumn())
+ 
+ if (metadata.isCompactTable())
  {
- // If it's a compact table, it means the column is in fact a 
"dynamic" one
- if (metadata.isCompactTable())
+ if (def == null || def.isPrimaryKeyColumn())
+ // If it's a compact table, it means the column is in fact a 
"dynamic" one
 -return new LegacyCellName(new Clustering(column), 
metadata.compactValueColumn(), null);
 +return new LegacyCellName(Clustering.make(column), 
metadata.compactValueColumn(), null);
- 
- if (def == null)
- {
- throw new UnknownColumnException(metadata, column);
- }
- else
- {
- noSpamLogger.warn("Illegal cell name for CQL3 table {}.{}. {} 
is defined as a primary key column",
-  metadata.ksName, metadata.cfName, 
stringify(column));
- throw new IllegalLegacyColumnException(metadata, column);
- }
+ }
+ else if (def == null)
+ {
+ throw new UnknownColumnException(metadata, column);
  }
  
  ByteBuffer collectionElement = metadata.isCompound() ? 
CompositeType.extractComponent(cellname, metadata.comparator.size() + 1) : null;
diff --cc src/java/org/apache/cassandra/thrift/CassandraServer.java
index 868f937,163eb2d..444a938
--- a/src/java/org/apache/cassandra/thrift/CassandraServer.java
+++ b/src/java/org/apache/cassandra/thrift/CassandraServer.java
@@@ -2203,9 -2176,9 +2203,9 @@@ public class CassandraServer implement
  PartitionUpdate update = 
PartitionUpdate.singleRowUpdate(metadata, key, 
BTreeRow.singleCellRow(name.clustering, cell));
  
  org.apache.cassandra.db.Mutation mutation = new 
org.apache.cassandra.db.Mutation(update);
 -doInsert(consistency_level, Arrays.asList(new 
CounterMutation(mutation, ThriftConversion.fromThrift(consistency_level;
 +doInsert(consistency_level, Arrays.asList(new 
CounterMutation(mutation, 

[cassandra] branch trunk updated (883b0d8 -> 0273250)

2019-06-27 Thread samt
This is an automated email from the ASF dual-hosted git repository.

samt pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/cassandra.git.


from 883b0d8  Merge branch 'cassandra-3.11' into trunk
 new f4b6e1d  Filter illegal legacy cells when collating rows
 new 83baa27  Merge branch 'cassandra-3.0' into cassandra-3.11
 new 0273250  Merge branch 'cassandra-3.11' into trunk

The 3 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:





[cassandra] 01/01: Merge branch 'cassandra-3.11' into trunk

2019-06-27 Thread samt
This is an automated email from the ASF dual-hosted git repository.

samt pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/cassandra.git

commit 02732501a1a81f488695de53c16f432e94bbe7fa
Merge: 883b0d8 83baa27
Author: Sam Tunnicliffe 
AuthorDate: Thu Jun 27 16:27:28 2019 +0100

Merge branch 'cassandra-3.11' into trunk






[cassandra] branch cassandra-3.0 updated: Filter illegal legacy cells when collating rows

2019-06-27 Thread samt
This is an automated email from the ASF dual-hosted git repository.

samt pushed a commit to branch cassandra-3.0
in repository https://gitbox.apache.org/repos/asf/cassandra.git


The following commit(s) were added to refs/heads/cassandra-3.0 by this push:
 new f4b6e1d  Filter illegal legacy cells when collating rows
f4b6e1d is described below

commit f4b6e1d51f683e0c77c6ff7f199373052b082b9e
Author: Sam Tunnicliffe 
AuthorDate: Thu Jun 13 14:44:26 2019 +0100

Filter illegal legacy cells when collating rows

Alternative solution for CASSANDRA-15086, which allows the illegal cells to
be read from disk and deserialized as normal so as not to interfere with
tracking bytes read during reverse iteration of indexed partitions.

Patch by Sam Tunnicliffe; reviewed by Marcus Eriksson for CASSANDRA-15178
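
To make the approach concrete, here is a rough, self-contained sketch of "filter while collating", using hypothetical classes and names rather than the LegacyLayout/collation code actually touched by this patch: cells are deserialized as usual, so bytes-read accounting during reverse iteration stays intact, and cells whose names resolve to primary-key columns are dropped only when rows are assembled.

{code:java}
import java.util.List;
import java.util.stream.Collectors;

// Hypothetical, simplified model -- not the classes touched by the patch above.
public class IllegalCellFilterSketch
{
    static class ColumnDef
    {
        final String name;
        final boolean primaryKey;
        ColumnDef(String name, boolean primaryKey) { this.name = name; this.primaryKey = primaryKey; }
    }

    static class LegacyCell
    {
        final ColumnDef column; // null when the cell name resolves to no known column
        final String value;
        LegacyCell(ColumnDef column, String value) { this.column = column; this.value = value; }
    }

    // Deserialize everything as-is (so bytes-read tracking during reverse iteration of
    // indexed partitions stays correct), then drop the illegal cells while collating rows.
    static List<LegacyCell> collate(List<LegacyCell> deserialized)
    {
        return deserialized.stream()
                           .filter(c -> {
                               boolean illegal = c.column != null && c.column.primaryKey;
                               if (illegal)
                                   System.err.println("Skipping illegal cell named after primary key column " + c.column.name);
                               return !illegal;
                           })
                           .collect(Collectors.toList());
    }
}
{code}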
---
 CHANGES.txt|   1 +
 .../cassandra/db/IllegalLegacyColumnException.java |  41 
 src/java/org/apache/cassandra/db/LegacyLayout.java |  76 +-
 src/java/org/apache/cassandra/db/ReadCommand.java  |   2 +-
 .../cassandra/db/UnfilteredDeserializer.java   |   7 --
 .../cassandra/db/UnknownColumnException.java   |  16 ++-
 .../apache/cassandra/thrift/CassandraServer.java   |  34 +++
 .../apache/cassandra/thrift/ThriftValidation.java  |  10 +-
 ..._ka_with_illegal_cell_names_indexed-ka-1-CRC.db | Bin 0 -> 8 bytes
 ...ka_with_illegal_cell_names_indexed-ka-1-Data.db | Bin 0 -> 6487 bytes
 ...ith_illegal_cell_names_indexed-ka-1-Digest.sha1 |   1 +
 ..._with_illegal_cell_names_indexed-ka-1-Filter.db | Bin 0 -> 16 bytes
 ...a_with_illegal_cell_names_indexed-ka-1-Index.db | Bin 0 -> 453 bytes
 ...h_illegal_cell_names_indexed-ka-1-Statistics.db | Bin 0 -> 4472 bytes
 ...ka_with_illegal_cell_names_indexed-ka-1-TOC.txt |   8 ++
 .../cql3/validation/ThriftIllegalColumnsTest.java  | 100 +++
 .../apache/cassandra/db/LegacyCellNameTest.java|  18 +++-
 .../cassandra/io/sstable/LegacySSTableTest.java| 109 -
 18 files changed, 273 insertions(+), 150 deletions(-)

diff --git a/CHANGES.txt b/CHANGES.txt
index eae2815..d34406b 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.19
+ * Skipping illegal legacy cells can break reverse iteration of indexed 
partitions (CASSANDRA-15178)
  * Handle paging states serialized with a different version than the session's 
(CASSANDRA-15176)
  * Throw IOE instead of asserting on unsupporter peer versions 
(CASSANDRA-15066)
  * Update token metadata when handling MOVING/REMOVING_TOKEN events 
(CASSANDRA-15120)
diff --git a/src/java/org/apache/cassandra/db/IllegalLegacyColumnException.java 
b/src/java/org/apache/cassandra/db/IllegalLegacyColumnException.java
deleted file mode 100644
index b70d248..000
--- a/src/java/org/apache/cassandra/db/IllegalLegacyColumnException.java
+++ /dev/null
@@ -1,41 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.cassandra.db;
-
-import java.nio.ByteBuffer;
-
-import org.apache.cassandra.config.CFMetaData;
-
-import static org.apache.cassandra.db.LegacyLayout.stringify;
-
-/**
- * Exception thrown when we attempt to decode a legacy cellname
- * and the column name component refers to a primary key column.
- */
-public class IllegalLegacyColumnException extends Exception
-{
-public final ByteBuffer columnName;
-
-public IllegalLegacyColumnException(CFMetaData metaData, ByteBuffer 
columnName)
-{
-super(String.format("Illegal cell name for CQL3 table %s.%s. %s is 
defined as a primary key column",
-metaData.ksName, metaData.cfName, 
stringify(columnName)));
-this.columnName = columnName;
-}
-}
diff --git a/src/java/org/apache/cassandra/db/LegacyLayout.java 
b/src/java/org/apache/cassandra/db/LegacyLayout.java
index cfaa71f..b03f56e 100644
--- a/src/java/org/apache/cassandra/db/LegacyLayout.java
+++ b/src/java/org/apache/cassandra/db/LegacyLayout.java
@@ -124,7 +124,7 @@ public abstract class LegacyLayout
 }
 
 public static LegacyCellName decodeCellName(CFMetaData metadata, 
ByteBuffer superColumnName, ByteBuffer cellname)
-throws UnknownColumnException, 

[jira] [Updated] (CASSANDRA-15185) Update quilt patches for 4.0, 3.11, 3.0

2019-06-27 Thread Michael Shuler (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Shuler updated CASSANDRA-15185:
---
Status: Resolved  (was: Ready to Commit)
Resolution: Fixed

dpatch is used in 3.0/3.11; fixed those branches up and did a `-s ours` merge in trunk.
https://github.com/apache/cassandra/commit/dc23631ab17c2c0cc21b05732f905bb23cb67682

> Update quilt patches for 4.0, 3.11, 3.0
> ---
>
> Key: CASSANDRA-15185
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15185
> Project: Cassandra
>  Issue Type: Task
>  Components: Build, Packaging
>Reporter: Michael Shuler
>Assignee: Michael Shuler
>Priority: High
> Fix For: 4.0
>
> Attachments: 0001-Update-log-directory-patch-for-4.0-deb-package.patch
>
>
> {noformat}
> (trunk)mshuler@hana:~/git/cassandra$ dpkg-buildpackage -uc -us
> <...>
> dpkg-source: info: building cassandra in cassandra_4.0.dsc
>  debian/rules build
> QUILT_PATCHES=debian/patches \
> quilt --quiltrc /dev/null push -a || test $? = 2
> Applying patch cassandra_logdir_fix.diff
> patching file bin/cassandra
> Hunk #1 FAILED at 171.
> 1 out of 1 hunk FAILED -- rejects in file bin/cassandra
> patching file conf/cassandra-env.sh
> Patch cassandra_logdir_fix.diff does not apply (enforce with -f)
> make: *** [/usr/share/quilt/quilt.make:18: debian/stamp-patched] Error 1
> dpkg-buildpackage: error: debian/rules build subprocess returned exit status 2
> {noformat}






[jira] [Updated] (CASSANDRA-15185) Update quilt patches for 4.0, 3.11, 3.0

2019-06-27 Thread Michael Shuler (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Shuler updated CASSANDRA-15185:
---
Status: Review In Progress  (was: Patch Available)

> Update quilt patches for 4.0, 3.11, 3.0
> ---
>
> Key: CASSANDRA-15185
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15185
> Project: Cassandra
>  Issue Type: Task
>  Components: Build, Packaging
>Reporter: Michael Shuler
>Assignee: Michael Shuler
>Priority: High
> Fix For: 4.0
>
> Attachments: 0001-Update-log-directory-patch-for-4.0-deb-package.patch
>
>
> {noformat}
> (trunk)mshuler@hana:~/git/cassandra$ dpkg-buildpackage -uc -us
> <...>
> dpkg-source: info: building cassandra in cassandra_4.0.dsc
>  debian/rules build
> QUILT_PATCHES=debian/patches \
> quilt --quiltrc /dev/null push -a || test $? = 2
> Applying patch cassandra_logdir_fix.diff
> patching file bin/cassandra
> Hunk #1 FAILED at 171.
> 1 out of 1 hunk FAILED -- rejects in file bin/cassandra
> patching file conf/cassandra-env.sh
> Patch cassandra_logdir_fix.diff does not apply (enforce with -f)
> make: *** [/usr/share/quilt/quilt.make:18: debian/stamp-patched] Error 1
> dpkg-buildpackage: error: debian/rules build subprocess returned exit status 2
> {noformat}






[jira] [Updated] (CASSANDRA-15185) Update quilt patches for 4.0, 3.11, 3.0

2019-06-27 Thread Michael Shuler (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Shuler updated CASSANDRA-15185:
---
Status: Patch Available  (was: In Progress)

> Update quilt patches for 4.0, 3.11, 3.0
> ---
>
> Key: CASSANDRA-15185
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15185
> Project: Cassandra
>  Issue Type: Task
>  Components: Build, Packaging
>Reporter: Michael Shuler
>Assignee: Michael Shuler
>Priority: High
> Fix For: 4.0
>
> Attachments: 0001-Update-log-directory-patch-for-4.0-deb-package.patch
>
>
> {noformat}
> (trunk)mshuler@hana:~/git/cassandra$ dpkg-buildpackage -uc -us
> <...>
> dpkg-source: info: building cassandra in cassandra_4.0.dsc
>  debian/rules build
> QUILT_PATCHES=debian/patches \
> quilt --quiltrc /dev/null push -a || test $? = 2
> Applying patch cassandra_logdir_fix.diff
> patching file bin/cassandra
> Hunk #1 FAILED at 171.
> 1 out of 1 hunk FAILED -- rejects in file bin/cassandra
> patching file conf/cassandra-env.sh
> Patch cassandra_logdir_fix.diff does not apply (enforce with -f)
> make: *** [/usr/share/quilt/quilt.make:18: debian/stamp-patched] Error 1
> dpkg-buildpackage: error: debian/rules build subprocess returned exit status 2
> {noformat}






[jira] [Updated] (CASSANDRA-15185) Update quilt patches for 4.0, 3.11, 3.0

2019-06-27 Thread Michael Shuler (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Shuler updated CASSANDRA-15185:
---
Status: Ready to Commit  (was: Review In Progress)

> Update quilt patches for 4.0, 3.11, 3.0
> ---
>
> Key: CASSANDRA-15185
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15185
> Project: Cassandra
>  Issue Type: Task
>  Components: Build, Packaging
>Reporter: Michael Shuler
>Assignee: Michael Shuler
>Priority: High
> Fix For: 4.0
>
> Attachments: 0001-Update-log-directory-patch-for-4.0-deb-package.patch
>
>
> {noformat}
> (trunk)mshuler@hana:~/git/cassandra$ dpkg-buildpackage -uc -us
> <...>
> dpkg-source: info: building cassandra in cassandra_4.0.dsc
>  debian/rules build
> QUILT_PATCHES=debian/patches \
> quilt --quiltrc /dev/null push -a || test $? = 2
> Applying patch cassandra_logdir_fix.diff
> patching file bin/cassandra
> Hunk #1 FAILED at 171.
> 1 out of 1 hunk FAILED -- rejects in file bin/cassandra
> patching file conf/cassandra-env.sh
> Patch cassandra_logdir_fix.diff does not apply (enforce with -f)
> make: *** [/usr/share/quilt/quilt.make:18: debian/stamp-patched] Error 1
> dpkg-buildpackage: error: debian/rules build subprocess returned exit status 2
> {noformat}






[cassandra] branch cassandra-3.11 updated (5514696 -> 75fde25)

2019-06-27 Thread mshuler
This is an automated email from the ASF dual-hosted git repository.

mshuler pushed a change to branch cassandra-3.11
in repository https://gitbox.apache.org/repos/asf/cassandra.git.


from 5514696  Merge branch 'cassandra-3.0' into cassandra-3.11
 new dc23631  Update log directory patch for deb package
 new 75fde25  Merge branch 'cassandra-3.0' into cassandra-3.11

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 debian/patches/002cassandra_logdir_fix.dpatch | 26 +-
 1 file changed, 13 insertions(+), 13 deletions(-)
 mode change 100644 => 100755 debian/patches/002cassandra_logdir_fix.dpatch





[cassandra] 01/01: Merge branch 'cassandra-3.0' into cassandra-3.11

2019-06-27 Thread mshuler
This is an automated email from the ASF dual-hosted git repository.

mshuler pushed a commit to branch cassandra-3.11
in repository https://gitbox.apache.org/repos/asf/cassandra.git

commit 75fde25a75d97a558d69e8083000ce225c3251bb
Merge: 5514696 dc23631
Author: Michael Shuler 
AuthorDate: Thu Jun 27 09:52:42 2019 -0500

Merge branch 'cassandra-3.0' into cassandra-3.11

 debian/patches/002cassandra_logdir_fix.dpatch | 26 +-
 1 file changed, 13 insertions(+), 13 deletions(-)





[cassandra] 01/01: Merge branch 'cassandra-3.11' into trunk

2019-06-27 Thread mshuler
This is an automated email from the ASF dual-hosted git repository.

mshuler pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/cassandra.git

commit 883b0d8cdd4ba3c97a02b1b5165021ccd283
Merge: 2ed2b87 75fde25
Author: Michael Shuler 
AuthorDate: Thu Jun 27 09:55:07 2019 -0500

Merge branch 'cassandra-3.11' into trunk






[cassandra] branch trunk updated (2ed2b87 -> 883b0d8)

2019-06-27 Thread mshuler
This is an automated email from the ASF dual-hosted git repository.

mshuler pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/cassandra.git.


from 2ed2b87  Fix accessing java.nio.Bits.totalCapacity in Java11
 new dc23631  Update log directory patch for deb package
 new 75fde25  Merge branch 'cassandra-3.0' into cassandra-3.11
 new 883b0d8  Merge branch 'cassandra-3.11' into trunk

The 3 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:





[cassandra] branch cassandra-3.0 updated: Update log directory patch for deb package

2019-06-27 Thread mshuler
This is an automated email from the ASF dual-hosted git repository.

mshuler pushed a commit to branch cassandra-3.0
in repository https://gitbox.apache.org/repos/asf/cassandra.git


The following commit(s) were added to refs/heads/cassandra-3.0 by this push:
 new dc23631  Update log directory patch for deb package
dc23631 is described below

commit dc23631ab17c2c0cc21b05732f905bb23cb67682
Author: Michael Shuler 
AuthorDate: Thu Jun 27 09:49:43 2019 -0500

Update log directory patch for deb package

Patch by Michael Shuler; Reviewed by Jon Haddad for CASSANDRA-15185
---
 debian/patches/002cassandra_logdir_fix.dpatch | 26 +-
 1 file changed, 13 insertions(+), 13 deletions(-)

diff --git a/debian/patches/002cassandra_logdir_fix.dpatch 
b/debian/patches/002cassandra_logdir_fix.dpatch
old mode 100644
new mode 100755
index 87387b9..5e76b91
--- a/debian/patches/002cassandra_logdir_fix.dpatch
+++ b/debian/patches/002cassandra_logdir_fix.dpatch
@@ -6,21 +6,21 @@
 
 @DPATCH@
 diff -urNad '--exclude=CVS' '--exclude=.svn' '--exclude=.git' 
'--exclude=.arch' '--exclude=.hg' '--exclude=_darcs' '--exclude=.bzr' 
cassandra~/bin/cassandra cassandra/bin/cassandra
 cassandra~/bin/cassandra   2015-10-27 14:35:22.0 -0500
-+++ cassandra/bin/cassandra2015-10-27 14:41:38.0 -0500
-@@ -139,7 +139,7 @@
- props="$3"
- class="$4"
- cassandra_parms="-Dlogback.configurationFile=logback.xml"
--cassandra_parms="$cassandra_parms -Dcassandra.logdir=$CASSANDRA_HOME/logs"
-+cassandra_parms="$cassandra_parms -Dcassandra.logdir=/var/log/cassandra"
- cassandra_parms="$cassandra_parms 
-Dcassandra.storagedir=$cassandra_storagedir"
+--- cassandra~/bin/cassandra   2019-06-27 09:35:32.0 -0500
 cassandra/bin/cassandra2019-06-27 09:43:28.756343141 -0500
+@@ -127,7 +127,7 @@
+ fi
  
- if [ "x$pidpath" != "x" ]; then
+ if [ -z "$CASSANDRA_LOG_DIR" ]; then
+-  CASSANDRA_LOG_DIR=$CASSANDRA_HOME/logs
++  CASSANDRA_LOG_DIR=/var/log/cassandra
+ fi
+ 
+ # Special-case path variables.
 diff -urNad '--exclude=CVS' '--exclude=.svn' '--exclude=.git' 
'--exclude=.arch' '--exclude=.hg' '--exclude=_darcs' '--exclude=.bzr' 
cassandra~/conf/cassandra-env.sh cassandra/conf/cassandra-env.sh
 cassandra~/conf/cassandra-env.sh   2015-10-27 14:40:39.0 -0500
-+++ cassandra/conf/cassandra-env.sh2015-10-27 14:42:40.647449856 -0500
-@@ -204,7 +204,7 @@
+--- cassandra~/conf/cassandra-env.sh   2019-06-27 09:35:32.0 -0500
 cassandra/conf/cassandra-env.sh2019-06-27 09:42:25.747715490 -0500
+@@ -122,7 +122,7 @@
  esac
  
  #GC log path has to be defined here because it needs to access CASSANDRA_HOME





[jira] [Updated] (CASSANDRA-15185) Update quilt patches for 4.0, 3.11, 3.0

2019-06-27 Thread Michael Shuler (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Shuler updated CASSANDRA-15185:
---
Status: Open  (was: Resolved)

CASSANDRA-15090 was committed to 3.11 and 3.0 branches, as well, so we'll need 
to commit the same patch update and merge up. Sorry I missed that - thought 
this was just trunk yesterday.

> Update quilt patches for 4.0, 3.11, 3.0
> ---
>
> Key: CASSANDRA-15185
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15185
> Project: Cassandra
>  Issue Type: Task
>  Components: Build, Packaging
>Reporter: Michael Shuler
>Assignee: Michael Shuler
>Priority: High
> Fix For: 4.0
>
> Attachments: 0001-Update-log-directory-patch-for-4.0-deb-package.patch
>
>
> {noformat}
> (trunk)mshuler@hana:~/git/cassandra$ dpkg-buildpackage -uc -us
> <...>
> dpkg-source: info: building cassandra in cassandra_4.0.dsc
>  debian/rules build
> QUILT_PATCHES=debian/patches \
> quilt --quiltrc /dev/null push -a || test $? = 2
> Applying patch cassandra_logdir_fix.diff
> patching file bin/cassandra
> Hunk #1 FAILED at 171.
> 1 out of 1 hunk FAILED -- rejects in file bin/cassandra
> patching file conf/cassandra-env.sh
> Patch cassandra_logdir_fix.diff does not apply (enforce with -f)
> make: *** [/usr/share/quilt/quilt.make:18: debian/stamp-patched] Error 1
> dpkg-buildpackage: error: debian/rules build subprocess returned exit status 2
> {noformat}






[jira] [Updated] (CASSANDRA-15185) Update quilt patches for 4.0, 3.11, 3.0

2019-06-27 Thread Michael Shuler (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Shuler updated CASSANDRA-15185:
---
Summary: Update quilt patches for 4.0, 3.11, 3.0  (was: Update quilt 
patches for 4.0)

> Update quilt patches for 4.0, 3.11, 3.0
> ---
>
> Key: CASSANDRA-15185
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15185
> Project: Cassandra
>  Issue Type: Task
>  Components: Build, Packaging
>Reporter: Michael Shuler
>Assignee: Michael Shuler
>Priority: High
> Fix For: 4.0
>
> Attachments: 0001-Update-log-directory-patch-for-4.0-deb-package.patch
>
>
> {noformat}
> (trunk)mshuler@hana:~/git/cassandra$ dpkg-buildpackage -uc -us
> <...>
> dpkg-source: info: building cassandra in cassandra_4.0.dsc
>  debian/rules build
> QUILT_PATCHES=debian/patches \
> quilt --quiltrc /dev/null push -a || test $? = 2
> Applying patch cassandra_logdir_fix.diff
> patching file bin/cassandra
> Hunk #1 FAILED at 171.
> 1 out of 1 hunk FAILED -- rejects in file bin/cassandra
> patching file conf/cassandra-env.sh
> Patch cassandra_logdir_fix.diff does not apply (enforce with -f)
> make: *** [/usr/share/quilt/quilt.make:18: debian/stamp-patched] Error 1
> dpkg-buildpackage: error: debian/rules build subprocess returned exit status 2
> {noformat}






[jira] [Commented] (CASSANDRA-15013) Message Flusher queue can grow unbounded, potentially running JVM out of memory

2019-06-27 Thread Benedict (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16874184#comment-16874184
 ] 

Benedict commented on CASSANDRA-15013:
--

Yes, I guess 3GiB per IP is probably too high, as is 5GiB per node.  Not really 
sure what a good default is - probably it should be a function of heap size 
like most of our other limits.
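
For illustration, one hypothetical way such heap-relative defaults could be computed; the variable names echo the config options quoted later in this thread, and the fractions are made up rather than anything proposed on this ticket:

{code:java}
// Hypothetical sketch: derive the in-flight byte limits from the max heap size
// instead of hard-coding absolute byte counts. The fractions are illustrative only.
public class InflightLimitDefaults
{
    public static void main(String[] args)
    {
        long maxHeap = Runtime.getRuntime().maxMemory();

        long nativeTransportMaxConcurrentRequestsInBytes = maxHeap / 10;      // ~10% of heap overall
        long nativeTransportMaxConcurrentRequestsInBytesPerIp = maxHeap / 40; // ~2.5% of heap per client IP

        System.out.printf("max heap=%d, overall limit=%d, per-ip limit=%d%n",
                          maxHeap,
                          nativeTransportMaxConcurrentRequestsInBytes,
                          nativeTransportMaxConcurrentRequestsInBytesPerIp);
    }
}
{code}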

> Message Flusher queue can grow unbounded, potentially running JVM out of 
> memory
> ---
>
> Key: CASSANDRA-15013
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15013
> Project: Cassandra
>  Issue Type: Bug
>  Components: Messaging/Client
>Reporter: Sumanth Pasupuleti
>Assignee: Sumanth Pasupuleti
>Priority: Normal
>  Labels: pull-request-available
> Fix For: 4.0, 3.0.x, 3.11.x
>
> Attachments: BlockedEpollEventLoopFromHeapDump.png, 
> BlockedEpollEventLoopFromThreadDump.png, RequestExecutorQueueFull.png, heap 
> dump showing each ImmediateFlusher taking upto 600MB.png
>
>
> This is a follow-up ticket out of CASSANDRA-14855, to make the Flusher queue 
> bounded, since, in the current state, items get added to the queue without 
> any checks on queue size, nor with any checks on netty outbound buffer to 
> check the isWritable state.
> We are seeing this issue hit our production 3.0 clusters quite often.






[jira] [Commented] (CASSANDRA-15013) Message Flusher queue can grow unbounded, potentially running JVM out of memory

2019-06-27 Thread Sumanth Pasupuleti (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16874183#comment-16874183
 ] 

Sumanth Pasupuleti commented on CASSANDRA-15013:


Thanks for catching the {{channelInactive}} case. lgtm.

One thing remaining I suppose, is to revisit the defaults 
[https://github.com/apache/cassandra/commit/98126f5d887228f5e88eca66f007873b52a0aacf#diff-b66584c9ce7b64019b5db5a531deeda1R173]
{code:java}
// TODO: Revisit limit
public volatile long native_transport_max_concurrent_requests_in_bytes_per_ip = 
30L;
public volatile long native_transport_max_concurrent_requests_in_bytes = 
50L;{code}

> Message Flusher queue can grow unbounded, potentially running JVM out of 
> memory
> ---
>
> Key: CASSANDRA-15013
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15013
> Project: Cassandra
>  Issue Type: Bug
>  Components: Messaging/Client
>Reporter: Sumanth Pasupuleti
>Assignee: Sumanth Pasupuleti
>Priority: Normal
>  Labels: pull-request-available
> Fix For: 4.0, 3.0.x, 3.11.x
>
> Attachments: BlockedEpollEventLoopFromHeapDump.png, 
> BlockedEpollEventLoopFromThreadDump.png, RequestExecutorQueueFull.png, heap 
> dump showing each ImmediateFlusher taking upto 600MB.png
>
>
> This is a follow-up ticket out of CASSANDRA-14855, to make the Flusher queue 
> bounded, since, in the current state, items get added to the queue without 
> any checks on queue size, nor with any checks on netty outbound buffer to 
> check the isWritable state.
> We are seeing this issue hit our production 3.0 clusters quite often.






[jira] [Comment Edited] (CASSANDRA-15013) Message Flusher queue can grow unbounded, potentially running JVM out of memory

2019-06-27 Thread Sumanth Pasupuleti (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16874183#comment-16874183
 ] 

Sumanth Pasupuleti edited comment on CASSANDRA-15013 at 6/27/19 2:12 PM:
-

Thanks for catching the {{channelInactive}} case. lgtm.

One thing remaining I suppose, is to revisit the default limits 
[https://github.com/apache/cassandra/commit/98126f5d887228f5e88eca66f007873b52a0aacf#diff-b66584c9ce7b64019b5db5a531deeda1R173]
{code:java}
// TODO: Revisit limit
public volatile long native_transport_max_concurrent_requests_in_bytes_per_ip = 
30L;
public volatile long native_transport_max_concurrent_requests_in_bytes = 
50L;{code}


was (Author: sumanth.pasupuleti):
Thanks for catching the {{channelInactive}} case. lgtm.

One thing remaining I suppose, is to revisit the defaults 
[https://github.com/apache/cassandra/commit/98126f5d887228f5e88eca66f007873b52a0aacf#diff-b66584c9ce7b64019b5db5a531deeda1R173]
{code:java}
// TODO: Revisit limit
public volatile long native_transport_max_concurrent_requests_in_bytes_per_ip = 
30L;
public volatile long native_transport_max_concurrent_requests_in_bytes = 
50L;{code}

> Message Flusher queue can grow unbounded, potentially running JVM out of 
> memory
> ---
>
> Key: CASSANDRA-15013
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15013
> Project: Cassandra
>  Issue Type: Bug
>  Components: Messaging/Client
>Reporter: Sumanth Pasupuleti
>Assignee: Sumanth Pasupuleti
>Priority: Normal
>  Labels: pull-request-available
> Fix For: 4.0, 3.0.x, 3.11.x
>
> Attachments: BlockedEpollEventLoopFromHeapDump.png, 
> BlockedEpollEventLoopFromThreadDump.png, RequestExecutorQueueFull.png, heap 
> dump showing each ImmediateFlusher taking upto 600MB.png
>
>
> This is a follow-up ticket out of CASSANDRA-14855, to make the Flusher queue 
> bounded, since, in the current state, items get added to the queue without 
> any checks on queue size, nor with any checks on netty outbound buffer to 
> check the isWritable state.
> We are seeing this issue hit our production 3.0 clusters quite often.






[jira] [Commented] (CASSANDRA-15013) Message Flusher queue can grow unbounded, potentially running JVM out of memory

2019-06-27 Thread Benedict (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16874159#comment-16874159
 ] 

Benedict commented on CASSANDRA-15013:
--

Thanks [~sumanth.pasupuleti].  I realised I had forgotten to handle the case of 
{{channelInactive}} whilst paused, so I've pushed a tiny follow-up 
modification.  If it looks good to you, I'll merge the lot into trunk.
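
For readers following along, a much-simplified, hypothetical Netty handler sketching the general shape of the backpressure and the channelInactive-while-paused case being discussed (not the actual patch):

{code:java}
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

// Hypothetical backpressure handler: pause reads while the outbound buffer is not
// writable, resume when it becomes writable again, and clear the paused state if
// the channel goes inactive in the meantime. Not the code from this ticket.
public class PausingInboundHandler extends ChannelInboundHandlerAdapter
{
    private boolean paused;

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception
    {
        // Stop accepting new requests once the outbound buffer reports it is full.
        if (!ctx.channel().isWritable() && !paused)
        {
            paused = true;
            ctx.channel().config().setAutoRead(false);
        }
        ctx.fireChannelRead(msg);
    }

    @Override
    public void channelWritabilityChanged(ChannelHandlerContext ctx) throws Exception
    {
        // Resume reads once the client has drained enough of the outbound buffer.
        if (paused && ctx.channel().isWritable())
        {
            paused = false;
            ctx.channel().config().setAutoRead(true);
        }
        ctx.fireChannelWritabilityChanged();
    }

    @Override
    public void channelInactive(ChannelHandlerContext ctx) throws Exception
    {
        // The case called out above: if the connection dies while paused, make sure
        // any per-channel state is released rather than leaked.
        paused = false;
        ctx.fireChannelInactive();
    }
}
{code}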

> Message Flusher queue can grow unbounded, potentially running JVM out of 
> memory
> ---
>
> Key: CASSANDRA-15013
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15013
> Project: Cassandra
>  Issue Type: Bug
>  Components: Messaging/Client
>Reporter: Sumanth Pasupuleti
>Assignee: Sumanth Pasupuleti
>Priority: Normal
>  Labels: pull-request-available
> Fix For: 4.0, 3.0.x, 3.11.x
>
> Attachments: BlockedEpollEventLoopFromHeapDump.png, 
> BlockedEpollEventLoopFromThreadDump.png, RequestExecutorQueueFull.png, heap 
> dump showing each ImmediateFlusher taking upto 600MB.png
>
>
> This is a follow-up ticket out of CASSANDRA-14855, to make the Flusher queue 
> bounded, since, in the current state, items get added to the queue without 
> any checks on queue size, nor with any checks on netty outbound buffer to 
> check the isWritable state.
> We are seeing this issue hit our production 3.0 clusters quite often.






[jira] [Created] (CASSANDRA-15186) InternodeOutboundMetrics overloaded bytes/count mixup

2019-06-27 Thread Marcus Olsson (JIRA)
Marcus Olsson created CASSANDRA-15186:
-

 Summary: InternodeOutboundMetrics overloaded bytes/count mixup
 Key: CASSANDRA-15186
 URL: https://issues.apache.org/jira/browse/CASSANDRA-15186
 Project: Cassandra
  Issue Type: Bug
  Components: Observability/Metrics
Reporter: Marcus Olsson


In 
[https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/metrics/InternodeOutboundMetrics.java]
 there is a small mixup between overloaded count and bytes, in 
[LargeMessageDroppedTasksDueToOverload|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/metrics/InternodeOutboundMetrics.java#L129]
 and 
[UrgentMessageDroppedTasksDueToOverload|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/metrics/InternodeOutboundMetrics.java#L151].
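
As a generic illustration of the kind of wiring the ticket describes (Dropwizard-style metrics with hypothetical field and gauge names, not the actual InternodeOutboundMetrics code), the count gauge and the bytes gauge must each read their own source; the report is that the Large/Urgent variants have these swapped:

{code:java}
import com.codahale.metrics.Gauge;
import com.codahale.metrics.MetricRegistry;
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch of registering separate "dropped count" and "dropped bytes" gauges.
public class OverloadGaugesSketch
{
    final AtomicLong droppedDueToOverload = new AtomicLong();      // number of messages
    final AtomicLong droppedBytesDueToOverload = new AtomicLong(); // number of bytes

    void register(MetricRegistry registry, String prefix)
    {
        // The count gauge must read the count field...
        registry.register(prefix + "DroppedTasksDueToOverload",
                          (Gauge<Long>) droppedDueToOverload::get);
        // ...and the bytes gauge the bytes field; mixing these sources up is the reported bug.
        registry.register(prefix + "DroppedBytesDueToOverload",
                          (Gauge<Long>) droppedBytesDueToOverload::get);
    }
}
{code}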






[jira] [Updated] (CASSANDRA-14757) GCInspector "Error accessing field of java.nio.Bits" under java11

2019-06-27 Thread mck (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14757?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mck updated CASSANDRA-14757:

Source Control Link: 
https://github.com/apache/cassandra/commit/2ed2b87b634c1b9d9ec9b3ba3f580f1be753972a
  Since Version: 4.0
 Status: Resolved  (was: Ready to Commit)
 Resolution: Fixed

Committed with 2ed2b87b634c1b9d9ec9b3ba3f580f1be753972a

> GCInspector "Error accessing field of java.nio.Bits" under java11
> -
>
> Key: CASSANDRA-14757
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14757
> Project: Cassandra
>  Issue Type: Bug
>  Components: Observability/Metrics
>Reporter: Jason Brown
>Assignee: Robert Stupp
>Priority: Low
>  Labels: Java11, pull-request-available
> Fix For: 4.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Running under java11, {{GCInspector}} throws the following exception:
> {noformat}
> DEBUG [main] 2018-09-18 05:18:25,905 GCInspector.java:78 - Error accessing 
> field of java.nio.Bits
> java.lang.NoSuchFieldException: totalCapacity
> at java.base/java.lang.Class.getDeclaredField(Class.java:2412)
> at 
> org.apache.cassandra.service.GCInspector.(GCInspector.java:72)
> at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:308)
> at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:590)
> at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:679)
> {noformat}
> This is because {{GCInspector}} uses reflection to read the {{totalCapacity}} 
> from {{java.nio.Bits}}. This field was renamed to {{TOTAL_CAPACITY}} 
> somewhere between java8 and java11.
> Note: this is a rather harmless error, as we only look at 
> {{Bits.totalCapacity}} for metrics collection on how much direct memory is 
> being used by {{ByteBuffer}}s. If we fail to read the field, we simply return 
> -1 for the metric value.
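
For anyone wanting to reproduce this outside Cassandra, a small self-contained sketch of the lookup-with-fallback described above (illustrative only; the real change lives in GCInspector and appears in the commit later in this digest):

{code:java}
import java.lang.reflect.Field;

// Standalone sketch: try the Java 8 field name first, fall back to the Java 11 name,
// and report -1 if neither can be read.
public class DirectMemoryProbe
{
    public static void main(String[] args)
    {
        System.out.println("direct buffer capacity tracked by java.nio.Bits: " + totalCapacity());
    }

    static long totalCapacity()
    {
        try
        {
            Class<?> bits = Class.forName("java.nio.Bits");
            Field f;
            try
            {
                f = bits.getDeclaredField("totalCapacity");   // Java 8 name
            }
            catch (NoSuchFieldException e)
            {
                f = bits.getDeclaredField("TOTAL_CAPACITY");  // renamed in Java 11
            }
            f.setAccessible(true);
            return ((Number) f.get(null)).longValue();
        }
        catch (Throwable t)
        {
            // Mirrors the behaviour described in the ticket: fall back to -1 on any failure.
            return -1L;
        }
    }
}
{code}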






[cassandra] branch trunk updated: Fix accessing java.nio.Bits.totalCapacity in Java11

2019-06-27 Thread mck
This is an automated email from the ASF dual-hosted git repository.

mck pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/cassandra.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 2ed2b87  Fix accessing java.nio.Bits.totalCapacity in Java11
2ed2b87 is described below

commit 2ed2b87b634c1b9d9ec9b3ba3f580f1be753972a
Author: Mick Semb Wever 
AuthorDate: Sun Apr 7 21:29:19 2019 +1000

Fix accessing java.nio.Bits.totalCapacity in Java11

 patch by Mick Semb Wever; reviewed by Robert Stupp for CASSANDRA-14757
---
 src/java/org/apache/cassandra/service/GCInspector.java | 11 ++-
 1 file changed, 10 insertions(+), 1 deletion(-)

diff --git a/src/java/org/apache/cassandra/service/GCInspector.java 
b/src/java/org/apache/cassandra/service/GCInspector.java
index 657d3ad..e0a935d 100644
--- a/src/java/org/apache/cassandra/service/GCInspector.java
+++ b/src/java/org/apache/cassandra/service/GCInspector.java
@@ -69,7 +69,16 @@ public class GCInspector implements NotificationListener, 
GCInspectorMXBean
 try
 {
 Class bitsClass = Class.forName("java.nio.Bits");
-Field f = bitsClass.getDeclaredField("totalCapacity");
+Field f;
+try
+{
+f = bitsClass.getDeclaredField("totalCapacity");
+}
+catch (NoSuchFieldException ex)
+{
+// in Java11 it changed name to "TOTAL_CAPACITY"
+f = bitsClass.getDeclaredField("TOTAL_CAPACITY");
+}
 f.setAccessible(true);
 temp = f;
 }





[jira] [Commented] (CASSANDRA-14772) Fix issues in audit / full query log interactions

2019-06-27 Thread Per Otterström (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16873990#comment-16873990
 ] 

Per Otterström commented on CASSANDRA-14772:


+1

I'm not able to reproduce the failing dtest locally.

> Fix issues in audit / full query log interactions
> -
>
> Key: CASSANDRA-14772
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14772
> Project: Cassandra
>  Issue Type: Bug
>  Components: Legacy/CQL, Legacy/Tools
>Reporter: Marcus Eriksson
>Assignee: Marcus Eriksson
>Priority: Normal
> Fix For: 4.0
>
>
> There are some problems with the audit + full query log code that need to be 
> resolved before 4.0 is released:
> * Fix performance regression in FQL that makes it less usable than it should 
> be.
> * move full query log specific code to a separate package 
> * do some audit log class renames (I keep reading {{BinLogAuditLogger}} vs 
> {{BinAuditLogger}} wrong for example)
> * avoid parsing the CQL queries twice in {{QueryMessage}} when audit log is 
> enabled.
> * add a new tool to dump audit logs (ie, let fqltool be full query log 
> specific). fqltool crashes when pointed to them.






[jira] [Commented] (CASSANDRA-15041) UncheckedExecutionException if authentication/authorization query fails

2019-06-27 Thread Per Otterström (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16873909#comment-16873909
 ] 

Per Otterström commented on CASSANDRA-15041:


Thanks for taking the time to review.

bq. Whilst the new dtests for handling unavailability do what they claim, 
they’re not doing so in the way one might expect.

Yeah, had to spend some time to get this sorted in my head. There are certainly 
a few surprises in how the caches affect the result, and one another. What's 
more, behavior is a bit different between versions since we're caching the 
super-user flag in 4.0, while in pre-4.0 we're not. For this reason, a stale 
entry in the permissions cache on pre-4.0 will result in a query to pull up the 
role's super-user flag even if the role cache is still up to date. The 
resulting error message is a bit confusing, but this is more intuitive in 4.0 I 
think. Also, I believe this makes it hard to use a non-super-user role in the 
tests and still get clean code and consistent results across versions, so I 
didn't change that. Open to suggestions if you have some ideas...

I've added some comments around assumptions and test strategy in the dtests.

One thing I discovered while working on this is that the assert_exception 
helper (and friends) don't verify error message properly. I fixed this locally 
while verifying, but will create a separate ticket for this since it will 
affect a few places where the helpers are used.

bq. So I just have these nits

Fixed those. Also, since I removed the stack trace dump on error level when 
background updates fail (pre-4.0), I've re-added the trace log we used to have 
in the AuthCache.

bq. I’ve been running the tests with the HIRES circle configuration

Thanks. Tried to make it work on the free service, but I gave up...


> UncheckedExecutionException if authentication/authorization query fails
> ---
>
> Key: CASSANDRA-15041
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15041
> Project: Cassandra
>  Issue Type: Bug
>  Components: Feature/Authorization
>Reporter: Per Otterström
>Assignee: Per Otterström
>Priority: Normal
> Fix For: 2.2.15, 3.0.19, 3.11.5, 4.0
>
>
> If cache update for permissions/credentials/roles fails with 
> UnavailableException this comes back to client as UncheckedExecutionException.
> Stack trace on server side:
> {noformat}
> ERROR [Native-Transport-Requests-1] 2019-03-04 16:30:51,537 
> ErrorMessage.java:384 - Unexpected exception during request
> com.google.common.util.concurrent.UncheckedExecutionException: 
> com.google.common.util.concurrent.UncheckedExecutionException: 
> java.lang.RuntimeException: 
> org.apache.cassandra.exceptions.UnavailableException: Cannot achieve 
> consistency level QUORUM
> at 
> com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2203) 
> ~[guava-18.0.jar:na]
> at com.google.common.cache.LocalCache.get(LocalCache.java:3937) 
> ~[guava-18.0.jar:na]
> at com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3941) 
> ~[guava-18.0.jar:na]
> at 
> com.google.common.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4824)
>  ~[guava-18.0.jar:na]
> at org.apache.cassandra.auth.AuthCache.get(AuthCache.java:97) 
> ~[apache-cassandra-3.11.4.jar:3.11.4]
> at 
> org.apache.cassandra.auth.PermissionsCache.getPermissions(PermissionsCache.java:45)
>  ~[apache-cassandra-3.11.4.jar:3.11.4]
> at 
> org.apache.cassandra.auth.AuthenticatedUser.getPermissions(AuthenticatedUser.java:104)
>  ~[apache-cassandra-3.11.4.jar:3.11.4]
> at 
> org.apache.cassandra.service.ClientState.authorize(ClientState.java:439) 
> ~[apache-cassandra-3.11.4.jar:3.11.4]
> at 
> org.apache.cassandra.service.ClientState.checkPermissionOnResourceChain(ClientState.java:368)
>  ~[apache-cassandra-3.11.4.jar:3.11.4]
> at 
> org.apache.cassandra.service.ClientState.ensureHasPermission(ClientState.java:345)
>  ~[apache-cassandra-3.11.4.jar:3.11.4]
> at 
> org.apache.cassandra.service.ClientState.hasAccess(ClientState.java:332) 
> ~[apache-cassandra-3.11.4.jar:3.11.4]
> at 
> org.apache.cassandra.service.ClientState.hasColumnFamilyAccess(ClientState.java:310)
>  ~[apache-cassandra-3.11.4.jar:3.11.4]
> at 
> org.apache.cassandra.cql3.statements.ModificationStatement.checkAccess(ModificationStatement.java:211)
>  ~[apache-cassandra-3.11.4.jar:3.11.4]
> at 
> org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:222)
>  ~[apache-cassandra-3.11.4.jar:3.11.4]
> at 
> org.apache.cassandra.cql3.QueryProcessor.processPrepared(QueryProcessor.java:532)
>  ~[apache-cassandra-3.11.4.jar:3.11.4]
> at 
> 

[jira] [Comment Edited] (CASSANDRA-15013) Message Flusher queue can grow unbounded, potentially running JVM out of memory

2019-06-27 Thread Sumanth Pasupuleti (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16873870#comment-16873870
 ] 

Sumanth Pasupuleti edited comment on CASSANDRA-15013 at 6/27/19 7:12 AM:
-

+1 on the suggestions [~benedict]. I have applied your commit on my 
[branch|https://github.com/sumanth-pasupuleti/cassandra/commits/15013_trunk_2] 
and ran the tests.
UTs and JVM DTests pass. All Dtests pass except for 6 failures which seem 
unrelated. 
https://circleci.com/workflow-run/04b77dd7-7dca-49d4-8328-e55b357fcca6


was (Author: sumanth.pasupuleti):
+1 on the suggestions [~benedict]. I have applied your commit on my branch and 
ran the tests.
UTs and JVM DTests pass. All Dtests pass except for 6 failures which seem 
unrelated. 
https://circleci.com/workflow-run/04b77dd7-7dca-49d4-8328-e55b357fcca6

> Message Flusher queue can grow unbounded, potentially running JVM out of 
> memory
> ---
>
> Key: CASSANDRA-15013
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15013
> Project: Cassandra
>  Issue Type: Bug
>  Components: Messaging/Client
>Reporter: Sumanth Pasupuleti
>Assignee: Sumanth Pasupuleti
>Priority: Normal
>  Labels: pull-request-available
> Fix For: 4.0, 3.0.x, 3.11.x
>
> Attachments: BlockedEpollEventLoopFromHeapDump.png, 
> BlockedEpollEventLoopFromThreadDump.png, RequestExecutorQueueFull.png, heap 
> dump showing each ImmediateFlusher taking upto 600MB.png
>
>
> This is a follow-up ticket out of CASSANDRA-14855, to make the Flusher queue 
> bounded, since, in the current state, items get added to the queue without 
> any checks on queue size, nor with any checks on netty outbound buffer to 
> check the isWritable state.
> We are seeing this issue hit our production 3.0 clusters quite often.






[jira] [Commented] (CASSANDRA-15013) Message Flusher queue can grow unbounded, potentially running JVM out of memory

2019-06-27 Thread Sumanth Pasupuleti (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16873870#comment-16873870
 ] 

Sumanth Pasupuleti commented on CASSANDRA-15013:


+1 on the suggestions [~benedict]. I have applied your commit on my branch and 
ran the tests.
UTs and JVM DTests pass. All Dtests pass except for 6 failures which seem 
unrelated. 
https://circleci.com/workflow-run/04b77dd7-7dca-49d4-8328-e55b357fcca6

> Message Flusher queue can grow unbounded, potentially running JVM out of 
> memory
> ---
>
> Key: CASSANDRA-15013
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15013
> Project: Cassandra
>  Issue Type: Bug
>  Components: Messaging/Client
>Reporter: Sumanth Pasupuleti
>Assignee: Sumanth Pasupuleti
>Priority: Normal
>  Labels: pull-request-available
> Fix For: 4.0, 3.0.x, 3.11.x
>
> Attachments: BlockedEpollEventLoopFromHeapDump.png, 
> BlockedEpollEventLoopFromThreadDump.png, RequestExecutorQueueFull.png, heap 
> dump showing each ImmediateFlusher taking upto 600MB.png
>
>
> This is a follow-up ticket out of CASSANDRA-14855, to make the Flusher queue 
> bounded, since, in the current state, items get added to the queue without 
> any checks on queue size, nor with any checks on netty outbound buffer to 
> check the isWritable state.
> We are seeing this issue hit our production 3.0 clusters quite often.


