[jira] [Created] (CASSANDRA-15420) CVE-2019-0205 (Apache Thrift all versions up to and including 0.12.0) on Cassandra 3.11.4

2019-11-12 Thread Abhishek Singh (Jira)
Abhishek Singh created CASSANDRA-15420:
--

 Summary: CVE-2019-0205 (Apache Thrift all versions up to and 
including 0.12.0) on Cassandra 3.11.4
 Key: CASSANDRA-15420
 URL: https://issues.apache.org/jira/browse/CASSANDRA-15420
 Project: Cassandra
  Issue Type: Bug
Reporter: Abhishek Singh


*Description :*
 *Severity :* CVE CVSS 3: 7.5; Sonatype CVSS 3: 7.5
 
 *Weakness :* CVE CWE: 835
 
 *Source :* National Vulnerability Database
 
 *Categories :* Data 
 *Description from CVE :* In Apache Thrift all versions up to and including 
0.12.0, a server or client may run into an endless loop when fed specific 
input data. Because the issue had already been partially fixed in version 
0.11.0, depending on the installed version it affects only certain language 
bindings.
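The cited weakness (CWE-835, an uncontrolled loop) can be illustrated with a hypothetical Python sketch; the framing format and names below are illustrative only, not Thrift's actual wire protocol:

```python
# Hypothetical sketch of the CWE-835 failure mode: a decode loop whose exit
# condition depends only on attacker-controlled input. A bounded variant caps
# the number of iterations so malformed input cannot spin forever.
def read_fields(data: bytes, max_fields: int = 10_000) -> list:
    fields, i = [], 0
    for _ in range(max_fields):
        if i >= len(data) or data[i] == 0:  # 0 plays the role of a STOP marker
            return fields
        if i + 1 >= len(data):              # truncated input: stop cleanly
            return fields
        length = data[i + 1]                # attacker-controlled length byte
        fields.append(data[i + 2:i + 2 + length])
        i += 2 + length                     # always advances, so no endless loop
    raise ValueError("field limit exceeded; input likely malformed")

print(read_fields(bytes([1, 3, 65, 66, 67, 0])))  # [b'ABC']
```

The fix in real decoders is the same idea: never let progress through the input depend solely on values the peer controls.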
 
 *Explanation :* This issue has undergone the Sonatype Fast-Track process. For 
more information, please see the Sonatype Knowledge Base Guide. 
 *Detection :* The application is vulnerable by using this component. 
 *Recommendation :* We recommend upgrading to a version of this component that 
is not vulnerable to this specific issue. Note: If this component is included as 
a bundled/transitive dependency of another component, there may not be an 
upgrade path. In this instance, we recommend contacting the maintainers who 
included the vulnerable package. Alternatively, we recommend investigating 
alternative components or a potential mitigating control. 
 *Advisories :* Project: 
http://mail-archives.apache.org/mod_mbox/thrift-dev/201910.m…
 
 *CVSS Details :* CVE CVSS 3: 7.5; CVSS Vector: 
CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H
*Occurrences (Paths) :* ["apache-cassandra.zip" ; "apache-cassandra.zip"]
*CVE :* CVE-2019-0205
*URL :* http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-0205
*Remediation :* This component does not have any non-vulnerable version. Please 
contact the vendor to get this vulnerability fixed.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Created] (CASSANDRA-15419) sonatype-2013-0069 (The setuptools package is vulnerable to Directory Traversal) on Cassandra 3.11.4

2019-11-12 Thread Abhishek Singh (Jira)
Abhishek Singh created CASSANDRA-15419:
--

 Summary: sonatype-2013-0069(The setuptools package is vulnerable 
to Directory Traversal) on Cassandra 3.11.4
 Key: CASSANDRA-15419
 URL: https://issues.apache.org/jira/browse/CASSANDRA-15419
 Project: Cassandra
  Issue Type: Bug
Reporter: Abhishek Singh


*Description :*
 *Severity :* Sonatype CVSS 3: 7.5; CVE CVSS 2.0: 0.0
 
 *Weakness :* Sonatype CWE: 22
 
 *Source :* Sonatype Data Research
 
 *Categories :* Data 
 *Explanation :* The setuptools package is vulnerable to Directory Traversal. 
The _install() and _build_egg() functions in the ez_setup.py file 
create setuptools as a .tar.gz file for distribution and allow files to be 
extracted to arbitrary locations. An attacker can exploit this vulnerability by 
uploading a tar archive that contains filenames starting with directory 
traversal sequences such as ../../../../../etc/passwd, or symbolic links 
which, when untarred, will overwrite arbitrary files. 
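As an illustration only (not taken from the advisory), a minimal Python sketch of the extraction-time guard that defeats this kind of traversal, using the stdlib tarfile module:

```python
import io
import os
import tarfile
import tempfile

def safe_extract(tar: tarfile.TarFile, dest: str) -> None:
    """Reject any member whose resolved path escapes the destination directory."""
    dest_real = os.path.realpath(dest)
    for member in tar.getmembers():
        target = os.path.realpath(os.path.join(dest_real, member.name))
        # "../evil.txt" resolves outside dest_real and is refused up front.
        if not (target == dest_real or target.startswith(dest_real + os.sep)):
            raise ValueError(f"blocked path traversal: {member.name!r}")
    tar.extractall(dest_real)

# Demo: an archive member named "../evil.txt" is rejected before extraction.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as t:
    info = tarfile.TarInfo("../evil.txt")
    info.size = 1
    t.addfile(info, io.BytesIO(b"x"))
buf.seek(0)
with tempfile.TemporaryDirectory() as d, tarfile.open(fileobj=buf, mode="r") as t:
    try:
        safe_extract(t, d)
        blocked = False
    except ValueError:
        blocked = True
print(blocked)  # True
```

The key point is validating every member name against the destination root before any file is written, which is exactly what the vulnerable ez_setup.py code path does not do.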
 *Detection :* The application is vulnerable by using this component. 
 *Recommendation :* We recommend upgrading to a version of this component that 
is not vulnerable to this specific issue. 
 *Root Cause :* apache-cassandra-3.11.4-bin.tar.gz : 
setuptools-0.9.6/ez_setup.py : [0.7.3, 3.0b1]
 
 *Advisories :* Project: https://github.com/pypa/setuptools/issues/7
 
 *CVSS Details :* Sonatype CVSS 3: 7.5; CVSS Vector: 
CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:N/A:N
*Occurrences (Paths) :* ["apache-cassandra.zip" ; "apache-cassandra.zip"]
*CVE :* sonatype-2013-0069
*URL :* No URL Present.
*Remediation :* This component does not have any non-vulnerable version. Please 
contact the vendor to get this vulnerability fixed.






[jira] [Created] (CASSANDRA-15418) CVE-2019-16869 (Netty is vulnerable to HTTP Request Smuggling) of severity 7.5 for Cassandra 2.2.5

2019-11-12 Thread Abhishek Singh (Jira)
Abhishek Singh created CASSANDRA-15418:
--

 Summary: CVE-2019-16869(Netty is vulnerable to HTTP Request 
Smuggling) of severity 7.5 for Cassandra 2.2.5
 Key: CASSANDRA-15418
 URL: https://issues.apache.org/jira/browse/CASSANDRA-15418
 Project: Cassandra
  Issue Type: Bug
Reporter: Abhishek Singh


*Description :*
 *Severity :* CVE CVSS 3: 7.5; Sonatype CVSS 3: 7.5
 
 *Weakness :* CVE CWE: 444
 
 *Source :* National Vulnerability Database
 
 *Categories :* Data 
 *Description from CVE :* Netty before 4.1.42.Final mishandles whitespace 
before the colon in HTTP headers, which leads to HTTP request smuggling.
 
 *Explanation :* Netty is vulnerable to HTTP Request Smuggling. The splitHeader 
method in HttpObjectDecoder.class does not properly handle HTTP headers 
containing whitespace between the header field-name and colon. An attacker can 
exploit this by sending such a header containing this whitespace and have the 
header end up being parsed by one endpoint and not another, due to 
inconsistencies in how the whitespace in the header is handled. 
 *Detection :* The application is vulnerable by using this component. 
 *Recommendation :* We recommend upgrading to a version of this component that 
is not vulnerable to this specific issue. 
 *Root Cause :* 
Cassandra-2.2.5.nupkg : io/netty/handler/codec/http/HttpObjectDecoder.class : 
[4.0.0.Beta1, 4.1.42.Final]
 
 *Advisories :* Project: https://github.com/netty/netty/issues/9571
 
 *CVSS Details :* CVE CVSS 3: 7.5; CVSS Vector: 
CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:H/A:N
*Occurrences (Paths) :* [" apache-cassandra.zip/bin/cassandra.bat" ; " 
apache-cassandra.zip/bin/cassandra.in.bat" ; " 
apache-cassandra.zip/bin/cassandra.in.sh" ; " 
apache-cassandra.zip/bin/cqlsh.bat" ; " apache-cassandra.zip/bin/debug-cql.bat" 
; " apache-cassandra.zip/bin/source-conf.ps1" ; " 
apache-cassandra.zip/bin/sstableloader.bat" ; " 
apache-cassandra.zip/bin/sstablescrub.bat" ; " 
apache-cassandra.zip/bin/sstableupgrade.bat" ; " 
apache-cassandra.zip/bin/sstableverify.bat" ; " 
apache-cassandra.zip/bin/stop-server" ; " 
apache-cassandra.zip/bin/stop-server.ps1" ; " 
apache-cassandra.zip/conf/README.txt" ; " 
apache-cassandra.zip/conf/cassandra-rackdc.properties" ; " 
apache-cassandra.zip/conf/cassandra-topology.properties" ; " 
apache-cassandra.zip/conf/commitlog_archiving.properties" ; " 
apache-cassandra.zip/conf/triggers/README.txt" ; " 
apache-cassandra.zip/lib/ST4-4.0.8.jar" ; " 
apache-cassandra.zip/lib/airline-0.6.jar" ; " 
apache-cassandra.zip/lib/antlr-runtime-3.5.2.jar" ; " 
apache-cassandra.zip/lib/commons-cli-1.1.jar" ; " 
apache-cassandra.zip/lib/commons-lang3-3.1.jar" ; " 
apache-cassandra.zip/lib/commons-math3-3.2.jar" ; " 
apache-cassandra.zip/lib/compress-lzf-0.8.4.jar" ; " 
apache-cassandra.zip/lib/concurrentlinkedhashmap-lru-1.4.jar" ; " 
apache-cassandra.zip/lib/disruptor-3.0.1.jar" ; " 
apache-cassandra.zip/lib/futures-2.1.6-py2.py3-none-any.zip" ; " 
apache-cassandra.zip/lib/high-scale-lib-1.0.6.jar" ; " 
apache-cassandra.zip/lib/jamm-0.3.0.jar" ; " 
apache-cassandra.zip/lib/javax.inject.jar" ; " 
apache-cassandra.zip/lib/jbcrypt-0.3m.jar" ; " 
apache-cassandra.zip/lib/jcl-over-slf4j-1.7.7.jar" ; " 
apache-cassandra.zip/lib/joda-time-2.4.jar" ; " 
apache-cassandra.zip/lib/json-simple-1.1.jar" ; " 
apache-cassandra.zip/lib/libthrift-0.9.2.jar" ; " 
apache-cassandra.zip/lib/licenses/ST4-4.0.8.txt" ; " 
apache-cassandra.zip/lib/licenses/antlr-runtime-3.5.2.txt" ; " 
apache-cassandra.zip/lib/licenses/compress-lzf-0.8.4.txt" ; " 
apache-cassandra.zip/lib/licenses/concurrent-trees-2.4.0.txt" ; " 
apache-cassandra.zip/lib/licenses/ecj-4.4.2.txt" ; " 
apache-cassandra.zip/lib/licenses/futures-2.1.6.txt" ; " 
apache-cassandra.zip/lib/licenses/high-scale-lib-1.0.6.txt" ; " 
apache-cassandra.zip/lib/licenses/jbcrypt-0.3m.txt" ; " 
apache-cassandra.zip/lib/licenses/jcl-over-slf4j-1.7.7.txt" ; " 
apache-cassandra.zip/lib/licenses/jna-4.2.2.txt" ; " 
apache-cassandra.zip/lib/licenses/jstackjunit-0.0.1.txt" ; " 
apache-cassandra.zip/lib/licenses/log4j-over-slf4j-1.7.7.txt" ; " 
apache-cassandra.zip/lib/licenses/logback-classic-1.1.3.txt" ; " 
apache-cassandra.zip/lib/licenses/logback-core-1.1.3.txt" ; " 
apache-cassandra.zip/lib/licenses/lz4-1.3.0.txt" ; " 
apache-cassandra.zip/lib/licenses/metrics-core-3.1.5.txt" ; " 
apache-cassandra.zip/lib/licenses/metrics-jvm-3.1.5.txt" ; " 
apache-cassandra.zip/lib/licenses/ohc-0.4.4.txt" ; " 
apache-cassandra.zip/lib/licenses/reporter-config-base-3.0.3.txt" ; " 
apache-cassandra.zip/lib/licenses/reporter-config3-3.0.3.txt" ; " 
apache-cassandra.zip/lib/licenses/sigar-1.6.4.txt" ; " 
apache-cassandra.zip/lib/licenses/six-1.7.3.txt" ; " 
apache-cassandra.zip/lib/licenses/slf4j-api-1.7.7.txt" ; " 
apache-cassandra.zip/lib/licenses/stream-2.5.2.txt" ; " 
apache-cassandra.zip/lib/log4j-over-slf4j-1.7.7.jar" ; " 

[jira] [Created] (CASSANDRA-15417) CVE-2019-16869(Netty is vulnerable to HTTP Request Smuggling) of severity 7.5

2019-11-12 Thread Abhishek Singh (Jira)
Abhishek Singh created CASSANDRA-15417:
--

 Summary: CVE-2019-16869(Netty is vulnerable to HTTP Request 
Smuggling) of severity 7.5
 Key: CASSANDRA-15417
 URL: https://issues.apache.org/jira/browse/CASSANDRA-15417
 Project: Cassandra
  Issue Type: Bug
Reporter: Abhishek Singh


*Description :*
*Severity :* CVE CVSS 3: 7.5; Sonatype CVSS 3: 7.5

*Weakness :* CVE CWE: 444

*Source :* National Vulnerability Database

*Categories :* Data

*Description from CVE :* Netty before 4.1.42.Final mishandles whitespace before 
the colon in HTTP headers, which leads to HTTP request smuggling.

*Explanation :* Netty is vulnerable to HTTP Request Smuggling. The splitHeader 
method in HttpObjectDecoder.class does not properly handle HTTP headers 
containing whitespace between the header field-name and colon. An attacker can 
exploit this by sending such a header containing this whitespace and have the 
header end up being parsed by one endpoint and not another, due to 
inconsistencies in how the whitespace in the header is handled.
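A self-contained sketch of the parser disagreement described above; the two parsers below are toy stand-ins, not Netty's actual HttpObjectDecoder logic:

```python
# A request whose Transfer-Encoding header has whitespace before the colon.
raw = b"POST / HTTP/1.1\r\nHost: a\r\nTransfer-Encoding : chunked\r\n\r\n"

def lenient_headers(data: bytes) -> dict:
    """Strips whitespace around the field name (pre-4.1.42.Final behavior)."""
    headers = {}
    for line in data.split(b"\r\n")[1:]:
        if not line:
            break
        name, _, value = line.partition(b":")
        headers[name.strip().lower()] = value.strip()
    return headers

def strict_headers(data: bytes) -> dict:
    """RFC 7230: no whitespace is allowed between field-name and colon."""
    headers = {}
    for line in data.split(b"\r\n")[1:]:
        if not line:
            break
        name, _, value = line.partition(b":")
        if name != name.rstrip():
            continue  # a strict endpoint drops (or rejects) the malformed header
        headers[name.strip().lower()] = value.strip()
    return headers

# The lenient parser honors Transfer-Encoding; the strict one ignores it.
# That disagreement between two endpoints is what enables request smuggling.
print(b"transfer-encoding" in lenient_headers(raw))  # True
print(b"transfer-encoding" in strict_headers(raw))   # False
```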

*Detection :* The application is vulnerable by using this component.

*Recommendation :* We recommend upgrading to a version of this component that 
is not vulnerable to this specific issue.

*Root Cause :* 
apache-cassandra-3.11.4-bin.tar.gz : io/netty/handler/codec/http/HttpObjectDecoder.class
 : [4.0.0.Beta1, 4.1.42.Final]

*Advisories :* Project: [https://github.com/netty/netty/issues/9571]

*CVSS Details :* CVE CVSS 3: 7.5; CVSS Vector: 
CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:H/A:N

*Occurrences (Paths) :* ["apache-cassandra.zip" ; "apache-cassandra.zip"]

*CVE :* CVE-2019-16869

*URL :* [http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-16869]

*Remediation :* This component does not have any non-vulnerable version. Please 
contact the vendor to get this vulnerability fixed.






[jira] [Created] (CASSANDRA-15416) CVE-2017-7525 (jackson-databind is vulnerable to Remote Code Execution) on version 3.11.4

2019-11-12 Thread Abhishek Singh (Jira)
Abhishek Singh created CASSANDRA-15416:
--

 Summary: CVE-2017-7525 (jackson-databind is vulnerable to Remote 
Code Execution) on version 3.11.4
 Key: CASSANDRA-15416
 URL: https://issues.apache.org/jira/browse/CASSANDRA-15416
 Project: Cassandra
  Issue Type: Bug
Reporter: Abhishek Singh


*Description :*
*Severity :* CVE CVSS 2.0: 7.5; Sonatype CVSS 3: 8.5

*Weakness :* CVE CWE: 502

*Source :* National Vulnerability Database

*Categories :* Data

*Description from CVE :* A deserialization flaw was discovered in 
jackson-databind, versions before 2.6.7.1, 2.7.9.1 and 2.8.9, which could allow 
an unauthenticated user to perform code execution by sending the maliciously 
crafted input to the readValue method of the ObjectMapper.

*Explanation :* jackson-databind is vulnerable to Remote Code Execution (RCE). 
The createBeanDeserializer() function in the BeanDeserializerFactory class 
allows untrusted Java objects to be deserialized. A remote attacker can exploit 
this by uploading a malicious serialized object that will result in RCE if the 
application attempts to deserialize it.
NOTE: This vulnerability is also tracked by the Apache Struts team as S2-055.

*Detection :* The application is vulnerable by using this component, when 
default typing is enabled.
Note: Spring Security has provided their own fix for this vulnerability 
[CVE-2017-4995]. If this component is being used as part of Spring Security, 
then you are not vulnerable if you are running Spring Security 4.2.3.RELEASE or 
greater for 4.x or Spring Security 5.0.0.M2 or greater for 5.x.

*Recommendation :* As of version 2.10.0, Jackson now provides a safe default 
typing solution that fully mitigates this vulnerability.
Reference: [https://medium.com/@cowtowncoder/jackson-2-10-features-cd880674d8a2]
In order to mitigate this vulnerability, we recommend upgrading to at least 
version 2.10.0 and changing any usages of enableDefaultTyping() to 
activateDefaultTyping().
Alternatively, if upgrading is not a viable option, this vulnerability can be 
mitigated by disabling default typing. Instead, you will need to implement your 
own:

It is also possible to customize global defaulting using 
ObjectMapper.setDefaultTyping(...); you just have to implement your own 
TypeResolverBuilder (which is not very difficult), and by doing so can 
actually configure all aspects of type information. The builder itself is just 
a shortcut for building actual handlers.

Reference: 
[https://github.com/FasterXML/jackson-docs/wiki/JacksonPolymorphicDeserialization]
Examples of implementing your own typing can be found by looking at this Stack 
Overflow article.

*Root Cause :* 
apache-cassandra-3.11.4-bin.tar.gz : org/codehaus/jackson/map/deser/BeanDeserializerFactory.class
 : [0.9.8, ]

*Advisories :* Project: 
[https://bugzilla.redhat.com/show_bug.cgi?id=CVE-2017-7525]

*CVSS Details :* CVE CVSS 2.0: 7.5; CVSS Vector: AV:N/AC:L/Au:N/C:P/I:P/A:P

*Occurrences (Paths) :* ["apache-cassandra.zip" ; "apache-cassandra.zip"]

*CVE :* CVE-2017-7525

*URL :* [http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-7525]

*Remediation :* This component does not have any non-vulnerable version. Please 
contact the vendor to get this vulnerability fixed.






[jira] [Created] (CASSANDRA-15415) CVE-2019-0205 (Apache Thrift all versions up to and including 0.12.0 vulnerable) of severity 7.5

2019-11-12 Thread Abhishek Singh (Jira)
Abhishek Singh created CASSANDRA-15415:
--

 Summary: CVE-2019-0205 (Apache Thrift all versions up to and 
including 0.12.0 vulnerable) of severity 7.5
 Key: CASSANDRA-15415
 URL: https://issues.apache.org/jira/browse/CASSANDRA-15415
 Project: Cassandra
  Issue Type: Bug
Reporter: Abhishek Singh


*Description :*
 *Severity :* CVE CVSS 3: 7.5; Sonatype CVSS 3: 7.5
 
 *Weakness :* CVE CWE: 835
 
 *Source :* National Vulnerability Database
 
 *Categories :* Data 
 *Description from CVE :* In Apache Thrift all versions up to and including 
0.12.0, a server or client may run into an endless loop when fed specific 
input data. Because the issue had already been partially fixed in version 
0.11.0, depending on the installed version it affects only certain language 
bindings.
 
 *Explanation :* This issue has undergone the Sonatype Fast-Track process. For 
more information, please see the Sonatype Knowledge Base Guide. 
 *Detection :* The application is vulnerable by using this component. 
 *Recommendation :* We recommend upgrading to a version of this component that 
is not vulnerable to this specific issue. Note: If this component is included as 
a bundled/transitive dependency of another component, there may not be an 
upgrade path. In this instance, we recommend contacting the maintainers who 
included the vulnerable package. Alternatively, we recommend investigating 
alternative components or a potential mitigating control. 
 *Advisories :* Project: 
http://mail-archives.apache.org/mod_mbox/thrift-dev/201910.m…
 
 *CVSS Details :* CVE CVSS 3: 7.5; CVSS Vector: 
CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H
*Occurrences (Paths) :* 
["TSO/windows_bao_devstudio_installer_8.2.01.zip/files/5d8b80e7a292.zip/plugins/com.bmc.ao.ui.studio.plugin_1.0.0.jar/lib/ecj-4.4.2.jar"
 ; " apache-cassandra.zip/bin/cassandra.bat" ; " 
apache-cassandra.zip/bin/cassandra.in.bat" ; " 
apache-cassandra.zip/bin/cassandra.in.sh" ; " 
apache-cassandra.zip/bin/cqlsh.bat" ; " apache-cassandra.zip/bin/debug-cql.bat" 
; " apache-cassandra.zip/bin/source-conf.ps1" ; " 
apache-cassandra.zip/bin/sstableloader.bat" ; " 
apache-cassandra.zip/bin/sstablescrub.bat" ; " 
apache-cassandra.zip/bin/sstableupgrade.bat" ; " 
apache-cassandra.zip/bin/sstableverify.bat" ; " 
apache-cassandra.zip/bin/stop-server" ; " 
apache-cassandra.zip/bin/stop-server.ps1" ; " 
apache-cassandra.zip/conf/README.txt" ; " 
apache-cassandra.zip/conf/cassandra-rackdc.properties" ; " 
apache-cassandra.zip/conf/cassandra-topology.properties" ; " 
apache-cassandra.zip/conf/commitlog_archiving.properties" ; " 
apache-cassandra.zip/conf/triggers/README.txt" ; " 
apache-cassandra.zip/lib/ST4-4.0.8.jar" ; " 
apache-cassandra.zip/lib/airline-0.6.jar" ; " 
apache-cassandra.zip/lib/antlr-runtime-3.5.2.jar" ; " 
apache-cassandra.zip/lib/commons-cli-1.1.jar" ; " 
apache-cassandra.zip/lib/commons-lang3-3.1.jar" ; " 
apache-cassandra.zip/lib/commons-math3-3.2.jar" ; " 
apache-cassandra.zip/lib/compress-lzf-0.8.4.jar" ; " 
apache-cassandra.zip/lib/concurrentlinkedhashmap-lru-1.4.jar" ; " 
apache-cassandra.zip/lib/disruptor-3.0.1.jar" ; " 
apache-cassandra.zip/lib/futures-2.1.6-py2.py3-none-any.zip" ; " 
apache-cassandra.zip/lib/high-scale-lib-1.0.6.jar" ; " 
apache-cassandra.zip/lib/jamm-0.3.0.jar" ; " 
apache-cassandra.zip/lib/javax.inject.jar" ; " 
apache-cassandra.zip/lib/jbcrypt-0.3m.jar" ; " 
apache-cassandra.zip/lib/jcl-over-slf4j-1.7.7.jar" ; " 
apache-cassandra.zip/lib/joda-time-2.4.jar" ; " 
apache-cassandra.zip/lib/json-simple-1.1.jar" ; " 
apache-cassandra.zip/lib/libthrift-0.9.2.jar" ; " 
apache-cassandra.zip/lib/licenses/ST4-4.0.8.txt" ; " 
apache-cassandra.zip/lib/licenses/antlr-runtime-3.5.2.txt" ; " 
apache-cassandra.zip/lib/licenses/compress-lzf-0.8.4.txt" ; " 
apache-cassandra.zip/lib/licenses/concurrent-trees-2.4.0.txt" ; " 
apache-cassandra.zip/lib/licenses/ecj-4.4.2.txt" ; " 
apache-cassandra.zip/lib/licenses/futures-2.1.6.txt" ; " 
apache-cassandra.zip/lib/licenses/high-scale-lib-1.0.6.txt" ; " 
apache-cassandra.zip/lib/licenses/jbcrypt-0.3m.txt" ; " 
apache-cassandra.zip/lib/licenses/jcl-over-slf4j-1.7.7.txt" ; " 
apache-cassandra.zip/lib/licenses/jna-4.2.2.txt" ; " 
apache-cassandra.zip/lib/licenses/jstackjunit-0.0.1.txt" ; " 
apache-cassandra.zip/lib/licenses/log4j-over-slf4j-1.7.7.txt" ; " 
apache-cassandra.zip/lib/licenses/logback-classic-1.1.3.txt" ; " 
apache-cassandra.zip/lib/licenses/logback-core-1.1.3.txt" ; " 
apache-cassandra.zip/lib/licenses/lz4-1.3.0.txt" ; " 
apache-cassandra.zip/lib/licenses/metrics-core-3.1.5.txt" ; " 
apache-cassandra.zip/lib/licenses/metrics-jvm-3.1.5.txt" ; " 
apache-cassandra.zip/lib/licenses/ohc-0.4.4.txt" ; " 
apache-cassandra.zip/lib/licenses/reporter-config-base-3.0.3.txt" ; " 
apache-cassandra.zip/lib/licenses/reporter-config3-3.0.3.txt" ; " 
apache-cassandra.zip/lib/licenses/sigar-1.6.4.txt" ; 

[jira] [Created] (CASSANDRA-15414) sonatype-2018-0119 (Netty is vulnerable to a Denial of Service (DoS) attack)

2019-11-12 Thread Abhishek Singh (Jira)
Abhishek Singh created CASSANDRA-15414:
--

 Summary: sonatype-2018-0119 (Netty is vulnerable to a Denial of 
Service (DoS) attack)
 Key: CASSANDRA-15414
 URL: https://issues.apache.org/jira/browse/CASSANDRA-15414
 Project: Cassandra
  Issue Type: Bug
Reporter: Abhishek Singh


*Description :*
*Severity :* Sonatype CVSS 3.0: 7.5

*Weakness :* Sonatype CWE: 400

*Source :* Sonatype Data Research

*Categories :* Data

*Explanation :* Netty is vulnerable to a Denial of Service (DoS) attack. The 
OpenSslEngine class does not have a mechanism to reject remotely initiated SSL 
renegotiation requests. An attacker can exploit this vulnerability by sending a 
large number of SSL renegotiation requests, causing the application to attempt 
to process all of them, tying up CPU and memory resources until the 
application becomes unresponsive or crashes, resulting in a Denial of Service.
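A minimal, hypothetical sketch of the missing mechanism (a per-connection renegotiation budget); the class and method names are illustrative, not Netty's API:

```python
# Hypothetical guard: count renegotiation requests per connection and refuse
# further ones once a small budget is exhausted, capping the CPU an attacker
# can consume on a single connection.
class RenegotiationGuard:
    def __init__(self, limit: int = 1):
        self.limit = limit  # legitimate clients rarely renegotiate more than once
        self.seen = 0

    def allow(self) -> bool:
        """Return True if this renegotiation request may proceed."""
        self.seen += 1
        return self.seen <= self.limit

guard = RenegotiationGuard(limit=1)
print(guard.allow())  # True: the first renegotiation is within budget
print(guard.allow())  # False: flood attempts are refused (connection closed)
```

Later Netty versions take this general approach, rejecting remotely initiated renegotiation by default.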

*Detection :* The application is vulnerable by using this component.

*Recommendation :* We recommend upgrading to a version of this component that 
is not vulnerable to this specific issue.

*Root Cause :* Cassandra-2.2.5.nupkg : OpenSslServerContext.class : [4.0.20.Final, 
4.0.25.Final)

*Advisories :* Project: [https://github.com/netty/netty/pull/3750]

*CVSS Details :* Sonatype CVSS 3.0: 7.5

*Occurrences (Paths) :* [" apache-cassandra.zip/bin/cassandra.in.bat" ; " 
apache-cassandra.zip/bin/cassandra.in.sh" ;" 
apache-cassandra.zip/bin/cqlsh.bat" ; " apache-cassandra.zip/bin/debug-cql.bat" 
; " apache-cassandra.zip/bin/source-conf.ps1" ; " 
apache-cassandra.zip/bin/sstableloader.bat" ; " 
apache-cassandra.zip/bin/sstablescrub.bat" ; " 
apache-cassandra.zip/bin/sstableupgrade.bat" ; " 
apache-cassandra.zip/bin/sstableverify.bat" ; " 
apache-cassandra.zip/bin/stop-server" ; " 
apache-cassandra.zip/bin/stop-server.bat" ; " 
apache-cassandra.zip/bin/stop-server.ps1" ; " 
apache-cassandra.zip/conf/README.txt" ; " 
apache-cassandra.zip/conf/cassandra-rackdc.properties" ; " 
apache-cassandra.zip/conf/cassandra-topology.properties" ; " 
apache-cassandra.zip/conf/commitlog_archiving.properties" ; " 
apache-cassandra.zip/conf/triggers/README.txt" ; " 
apache-cassandra.zip/lib/ST4-4.0.8.jar" ; " 
apache-cassandra.zip/lib/airline-0.6.jar" ; " 
apache-cassandra.zip/lib/antlr-runtime-3.5.2.jar" ; " 
apache-cassandra.zip/lib/commons-cli-1.1.jar" ; " 
apache-cassandra.zip/lib/commons-lang3-3.1.jar" ; " 
apache-cassandra.zip/lib/commons-math3-3.2.jar" ; " 
apache-cassandra.zip/lib/compress-lzf-0.8.4.jar" ; " 
apache-cassandra.zip/lib/concurrentlinkedhashmap-lru-1.4.jar" ; " 
apache-cassandra.zip/lib/disruptor-3.0.1.jar" ; " 
apache-cassandra.zip/lib/ecj-4.4.2.jar" ; " 
apache-cassandra.zip/lib/futures-2.1.6-py2.py3-none-any.zip" ; " 
apache-cassandra.zip/lib/high-scale-lib-1.0.6.jar" ; " 
apache-cassandra.zip/lib/jamm-0.3.0.jar" ; " 
apache-cassandra.zip/lib/javax.inject.jar" ; " 
apache-cassandra.zip/lib/jbcrypt-0.3m.jar" ; " 
apache-cassandra.zip/lib/jcl-over-slf4j-1.7.7.jar" ; " 
apache-cassandra.zip/lib/joda-time-2.4.jar" ; " 
apache-cassandra.zip/lib/json-simple-1.1.jar" ; " 
apache-cassandra.zip/lib/libthrift-0.9.2.jar" ; " 
apache-cassandra.zip/lib/licenses/ST4-4.0.8.txt" ; " 
apache-cassandra.zip/lib/licenses/antlr-runtime-3.5.2.txt" ; " 
apache-cassandra.zip/lib/licenses/compress-lzf-0.8.4.txt" ; " 
apache-cassandra.zip/lib/licenses/concurrent-trees-2.4.0.txt" ; " 
apache-cassandra.zip/lib/licenses/ecj-4.4.2.txt" ; " 
apache-cassandra.zip/lib/licenses/futures-2.1.6.txt" ; " 
apache-cassandra.zip/lib/licenses/high-scale-lib-1.0.6.txt" ; " 
apache-cassandra.zip/lib/licenses/jbcrypt-0.3m.txt" ; " 
apache-cassandra.zip/lib/licenses/jcl-over-slf4j-1.7.7.txt" ; " 
apache-cassandra.zip/lib/licenses/jna-4.2.2.txt" ; " 
apache-cassandra.zip/lib/licenses/jstackjunit-0.0.1.txt" ; " 
apache-cassandra.zip/lib/licenses/log4j-over-slf4j-1.7.7.txt" ; " 
apache-cassandra.zip/lib/licenses/logback-classic-1.1.3.txt" ; " 
apache-cassandra.zip/lib/licenses/logback-core-1.1.3.txt" ; " 
apache-cassandra.zip/lib/licenses/lz4-1.3.0.txt" ; " 
apache-cassandra.zip/lib/licenses/metrics-core-3.1.0.txt" ; " 
apache-cassandra.zip/lib/licenses/metrics-jvm-3.1.0.txt" ; " 
apache-cassandra.zip/lib/licenses/ohc-0.4.4.txt" ; " 
apache-cassandra.zip/lib/licenses/reporter-config-base-3.0.3.txt" ; " 
apache-cassandra.zip/lib/licenses/reporter-config3-3.0.3.txt" ; " 
apache-cassandra.zip/lib/licenses/sigar-1.6.4.txt" ; " 
apache-cassandra.zip/lib/licenses/six-1.7.3.txt" ; " 
apache-cassandra.zip/lib/licenses/slf4j-api-1.7.7.txt" ; " 
apache-cassandra.zip/lib/licenses/stream-2.5.2.txt" ; " 
apache-cassandra.zip/lib/log4j-over-slf4j-1.7.7.jar" ; " 
apache-cassandra.zip/lib/logback-classic-1.1.3.jar" ; " 
apache-cassandra.zip/lib/logback-core-1.1.3.jar" ; " 
apache-cassandra.zip/lib/lz4-1.3.0.jar" ; " 
apache-cassandra.zip/lib/metrics-core-3.1.0.jar" ; " 

[jira] [Commented] (CASSANDRA-15332) When repair is running with tracing, if a CorruptSSTableException is thrown while building Merkle Trees the DiskFailurePolicy does not get applied

2019-11-12 Thread Jordan West (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16972932#comment-16972932
 ] 

Jordan West commented on CASSANDRA-15332:
-

LGTM. Sorry for the delay. 

> When repair is running with tracing, if a CorruptSSTableException is thrown 
> while building Merkle Trees the DiskFailurePolicy does not get applied
> --
>
> Key: CASSANDRA-15332
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15332
> Project: Cassandra
>  Issue Type: Bug
>  Components: Consistency/Repair, Observability/Tracing
>Reporter: David Capwell
>Assignee: David Capwell
>Priority: Normal
>  Labels: pull-request-available
> Fix For: 4.0
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> When a repair is in the validation phase and is building MerkleTrees, if a 
> corrupt SSTable exception is thrown the disk failure policy does not get 
> applied






[jira] [Commented] (CASSANDRA-15351) Allow configuring timeouts on the per-request basis

2019-11-12 Thread Yifan Cai (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16972866#comment-16972866
 ] 

Yifan Cai commented on CASSANDRA-15351:
---

A requirement difference between this ticket and CASSANDRA-2848:

One of the conclusions from the discussion in CASSANDRA-2848 is that 
client-specified timeout values have an upper bound set in DatabaseDescriptor.

This ticket wants to allow the timeout for some queries to exceed the one set 
in DatabaseDescriptor.
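The distinction can be sketched with a toy model; the constant and function names below are illustrative only, not Cassandra's actual configuration API:

```python
# Toy model of the two timeout policies under discussion.
SERVER_DEFAULT_MS = 10_000  # stand-in for a DatabaseDescriptor timeout value

def capped_timeout(requested_ms: int) -> int:
    """CASSANDRA-2848 model: the server default is an upper bound."""
    return min(requested_ms, SERVER_DEFAULT_MS)

def uncapped_timeout(requested_ms: int) -> int:
    """This ticket's model: a single request may exceed the server default."""
    return requested_ms

print(capped_timeout(30_000))    # 10000: clamped to the server default
print(uncapped_timeout(30_000))  # 30000: allowed to go higher per request
```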

> Allow configuring timeouts on the per-request basis
> ---
>
> Key: CASSANDRA-15351
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15351
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Messaging/Client
>Reporter: Alex Petrov
>Assignee: Yifan Cai
>Priority: Normal
>
> Some queries need to be run with a higher timeout value, which should be 
> possible without allowing _all_ requests to be above this value.






[jira] [Created] (CASSANDRA-15413) Missing results on reading large frozen text map

2019-11-12 Thread Tyler Codispoti (Jira)
Tyler Codispoti created CASSANDRA-15413:
---

 Summary: Missing results on reading large frozen text map
 Key: CASSANDRA-15413
 URL: https://issues.apache.org/jira/browse/CASSANDRA-15413
 Project: Cassandra
  Issue Type: Bug
Reporter: Tyler Codispoti


Cassandra version: 2.2.15

I have been running into a case where, when fetching the results from a table 
with a frozen<map<text, text>> column, if the number of results is greater than 
the fetch size (default 5000), we can end up with missing data.

Side note: The table schema comes from using KairosDB, but we've isolated this 
issue to Cassandra itself. But it looks like this can cause problems for users 
of KairosDB as well.

Repro case. Tested against fresh install of Cassandra 2.2.15.

1. Create table (cqlsh)
{code:sql}
CREATE KEYSPACE test
  WITH REPLICATION = { 
   'class' : 'SimpleStrategy', 
   'replication_factor' : 1 
  };

  CREATE TABLE test.test (
name text,
    tags frozen<map<text, text>>,
PRIMARY KEY (name, tags)
  ) WITH CLUSTERING ORDER BY (tags ASC);
{code}
2. Insert data (python3)
{code:python}
import time
from cassandra.cluster import Cluster

cluster = Cluster(['127.0.0.1'])
session = cluster.connect('test')

for i in range(0, 2):
session.execute(
"""
INSERT INTO test (name, tags)  
VALUES (%s, %s)
""",
("test_name", {'id':str(i)})
)
{code}
 

3. Flush
{code:bash}
nodetool flush
{code}

4. Fetch data (python3)
{code:python}
import time
from cassandra.cluster import Cluster

cluster = Cluster(['127.0.0.1'], control_connection_timeout=5000)
session = cluster.connect('test')
session.default_fetch_size = 5000
session.default_timeout = 120

count = 0
rows = session.execute("select tags from test where name='test_name'")
for row in rows:
count += 1

print(count)
{code}
Result: 10111 (expected 2)

 

Changing the page size changes the result count. Some quick samples:

 
||default_fetch_size||count||
|5000|10111|
|1000|1830|
|999|1840|
|998|1850|
|2|2|
|10|2|

 

 

In short, I cannot guarantee I'll get all the results back unless the page size 
> number of rows.

This seems to get worse with multiple SSTables (e.g. nodetool flush between 
some of the insert batches). When using replication, the issue can get 
disgustingly bad, potentially giving a different result on each query.

Interestingly, if we pad the values in the tag map ("id" in this repro case) 
so that the insertion is in lexicographical order, there is no issue. I believe 
the issue also does not repro if I do not call "nodetool flush" before 
querying.






[jira] [Assigned] (CASSANDRA-15351) Allow configuring timeouts on the per-request basis

2019-11-12 Thread Yifan Cai (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yifan Cai reassigned CASSANDRA-15351:
-

Assignee: Yifan Cai

> Allow configuring timeouts on the per-request basis
> ---
>
> Key: CASSANDRA-15351
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15351
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Messaging/Client
>Reporter: Alex Petrov
>Assignee: Yifan Cai
>Priority: Normal
>
> Some queries need to be run with a higher timeout value, which should be 
> possible without allowing _all_ requests to be above this value.






[jira] [Commented] (CASSANDRA-15295) Running into deadlock when do CommitLog initialization

2019-11-12 Thread Jordan West (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16972662#comment-16972662
 ] 

Jordan West commented on CASSANDRA-15295:
-

Hi [~djoshi] , I like the approach of removing the potential for re-introducing 
the bug. However, it seems like {{CommitLog}} was only partially changed to be 
thread-safe. For example, 
 concurrent calls to {{#start()}} and {{#shutdownBlocking()}} could leave the 
{{CommitLog}} in an invalid state: If the thread executing {{#start()}} pauses 
before calling {{executor.start()}}, and resumes
 only after a separate thread executing {{#shutdownBlocking}} calls 
{{executor.shutdown()}} and {{executor.awaitTermination}} (which will 
immediately exit since {{thread == null}}), the {{segmentManager}} will be 
shutdown but the {{executor}} will still be running.

 

Is it necessary to make {{CommitLog}} thread-safe? Removing the singleton 
doesn’t really change the odds of it being used from multiple threads, and the 
original version wasn’t thread-safe either w.r.t. these functions.

Some other comments/minor nits:
 * I like moving the factory code to a function to reduce the amount of new code
 * Is it necessary to change AbstractCommitLogSegmentManager#shutdown() to no 
longer use an assert? That seems like a semantic change made for stylistic 
reasons, but I may be missing a further motivation for this change.
 * Minor style nit: In CommitLog#start, {{if (started) return true;}} should be 
on two lines per the Cassandra style guides
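One way to rule out the start()/shutdownBlocking() interleaving described above is to serialize both transitions behind a single monitor. A minimal sketch (hypothetical class and field names, not the actual {{CommitLog}} code):

```java
// Hypothetical lifecycle sketch: start() and shutdownBlocking() share one
// monitor, so a shutdown can never observe a half-started state and a
// late start() cannot resurrect components after shutdown.
public class Lifecycle {
    private boolean started;
    private boolean shutdown;

    public synchronized boolean start() {
        if (shutdown)
            return false;      // refuse to start after shutdown
        if (started)
            return true;
        started = true;        // all components would be started here, atomically
        return true;
    }

    public synchronized void shutdownBlocking() {
        shutdown = true;       // later start() calls become no-ops
        started = false;       // all components would be stopped here, atomically
    }

    public synchronized boolean isStarted() {
        return started;
    }
}
```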

> Running into deadlock when do CommitLog initialization
> --
>
> Key: CASSANDRA-15295
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15295
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local/Commit Log
>Reporter: Zephyr Guo
>Assignee: Zephyr Guo
>Priority: Normal
> Attachments: image.png, jstack.log, pstack.log, screenshot-1.png, 
> screenshot-2.png, screenshot-3.png
>
>
> Recently, I found a cassandra (3.11.4) node stuck in STARTING status for a 
> long time.
>  I used jstack to see what happened. The main thread was stuck in 
> *AbstractCommitLogSegmentManager.awaitAvailableSegment*
>  !screenshot-1.png! 
> The strange thing is that the COMMIT-LOG-ALLOCATOR thread state was runnable, 
> but it was not actually running.  
>  !screenshot-2.png! 
> I then used pstack to troubleshoot, and found COMMIT-LOG-ALLOCATOR blocked on 
> Java class initialization.
>   !screenshot-3.png! 
> This is obviously a deadlock. CommitLog waits for a CommitLogSegment when 
> initializing. At this moment, the CommitLog class is not yet initialized and 
> the main thread holds the class lock. After that, COMMIT-LOG-ALLOCATOR hits 
> an exception while creating a CommitLogSegment and calls 
> *CommitLog.handleCommitError* (a static method). COMMIT-LOG-ALLOCATOR then 
> blocks on this line because the CommitLog class is still initializing.
>  
>  
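The failure mode quoted above - a static method invoked while its declaring class is still being initialized on another thread - can be sidestepped by keeping heavyweight work out of static initializers, for example with the lazy-holder idiom. A minimal sketch (hypothetical names, not the actual Cassandra fix):

```java
// Hypothetical sketch of the lazy-holder idiom: the expensive singleton
// construction runs when Holder.INSTANCE is first touched, not while the
// outer class itself is initializing, so static utility methods on Service
// remain callable during that construction.
public class Service {
    private Service() {}

    // Static method usable even before the singleton is built.
    public static String describeError(String msg) {
        return "commit error: " + msg;
    }

    private static class Holder {
        static final Service INSTANCE = new Service();  // built on first access
    }

    public static Service instance() {
        return Holder.INSTANCE;
    }
}
```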






[jira] [Commented] (CASSANDRA-15368) Failing to flush Memtable without terminating process results in permanent data loss

2019-11-12 Thread Dimitar Dimitrov (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16972639#comment-16972639
 ] 

Dimitar Dimitrov commented on CASSANDRA-15368:
--

Thanks for chasing this down, [~benedict]!

I'm glad it turned out that, as initially suspected, you're pretty good at this 
stuff, and the issue was not lurking from before, but was more or less 
necessitated by the fix for CASSANDRA-15367. Then I guess it makes the most 
sense for you to continue and take care of this.

> Failing to flush Memtable without terminating process results in permanent 
> data loss
> 
>
> Key: CASSANDRA-15368
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15368
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local/Commit Log, Local/Memtable
>Reporter: Benedict Elliott Smith
>Priority: Normal
> Fix For: 4.0, 2.2.x, 3.0.x, 3.11.x
>
>
> {{Memtable}}s do not contain records that cover a precise contiguous range of 
> {{ReplayPosition}}, since there are only weak ordering constraints when 
> rolling over to a new {{Memtable}} - the last operations for the old 
> {{Memtable}} may obtain their {{ReplayPosition}} after the first operations 
> for the new {{Memtable}}.
> Unfortunately, we treat the {{Memtable}} range as contiguous, and invalidate 
> the entire range on flush.  Ordinarily we only invalidate records when all 
> prior {{Memtable}} have also successfully flushed.  However, in the event of 
> a flush that does not terminate the process (either because of disk failure 
> policy, or because it is a software error), the later flush is able to 
> invalidate the region of the commit log that includes records that should 
> have been flushed in the prior {{Memtable}}
> More problematically, this can also occur on restart without any associated 
> flush failure, as we use commit log boundaries written to our flushed 
> sstables to filter {{ReplayPosition}} on recovery, which is meant to 
> replicate our {{Memtable}} flush behaviour above.  However, we do not know 
> that earlier flushes have completed, and they may complete successfully 
> out-of-order.  So any flush that completes before the process terminates, but 
> began after another flush that _doesn’t_ complete before the process 
> terminates, has the potential to cause permanent data loss.






[jira] [Updated] (CASSANDRA-15410) Avoid over-allocation of bytes for UTF8 string serialization

2019-11-12 Thread Aleksey Yeschenko (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-15410:
--
Test and Documentation Plan: JMH benchmark included
 Status: Patch Available  (was: Open)

> Avoid over-allocation of bytes for UTF8 string serialization 
> -
>
> Key: CASSANDRA-15410
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15410
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Messaging/Client
>Reporter: Yifan Cai
>Assignee: Yifan Cai
>Priority: Normal
> Fix For: 4.0
>
>
> The current message encoding implementation first calculates the 
> `encodeSize` and allocates a ByteBuffer of that size. 
> However, during encoding it assumes the worst case for writing a UTF8 
> string, i.e. that each character takes 3 bytes. 
> The over-estimation further leads to resizing the underlying array and 
> copying the data. 
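The gap between the worst-case estimate and the actual encoded size can be seen with a short sketch (illustrative only, not the patch itself): `String.getBytes(UTF_8)` yields the exact byte count, whereas the 3-bytes-per-char assumption over-allocates heavily for ASCII-dominated strings.

```java
import java.nio.charset.StandardCharsets;

public class Utf8Size {
    // Exact number of bytes the string occupies when UTF-8 encoded.
    static int exactUtf8Length(String s) {
        return s.getBytes(StandardCharsets.UTF_8).length;
    }

    // Worst-case estimate of 3 bytes per char, as the encoder assumed.
    static int worstCaseLength(String s) {
        return s.length() * 3;
    }

    public static void main(String[] args) {
        String ascii = "keyspace1.table1";
        System.out.println(exactUtf8Length(ascii));  // 16 - one byte per ASCII char
        System.out.println(worstCaseLength(ascii));  // 48 - 3x over-allocation
    }
}
```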






[jira] [Updated] (CASSANDRA-15410) Avoid over-allocation of bytes for UTF8 string serialization

2019-11-12 Thread Aleksey Yeschenko (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-15410:
--
Reviewers: Aleksey Yeschenko, Aleksey Yeschenko  (was: Aleksey Yeschenko)
   Aleksey Yeschenko, Aleksey Yeschenko  (was: Aleksey Yeschenko)
   Status: Review In Progress  (was: Patch Available)

> Avoid over-allocation of bytes for UTF8 string serialization 
> -
>
> Key: CASSANDRA-15410
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15410
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Messaging/Client
>Reporter: Yifan Cai
>Assignee: Yifan Cai
>Priority: Normal
> Fix For: 4.0
>
>
> The current message encoding implementation first calculates the 
> `encodeSize` and allocates a ByteBuffer of that size. 
> However, during encoding it assumes the worst case for writing a UTF8 
> string, i.e. that each character takes 3 bytes. 
> The over-estimation further leads to resizing the underlying array and 
> copying the data. 






[jira] [Assigned] (CASSANDRA-15410) Avoid over-allocation of bytes for UTF8 string serialization

2019-11-12 Thread Aleksey Yeschenko (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko reassigned CASSANDRA-15410:
-

Assignee: Yifan Cai  (was: Abhishek Singh)

> Avoid over-allocation of bytes for UTF8 string serialization 
> -
>
> Key: CASSANDRA-15410
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15410
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Messaging/Client
>Reporter: Yifan Cai
>Assignee: Yifan Cai
>Priority: Normal
> Fix For: 4.0
>
>
> The current message encoding implementation first calculates the 
> `encodeSize` and allocates a ByteBuffer of that size. 
> However, during encoding it assumes the worst case for writing a UTF8 
> string, i.e. that each character takes 3 bytes. 
> The over-estimation further leads to resizing the underlying array and 
> copying the data. 






[jira] [Updated] (CASSANDRA-15410) Avoid over-allocation of bytes for UTF8 string serialization

2019-11-12 Thread Aleksey Yeschenko (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-15410:
--
Change Category: Performance
 Complexity: Low Hanging Fruit
  Fix Version/s: 4.0
  Reviewers: Aleksey Yeschenko
 Status: Open  (was: Triage Needed)

> Avoid over-allocation of bytes for UTF8 string serialization 
> -
>
> Key: CASSANDRA-15410
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15410
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Messaging/Client
>Reporter: Yifan Cai
>Assignee: Abhishek Singh
>Priority: Normal
> Fix For: 4.0
>
>
> The current message encoding implementation first calculates the 
> `encodeSize` and allocates a ByteBuffer of that size. 
> However, during encoding it assumes the worst case for writing a UTF8 
> string, i.e. that each character takes 3 bytes. 
> The over-estimation further leads to resizing the underlying array and 
> copying the data. 






[jira] [Assigned] (CASSANDRA-12197) Integrate top threads command in nodetool

2019-11-12 Thread Ekaterina Dimitrova (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ekaterina Dimitrova reassigned CASSANDRA-12197:
---

Assignee: Ekaterina Dimitrova

> Integrate top threads command in nodetool
> -
>
> Key: CASSANDRA-12197
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12197
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tool/nodetool
>Reporter: J.B. Langston
>Assignee: Ekaterina Dimitrova
>Priority: Low
>
> SJK (https://github.com/aragozin/jvm-tools) has a command called ttop that 
> displays the top threads within the JVM, sorted either by CPU utilization or 
> heap allocation rate. When diagnosing garbage collection or high cpu 
> utilization, this is very helpful information.  It would be great if users 
> can get this directly with nodetool without having to download something 
> else.  SJK is Apache 2.0 licensed so it might be possible leverage its code.






[jira] [Commented] (CASSANDRA-14781) Log message when mutation passed to CommitLog#add(Mutation) is too large is not descriptive enough

2019-11-12 Thread Aleksey Yeschenko (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16972601#comment-16972601
 ] 

Aleksey Yeschenko commented on CASSANDRA-14781:
---

I would also not bother with listing individual tables; keyspace and partition 
key should hopefully be sufficient. I would also memoise the calculated 
{{Mutation}} size in the {{Mutation}} object (see {{serializedSize*}} fields in 
{{Message}} in 4.0) to prevent redundant calculations by subsequent stages.
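Memoising the size might look roughly like this (hypothetical sketch; the real fields are the {{serializedSize*}} members mentioned above, and the real computation walks the mutation's partition updates):

```java
// Hypothetical sketch of memoising a computed serialized size: the first
// caller pays for the computation, later stages reuse the cached value.
public class SizedPayload {
    private final byte[] contents;
    private int cachedSize = -1;     // -1 means "not yet computed"

    public SizedPayload(byte[] contents) {
        this.contents = contents;
    }

    public int serializedSize() {
        if (cachedSize < 0)
            cachedSize = computeSerializedSize();
        return cachedSize;
    }

    private int computeSerializedSize() {
        return 4 + contents.length;  // stand-in: length prefix + body
    }
}
```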

> Log message when mutation passed to CommitLog#add(Mutation) is too large is 
> not descriptive enough
> --
>
> Key: CASSANDRA-14781
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14781
> Project: Cassandra
>  Issue Type: Bug
>  Components: Legacy/Local Write-Read Paths
>Reporter: Jordan West
>Assignee: Tom Petracca
>Priority: Normal
> Fix For: 3.0.x, 3.11.x, 4.x
>
> Attachments: CASSANDRA-14781.patch, CASSANDRA-14781_3.0.patch, 
> CASSANDRA-14781_3.11.patch
>
>
> When hitting 
> [https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/db/commitlog/CommitLog.java#L256-L257],
>  the log message produced does not help the operator track down what data is 
> being written. At a minimum the keyspace and cfIds involved would be useful 
> (and are available) – more detail might not be reasonable to include. 






[jira] [Commented] (CASSANDRA-14781) Log message when mutation passed to CommitLog#add(Mutation) is too large is not descriptive enough

2019-11-12 Thread Aleksey Yeschenko (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16972590#comment-16972590
 ] 

Aleksey Yeschenko commented on CASSANDRA-14781:
---

I'd say this is a little insufficient. You have the exact same situation with 
writing hints in {{HintsBuffer.allocate()}}.

What you want is to add an extra validation downstream all the way to 
{{ModificationStatement}}, so that you can return a meaningful exception to the 
client immediately - rather than ending up timing out the response.
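The early-validation idea might be sketched like this (hypothetical names and limit; the actual patch would wire such a check into {{ModificationStatement}} and use the configured commit-log segment size):

```java
// Hypothetical sketch of failing fast: reject an oversized mutation at
// statement-execution time with a client-visible exception, instead of
// letting the request time out deep in the commit-log or hints path.
public class MutationSizeCheck {
    static final long MAX_MUTATION_SIZE = 16 * 1024 * 1024; // assumed limit

    static void validateSize(long serializedSize) {
        if (serializedSize > MAX_MUTATION_SIZE)
            throw new IllegalArgumentException(
                "Mutation of " + serializedSize + " bytes exceeds maximum of "
                + MAX_MUTATION_SIZE + " bytes");
    }
}
```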

> Log message when mutation passed to CommitLog#add(Mutation) is too large is 
> not descriptive enough
> --
>
> Key: CASSANDRA-14781
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14781
> Project: Cassandra
>  Issue Type: Bug
>  Components: Legacy/Local Write-Read Paths
>Reporter: Jordan West
>Assignee: Tom Petracca
>Priority: Normal
> Fix For: 3.0.x, 3.11.x, 4.x
>
> Attachments: CASSANDRA-14781.patch, CASSANDRA-14781_3.0.patch, 
> CASSANDRA-14781_3.11.patch
>
>
> When hitting 
> [https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/db/commitlog/CommitLog.java#L256-L257],
>  the log message produced does not help the operator track down what data is 
> being written. At a minimum the keyspace and cfIds involved would be useful 
> (and are available) – more detail might not be reasonable to include. 






[jira] [Created] (CASSANDRA-15412) Security vulnerability CVE-2016-4970 for Netty

2019-11-12 Thread Abhishek Singh (Jira)
Abhishek Singh created CASSANDRA-15412:
--

 Summary: Security vulnerability CVE-2016-4970 for Netty 
 Key: CASSANDRA-15412
 URL: https://issues.apache.org/jira/browse/CASSANDRA-15412
 Project: Cassandra
  Issue Type: Bug
Reporter: Abhishek Singh


*Cassandra Version: 3.11.4*

*Description :*
*Severity :* CVE CVSS 3.0: 7.5; Sonatype CVSS 3.0: 7.5

*Weakness :* Sonatype CWE: 835

*Source :* National Vulnerability Database

*Categories :* ConfigurationData

*Description from CVE :* handler.

*Explanation :* Netty is vulnerable to Denial of Service (DoS). The wrap() 
function in the OpenSslEngine class doesn't properly handle renegotiations, 
causing the application to hang in an infinite loop. A remote attacker could 
exploit this vulnerability by sending multiple requests to the application to 
consume large amounts of CPU cycles, which can result in Denial of Service 
(DoS).

The Sonatype security research team discovered that the vulnerability is 
present in versions 4.0.20 through 4.0.37, not in all versions from 4.0.0 to 
4.0.37 as the advisory states.

*Detection :* The application is vulnerable by using this component only if the 
server has renegotiation enabled (which is the default).
Reference: [https://bugzilla.redhat.com/show_bug.cgi?id=CVE-2016-4970]

*Recommendation :* We recommend upgrading to a version of this component that 
is not vulnerable to this specific issue.
Workaround:
Users can use -Djdk.tls.rejectClientInitiatedRenegotiation=true to disable 
renegotiation and avoid this issue.
Reference link: [https://bugzilla.redhat.com/show_bug.cgi?id=CVE-2016-4970]

*Root Cause :* Cassandra-2.2.5.nupkg : OpenSslEngine.class : [4.1.0.Beta1, 
4.1.1.Final)

*Advisories :* Project: 
[https://bugzilla.redhat.com/show_bug.cgi?id=CVE-2016-4970]

*CVSS Details :* CVE CVSS 3.0: 7.5






[jira] [Updated] (CASSANDRA-15407) Hint-dispatcher file-channel not closed, if "open()" fails with OOM

2019-11-12 Thread Ekaterina Dimitrova (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ekaterina Dimitrova updated CASSANDRA-15407:

Reviewers:   (was: Robert Stupp)

> Hint-dispatcher file-channel not closed, if "open()" fails with OOM
> ---
>
> Key: CASSANDRA-15407
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15407
> Project: Cassandra
>  Issue Type: Bug
>  Components: Consistency/Hints
>Reporter: Ekaterina Dimitrova
>Assignee: Ekaterina Dimitrova
>Priority: Normal
> Fix For: 4.x
>
>
> Some places in the code base do not close the file (some channel-proxy) in 
> case of errors. We should close the channel-proxy in those cases, at least 
> to avoid making the situation (already an OOM) even worse.
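The close-on-failure pattern can be sketched as follows (hypothetical helper; the real code wraps a Cassandra channel proxy rather than a raw {{FileChannel}}, and the failing step is an allocation rather than the placeholder here):

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class SafeOpen {
    // Open a channel and run follow-up setup; if the setup throws (e.g. an
    // OutOfMemoryError while sizing buffers), close the channel before
    // propagating, so the file descriptor is not leaked.
    static FileChannel openWithSetup(Path path) throws IOException {
        FileChannel channel = FileChannel.open(path, StandardOpenOption.READ);
        try {
            // ... allocation or other setup that may throw goes here ...
            return channel;
        }
        catch (Throwable t) {
            channel.close();   // release on failure, then rethrow
            throw t;
        }
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("hints", ".tmp");
        try (FileChannel ch = openWithSetup(tmp)) {
            System.out.println(ch.isOpen());  // true
        }
        Files.deleteIfExists(tmp);
    }
}
```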






[jira] [Updated] (CASSANDRA-15407) Hint-dispatcher file-channel not closed, if "open()" fails with OOM

2019-11-12 Thread Ekaterina Dimitrova (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ekaterina Dimitrova updated CASSANDRA-15407:

 Bug Category: Parent values: Correctness(12982)Level 1 values: Test 
Failure(12990)
   Complexity: Normal
  Component/s: Consistency/Hints
Discovered By: User Report
Fix Version/s: 4.x
Reviewers: Robert Stupp
 Severity: Low
   Status: Open  (was: Triage Needed)

> Hint-dispatcher file-channel not closed, if "open()" fails with OOM
> ---
>
> Key: CASSANDRA-15407
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15407
> Project: Cassandra
>  Issue Type: Bug
>  Components: Consistency/Hints
>Reporter: Ekaterina Dimitrova
>Assignee: Ekaterina Dimitrova
>Priority: Normal
> Fix For: 4.x
>
>
> Some places in the code base do not close the file (some channel-proxy) in 
> case of errors. We should close the channel-proxy in those cases, at least 
> to avoid making the situation (already an OOM) even worse.






[jira] [Updated] (CASSANDRA-15411) [9.8] [CVE-2017-5929] [Cassandra] [2.2.5]

2019-11-12 Thread Abhishek Singh (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhishek Singh updated CASSANDRA-15411:
---
Description: 
*Description :* *Severity :* CVE CVSS 3.0: 9.8; Sonatype CVSS 3.0: 9.8
  
  *Weakness :* CVE CWE: 502
  
  *Source :* National Vulnerability Database
  
  *Categories :* Data 
  *Description from CVE :* QOS.ch Logback before 1.2.0 has a serialization 
vulnerability affecting the SocketServer and ServerSocketReceiver components.
  
  *Explanation :* The RemoteStreamAppenderClient class in logback-classic and 
the SocketNode classes in logback-classic and logback-access allow data to be 
deserialized over a Java Socket, via an ObjectInputStream, without validating 
the data beforehand. When data is received from the Socket, to be logged, it is 
deserialized into Java objects. An attacker can exploit this vulnerability by 
sending malicious, serialized Java objects over the connection to the Socket, 
which may result in execution of arbitrary code when those objects are 
deserialized. Note that although logback-core is implicated by the Logback 
project here, the Sonatype Security Research team discovered that the 
vulnerability is actually present in the logback-classic and logback-access 
components, in versions prior to 1.2.0, as stated in the advisory. 
  *Detection :* The application is vulnerable by using this component. 
  *Recommendation :* We recommend upgrading to a version of this component that 
is not vulnerable to this specific issue. 
  *Root Cause :* Cassandra-2.2.5.nupkg : SocketNode.class : [1.0.12,1.2.0)
  
  *Advisories :* Project: [https://logback.qos.ch/news.html]
  
  *CVSS Details :* CVE CVSS 3.0: 9.8
 *Occurrences (Paths) :* 
["TSO/solaris_bao_server_installer_8_3_00.tar/files/hdb/apache-cassandra.zip/bin/cassandra.in.bat"
 ; 
"TSO/solaris_bao_server_installer_8_3_00.tar/files/hdb/apache-cassandra.zip/bin/cassandra.in.sh"
 ; 
"TSO/solaris_bao_server_installer_8_3_00.tar/files/hdb/apache-cassandra.zip/bin/cqlsh.bat"
 ; 
"TSO/solaris_bao_server_installer_8_3_00.tar/files/hdb/apache-cassandra.zip/bin/debug-cql.bat"
 ; 
"TSO/solaris_bao_server_installer_8_3_00.tar/files/hdb/apache-cassandra.zip/bin/source-conf.ps1"
 ; 
"TSO/solaris_bao_server_installer_8_3_00.tar/files/hdb/apache-cassandra.zip/bin/sstableloader.bat"
 ; 
"TSO/solaris_bao_server_installer_8_3_00.tar/files/hdb/apache-cassandra.zip/bin/sstablescrub.bat"
 ; 
"TSO/solaris_bao_server_installer_8_3_00.tar/files/hdb/apache-cassandra.zip/bin/sstableupgrade.bat"
 ; 
"TSO/solaris_bao_server_installer_8_3_00.tar/files/hdb/apache-cassandra.zip/bin/sstableverify.bat"
 ; 
"TSO/solaris_bao_server_installer_8_3_00.tar/files/hdb/apache-cassandra.zip/bin/stop-server"
 ; 
"TSO/solaris_bao_server_installer_8_3_00.tar/files/hdb/apache-cassandra.zip/bin/stop-server.bat"
 ; 
"TSO/solaris_bao_server_installer_8_3_00.tar/files/hdb/apache-cassandra.zip/bin/stop-server.ps1"
 ; 
"TSO/solaris_bao_server_installer_8_3_00.tar/files/hdb/apache-cassandra.zip/conf/README.txt"
 ; 
"TSO/solaris_bao_server_installer_8_3_00.tar/files/hdb/apache-cassandra.zip/conf/cassandra-rackdc.properties"
 ; 
"TSO/solaris_bao_server_installer_8_3_00.tar/files/hdb/apache-cassandra.zip/conf/cassandra-topology.properties"
 ; 
"TSO/solaris_bao_server_installer_8_3_00.tar/files/hdb/apache-cassandra.zip/conf/commitlog_archiving.properties"
 ; 
"TSO/solaris_bao_server_installer_8_3_00.tar/files/hdb/apache-cassandra.zip/conf/triggers/README.txt"
 ; 
"TSO/solaris_bao_server_installer_8_3_00.tar/files/hdb/apache-cassandra.zip/lib/ST4-4.0.8.jar"
 ; 
"TSO/solaris_bao_server_installer_8_3_00.tar/files/hdb/apache-cassandra.zip/lib/airline-0.6.jar"
 ; 
"TSO/solaris_bao_server_installer_8_3_00.tar/files/hdb/apache-cassandra.zip/lib/antlr-runtime-3.5.2.jar"
 ; 
"TSO/solaris_bao_server_installer_8_3_00.tar/files/hdb/apache-cassandra.zip/lib/commons-cli-1.1.jar"
 ; 
"TSO/solaris_bao_server_installer_8_3_00.tar/files/hdb/apache-cassandra.zip/lib/commons-lang3-3.1.jar"
 ; 
"TSO/solaris_bao_server_installer_8_3_00.tar/files/hdb/apache-cassandra.zip/lib/commons-math3-3.2.jar"
 ; 
"TSO/solaris_bao_server_installer_8_3_00.tar/files/hdb/apache-cassandra.zip/lib/compress-lzf-0.8.4.jar"
 ; 
"TSO/solaris_bao_server_installer_8_3_00.tar/files/hdb/apache-cassandra.zip/lib/concurrentlinkedhashmap-lru-1.4.jar"
 ; 
"TSO/solaris_bao_server_installer_8_3_00.tar/files/hdb/apache-cassandra.zip/lib/disruptor-3.0.1.jar"
 ; 
"TSO/solaris_bao_server_installer_8_3_00.tar/files/hdb/apache-cassandra.zip/lib/ecj-4.4.2.jar"
 ; 
"TSO/solaris_bao_server_installer_8_3_00.tar/files/hdb/apache-cassandra.zip/lib/futures-2.1.6-py2.py3-none-any.zip"
 ; 
"TSO/solaris_bao_server_installer_8_3_00.tar/files/hdb/apache-cassandra.zip/lib/high-scale-lib-1.0.6.jar"
 ; 
"TSO/solaris_bao_server_installer_8_3_00.tar/files/hdb/apache-cassandra.zip/lib/jamm-0.3.0.jar"
 ; 

[jira] [Created] (CASSANDRA-15411) [9.8] [CVE-2017-5929] [Cassandra] [2.2.5]

2019-11-12 Thread Abhishek Singh (Jira)
Abhishek Singh created CASSANDRA-15411:
--

 Summary: [9.8] [CVE-2017-5929] [Cassandra] [2.2.5]
 Key: CASSANDRA-15411
 URL: https://issues.apache.org/jira/browse/CASSANDRA-15411
 Project: Cassandra
  Issue Type: Bug
Reporter: Abhishek Singh


*Description :* *Severity :* CVE CVSS 3.0: 9.8; Sonatype CVSS 3.0: 9.8
 
 *Weakness :* CVE CWE: 502
 
 *Source :* National Vulnerability Database
 
 *Categories :* Data 
 *Description from CVE :* QOS.ch Logback before 1.2.0 has a serialization 
vulnerability affecting the SocketServer and ServerSocketReceiver components.
 
 *Explanation :* The RemoteStreamAppenderClient class in logback-classic and 
the SocketNode classes in logback-classic and logback-access allow data to be 
deserialized over a Java Socket, via an ObjectInputStream, without validating 
the data beforehand. When data is received from the Socket, to be logged, it is 
deserialized into Java objects. An attacker can exploit this vulnerability by 
sending malicious, serialized Java objects over the connection to the Socket, 
which may result in execution of arbitrary code when those objects are 
deserialized. Note that although logback-core is implicated by the Logback 
project here, the Sonatype Security Research team discovered that the 
vulnerability is actually present in the logback-classic and logback-access 
components, in versions prior to 1.2.0, as stated in the advisory. 
 *Detection :* The application is vulnerable by using this component. 
 *Recommendation :* We recommend upgrading to a version of this component that 
is not vulnerable to this specific issue. 
 *Root Cause :* Cassandra-2.2.5.nupkg : SocketNode.class : [1.0.12,1.2.0)
 
 *Advisories :* Project: https://logback.qos.ch/news.html
 
 *CVSS Details :* CVE CVSS 3.0: 9.8
*Occurrences (Paths) :* 
["TSO/solaris_bao_server_installer_8_3_00.tar/files/hdb/apache-cassandra.zip/bin/cassandra.in.bat"
 ; 
"TSO/solaris_bao_server_installer_8_3_00.tar/files/hdb/apache-cassandra.zip/bin/cassandra.in.sh"
 ; 
"TSO/solaris_bao_server_installer_8_3_00.tar/files/hdb/apache-cassandra.zip/bin/cqlsh.bat"
 ; 
"TSO/solaris_bao_server_installer_8_3_00.tar/files/hdb/apache-cassandra.zip/bin/debug-cql.bat"
 ; 
"TSO/solaris_bao_server_installer_8_3_00.tar/files/hdb/apache-cassandra.zip/bin/source-conf.ps1"
 ; 
"TSO/solaris_bao_server_installer_8_3_00.tar/files/hdb/apache-cassandra.zip/bin/sstableloader.bat"
 ; 
"TSO/solaris_bao_server_installer_8_3_00.tar/files/hdb/apache-cassandra.zip/bin/sstablescrub.bat"
 ; 
"TSO/solaris_bao_server_installer_8_3_00.tar/files/hdb/apache-cassandra.zip/bin/sstableupgrade.bat"
 ; 
"TSO/solaris_bao_server_installer_8_3_00.tar/files/hdb/apache-cassandra.zip/bin/sstableverify.bat"
 ; 
"TSO/solaris_bao_server_installer_8_3_00.tar/files/hdb/apache-cassandra.zip/bin/stop-server"
 ; 
"TSO/solaris_bao_server_installer_8_3_00.tar/files/hdb/apache-cassandra.zip/bin/stop-server.bat"
 ; 
"TSO/solaris_bao_server_installer_8_3_00.tar/files/hdb/apache-cassandra.zip/bin/stop-server.ps1"
 ; 
"TSO/solaris_bao_server_installer_8_3_00.tar/files/hdb/apache-cassandra.zip/conf/README.txt"
 ; 
"TSO/solaris_bao_server_installer_8_3_00.tar/files/hdb/apache-cassandra.zip/conf/cassandra-rackdc.properties"
 ; 
"TSO/solaris_bao_server_installer_8_3_00.tar/files/hdb/apache-cassandra.zip/conf/cassandra-topology.properties"
 ; 
"TSO/solaris_bao_server_installer_8_3_00.tar/files/hdb/apache-cassandra.zip/conf/commitlog_archiving.properties"
 ; 
"TSO/solaris_bao_server_installer_8_3_00.tar/files/hdb/apache-cassandra.zip/conf/triggers/README.txt"
 ; 
"TSO/solaris_bao_server_installer_8_3_00.tar/files/hdb/apache-cassandra.zip/lib/ST4-4.0.8.jar"
 ; 
"TSO/solaris_bao_server_installer_8_3_00.tar/files/hdb/apache-cassandra.zip/lib/airline-0.6.jar"
 ; 
"TSO/solaris_bao_server_installer_8_3_00.tar/files/hdb/apache-cassandra.zip/lib/antlr-runtime-3.5.2.jar"
 ; 
"TSO/solaris_bao_server_installer_8_3_00.tar/files/hdb/apache-cassandra.zip/lib/commons-cli-1.1.jar"
 ; 
"TSO/solaris_bao_server_installer_8_3_00.tar/files/hdb/apache-cassandra.zip/lib/commons-lang3-3.1.jar"
 ; 
"TSO/solaris_bao_server_installer_8_3_00.tar/files/hdb/apache-cassandra.zip/lib/commons-math3-3.2.jar"
 ; 
"TSO/solaris_bao_server_installer_8_3_00.tar/files/hdb/apache-cassandra.zip/lib/compress-lzf-0.8.4.jar"
 ; 
"TSO/solaris_bao_server_installer_8_3_00.tar/files/hdb/apache-cassandra.zip/lib/concurrentlinkedhashmap-lru-1.4.jar"
 ; 
"TSO/solaris_bao_server_installer_8_3_00.tar/files/hdb/apache-cassandra.zip/lib/disruptor-3.0.1.jar"
 ; 
"TSO/solaris_bao_server_installer_8_3_00.tar/files/hdb/apache-cassandra.zip/lib/ecj-4.4.2.jar"
 ; 
"TSO/solaris_bao_server_installer_8_3_00.tar/files/hdb/apache-cassandra.zip/lib/futures-2.1.6-py2.py3-none-any.zip"
 ; 
"TSO/solaris_bao_server_installer_8_3_00.tar/files/hdb/apache-cassandra.zip/lib/high-scale-lib-1.0.6.jar"
 ; 

[jira] [Assigned] (CASSANDRA-15410) Avoid over-allocation of bytes for UTF8 string serialization

2019-11-12 Thread Abhishek Singh (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhishek Singh reassigned CASSANDRA-15410:
--

Assignee: Abhishek Singh  (was: Yifan Cai)

> Avoid over-allocation of bytes for UTF8 string serialization 
> -
>
> Key: CASSANDRA-15410
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15410
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Messaging/Client
>Reporter: Yifan Cai
>Assignee: Abhishek Singh
>Priority: Normal
>
> The current message encoding implementation first calculates the 
> `encodeSize` and allocates a ByteBuffer of that size. 
> However, during encoding it assumes the worst case for writing a UTF8 
> string, i.e. that each character takes 3 bytes. 
> The over-estimation further leads to resizing the underlying array and 
> copying the data. 






[jira] [Updated] (CASSANDRA-15367) Memtable memory allocations may deadlock

2019-11-12 Thread Benedict Elliott Smith (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benedict Elliott Smith updated CASSANDRA-15367:
---
Status: Open  (was: Patch Available)

> Memtable memory allocations may deadlock
> 
>
> Key: CASSANDRA-15367
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15367
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local/Commit Log, Local/Memtable
>Reporter: Benedict Elliott Smith
>Assignee: Benedict Elliott Smith
>Priority: Normal
> Fix For: 4.0, 2.2.x, 3.0.x, 3.11.x
>
>
> * Under heavy contention, we guard modifications to a partition with a mutex, 
> for the lifetime of the memtable.
> * Memtables block for the completion of all {{OpOrder.Group}}s started before 
> their flush began.
> * Memtables permit operations from this cohort to fall through to the 
> following Memtable, in order to guarantee a precise commitLogUpperBound.
> * Memtable memory limits may be lifted for operations in the first cohort, 
> since they block flush (and hence block future memory allocation).
> With very unfortunate scheduling:
> * A contended partition may rapidly escalate to a mutex.
> * The system may reach memory limits that prevent allocations for the new 
> Memtable’s cohort (C2).
> * An operation from C2 may hold the mutex when this occurs.
> * Operations from a prior Memtable’s cohort (C1), for a contended partition, 
> may fall through to the next Memtable.
> * The operations from C1 may execute after the above is encountered by those 
> from C2.
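A minimal reduction of that interleaving can be sketched with plain JDK primitives (hypothetical toy names, not Cassandra's `OpOrder` or memtable code): a C2 operation holds the partition mutex while blocked on the memory limit, and the C1 operation that would eventually free memory needs that same mutex:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Semaphore;
import java.util.concurrent.locks.ReentrantLock;

public class DeadlockSketch {
    static final ReentrantLock partitionMutex = new ReentrantLock();
    static final Semaphore memoryPermits = new Semaphore(0); // memory limit already reached

    // Returns true if both operations are still blocked after the timeout.
    static boolean deadlocks() throws InterruptedException {
        CountDownLatch c2HoldsMutex = new CountDownLatch(1);

        Thread c2 = new Thread(() -> {
            partitionMutex.lock();                  // C2 wins the contended partition mutex
            c2HoldsMutex.countDown();
            memoryPermits.acquireUninterruptibly(); // blocks: no memory until C1 completes and flush proceeds
            partitionMutex.unlock();
        });
        Thread c1 = new Thread(() -> {
            try { c2HoldsMutex.await(); } catch (InterruptedException e) { return; }
            partitionMutex.lock();                  // C1 fall-through op needs the same mutex: blocks
            memoryPermits.release();                // would have unblocked flush and freed memory
            partitionMutex.unlock();
        });
        c2.setDaemon(true);
        c1.setDaemon(true);
        c2.start();
        c1.start();
        c2.join(500);
        c1.join(500);
        return c2.isAlive() && c1.isAlive();        // neither thread can make progress
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("deadlocked: " + deadlocks()); // deadlocked: true
    }
}
```

The cycle is the classic lock-ordering inversion: C2 holds the mutex and waits on memory; memory waits on flush; flush waits on C1; C1 waits on the mutex.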






[jira] [Comment Edited] (CASSANDRA-15368) Failing to flush Memtable without terminating process results in permanent data loss

2019-11-12 Thread Benedict Elliott Smith (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16972280#comment-16972280
 ] 

Benedict Elliott Smith edited comment on CASSANDRA-15368 at 11/12/19 11:05 AM:
---

Hmm.  I had convinced myself that this occurred already, and that 
CASSANDRA-15367 simply exploited the fact that the start of the commit log 
region owned by a Memtable could actually occur in either Memtable.  But now 
that I attempt to properly construct the scenario, I see that I was wrong, and 
the past me that wrote the bounds logic was better at this stuff.  You're 
right, CASSANDRA-15367 introduces, rather than exploits, this issue.


was (Author: benedict):
Hmm.  I had convinced myself that this occurred already, and that 
CASSANDRA-15367 simply exploited the fact that the start of the commit log 
region owned by a Memtable could actually occur in either Memtable (as opposed 
to the end, which was contiguous).  But now that I attempt to properly 
construct the scenario, I see that I was wrong, and the past me that wrote the 
bounds logic was better at this stuff.  You're right, CASSANDRA-15367 
introduces, rather than exploits, this issue.

> Failing to flush Memtable without terminating process results in permanent 
> data loss
> 
>
> Key: CASSANDRA-15368
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15368
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local/Commit Log, Local/Memtable
>Reporter: Benedict Elliott Smith
>Priority: Normal
> Fix For: 4.0, 2.2.x, 3.0.x, 3.11.x
>
>
> {{Memtable}}s do not contain records that cover a precise contiguous range of 
> {{ReplayPosition}}s, since there are only weak ordering constraints when 
> rolling over to a new {{Memtable}} - the last operations for the old 
> {{Memtable}} may obtain their {{ReplayPosition}} after the first operations 
> for the new {{Memtable}}.
> Unfortunately, we treat the {{Memtable}} range as contiguous and invalidate 
> the entire range on flush.  Ordinarily we only invalidate records when all 
> prior {{Memtable}}s have also successfully flushed.  However, in the event of 
> a flush that does not terminate the process (either because of disk failure 
> policy, or because it is a software error), the later flush is able to 
> invalidate the region of the commit log that includes records that should 
> have been flushed in the prior {{Memtable}}.
> More problematically, this can also occur on restart without any associated 
> flush failure, as we use the commit log boundaries written to our flushed 
> sstables to filter {{ReplayPosition}}s on recovery, which is meant to 
> replicate our {{Memtable}} flush behaviour above.  However, we do not know 
> that earlier flushes have completed, and they may complete successfully 
> out-of-order.  So any flush that completes before the process terminates, but 
> began after another flush that _doesn’t_ complete before the process 
> terminates, has the potential to cause permanent data loss.
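The restart case can be modelled with a toy replay filter (illustrative positions and names only, not Cassandra's recovery code): the flushed new memtable records a "contiguous" boundary that swallows an old-memtable record positioned inside it:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class ReplayFilterSketch {
    // Commit log: position -> owning memtable. At rollover, the old memtable's
    // last record (position 4) lands AFTER the new memtable's first (position 3).
    static final Map<Integer, String> RECORDS = new TreeMap<>(Map.of(
            1, "old", 2, "old", 3, "new", 4, "old", 5, "new"));

    // The new memtable flushed successfully, so recovery filters out everything
    // inside its recorded boundary [flushedLo, flushedHi] - even though the old
    // memtable's flush never completed and record 4 was never persisted.
    static List<Integer> replayedPositions(int flushedLo, int flushedHi) {
        List<Integer> replayed = new ArrayList<>();
        for (int pos : RECORDS.keySet()) {
            boolean invalidated = pos >= flushedLo && pos <= flushedHi;
            if (!invalidated) replayed.add(pos);
        }
        return replayed;
    }

    public static void main(String[] args) {
        // Record 4 ("old", unflushed) is skipped on replay: permanent loss.
        System.out.println(replayedPositions(3, 5)); // [1, 2]
    }
}
```

Treating the flushed boundary as a contiguous interval is exactly the assumption the report identifies: position 4 belongs to the old memtable but falls inside the new memtable's recorded range, so recovery never replays it.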






[jira] [Updated] (CASSANDRA-15368) Failing to flush Memtable without terminating process results in permanent data loss

2019-11-12 Thread Benedict Elliott Smith (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benedict Elliott Smith updated CASSANDRA-15368:
---
Resolution: Invalid
Status: Resolved  (was: Open)

> Failing to flush Memtable without terminating process results in permanent 
> data loss
> 
>
> Key: CASSANDRA-15368
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15368
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local/Commit Log, Local/Memtable
>Reporter: Benedict Elliott Smith
>Priority: Normal
> Fix For: 4.0, 2.2.x, 3.0.x, 3.11.x
>
>
> {{Memtable}}s do not contain records that cover a precise contiguous range of 
> {{ReplayPosition}}s, since there are only weak ordering constraints when 
> rolling over to a new {{Memtable}} - the last operations for the old 
> {{Memtable}} may obtain their {{ReplayPosition}} after the first operations 
> for the new {{Memtable}}.
> Unfortunately, we treat the {{Memtable}} range as contiguous and invalidate 
> the entire range on flush.  Ordinarily we only invalidate records when all 
> prior {{Memtable}}s have also successfully flushed.  However, in the event of 
> a flush that does not terminate the process (either because of disk failure 
> policy, or because it is a software error), the later flush is able to 
> invalidate the region of the commit log that includes records that should 
> have been flushed in the prior {{Memtable}}.
> More problematically, this can also occur on restart without any associated 
> flush failure, as we use the commit log boundaries written to our flushed 
> sstables to filter {{ReplayPosition}}s on recovery, which is meant to 
> replicate our {{Memtable}} flush behaviour above.  However, we do not know 
> that earlier flushes have completed, and they may complete successfully 
> out-of-order.  So any flush that completes before the process terminates, but 
> began after another flush that _doesn’t_ complete before the process 
> terminates, has the potential to cause permanent data loss.






[jira] [Commented] (CASSANDRA-15368) Failing to flush Memtable without terminating process results in permanent data loss

2019-11-12 Thread Benedict Elliott Smith (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16972280#comment-16972280
 ] 

Benedict Elliott Smith commented on CASSANDRA-15368:


Hmm.  I had convinced myself that this occurred already, and that 
CASSANDRA-15367 simply exploited the fact that the start of the commit log 
region owned by a Memtable could actually occur in either Memtable (as opposed 
to the end, which was contiguous).  But now that I attempt to properly 
construct the scenario, I see that I was wrong, and the past me that wrote the 
bounds logic was better at this stuff.  You're right, CASSANDRA-15367 
introduces, rather than exploits, this issue.

> Failing to flush Memtable without terminating process results in permanent 
> data loss
> 
>
> Key: CASSANDRA-15368
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15368
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local/Commit Log, Local/Memtable
>Reporter: Benedict Elliott Smith
>Priority: Normal
> Fix For: 4.0, 2.2.x, 3.0.x, 3.11.x
>
>
> {{Memtable}}s do not contain records that cover a precise contiguous range of 
> {{ReplayPosition}}s, since there are only weak ordering constraints when 
> rolling over to a new {{Memtable}} - the last operations for the old 
> {{Memtable}} may obtain their {{ReplayPosition}} after the first operations 
> for the new {{Memtable}}.
> Unfortunately, we treat the {{Memtable}} range as contiguous and invalidate 
> the entire range on flush.  Ordinarily we only invalidate records when all 
> prior {{Memtable}}s have also successfully flushed.  However, in the event of 
> a flush that does not terminate the process (either because of disk failure 
> policy, or because it is a software error), the later flush is able to 
> invalidate the region of the commit log that includes records that should 
> have been flushed in the prior {{Memtable}}.
> More problematically, this can also occur on restart without any associated 
> flush failure, as we use the commit log boundaries written to our flushed 
> sstables to filter {{ReplayPosition}}s on recovery, which is meant to 
> replicate our {{Memtable}} flush behaviour above.  However, we do not know 
> that earlier flushes have completed, and they may complete successfully 
> out-of-order.  So any flush that completes before the process terminates, but 
> began after another flush that _doesn’t_ complete before the process 
> terminates, has the potential to cause permanent data loss.


