[jira] [Updated] (CASSANDRA-10597) Error: unmappable character for encoding MS949 in ant build-test task.
[ https://issues.apache.org/jira/browse/CASSANDRA-10597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jaebin Lee updated CASSANDRA-10597: --- Attachment: trunk-10597.txt The ant build-test task fix patch file. > Error: unmappable character for encoding MS949 in ant build-test task. > -- > > Key: CASSANDRA-10597 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10597 > Project: Cassandra > Issue Type: Bug > Components: Tests, Tools > Environment: Windows 7, IntelliJ Idea >Reporter: Jaebin Lee >Priority: Trivial > Labels: build, easyfix, newbie, windows > Fix For: 3.x > > Attachments: trunk-10597.txt > > > While setting up a new Cassandra project, noticed "build-test" ant task fails > due to "unmappable character for encoding MS949". > An addition of encoding="UTF-8" parameter fixed this. > {code:xml} > <javac debug="true" > debuglevel="${debuglevel}" > destdir="${test.classes}" > includeantruntime="false" > source="${source.version}" > target="${target.version}" > encoding="UTF-8"> > {code} > Error stack trace below: > {noformat} > Compiling 365 source files to ~\workspace\cassandra\build\test\classes > ~\workspace\cassandra\test\unit\org\apache\cassandra\security\CipherFactoryTest.java > (22:37)error: unmappable character for encoding MS949 > 1 error > ~\workspace\cassandra\build.xml:1127: Compile failed; see the compiler error > output for details. > at org.apache.tools.ant.taskdefs.Javac.compile(Javac.java:1180) > at org.apache.tools.ant.taskdefs.Javac.execute(Javac.java:935) > at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:292) > at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:497) > at > org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106) > at org.apache.tools.ant.Task.perform(Task.java:348) > at org.apache.tools.ant.Target.execute(Target.java:435) > at org.apache.tools.ant.Target.performTasks(Target.java:456) > at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1393) > at org.apache.tools.ant.Project.executeTarget(Project.java:1364) > at > org.apache.tools.ant.helper.DefaultExecutor.executeTargets(DefaultExecutor.java:41) > at org.apache.tools.ant.Project.executeTargets(Project.java:1248) > at org.apache.tools.ant.Main.runBuild(Main.java:851) > at org.apache.tools.ant.Main.startAnt(Main.java:235) > at org.apache.tools.ant.Main.start(Main.java:198) > at org.apache.tools.ant.Main.main(Main.java:286) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:497) > at com.intellij.rt.ant.execution.AntMain2.main(AntMain2.java:30) > ~\workspace\cassandra\build.xml:1127: Compile failed; see the compiler error > output for details. 
> at org.apache.tools.ant.taskdefs.Javac.compile(Javac.java:1180) > at org.apache.tools.ant.taskdefs.Javac.execute(Javac.java:935) > at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:292) > at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:497) > at > org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106) > at org.apache.tools.ant.Task.perform(Task.java:348) > at org.apache.tools.ant.Target.execute(Target.java:435) > at org.apache.tools.ant.Target.performTasks(Target.java:456) > at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1393) > at org.apache.tools.ant.Project.executeTarget(Project.java:1364) > at > org.apache.tools.ant.helper.DefaultExecutor.executeTargets(DefaultExecutor.java:41) > at org.apache.tools.ant.Project.executeTargets(Project.java:1248) > at org.apache.tools.ant.Main.runBuild(Main.java:851) > at org.apache.tools.ant.Main.startAnt(Main.java:235) > at org.apache.tools.ant.Main.start(Main.java:198) > at org.apache.tools.ant.Main.main(Main.java:286) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(
[jira] [Updated] (CASSANDRA-10597) Error: unmappable character for encoding MS949 in ant build-test task.
[ https://issues.apache.org/jira/browse/CASSANDRA-10597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jaebin Lee updated CASSANDRA-10597: --- Description: While setting up a new Cassandra project, noticed "build-test" ant task fails due to "unmappable character for encoding MS949". An addition of encoding="UTF-8" parameter fixed this. {code:xml} {code} Error stack trace below: {noformat} Compiling 365 source files to ~\workspace\cassandra\build\test\classes ~\workspace\cassandra\test\unit\org\apache\cassandra\security\CipherFactoryTest.java (22:37)error: unmappable character for encoding MS949 1 error ~\workspace\cassandra\build.xml:1127: Compile failed; see the compiler error output for details. at org.apache.tools.ant.taskdefs.Javac.compile(Javac.java:1180) at org.apache.tools.ant.taskdefs.Javac.execute(Javac.java:935) at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:292) at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:497) at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106) at org.apache.tools.ant.Task.perform(Task.java:348) at org.apache.tools.ant.Target.execute(Target.java:435) at org.apache.tools.ant.Target.performTasks(Target.java:456) at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1393) at org.apache.tools.ant.Project.executeTarget(Project.java:1364) at org.apache.tools.ant.helper.DefaultExecutor.executeTargets(DefaultExecutor.java:41) at org.apache.tools.ant.Project.executeTargets(Project.java:1248) at org.apache.tools.ant.Main.runBuild(Main.java:851) at org.apache.tools.ant.Main.startAnt(Main.java:235) at org.apache.tools.ant.Main.start(Main.java:198) at org.apache.tools.ant.Main.main(Main.java:286) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:497) at com.intellij.rt.ant.execution.AntMain2.main(AntMain2.java:30) ~\workspace\cassandra\build.xml:1127: Compile failed; see the compiler error output for details. 
at org.apache.tools.ant.taskdefs.Javac.compile(Javac.java:1180) at org.apache.tools.ant.taskdefs.Javac.execute(Javac.java:935) at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:292) at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:497) at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106) at org.apache.tools.ant.Task.perform(Task.java:348) at org.apache.tools.ant.Target.execute(Target.java:435) at org.apache.tools.ant.Target.performTasks(Target.java:456) at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1393) at org.apache.tools.ant.Project.executeTarget(Project.java:1364) at org.apache.tools.ant.helper.DefaultExecutor.executeTargets(DefaultExecutor.java:41) at org.apache.tools.ant.Project.executeTargets(Project.java:1248) at org.apache.tools.ant.Main.runBuild(Main.java:851) at org.apache.tools.ant.Main.startAnt(Main.java:235) at org.apache.tools.ant.Main.start(Main.java:198) at org.apache.tools.ant.Main.main(Main.java:286) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:497) at com.intellij.rt.ant.execution.AntMain2.main(AntMain2.java:30) Ant build completed with 4 errors and no warnings in 58s {noformat} was: While setting up a new Cassandra project, noticed "build-test" ant task fails due to "unmappable character for encoding MS949". An addition of encoding="UTF-8" parameter fixed this. {code:xml} {code} Error stack trace below: Compiling 365 source files to ~\workspace\cassandra\build\test\classes ~\workspace\cassandra\test\unit\org\apache\cassandra\security\CipherFactoryTest.java (22:37)error: unmappable character for encoding MS949 1 error ~\workspace\cassandra\build.xml:1127: Compile failed; see the compiler error output for details. at org.apache.tools.ant.taskdefs.Javac.compile(Javac.java:1180) at org.apache.tools.ant.taskdefs.Javac
[jira] [Updated] (CASSANDRA-10597) Error: unmappable character for encoding MS949 in ant build-test task.
[ https://issues.apache.org/jira/browse/CASSANDRA-10597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jaebin Lee updated CASSANDRA-10597: --- Description: While setting up a new Cassandra project, noticed "build-test" ant task fails due to "unmappable character for encoding MS949". An addition of encoding="UTF-8" parameter fixed this. {code:xml} {code} Error stack trace below: Compiling 365 source files to ~\workspace\cassandra\build\test\classes ~\workspace\cassandra\test\unit\org\apache\cassandra\security\CipherFactoryTest.java (22:37)error: unmappable character for encoding MS949 1 error ~\workspace\cassandra\build.xml:1127: Compile failed; see the compiler error output for details. at org.apache.tools.ant.taskdefs.Javac.compile(Javac.java:1180) at org.apache.tools.ant.taskdefs.Javac.execute(Javac.java:935) at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:292) at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:497) at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106) at org.apache.tools.ant.Task.perform(Task.java:348) at org.apache.tools.ant.Target.execute(Target.java:435) at org.apache.tools.ant.Target.performTasks(Target.java:456) at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1393) at org.apache.tools.ant.Project.executeTarget(Project.java:1364) at org.apache.tools.ant.helper.DefaultExecutor.executeTargets(DefaultExecutor.java:41) at org.apache.tools.ant.Project.executeTargets(Project.java:1248) at org.apache.tools.ant.Main.runBuild(Main.java:851) at org.apache.tools.ant.Main.startAnt(Main.java:235) at org.apache.tools.ant.Main.start(Main.java:198) at org.apache.tools.ant.Main.main(Main.java:286) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:497) at com.intellij.rt.ant.execution.AntMain2.main(AntMain2.java:30) ~\workspace\cassandra\build.xml:1127: Compile failed; see the compiler error output for details. 
at org.apache.tools.ant.taskdefs.Javac.compile(Javac.java:1180) at org.apache.tools.ant.taskdefs.Javac.execute(Javac.java:935) at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:292) at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:497) at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106) at org.apache.tools.ant.Task.perform(Task.java:348) at org.apache.tools.ant.Target.execute(Target.java:435) at org.apache.tools.ant.Target.performTasks(Target.java:456) at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1393) at org.apache.tools.ant.Project.executeTarget(Project.java:1364) at org.apache.tools.ant.helper.DefaultExecutor.executeTargets(DefaultExecutor.java:41) at org.apache.tools.ant.Project.executeTargets(Project.java:1248) at org.apache.tools.ant.Main.runBuild(Main.java:851) at org.apache.tools.ant.Main.startAnt(Main.java:235) at org.apache.tools.ant.Main.start(Main.java:198) at org.apache.tools.ant.Main.main(Main.java:286) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:497) at com.intellij.rt.ant.execution.AntMain2.main(AntMain2.java:30) Ant build completed with 4 errors and no warnings in 58s was: While setting up a new Cassandra project, noticed "build-test" ant task fails due to "unmappable character for encoding MS949". An addition of encoding="UTF-8" parameter fixed this. ``` ``` Error stack trace below: Compiling 365 source files to ~\workspace\cassandra\build\test\classes ~\workspace\cassandra\test\unit\org\apache\cassandra\security\CipherFactoryTest.java (22:37)error: unmappable character for encoding MS949 1 error ~\workspace\cassandra\build.xml:1127: Compile failed; see the compiler error output for details. at org.apache.tools.ant.taskdefs.Javac.compile(Javac.java:1180) at org.apache.tools.ant.taskdefs.Javac.execute(Javac.java:935)
[jira] [Updated] (CASSANDRA-10597) Error: unmappable character for encoding MS949 in ant build-test task.
[ https://issues.apache.org/jira/browse/CASSANDRA-10597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jaebin Lee updated CASSANDRA-10597: --- Description: While setting up a new Cassandra project, noticed "build-test" ant task fails due to "unmappable character for encoding MS949". An addition of encoding="UTF-8" parameter fixed this. ``` ``` Error stack trace below: Compiling 365 source files to ~\workspace\cassandra\build\test\classes ~\workspace\cassandra\test\unit\org\apache\cassandra\security\CipherFactoryTest.java (22:37)error: unmappable character for encoding MS949 1 error ~\workspace\cassandra\build.xml:1127: Compile failed; see the compiler error output for details. at org.apache.tools.ant.taskdefs.Javac.compile(Javac.java:1180) at org.apache.tools.ant.taskdefs.Javac.execute(Javac.java:935) at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:292) at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:497) at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106) at org.apache.tools.ant.Task.perform(Task.java:348) at org.apache.tools.ant.Target.execute(Target.java:435) at org.apache.tools.ant.Target.performTasks(Target.java:456) at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1393) at org.apache.tools.ant.Project.executeTarget(Project.java:1364) at org.apache.tools.ant.helper.DefaultExecutor.executeTargets(DefaultExecutor.java:41) at org.apache.tools.ant.Project.executeTargets(Project.java:1248) at org.apache.tools.ant.Main.runBuild(Main.java:851) at org.apache.tools.ant.Main.startAnt(Main.java:235) at org.apache.tools.ant.Main.start(Main.java:198) at org.apache.tools.ant.Main.main(Main.java:286) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:497) at com.intellij.rt.ant.execution.AntMain2.main(AntMain2.java:30) ~\workspace\cassandra\build.xml:1127: Compile failed; see the compiler error output for details. 
at org.apache.tools.ant.taskdefs.Javac.compile(Javac.java:1180) at org.apache.tools.ant.taskdefs.Javac.execute(Javac.java:935) at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:292) at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:497) at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106) at org.apache.tools.ant.Task.perform(Task.java:348) at org.apache.tools.ant.Target.execute(Target.java:435) at org.apache.tools.ant.Target.performTasks(Target.java:456) at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1393) at org.apache.tools.ant.Project.executeTarget(Project.java:1364) at org.apache.tools.ant.helper.DefaultExecutor.executeTargets(DefaultExecutor.java:41) at org.apache.tools.ant.Project.executeTargets(Project.java:1248) at org.apache.tools.ant.Main.runBuild(Main.java:851) at org.apache.tools.ant.Main.startAnt(Main.java:235) at org.apache.tools.ant.Main.start(Main.java:198) at org.apache.tools.ant.Main.main(Main.java:286) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:497) at com.intellij.rt.ant.execution.AntMain2.main(AntMain2.java:30) Ant build completed with 4 errors and no warnings in 58s was: While setting up a new Cassandra project, noticed "build-test" ant task fails due to "unmappable character for encoding MS949". An addition of encoding="UTF-8" parameter fixed this. Error stack trace below: Compiling 365 source files to ~\workspace\cassandra\build\test\classes ~\workspace\cassandra\test\unit\org\apache\cassandra\security\CipherFactoryTest.java (22:37)error: unmappable character for encoding MS949 1 error ~\workspace\cassandra\build.xml:1127: Compile failed; see the compiler error output for details. at org.apache.tools.ant.taskdefs.Javac.compile(Javac.java:1180) at org.apache.tools.ant.taskdefs.Javac.execute(Javac.java:935) at org.apache.too
[jira] [Created] (CASSANDRA-10597) Error: unmappable character for encoding MS949 in ant build-test task.
Jaebin Lee created CASSANDRA-10597: -- Summary: Error: unmappable character for encoding MS949 in ant build-test task. Key: CASSANDRA-10597 URL: https://issues.apache.org/jira/browse/CASSANDRA-10597 Project: Cassandra Issue Type: Bug Components: Tests, Tools Environment: Windows 7, IntelliJ Idea Reporter: Jaebin Lee Priority: Trivial Fix For: 3.x While setting up a new Cassandra project, noticed "build-test" ant task fails due to "unmappable character for encoding MS949". An addition of encoding="UTF-8" parameter fixed this. Error stack trace below: Compiling 365 source files to ~\workspace\cassandra\build\test\classes ~\workspace\cassandra\test\unit\org\apache\cassandra\security\CipherFactoryTest.java (22:37)error: unmappable character for encoding MS949 1 error ~\workspace\cassandra\build.xml:1127: Compile failed; see the compiler error output for details. at org.apache.tools.ant.taskdefs.Javac.compile(Javac.java:1180) at org.apache.tools.ant.taskdefs.Javac.execute(Javac.java:935) at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:292) at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:497) at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106) at org.apache.tools.ant.Task.perform(Task.java:348) at org.apache.tools.ant.Target.execute(Target.java:435) at org.apache.tools.ant.Target.performTasks(Target.java:456) at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1393) at org.apache.tools.ant.Project.executeTarget(Project.java:1364) at org.apache.tools.ant.helper.DefaultExecutor.executeTargets(DefaultExecutor.java:41) at org.apache.tools.ant.Project.executeTargets(Project.java:1248) at org.apache.tools.ant.Main.runBuild(Main.java:851) at org.apache.tools.ant.Main.startAnt(Main.java:235) at org.apache.tools.ant.Main.start(Main.java:198) at org.apache.tools.ant.Main.main(Main.java:286) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:497) at com.intellij.rt.ant.execution.AntMain2.main(AntMain2.java:30) ~\workspace\cassandra\build.xml:1127: Compile failed; see the compiler error output for details. 
at org.apache.tools.ant.taskdefs.Javac.compile(Javac.java:1180) at org.apache.tools.ant.taskdefs.Javac.execute(Javac.java:935) at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:292) at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:497) at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106) at org.apache.tools.ant.Task.perform(Task.java:348) at org.apache.tools.ant.Target.execute(Target.java:435) at org.apache.tools.ant.Target.performTasks(Target.java:456) at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1393) at org.apache.tools.ant.Project.executeTarget(Project.java:1364) at org.apache.tools.ant.helper.DefaultExecutor.executeTargets(DefaultExecutor.java:41) at org.apache.tools.ant.Project.executeTargets(Project.java:1248) at org.apache.tools.ant.Main.runBuild(Main.java:851) at org.apache.tools.ant.Main.startAnt(Main.java:235) at org.apache.tools.ant.Main.start(Main.java:198) at org.apache.tools.ant.Main.main(Main.java:286) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:497) at com.intellij.rt.ant.execution.AntMain2.main(AntMain2.java:30) Ant build completed with 4 errors and no warnings in 58s -- This message was sent by Atlassian JIRA (v6.3.4#6332)
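The failure above comes from javac falling back to the platform default charset when no encoding is given: Ant's <javac> task passes no -encoding flag unless the attribute is set, and on a Korean-locale Windows machine the default is MS949, so any source character outside that codepage aborts the compile. Forcing encoding="UTF-8", as in the attached patch, removes the dependency on the build machine's locale. A small, self-contained Java sketch of the underlying check follows; the probe character U+0151 is an assumption for illustration, not the actual character at CipherFactoryTest.java line 22.
{code:java}
import java.nio.charset.Charset;
import java.nio.charset.CharsetEncoder;

public class EncodingProbe
{
    public static void main(String[] args)
    {
        // Hypothetical probe character (U+0151); the real offending character at
        // CipherFactoryTest.java:22 is not reproduced here.
        char probe = args.length > 0 ? args[0].charAt(0) : '\u0151';

        // "MS949" is the codepage named in the error above; it resolves only if
        // the JDK's extended charsets are present on this JVM.
        for (String name : new String[]{ "MS949", "UTF-8" })
        {
            if (!Charset.isSupported(name))
            {
                System.out.println(name + " is not available on this JVM");
                continue;
            }
            CharsetEncoder encoder = Charset.forName(name).newEncoder();
            System.out.printf("%s can encode U+%04X: %b%n", name, (int) probe, encoder.canEncode(probe));
        }
    }
}
{code}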
[jira] [Commented] (CASSANDRA-10365) Consider storing types by their CQL names in schema tables instead of fully-qualified internal class names
[ https://issues.apache.org/jira/browse/CASSANDRA-10365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14975377#comment-14975377 ] Aleksey Yeschenko commented on CASSANDRA-10365: --- The third (in-progress) commit switches types in schema tables to their canonical CQL representation. That includes changes to 'functions' and 'aggregates' tables to no longer keep a surrogate {{signature}} column. Only non-nested (in anything) UDTs are supported there. Unfortunately there is an annoying bug in {{TypeParser}} that breaks down for some types that don't even involve UDTs (think {{frozen>, frozen>>>}}). Need a bit more time to resolve that. In the meantime, this unblocks drivers work/critical path. > Consider storing types by their CQL names in schema tables instead of > fully-qualified internal class names > -- > > Key: CASSANDRA-10365 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10365 > Project: Cassandra > Issue Type: Improvement >Reporter: Aleksey Yeschenko >Assignee: Aleksey Yeschenko > Labels: client-impacting > Fix For: 3.0.0 > > > Consider saving CQL type names for column, UDF/UDA arguments and return > types, and UDT components. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
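For contrast, these are the two spellings the comment is about: the old schema tables stored fully-qualified marshal class strings, while the canonical CQL representation is what users type in DDL. The pairs below are hand-written examples for illustration only, not output of {{TypeParser}}.
{code:java}
import java.util.LinkedHashMap;
import java.util.Map;

public class TypeNameExamples
{
    public static void main(String[] args)
    {
        // Hand-written pairs: internal marshal class string vs. CQL spelling.
        Map<String, String> internalToCql = new LinkedHashMap<>();
        internalToCql.put("org.apache.cassandra.db.marshal.Int32Type", "int");
        internalToCql.put("org.apache.cassandra.db.marshal.UTF8Type", "text");
        internalToCql.put("org.apache.cassandra.db.marshal.ListType(org.apache.cassandra.db.marshal.UTF8Type)", "list<text>");

        internalToCql.forEach((internal, cql) -> System.out.println(internal + "  ->  " + cql));
    }
}
{code}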
[jira] [Commented] (CASSANDRA-10554) Batch that updates two or more table can produce unreadable SSTable (was: Auto Bootstraping a new node fails)
[ https://issues.apache.org/jira/browse/CASSANDRA-10554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14975281#comment-14975281 ] Alan Boudreault commented on CASSANDRA-10554: - The patch works well. I'll rework on the bootstraping stuff tomorrow morning > Batch that updates two or more table can produce unreadable SSTable (was: > Auto Bootstraping a new node fails) > - > > Key: CASSANDRA-10554 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10554 > Project: Cassandra > Issue Type: Bug >Reporter: Alan Boudreault >Assignee: Sylvain Lebresne >Priority: Blocker > Fix For: 3.0.0 > > Attachments: 0001-Add-debug.txt, 10554.cql, debug.log, system.log, > test.sh > > > I've been trying to add a new node in my 3.0 cluster and it seems to fail. > All my nodes are using apache/cassandra-3.0.0 branch. At the beginning, I can > see the following error: > {code} > INFO 18:45:55 [Stream #9f95fa90-7691-11e5-931f-5b735851f84a ID#0] Prepare > completed. Receiving 42 files(1910066622 bytes), sending 0 files(0 bytes) > WARN 18:45:55 [Stream #9f95fa90-7691-11e5-931f-5b735851f84a] Retrying for > following error > java.lang.RuntimeException: Unknown column added_time during deserialization > at > org.apache.cassandra.db.SerializationHeader$Component.toHeader(SerializationHeader.java:331) > ~[main/:na] > at > org.apache.cassandra.streaming.StreamReader.createWriter(StreamReader.java:136) > ~[main/:na] > at > org.apache.cassandra.streaming.compress.CompressedStreamReader.read(CompressedStreamReader.java:77) > ~[main/:na] > at > org.apache.cassandra.streaming.messages.IncomingFileMessage$1.deserialize(IncomingFileMessage.java:50) > [main/:na] > at > org.apache.cassandra.streaming.messages.IncomingFileMessage$1.deserialize(IncomingFileMessage.java:39) > [main/:na] > at > org.apache.cassandra.streaming.messages.StreamMessage.deserialize(StreamMessage.java:59) > [main/:na] > at > org.apache.cassandra.streaming.ConnectionHandler$IncomingMessageHandler.run(ConnectionHandler.java:261) > [main/:na] > at java.lang.Thread.run(Thread.java:745) [na:1.8.0_45] > ERROR 18:45:55 [Stream #9f95fa90-7691-11e5-931f-5b735851f84a] Streaming error > occurred > java.lang.IllegalArgumentException: Unknown type 0 > at > org.apache.cassandra.streaming.messages.StreamMessage$Type.get(StreamMessage.java:97) > ~[main/:na] > at > org.apache.cassandra.streaming.messages.StreamMessage.deserialize(StreamMessage.java:58) > ~[main/:na] > at > org.apache.cassandra.streaming.ConnectionHandler$IncomingMessageHandler.run(ConnectionHandler.java:261) > ~[main/:na] > at java.lang.Thread.run(Thread.java:745) [na:1.8.0_45] > INFO 18:45:55 [Stream #9f95fa90-7691-11e5-931f-5b735851f84a] Session with > /54.210.187.114 is complete > INFO 18:45:56 [Stream #9f95fa90-7691-11e5-931f-5b735851f84a ID#0] Prepare > completed. 
Receiving 38 files(2323537628 bytes), sending 0 files(0 bytes) > WARN 18:45:56 [Stream #9f95fa90-7691-11e5-931f-5b735851f84a] Retrying for > following error > java.lang.RuntimeException: Unknown column added_time during deserialization > at > org.apache.cassandra.db.SerializationHeader$Component.toHeader(SerializationHeader.java:331) > ~[main/:na] > at > org.apache.cassandra.streaming.StreamReader.createWriter(StreamReader.java:136) > ~[main/:na] > at > org.apache.cassandra.streaming.compress.CompressedStreamReader.read(CompressedStreamReader.java:77) > ~[main/:na] > at > org.apache.cassandra.streaming.messages.IncomingFileMessage$1.deserialize(IncomingFileMessage.java:50) > [main/:na] > at > org.apache.cassandra.streaming.messages.IncomingFileMessage$1.deserialize(IncomingFileMessage.java:39) > [main/:na] > at > org.apache.cassandra.streaming.messages.StreamMessage.deserialize(StreamMessage.java:59) > [main/:na] > at > org.apache.cassandra.streaming.ConnectionHandler$IncomingMessageHandler.run(ConnectionHandler.java:261) > [main/:na] > at java.lang.Thread.run(Thread.java:745) [na:1.8.0_45] > ERROR 18:45:56 [Stream #9f95fa90-7691-11e5-931f-5b735851f84a] Streaming error > occurred > java.lang.IllegalArgumentException: Unknown type 0 > at > org.apache.cassandra.streaming.messages.StreamMessage$Type.get(StreamMessage.java:97) > ~[main/:na] > at > org.apache.cassandra.streaming.messages.StreamMessage.deserialize(StreamMessage.java:58) > ~[main/:na] > at > org.apache.cassandra.streaming.ConnectionHandler$IncomingMessageHandler.run(ConnectionHandler.java:261) > ~[main/:na] > at java.lang.Thread.run(Thread.java:745)
[jira] [Commented] (CASSANDRA-10569) Keyspace validation errors are getting lost in system_add_keyspace
[ https://issues.apache.org/jira/browse/CASSANDRA-10569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14975246#comment-14975246 ] Paulo Motta commented on CASSANDRA-10569: - LGTM, tested locally and failing dtest passes after patch is applied. Marking as ready to commit! Thanks! > Keyspace validation errors are getting lost in system_add_keyspace > -- > > Key: CASSANDRA-10569 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10569 > Project: Cassandra > Issue Type: Bug > Components: Core >Reporter: Mike Adamson >Assignee: Sam Tunnicliffe > Fix For: 3.0.0 > > > The following: > {noformat} > cassandraserver.system_add_keyspace( > new KsDef("xxx", SimpleStrategy.class.getSimpleName(), > Lists.newArrayList())); > {noformat} > used to throw an {{InvalidRequestException}} in 2.1. > In 3.0 the strategy validation has been removed from > {{KeyspaceMetadata.validate}} so the strategy errors don't get picked up > until the schema change has been announced. As a result the > {{ConfigurationError}} is swallowed in {{FBUtilities.waitOnFuture}} and > thrown on as a {{RuntimeException}}. > This possibly affects {{system_update_keyspace}} as well. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
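The pattern the ticket describes is generic: an exception thrown on another thread comes back to the caller wrapped in an {{ExecutionException}}, and unless the waiting code unwraps the cause, the client only ever sees a generic {{RuntimeException}}. Below is a self-contained sketch of that wrap/unwrap behaviour, using a made-up {{ConfigError}} class as a stand-in rather than Cassandra's own exception types or {{FBUtilities}} code.
{code:java}
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class UnwrapExample
{
    // Stand-in for a validation/configuration exception type.
    static class ConfigError extends RuntimeException
    {
        ConfigError(String msg) { super(msg); }
    }

    static <T> T waitUnwrapping(Future<T> future)
    {
        try
        {
            return future.get();
        }
        catch (ExecutionException e)
        {
            // Surface the original validation error rather than burying it.
            if (e.getCause() instanceof ConfigError)
                throw (ConfigError) e.getCause();
            throw new RuntimeException(e);
        }
        catch (InterruptedException e)
        {
            Thread.currentThread().interrupt();
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args)
    {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        Future<Void> future = pool.submit((Callable<Void>) () -> {
            throw new ConfigError("invalid keyspace definition");
        });
        try
        {
            waitUnwrapping(future);
        }
        catch (ConfigError expected)
        {
            System.out.println("caught: " + expected.getMessage());
        }
        pool.shutdown();
    }
}
{code}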
[jira] [Commented] (CASSANDRA-10485) Missing host ID on hinted handoff write
[ https://issues.apache.org/jira/browse/CASSANDRA-10485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14975122#comment-14975122 ] Paulo Motta commented on CASSANDRA-10485: - It seems pending endpoints are removed from the {{TokenMetadata}} before the new pending ranges are calculated by {{StorageService}}: {code:title=StorageService.java|borderStyle=solid} public void onRemove(InetAddress endpoint) { tokenMetadata.removeEndpoint(endpoint); PendingRangeCalculatorService.instance.update(); } {code} So, there's a window where nodes can be > Missing host ID on hinted handoff write > --- > > Key: CASSANDRA-10485 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10485 > Project: Cassandra > Issue Type: Bug >Reporter: Paulo Motta >Assignee: Paulo Motta > > when I restart one of them I receive the error "Missing host ID": > {noformat} > WARN [SharedPool-Worker-1] 2015-10-08 13:15:33,882 > AbstractTracingAwareExecutorService.java:169 - Uncaught exception on thread > Thread[SharedPool-Worker-1,5,main]: {} > java.lang.AssertionError: Missing host ID for 63.251.156.141 > at > org.apache.cassandra.service.StorageProxy.writeHintForMutation(StorageProxy.java:978) > ~[apache-cassandra-2.1.3.jar:2.1.3] > at > org.apache.cassandra.service.StorageProxy$6.runMayThrow(StorageProxy.java:950) > ~[apache-cassandra-2.1.3.jar:2.1.3] > at > org.apache.cassandra.service.StorageProxy$HintRunnable.run(StorageProxy.java:2235) > ~[apache-cassandra-2.1.3.jar:2.1.3] > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > ~[na:1.8.0_60] > at > org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164) > ~[apache-cassandra-2.1.3.jar:2.1.3] > at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) > [apache-cassandra-2.1.3.jar:2.1.3] > at java.lang.Thread.run(Thread.java:745) [na:1.8.0_60] > {noformat} > If I made nodetool status, the problematic node has ID: > {noformat} > UN 10.10.10.12 1.3 TB 1 ? > 4d5c8fd2-a909-4f09-a23c-4cd6040f338a rack3 > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
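A minimal, self-contained sketch of the defensive check this window seems to call for, with a plain map standing in for {{TokenMetadata}}'s endpoint-to-host-ID view (this is an illustration under that assumption, not the actual patch): if the target's host ID is already gone from the ring view, skip the hint instead of letting the write thread die on an {{AssertionError}}.
{code:java}
import java.net.InetAddress;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class HintTargetCheck
{
    // Stand-in for TokenMetadata's endpoint-to-host-ID view.
    static final ConcurrentMap<InetAddress, UUID> endpointToHostId = new ConcurrentHashMap<>();

    /** Returns true if a hint would be stored, false if the target was skipped. */
    static boolean maybeHint(InetAddress target)
    {
        UUID hostId = endpointToHostId.get(target);
        if (hostId == null)
        {
            // The endpoint was already dropped from the ring view (the window
            // described above), so skip it instead of failing the write thread.
            System.out.println("Skipping hint for " + target + ": no host ID");
            return false;
        }
        System.out.println("Storing hint for host " + hostId);
        return true;
    }

    public static void main(String[] args) throws Exception
    {
        maybeHint(InetAddress.getByName("127.0.0.1")); // unknown endpoint -> skipped
    }
}
{code}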
[jira] [Resolved] (CASSANDRA-10596) AssertionError in ReadCommand after upgrade to 3.0
[ https://issues.apache.org/jira/browse/CASSANDRA-10596?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Philip Thompson resolved CASSANDRA-10596. - Resolution: Not A Problem > AssertionError in ReadCommand after upgrade to 3.0 > -- > > Key: CASSANDRA-10596 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10596 > Project: Cassandra > Issue Type: Bug > Components: Core >Reporter: Philip Thompson > Fix For: 3.0.0 > > Attachments: node1.log, node2.log, node3.log > > > The dtest > {{upgrade_through_versions_test.TestUpgrade_from_2_2_latest_tag_to_cassandra_3_0_HEAD.rolling_upgrade_test}} > is failing. > See: > http://cassci.datastax.com/view/Upgrades/job/cassandra_upgrade_to_3.0_proto_v4/4/testReport/upgrade_through_versions_test/TestUpgrade_from_2_2_latest_tag_to_cassandra_3_0_HEAD/rolling_upgrade_test_2/ > > The following exception shows up in the log > {code} > Unexpected error in node1 node log: ['ERROR [SharedPool-Worker-2] 2015-08-17 > 22:30:30,531 Message.java:611 - Unexpected exception during request; channel > = [id: 0xee05d108, /127.0.0.1:39640 => /127.0.0.1:9042] > java.lang.AssertionError: null \tat > org.apache.cassandra.db.ReadCommand$Serializer.serializedSize(ReadCommand.java:520) > ~[main/:na] \tat > org.apache.cassandra.db.ReadCommand$Serializer.serializedSize(ReadCommand.java:461) > ~[main/:na] \tat > org.apache.cassandra.net.MessageOut.payloadSize(MessageOut.java:166) > ~[main/:na] \tat > org.apache.cassandra.net.OutboundTcpConnectionPool.getConnection(OutboundTcpConnectionPool.java:72) > ~[main/:na] \tat > org.apache.cassandra.net.MessagingService.getConnection(MessagingService.java:583) > ~[main/:na] \tat > org.apache.cassandra.net.MessagingService.sendOneWay(MessagingService.java:733) > ~[main/:na] \tat > org.apache.cassandra.net.MessagingService.sendRR(MessagingService.java:676) > ~[main/:na] \tat > org.apache.cassandra.net.MessagingService.sendRRWithFailure(MessagingService.java:659) > ~[main/:na] \tat > org.apache.cassandra.service.AbstractReadExecutor.makeRequests(AbstractReadExecutor.java:103) > ~[main/:na] \tat > org.apache.cassandra.service.AbstractReadExecutor.makeDataRequests(AbstractReadExecutor.java:76) > ~[main/:na] \tat > org.apache.cassandra.service.AbstractReadExecutor$AlwaysSpeculatingReadExecutor.executeAsync(AbstractReadExecutor.java:323) > ~[main/:na] \tat > org.apache.cassandra.service.StorageProxy$SinglePartitionReadLifecycle.doInitialQueries(StorageProxy.java:1599) > ~[main/:na] \tat > org.apache.cassandra.service.StorageProxy.fetchRows(StorageProxy.java:1554) > ~[main/:na] \tat > org.apache.cassandra.service.StorageProxy.readRegular(StorageProxy.java:1501) > ~[main/:na] \tat > org.apache.cassandra.service.StorageProxy.read(StorageProxy.java:1420) > ~[main/:na] \tat > org.apache.cassandra.db.SinglePartitionReadCommand$Group.execute(SinglePartitionReadCommand.java:457) > ~[main/:na] \tat > org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:232) > ~[main/:na] \tat > org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:202) > ~[main/:na] \tat > org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:72) > ~[main/:na] \tat > org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:204) > ~[main/:na] \tat > org.apache.cassandra.cql3.QueryProcessor.processPrepared(QueryProcessor.java:470) > ~[main/:na] \tat > org.apache.cassandra.cql3.QueryProcessor.processPrepared(QueryProcessor.java:447) > ~[main/:na] \tat > 
org.apache.cassandra.transport.messages.ExecuteMessage.execute(ExecuteMessage.java:139) > ~[main/:na] \tat > org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:507) > [main/:na] \tat > org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:401) > [main/:na] \tat > io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105) > [netty-all-4.0.23.Final.jar:4.0.23.Final] \tat > io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333) > [netty-all-4.0.23.Final.jar:4.0.23.Final] \tat > io.netty.channel.AbstractChannelHandlerContext.access$700(AbstractChannelHandlerContext.java:32) > [netty-all-4.0.23.Final.jar:4.0.23.Final] \tat > io.netty.channel.AbstractChannelHandlerContext$8.run(AbstractChannelHandlerContext.java:324) > [netty-all-4.0.23.Final.jar:4.0.23.Final] \tat > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > [na:1.8.0_51] \tat > org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164) > [main/:na] \tat > org.apache.cassandra.concurrent.SEPWorker.run
[jira] [Commented] (CASSANDRA-10596) AssertionError in ReadCommand after upgrade to 3.0
[ https://issues.apache.org/jira/browse/CASSANDRA-10596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14975074#comment-14975074 ] Philip Thompson commented on CASSANDRA-10596: - It appears there are different failures occurring in the newest tests. Closing this, sorry. > AssertionError in ReadCommand after upgrade to 3.0 > -- > > Key: CASSANDRA-10596 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10596 > Project: Cassandra > Issue Type: Bug > Components: Core >Reporter: Philip Thompson > Fix For: 3.0.0 > > Attachments: node1.log, node2.log, node3.log > > > The dtest > {{upgrade_through_versions_test.TestUpgrade_from_2_2_latest_tag_to_cassandra_3_0_HEAD.rolling_upgrade_test}} > is failing. > See: > http://cassci.datastax.com/view/Upgrades/job/cassandra_upgrade_to_3.0_proto_v4/4/testReport/upgrade_through_versions_test/TestUpgrade_from_2_2_latest_tag_to_cassandra_3_0_HEAD/rolling_upgrade_test_2/ > > The following exception shows up in the log > {code} > Unexpected error in node1 node log: ['ERROR [SharedPool-Worker-2] 2015-08-17 > 22:30:30,531 Message.java:611 - Unexpected exception during request; channel > = [id: 0xee05d108, /127.0.0.1:39640 => /127.0.0.1:9042] > java.lang.AssertionError: null \tat > org.apache.cassandra.db.ReadCommand$Serializer.serializedSize(ReadCommand.java:520) > ~[main/:na] \tat > org.apache.cassandra.db.ReadCommand$Serializer.serializedSize(ReadCommand.java:461) > ~[main/:na] \tat > org.apache.cassandra.net.MessageOut.payloadSize(MessageOut.java:166) > ~[main/:na] \tat > org.apache.cassandra.net.OutboundTcpConnectionPool.getConnection(OutboundTcpConnectionPool.java:72) > ~[main/:na] \tat > org.apache.cassandra.net.MessagingService.getConnection(MessagingService.java:583) > ~[main/:na] \tat > org.apache.cassandra.net.MessagingService.sendOneWay(MessagingService.java:733) > ~[main/:na] \tat > org.apache.cassandra.net.MessagingService.sendRR(MessagingService.java:676) > ~[main/:na] \tat > org.apache.cassandra.net.MessagingService.sendRRWithFailure(MessagingService.java:659) > ~[main/:na] \tat > org.apache.cassandra.service.AbstractReadExecutor.makeRequests(AbstractReadExecutor.java:103) > ~[main/:na] \tat > org.apache.cassandra.service.AbstractReadExecutor.makeDataRequests(AbstractReadExecutor.java:76) > ~[main/:na] \tat > org.apache.cassandra.service.AbstractReadExecutor$AlwaysSpeculatingReadExecutor.executeAsync(AbstractReadExecutor.java:323) > ~[main/:na] \tat > org.apache.cassandra.service.StorageProxy$SinglePartitionReadLifecycle.doInitialQueries(StorageProxy.java:1599) > ~[main/:na] \tat > org.apache.cassandra.service.StorageProxy.fetchRows(StorageProxy.java:1554) > ~[main/:na] \tat > org.apache.cassandra.service.StorageProxy.readRegular(StorageProxy.java:1501) > ~[main/:na] \tat > org.apache.cassandra.service.StorageProxy.read(StorageProxy.java:1420) > ~[main/:na] \tat > org.apache.cassandra.db.SinglePartitionReadCommand$Group.execute(SinglePartitionReadCommand.java:457) > ~[main/:na] \tat > org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:232) > ~[main/:na] \tat > org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:202) > ~[main/:na] \tat > org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:72) > ~[main/:na] \tat > org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:204) > ~[main/:na] \tat > org.apache.cassandra.cql3.QueryProcessor.processPrepared(QueryProcessor.java:470) > ~[main/:na] \tat > 
org.apache.cassandra.cql3.QueryProcessor.processPrepared(QueryProcessor.java:447) > ~[main/:na] \tat > org.apache.cassandra.transport.messages.ExecuteMessage.execute(ExecuteMessage.java:139) > ~[main/:na] \tat > org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:507) > [main/:na] \tat > org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:401) > [main/:na] \tat > io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105) > [netty-all-4.0.23.Final.jar:4.0.23.Final] \tat > io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333) > [netty-all-4.0.23.Final.jar:4.0.23.Final] \tat > io.netty.channel.AbstractChannelHandlerContext.access$700(AbstractChannelHandlerContext.java:32) > [netty-all-4.0.23.Final.jar:4.0.23.Final] \tat > io.netty.channel.AbstractChannelHandlerContext$8.run(AbstractChannelHandlerContext.java:324) > [netty-all-4.0.23.Final.jar:4.0.23.Final] \tat > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > [na:1.8.0_51] \tat > org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTas
[jira] [Commented] (CASSANDRA-10596) AssertionError in ReadCommand after upgrade to 3.0
[ https://issues.apache.org/jira/browse/CASSANDRA-10596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14975047#comment-14975047 ] Philip Thompson commented on CASSANDRA-10596: - Hold off on assigning this, these results are remarkably out of date. I'm re-running the upgrade suites. I wonder why CI isn't on-going here, I'll have someone look at that as well. > AssertionError in ReadCommand after upgrade to 3.0 > -- > > Key: CASSANDRA-10596 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10596 > Project: Cassandra > Issue Type: Bug > Components: Core >Reporter: Philip Thompson > Fix For: 3.0.0 > > Attachments: node1.log, node2.log, node3.log > > > The dtest > {{upgrade_through_versions_test.TestUpgrade_from_2_2_latest_tag_to_cassandra_3_0_HEAD.rolling_upgrade_test}} > is failing. > See: > http://cassci.datastax.com/view/Upgrades/job/cassandra_upgrade_to_3.0_proto_v4/4/testReport/upgrade_through_versions_test/TestUpgrade_from_2_2_latest_tag_to_cassandra_3_0_HEAD/rolling_upgrade_test_2/ > > The following exception shows up in the log > {code} > Unexpected error in node1 node log: ['ERROR [SharedPool-Worker-2] 2015-08-17 > 22:30:30,531 Message.java:611 - Unexpected exception during request; channel > = [id: 0xee05d108, /127.0.0.1:39640 => /127.0.0.1:9042] > java.lang.AssertionError: null \tat > org.apache.cassandra.db.ReadCommand$Serializer.serializedSize(ReadCommand.java:520) > ~[main/:na] \tat > org.apache.cassandra.db.ReadCommand$Serializer.serializedSize(ReadCommand.java:461) > ~[main/:na] \tat > org.apache.cassandra.net.MessageOut.payloadSize(MessageOut.java:166) > ~[main/:na] \tat > org.apache.cassandra.net.OutboundTcpConnectionPool.getConnection(OutboundTcpConnectionPool.java:72) > ~[main/:na] \tat > org.apache.cassandra.net.MessagingService.getConnection(MessagingService.java:583) > ~[main/:na] \tat > org.apache.cassandra.net.MessagingService.sendOneWay(MessagingService.java:733) > ~[main/:na] \tat > org.apache.cassandra.net.MessagingService.sendRR(MessagingService.java:676) > ~[main/:na] \tat > org.apache.cassandra.net.MessagingService.sendRRWithFailure(MessagingService.java:659) > ~[main/:na] \tat > org.apache.cassandra.service.AbstractReadExecutor.makeRequests(AbstractReadExecutor.java:103) > ~[main/:na] \tat > org.apache.cassandra.service.AbstractReadExecutor.makeDataRequests(AbstractReadExecutor.java:76) > ~[main/:na] \tat > org.apache.cassandra.service.AbstractReadExecutor$AlwaysSpeculatingReadExecutor.executeAsync(AbstractReadExecutor.java:323) > ~[main/:na] \tat > org.apache.cassandra.service.StorageProxy$SinglePartitionReadLifecycle.doInitialQueries(StorageProxy.java:1599) > ~[main/:na] \tat > org.apache.cassandra.service.StorageProxy.fetchRows(StorageProxy.java:1554) > ~[main/:na] \tat > org.apache.cassandra.service.StorageProxy.readRegular(StorageProxy.java:1501) > ~[main/:na] \tat > org.apache.cassandra.service.StorageProxy.read(StorageProxy.java:1420) > ~[main/:na] \tat > org.apache.cassandra.db.SinglePartitionReadCommand$Group.execute(SinglePartitionReadCommand.java:457) > ~[main/:na] \tat > org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:232) > ~[main/:na] \tat > org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:202) > ~[main/:na] \tat > org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:72) > ~[main/:na] \tat > org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:204) > ~[main/:na] \tat > 
org.apache.cassandra.cql3.QueryProcessor.processPrepared(QueryProcessor.java:470) > ~[main/:na] \tat > org.apache.cassandra.cql3.QueryProcessor.processPrepared(QueryProcessor.java:447) > ~[main/:na] \tat > org.apache.cassandra.transport.messages.ExecuteMessage.execute(ExecuteMessage.java:139) > ~[main/:na] \tat > org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:507) > [main/:na] \tat > org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:401) > [main/:na] \tat > io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105) > [netty-all-4.0.23.Final.jar:4.0.23.Final] \tat > io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333) > [netty-all-4.0.23.Final.jar:4.0.23.Final] \tat > io.netty.channel.AbstractChannelHandlerContext.access$700(AbstractChannelHandlerContext.java:32) > [netty-all-4.0.23.Final.jar:4.0.23.Final] \tat > io.netty.channel.AbstractChannelHandlerContext$8.run(AbstractChannelHandlerContext.java:324) > [netty-all-4.0.23.Final.jar:4.0.23.Final] \tat > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > [na:1.8.
[jira] [Commented] (CASSANDRA-10501) Failure to start up Cassandra when temporary compaction files are not all renamed after kill/crash (FSReadError)
[ https://issues.apache.org/jira/browse/CASSANDRA-10501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14974967#comment-14974967 ] Yuki Morishita commented on CASSANDRA-10501: Patch looks good to me, though can you run tests on cassci to make sure it doesn't break anything? > Failure to start up Cassandra when temporary compaction files are not all > renamed after kill/crash (FSReadError) > > > Key: CASSANDRA-10501 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10501 > Project: Cassandra > Issue Type: Bug > Components: Core > Environment: Cassandra 2.1.6 > Redhat Linux >Reporter: Mathieu Roy >Assignee: Marcus Eriksson > Labels: compaction, triage > Fix For: 2.1.x, 2.2.x, 3.0.0 > > > We have seen an issue intermittently but repeatedly over the last few months > where, after exiting the Cassandra process, it fails to start with an > FSReadError (stack trace below). The FSReadError refers to a 'statistics' > file that doesn't exist, though a corresponding temporary file does > exist (eg. there is no > /media/data/cassandraDB/data/clusteradmin/singleton_token-01a92ed069b511e59b2c53679a538c14/clusteradmin-singleton_token-ka-9-Statistics.db > file, but there is a > /media/data/cassandraDB/data/clusteradmin/singleton_token-01a92ed069b511e59b2c53679a538c14/clusteradmin-singleton_token-tmp-ka-9-Statistics.db > file.) > We tracked down the issue to the fact that the process exited with leftover > compactions and some of the 'tmp' files for the SSTable had been renamed to > final files, but not all of them - the issue happens if the 'Statistics' file > is not renamed but others are. The scenario we've seen on the last two > occurrences involves the 'CompressionInfo' file being a final file while all > other files for the SSTable generation were left with 'tmp' names. > When this occurs, Cassandra cannot start until the file issue is resolved; > we've worked around it by deleting the SSTable files from the same > generation, both final and tmp, which at least allows Cassandra to start. > Renaming all files to either tmp or final names would also work. > We've done some debugging in Cassandra and have been unable to cause the > issue without renaming the files manually. The rename code at > SSTableWriter.rename() looks like it could result in this if the process > exits in the middle of the rename, but in every occurrence we've debugged > through, the Set of components is ordered and Statistics is the first file > renamed. > However the comments in SSTableWriter.rename() suggest that the 'Data' file > is meant to indicate that the files were completely renamed. The method > ColumnFamilyStore.removeUnfinishedCompactionLeftovers(), however, will > proceed assuming the compaction is complete if any of the component files has > a final name, and will skip temporary files when reading the list. If the > 'Statistics' file is temporary then it won't be read, and the defaults do > not include a list of ancestors, leading to the NullPointerException. > It appears that ColumnFamilyStore.removeUnfinishedCompactionLeftovers() > should perhaps either ensure that all 'tmp' files are properly renamed before > it uses them, or skip SSTable files that don't have either the 'Data' or > 'Statistics' file in final form. > Stack trace: > {code} > FSReadError in Failed to remove unfinished compaction leftovers (file: > /media/data/cassandraDB/data/clusteradmin/singleton_token-01a92ed069b511e59b2c53679a538c14/clusteradmin-singleton_token-ka-9-Statistics.db). 
> See log for details. > at > org.apache.cassandra.db.ColumnFamilyStore.removeUnfinishedCompactionLeftovers(ColumnFamilyStore.java:617) > at > org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:302) > at > org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:536) > at > org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:625) > Caused by: java.lang.NullPointerException > at > org.apache.cassandra.db.ColumnFamilyStore.removeUnfinishedCompactionLeftovers(ColumnFamilyStore.java:609) > ... 3 more > Exception encountered during startup: java.lang.NullPointerException > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
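A self-contained sketch of the second option suggested in the report, i.e. treating a leftover compaction result as usable only when both the Data and Statistics components exist under their final names. The file-name layout follows the 2.1 examples quoted above; the class and method names here are made up for illustration and are not Cassandra's API.
{code:java}
import java.io.File;

public class LeftoverGuard
{
    // 2.1-style component names, e.g.
    //   clusteradmin-singleton_token-ka-9-Statistics.db      (final)
    //   clusteradmin-singleton_token-tmp-ka-9-Statistics.db  (temporary)
    static boolean isCompletelyRenamed(File dir, String ksTable, int generation)
    {
        // Only trust the leftover if both Data and Statistics have final names.
        return hasFinalComponent(dir, ksTable, generation, "Data")
            && hasFinalComponent(dir, ksTable, generation, "Statistics");
    }

    private static boolean hasFinalComponent(File dir, String ksTable, int generation, String component)
    {
        File finalFile = new File(dir, ksTable + "-ka-" + generation + "-" + component + ".db");
        File tmpFile = new File(dir, ksTable + "-tmp-ka-" + generation + "-" + component + ".db");
        return finalFile.exists() && !tmpFile.exists();
    }

    public static void main(String[] args)
    {
        File dir = new File(args.length > 0 ? args[0] : ".");
        // Generation 9 of clusteradmin.singleton_token, as in the report above.
        System.out.println(isCompletelyRenamed(dir, "clusteradmin-singleton_token", 9));
    }
}
{code}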
[jira] [Commented] (CASSANDRA-9556) Add newer data types to cassandra stress (e.g. decimal, dates, UDTs)
[ https://issues.apache.org/jira/browse/CASSANDRA-9556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14974932#comment-14974932 ] Benjamin Lerer commented on CASSANDRA-9556: --- {quote}should we split UDTs and Tuples into a separate ticket and get all of the basic types done for this one?{quote} I am in favor of it. [~jasonstack] do you still want to provide a patch or should I reassign the ticket? > Add newer data types to cassandra stress (e.g. decimal, dates, UDTs) > > > Key: CASSANDRA-9556 > URL: https://issues.apache.org/jira/browse/CASSANDRA-9556 > Project: Cassandra > Issue Type: Bug > Components: Tools >Reporter: Jeremy Hanna >Assignee: ZhaoYang > Labels: stress > Attachments: cassandra-2.1-9556.txt, trunk-9556.txt > > > Currently you can't define a data model with decimal types and use Cassandra > stress with it. Also, I imagine that holds true with other newer data types > such as the new date and time types. Besides that, now that data models are > including user defined types, we should allow users to create those > structures with stress as well. Perhaps we could split out the UDTs into a > different ticket if it holds the other types up. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (CASSANDRA-10596) AssertionError in ReadCommand after upgrade to 3.0
Philip Thompson created CASSANDRA-10596: --- Summary: AssertionError in ReadCommand after upgrade to 3.0 Key: CASSANDRA-10596 URL: https://issues.apache.org/jira/browse/CASSANDRA-10596 Project: Cassandra Issue Type: Bug Components: Core Reporter: Philip Thompson Fix For: 3.0.0 Attachments: node1.log, node2.log, node3.log The dtest {{upgrade_through_versions_test.TestUpgrade_from_2_2_latest_tag_to_cassandra_3_0_HEAD.rolling_upgrade_test}} is failing. See: http://cassci.datastax.com/view/Upgrades/job/cassandra_upgrade_to_3.0_proto_v4/4/testReport/upgrade_through_versions_test/TestUpgrade_from_2_2_latest_tag_to_cassandra_3_0_HEAD/rolling_upgrade_test_2/ The following exception shows up in the log {code} Unexpected error in node1 node log: ['ERROR [SharedPool-Worker-2] 2015-08-17 22:30:30,531 Message.java:611 - Unexpected exception during request; channel = [id: 0xee05d108, /127.0.0.1:39640 => /127.0.0.1:9042] java.lang.AssertionError: null \tat org.apache.cassandra.db.ReadCommand$Serializer.serializedSize(ReadCommand.java:520) ~[main/:na] \tat org.apache.cassandra.db.ReadCommand$Serializer.serializedSize(ReadCommand.java:461) ~[main/:na] \tat org.apache.cassandra.net.MessageOut.payloadSize(MessageOut.java:166) ~[main/:na] \tat org.apache.cassandra.net.OutboundTcpConnectionPool.getConnection(OutboundTcpConnectionPool.java:72) ~[main/:na] \tat org.apache.cassandra.net.MessagingService.getConnection(MessagingService.java:583) ~[main/:na] \tat org.apache.cassandra.net.MessagingService.sendOneWay(MessagingService.java:733) ~[main/:na] \tat org.apache.cassandra.net.MessagingService.sendRR(MessagingService.java:676) ~[main/:na] \tat org.apache.cassandra.net.MessagingService.sendRRWithFailure(MessagingService.java:659) ~[main/:na] \tat org.apache.cassandra.service.AbstractReadExecutor.makeRequests(AbstractReadExecutor.java:103) ~[main/:na] \tat org.apache.cassandra.service.AbstractReadExecutor.makeDataRequests(AbstractReadExecutor.java:76) ~[main/:na] \tat org.apache.cassandra.service.AbstractReadExecutor$AlwaysSpeculatingReadExecutor.executeAsync(AbstractReadExecutor.java:323) ~[main/:na] \tat org.apache.cassandra.service.StorageProxy$SinglePartitionReadLifecycle.doInitialQueries(StorageProxy.java:1599) ~[main/:na] \tat org.apache.cassandra.service.StorageProxy.fetchRows(StorageProxy.java:1554) ~[main/:na] \tat org.apache.cassandra.service.StorageProxy.readRegular(StorageProxy.java:1501) ~[main/:na] \tat org.apache.cassandra.service.StorageProxy.read(StorageProxy.java:1420) ~[main/:na] \tat org.apache.cassandra.db.SinglePartitionReadCommand$Group.execute(SinglePartitionReadCommand.java:457) ~[main/:na] \tat org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:232) ~[main/:na] \tat org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:202) ~[main/:na] \tat org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:72) ~[main/:na] \tat org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:204) ~[main/:na] \tat org.apache.cassandra.cql3.QueryProcessor.processPrepared(QueryProcessor.java:470) ~[main/:na] \tat org.apache.cassandra.cql3.QueryProcessor.processPrepared(QueryProcessor.java:447) ~[main/:na] \tat org.apache.cassandra.transport.messages.ExecuteMessage.execute(ExecuteMessage.java:139) ~[main/:na] \tat org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:507) [main/:na] \tat org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:401) [main/:na] \tat 
io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105) [netty-all-4.0.23.Final.jar:4.0.23.Final] \tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333) [netty-all-4.0.23.Final.jar:4.0.23.Final] \tat io.netty.channel.AbstractChannelHandlerContext.access$700(AbstractChannelHandlerContext.java:32) [netty-all-4.0.23.Final.jar:4.0.23.Final] \tat io.netty.channel.AbstractChannelHandlerContext$8.run(AbstractChannelHandlerContext.java:324) [netty-all-4.0.23.Final.jar:4.0.23.Final] \tat java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_51] \tat org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164) [main/:na] \tat org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) [main/:na] \tat java.lang.Thread.run(Thread.java:745) [na:1.8.0_51] ERROR [SharedPool-Worker-3] 2015-08-17 22:30:30,748 {code} I've attached the system.log files from the three ccm nodes in the test. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-10592) IllegalArgumentException in DataOutputBuffer.reallocate
[ https://issues.apache.org/jira/browse/CASSANDRA-10592?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ariel Weisberg updated CASSANDRA-10592: --- Fix Version/s: 2.2.4 > IllegalArgumentException in DataOutputBuffer.reallocate > --- > > Key: CASSANDRA-10592 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10592 > Project: Cassandra > Issue Type: Bug >Reporter: Sebastian Estevez >Assignee: Ariel Weisberg > Fix For: 2.2.4, 3.0.0 > > > The following exception appeared in my logs while running a cassandra-stress > workload on master. > {code} > WARN [SharedPool-Worker-1] 2015-10-22 12:58:20,792 > AbstractTracingAwareExecutorService.java:169 - Uncaught exception on thread > Thread[SharedPool-Worker-1,5,main]: {} > java.lang.RuntimeException: java.lang.IllegalArgumentException > at > org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2366) > ~[main/:na] > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > ~[na:1.8.0_60] > at > org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164) > ~[main/:na] > at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) > [main/:na] > at java.lang.Thread.run(Thread.java:745) [na:1.8.0_60] > Caused by: java.lang.IllegalArgumentException: null > at java.nio.ByteBuffer.allocate(ByteBuffer.java:334) ~[na:1.8.0_60] > at > org.apache.cassandra.io.util.DataOutputBuffer.reallocate(DataOutputBuffer.java:63) > ~[main/:na] > at > org.apache.cassandra.io.util.DataOutputBuffer.doFlush(DataOutputBuffer.java:57) > ~[main/:na] > at > org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.write(BufferedDataOutputStreamPlus.java:132) > ~[main/:na] > at > org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.write(BufferedDataOutputStreamPlus.java:151) > ~[main/:na] > at > org.apache.cassandra.utils.ByteBufferUtil.writeWithVIntLength(ByteBufferUtil.java:296) > ~[main/:na] > at > org.apache.cassandra.db.marshal.AbstractType.writeValue(AbstractType.java:374) > ~[main/:na] > at > org.apache.cassandra.db.rows.BufferCell$Serializer.serialize(BufferCell.java:263) > ~[main/:na] > at > org.apache.cassandra.db.rows.UnfilteredSerializer.serialize(UnfilteredSerializer.java:183) > ~[main/:na] > at > org.apache.cassandra.db.rows.UnfilteredSerializer.serialize(UnfilteredSerializer.java:108) > ~[main/:na] > at > org.apache.cassandra.db.rows.UnfilteredSerializer.serialize(UnfilteredSerializer.java:96) > ~[main/:na] > at > org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:132) > ~[main/:na] > at > org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:87) > ~[main/:na] > at > org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:77) > ~[main/:na] > at > org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$Serializer.serialize(UnfilteredPartitionIterators.java:381) > ~[main/:na] > at > org.apache.cassandra.db.ReadResponse$LocalDataResponse.build(ReadResponse.java:136) > ~[main/:na] > at > org.apache.cassandra.db.ReadResponse$LocalDataResponse.(ReadResponse.java:128) > ~[main/:na] > at > org.apache.cassandra.db.ReadResponse$LocalDataResponse.(ReadResponse.java:123) > ~[main/:na] > at > org.apache.cassandra.db.ReadResponse.createDataResponse(ReadResponse.java:65) > ~[main/:na] > at > 
org.apache.cassandra.db.ReadCommand.createResponse(ReadCommand.java:289) > ~[main/:na] > at > org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1697) > ~[main/:na] > at > org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2362) > ~[main/:na] > ... 4 common frames omitted > {code} > I was running this command: > {code} > tools/bin/cassandra-stress user > profile=~/Desktop/startup/stress/stress.yaml n=10 ops\(insert=1\) -rate > threads=30 > {code} > Here's the stress.yaml > {code} > ### DML ### THIS IS UNDER CONSTRUCTION!!! > # Keyspace Name > keyspace: autogeneratedtest > # The CQL for creating a keyspace (optional if it already exists) > keyspace_definition: | > CREATE KEYSPACE autogeneratedtest WITH replication = {'class': > 'SimpleStrategy', 'replication_factor': 1}; > # Table name > table: test > # The CQL for creating a table you wish to stress (optional if it already > exists) > table_definition: > CREATE TABLE test ( > a int, > b int, > c int, > d int, > e int, > f timestamp, > g
[jira] [Commented] (CASSANDRA-10592) IllegalArgumentException in DataOutputBuffer.reallocate
[ https://issues.apache.org/jira/browse/CASSANDRA-10592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14974894#comment-14974894 ] Ariel Weisberg commented on CASSANDRA-10592: [~sebastian.este...@datastax.com] I was able to reproduce the problem I found via code inspection in a unit test. I am trying to run the workload you posted. I am cleaning out the data directory and restarting cassandra each time. I get this error {code} java.lang.UnsupportedOperationException: Because of this name: i if you removed it from the yaml and are still seeing this, make sure to drop table at org.apache.cassandra.stress.StressProfile$ColumnInfo.getGenerator(StressProfile.java:563) at org.apache.cassandra.stress.StressProfile$ColumnInfo.getGenerator(StressProfile.java:522) at org.apache.cassandra.stress.StressProfile$GeneratorFactory.get(StressProfile.java:502) at org.apache.cassandra.stress.StressProfile$GeneratorFactory.newGenerator(StressProfile.java:495) at org.apache.cassandra.stress.StressProfile.newGenerator(StressProfile.java:471) at org.apache.cassandra.stress.settings.SettingsCommandUser$1.newGenerator(SettingsCommandUser.java:90) at org.apache.cassandra.stress.operations.SampledOpDistributionFactory$1.get(SampledOpDistributionFactory.java:80) at org.apache.cassandra.stress.StressAction$Consumer.(StressAction.java:269) at org.apache.cassandra.stress.StressAction.run(StressAction.java:204) at org.apache.cassandra.stress.StressAction.warmup(StressAction.java:104) at org.apache.cassandra.stress.StressAction.run(StressAction.java:60) at org.apache.cassandra.stress.Stress.main(Stress.java:121) {code} > IllegalArgumentException in DataOutputBuffer.reallocate > --- > > Key: CASSANDRA-10592 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10592 > Project: Cassandra > Issue Type: Bug >Reporter: Sebastian Estevez >Assignee: Ariel Weisberg > Fix For: 3.0.0 > > > The following exception appeared in my logs while running a cassandra-stress > workload on master. 
> {code} > WARN [SharedPool-Worker-1] 2015-10-22 12:58:20,792 > AbstractTracingAwareExecutorService.java:169 - Uncaught exception on thread > Thread[SharedPool-Worker-1,5,main]: {} > java.lang.RuntimeException: java.lang.IllegalArgumentException > at > org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2366) > ~[main/:na] > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > ~[na:1.8.0_60] > at > org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164) > ~[main/:na] > at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) > [main/:na] > at java.lang.Thread.run(Thread.java:745) [na:1.8.0_60] > Caused by: java.lang.IllegalArgumentException: null > at java.nio.ByteBuffer.allocate(ByteBuffer.java:334) ~[na:1.8.0_60] > at > org.apache.cassandra.io.util.DataOutputBuffer.reallocate(DataOutputBuffer.java:63) > ~[main/:na] > at > org.apache.cassandra.io.util.DataOutputBuffer.doFlush(DataOutputBuffer.java:57) > ~[main/:na] > at > org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.write(BufferedDataOutputStreamPlus.java:132) > ~[main/:na] > at > org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.write(BufferedDataOutputStreamPlus.java:151) > ~[main/:na] > at > org.apache.cassandra.utils.ByteBufferUtil.writeWithVIntLength(ByteBufferUtil.java:296) > ~[main/:na] > at > org.apache.cassandra.db.marshal.AbstractType.writeValue(AbstractType.java:374) > ~[main/:na] > at > org.apache.cassandra.db.rows.BufferCell$Serializer.serialize(BufferCell.java:263) > ~[main/:na] > at > org.apache.cassandra.db.rows.UnfilteredSerializer.serialize(UnfilteredSerializer.java:183) > ~[main/:na] > at > org.apache.cassandra.db.rows.UnfilteredSerializer.serialize(UnfilteredSerializer.java:108) > ~[main/:na] > at > org.apache.cassandra.db.rows.UnfilteredSerializer.serialize(UnfilteredSerializer.java:96) > ~[main/:na] > at > org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:132) > ~[main/:na] > at > org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:87) > ~[main/:na] > at > org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:77) > ~[main/:na] > at > org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$Serializer.serialize(UnfilteredPartitionIterators.java:381) > ~[main/:na] >
[jira] [Commented] (CASSANDRA-10595) Don't initialize un-registered indexes
[ https://issues.apache.org/jira/browse/CASSANDRA-10595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14974868#comment-14974868 ] Sam Tunnicliffe commented on CASSANDRA-10595: - You're right, and actually {{Index::getIndexName}} was only supposed to be a transitional thing while we fixed some inconsistencies in how we referred to instances internally. That was done in CASSANDRA-10127, so I've pushed a second commit to remove {{Index::getIndexName}} completely. > Don't initialize un-registered indexes > -- > > Key: CASSANDRA-10595 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10595 > Project: Cassandra > Issue Type: Improvement >Reporter: Sam Tunnicliffe >Assignee: Sam Tunnicliffe >Priority: Minor > Fix For: 3.0.0 > > > If a secondary index implementation chooses not to register with > {{SecondaryIndexManager}} on a particular node, it won't be required to > provide either {{Indexer}} or {{Searcher}} instances. In this case, > initialization is unnecessary so we should avoid doing it. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-9280) Streaming connections should bind to the broadcast_address of the node
[ https://issues.apache.org/jira/browse/CASSANDRA-9280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yuki Morishita updated CASSANDRA-9280: -- Assignee: (was: Yuki Morishita) > Streaming connections should bind to the broadcast_address of the node > -- > > Key: CASSANDRA-9280 > URL: https://issues.apache.org/jira/browse/CASSANDRA-9280 > Project: Cassandra > Issue Type: Bug > Components: Core >Reporter: Richard Low >Priority: Minor > > Currently, if you have multiple interfaces on a server, a node receiving a > stream may show the stream as coming from the wrong IP in e.g. nodetool > netstats. The IP is taken as the source of the socket, which may not be the > same as the node’s broadcast_address. The outgoing socket should be > explicitly bound to the broadcast_address. > It seems like this was fixed a long time ago in CASSANDRA-737 but has since > broken. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
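For illustration, a minimal sketch of the fix the ticket asks for, using plain {{java.net}} rather than Cassandra's streaming classes (the addresses and port below are placeholders): the outgoing socket is explicitly bound to the local broadcast address before connecting, so the receiving side sees that address as the source of the stream.
{code}
import java.io.IOException;
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.Socket;

public class BindBeforeConnectSketch
{
    public static void main(String[] args) throws IOException
    {
        InetAddress broadcastAddress = InetAddress.getByName("127.0.0.1"); // stand-in for broadcast_address
        InetAddress peer = InetAddress.getByName("127.0.0.2");             // stand-in for the stream target

        try (Socket socket = new Socket())
        {
            // Bind the local end explicitly so the peer sees the intended source address,
            // not whichever interface the OS happens to route through.
            socket.bind(new InetSocketAddress(broadcastAddress, 0));
            socket.connect(new InetSocketAddress(peer, 7000), 2000);
        }
        catch (IOException e)
        {
            System.out.println("connect failed (expected if nothing listens on the placeholder address): " + e);
        }
    }
}
{code}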
[jira] [Updated] (CASSANDRA-8110) Make streaming backwards compatible
[ https://issues.apache.org/jira/browse/CASSANDRA-8110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yuki Morishita updated CASSANDRA-8110: -- Assignee: (was: Yuki Morishita) > Make streaming backwards compatible > --- > > Key: CASSANDRA-8110 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8110 > Project: Cassandra > Issue Type: Improvement >Reporter: Marcus Eriksson > Fix For: 3.x > > > To be able to seamlessly upgrade clusters we need to make it possible to > stream files between nodes with different StreamMessage.CURRENT_VERSION -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-10595) Don't initialize un-registered indexes
[ https://issues.apache.org/jira/browse/CASSANDRA-10595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14974844#comment-14974844 ] Sergio Bossa commented on CASSANDRA-10595: -- The patch looks good, I have only one concern: there's some mixed usage of {{Index#getIndexName()}} and {{IndexMetadata#name}}, which are apparently assumed to be the same but there's no actual "constraint" about that API-wise, so wouldn't it be better to just drop the former? > Don't initialize un-registered indexes > -- > > Key: CASSANDRA-10595 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10595 > Project: Cassandra > Issue Type: Improvement >Reporter: Sam Tunnicliffe >Assignee: Sam Tunnicliffe >Priority: Minor > Fix For: 3.0.0 > > > If a secondary index implementation chooses not to register with > {{SecondaryIndexManager}} on a particular node, it won't be required to > provide either {{Indexer}} or {{Searcher}} instances. In this case, > initialization is unnecessary so we should avoid doing it. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
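To make the concern concrete, here is a deliberately simplified sketch (hypothetical stand-ins, not the real {{Index}}/{{IndexMetadata}} API): nothing ties the name an implementation reports to the name carried by its metadata, so code keyed on one can silently miss entries keyed on the other.
{code}
import java.util.HashMap;
import java.util.Map;

public class IndexNameSketch
{
    interface Index { String getIndexName(); }

    static final class IndexMetadata
    {
        final String name;
        IndexMetadata(String name) { this.name = name; }
    }

    public static void main(String[] args)
    {
        IndexMetadata metadata = new IndexMetadata("users_by_email");
        Index index = () -> "users_by_email_idx"; // drifts from the metadata name

        Map<String, Index> registered = new HashMap<>();
        registered.put(metadata.name, index);

        // Looking up by the implementation's own name misses the registration.
        System.out.println(registered.get(index.getIndexName())); // prints null
    }
}
{code}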
[jira] [Commented] (CASSANDRA-9813) cqlsh column header can be incorrect when no rows are returned
[ https://issues.apache.org/jira/browse/CASSANDRA-9813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14974831#comment-14974831 ] Adam Holmberg commented on CASSANDRA-9813: -- [Driver branch|https://github.com/datastax/python-driver/tree/439] passing column names to results. Attached patch showing proposed solution (based on 2.1 -- can port forward after initial review). This patch is based on changes in CASSANDRA-10513. > cqlsh column header can be incorrect when no rows are returned > -- > > Key: CASSANDRA-9813 > URL: https://issues.apache.org/jira/browse/CASSANDRA-9813 > Project: Cassandra > Issue Type: Bug >Reporter: Aleksey Yeschenko > Labels: cqlsh > Fix For: 3.x, 2.1.x, 2.2.x > > Attachments: 9813-2.1.txt, Test-for-9813.txt > > > Upon migration, we internally create a pair of surrogate clustering/regular > columns for compact static tables. These shouldn't be exposed to the user. > That is, for the table > {code} > CREATE TABLE bar (k int, c int, PRIMARY KEY (k)) WITH COMPACT STORAGE; > {code} > {{SELECT * FROM bar}} should not be returning this result set: > {code} > cqlsh:test> select * from bar; > c | column1 | k | value > ---+-+---+--- > (0 rows) > {code} > Should only contain the defined {{c}} and {{k}} columns. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-9813) cqlsh column header can be incorrect when no rows are returned
[ https://issues.apache.org/jira/browse/CASSANDRA-9813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adam Holmberg updated CASSANDRA-9813: - Attachment: 9813-2.1.txt > cqlsh column header can be incorrect when no rows are returned > -- > > Key: CASSANDRA-9813 > URL: https://issues.apache.org/jira/browse/CASSANDRA-9813 > Project: Cassandra > Issue Type: Bug >Reporter: Aleksey Yeschenko > Labels: cqlsh > Fix For: 3.x, 2.1.x, 2.2.x > > Attachments: 9813-2.1.txt, Test-for-9813.txt > > > Upon migration, we internally create a pair of surrogate clustering/regular > columns for compact static tables. These shouldn't be exposed to the user. > That is, for the table > {code} > CREATE TABLE bar (k int, c int, PRIMARY KEY (k)) WITH COMPACT STORAGE; > {code} > {{SELECT * FROM bar}} should not be returning this result set: > {code} > cqlsh:test> select * from bar; > c | column1 | k | value > ---+-+---+--- > (0 rows) > {code} > Should only contain the defined {{c}} and {{k}} columns. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-9813) cqlsh column header can be incorrect when no rows are returned
[ https://issues.apache.org/jira/browse/CASSANDRA-9813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14974826#comment-14974826 ] Adam Holmberg commented on CASSANDRA-9813: -- My apologies for not acknowledging your suggestions made above -- I responded without reading the whole thread. > cqlsh column header can be incorrect when no rows are returned > -- > > Key: CASSANDRA-9813 > URL: https://issues.apache.org/jira/browse/CASSANDRA-9813 > Project: Cassandra > Issue Type: Bug >Reporter: Aleksey Yeschenko > Labels: cqlsh > Fix For: 3.x, 2.1.x, 2.2.x > > Attachments: Test-for-9813.txt > > > Upon migration, we internally create a pair of surrogate clustering/regular > columns for compact static tables. These shouldn't be exposed to the user. > That is, for the table > {code} > CREATE TABLE bar (k int, c int, PRIMARY KEY (k)) WITH COMPACT STORAGE; > {code} > {{SELECT * FROM bar}} should not be returning this result set: > {code} > cqlsh:test> select * from bar; > c | column1 | k | value > ---+-+---+--- > (0 rows) > {code} > Should only contain the defined {{c}} and {{k}} columns. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Reopened] (CASSANDRA-9328) WriteTimeoutException thrown when LWT concurrency > 1, despite the query duration taking MUCH less than cas_contention_timeout_in_ms
[ https://issues.apache.org/jira/browse/CASSANDRA-9328?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aaron Whiteside reopened CASSANDRA-9328: > WriteTimeoutException thrown when LWT concurrency > 1, despite the query > duration taking MUCH less than cas_contention_timeout_in_ms > > > Key: CASSANDRA-9328 > URL: https://issues.apache.org/jira/browse/CASSANDRA-9328 > Project: Cassandra > Issue Type: Bug > Components: Core >Reporter: Aaron Whiteside >Priority: Critical > Fix For: 2.1.x > > Attachments: CassandraLWTTest.java, CassandraLWTTest2.java > > > WriteTimeoutException thrown when LWT concurrency > 1, despite the query > duration taking MUCH less than cas_contention_timeout_in_ms. > Unit test attached, run against a 3 node cluster running 2.1.5. > If you reduce the threadCount to 1, you never see a WriteTimeoutException. If > the WTE is due to not being able to communicate with other nodes, why does > the concurrency >1 cause inter-node communication to fail? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-10595) Don't initialize un-registered indexes
[ https://issues.apache.org/jira/browse/CASSANDRA-10595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sam Tunnicliffe updated CASSANDRA-10595: Description: If a secondary index implementation chooses not to register with {{SecondaryIndexManager}} on a particular node, it won't be required to provide either {{Indexer}} or {{Searcher}} instances. In this case, initialization is unnecessary so we should avoid doing it. (was: If a secondary index implementation chooses not to register with {{SecondaryIndexManager}}on a particular node, it won't be required to provide either {{Indexer}} or {{Searcher}} instances. In this case, initialization is unnecessary so we should avoid doing.) > Don't initialize un-registered indexes > -- > > Key: CASSANDRA-10595 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10595 > Project: Cassandra > Issue Type: Improvement >Reporter: Sam Tunnicliffe >Assignee: Sam Tunnicliffe >Priority: Minor > Fix For: 3.0.0 > > > If a secondary index implementation chooses not to register with > {{SecondaryIndexManager}} on a particular node, it won't be required to > provide either {{Indexer}} or {{Searcher}} instances. In this case, > initialization is unnecessary so we should avoid doing it. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-9328) WriteTimeoutException thrown when LWT concurrency > 1, despite the query duration taking MUCH less than cas_contention_timeout_in_ms
[ https://issues.apache.org/jira/browse/CASSANDRA-9328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14974817#comment-14974817 ] Aaron Whiteside commented on CASSANDRA-9328: If this is a known issue, and there is no other ticket to represent this issue, then please tell me again why you want to close it? This ticket should remain OPEN until the issue is resolved, regardless of the fact that there is no known solution. And I don't see any documentation on this feature that says it will provide non-deterministic behavior under light (2 threads) contention. I disagree with your point that you can read the value after writing it to determine whether the LWT was successful. You forget that in a concurrent environment this is the very definition of a race condition. With the current LWT implementation you can NEVER know 100% whether an update succeeded or not. If you think this is not true, please provide sample code on how to accomplish this. If such a thing exists, it should also be added to the official documentation as a workaround showing how to use LWT "correctly". > WriteTimeoutException thrown when LWT concurrency > 1, despite the query > duration taking MUCH less than cas_contention_timeout_in_ms > > > Key: CASSANDRA-9328 > URL: https://issues.apache.org/jira/browse/CASSANDRA-9328 > Project: Cassandra > Issue Type: Bug > Components: Core >Reporter: Aaron Whiteside >Priority: Critical > Fix For: 2.1.x > > Attachments: CassandraLWTTest.java, CassandraLWTTest2.java > > > WriteTimeoutException thrown when LWT concurrency > 1, despite the query > duration taking MUCH less than cas_contention_timeout_in_ms. > Unit test attached, run against a 3 node cluster running 2.1.5. > If you reduce the threadCount to 1, you never see a WriteTimeoutException. If > the WTE is due to not being able to communicate with other nodes, why does > the concurrency >1 cause inter-node communication to fail? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-10535) Solidify use of Decommission command
[ https://issues.apache.org/jira/browse/CASSANDRA-10535?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jeremy Hanna updated CASSANDRA-10535: - Summary: Solidify use of Decommission command (was: Solidify use of Demmission command) > Solidify use of Decommission command > > > Key: CASSANDRA-10535 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10535 > Project: Cassandra > Issue Type: Improvement > Components: Tools > Environment: All C* environments >Reporter: Richard Lewis > Fix For: 3.x, 2.1.x > > > Decommission should have protection mechanisms so nodes are not accidentally > removed from a cluster due to erroneous input. > 1) Decommission should have a validation message "Do you really want to do > this" > 2) Decommission should be run from a remote node. > Background: a user was on the wrong node and ran decommission, which required no > validation, resulting in the node being accidentally removed from the cluster. > Validation should be required so that a critical command such > as this is not run accidentally. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
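A minimal sketch of the kind of confirmation prompt requested in point 1, assuming a plain console-based tool (this is not the nodetool implementation, just an illustration):
{code}
import java.io.Console;

public class ConfirmSketch
{
    // Ask for explicit confirmation before a destructive command proceeds.
    static boolean confirmDecommission(String nodeName)
    {
        Console console = System.console();
        if (console == null)
            return false; // non-interactive invocation: refuse rather than guess
        String answer = console.readLine(
                "Decommission %s? This removes the node from the cluster. Type 'yes' to continue: ", nodeName);
        return "yes".equalsIgnoreCase(answer);
    }

    public static void main(String[] args)
    {
        System.out.println("proceed = " + confirmDecommission("node3"));
    }
}
{code}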
[jira] [Created] (CASSANDRA-10595) Don't initialize un-registered indexes
Sam Tunnicliffe created CASSANDRA-10595: --- Summary: Don't initialize un-registered indexes Key: CASSANDRA-10595 URL: https://issues.apache.org/jira/browse/CASSANDRA-10595 Project: Cassandra Issue Type: Improvement Reporter: Sam Tunnicliffe Assignee: Sam Tunnicliffe Priority: Minor Fix For: 3.0.0 If a secondary index implementation chooses not to register with {{SecondaryIndexManager}}on a particular node, it won't be required to provide either {{Indexer}} or {{Searcher}} instances. In this case, initialization is unnecessary so we should avoid doing. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (CASSANDRA-10594) Inconsistent permissions results return
Adam Holmberg created CASSANDRA-10594: - Summary: Inconsistent permissions results return Key: CASSANDRA-10594 URL: https://issues.apache.org/jira/browse/CASSANDRA-10594 Project: Cassandra Issue Type: Bug Reporter: Adam Holmberg Priority: Minor The server returns inconsistent results when listing permissions, depending on whether a user is configured. *Observed with Cassandra 3.0:* Only super user configured: {code} cassandra@cqlsh> list all; role | resource | permissions --+--+- (0 rows) {code} VOID result type is returned (meaning no result meta is returned and cqlsh must use the table meta to determine columns) With one user configured, no grants: {code} cassandra@cqlsh> create user holmberg with password 'tmp'; cassandra@cqlsh> list all; results meta: system_auth permissions 4 role | username | resource| permission ---+---+-+ cassandra | cassandra | | ALTER cassandra | cassandra | | DROP cassandra | cassandra | | AUTHORIZE (3 rows) {code} Now a ROWS result message is returned with the cassandra super user grants. Dropping the regular user causes the VOID message to be returned again. *Slightly different behavior on 2.2 branch:* VOID message with no result meta is returned, even if regular user is configured, until permissions are added to that user. *Expected:* It would be nice if the query always resulted in a ROWS result, even if there are no explicit permissions defined. This would provide the correct result metadata even if there are no rows. Additionally, it is strange that the 'cassandra' super user only appears in the results when another user is configured. I would expect it to always appear, or never. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Comment Edited] (CASSANDRA-10592) IllegalArgumentException in DataOutputBuffer.reallocate
[ https://issues.apache.org/jira/browse/CASSANDRA-10592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14974720#comment-14974720 ] Ariel Weisberg edited comment on CASSANDRA-10592 at 10/26/15 6:15 PM: -- This looks like Integer overflow in DOB during reallocation. The response to some query is probably slightly larger than 1gb. I'll confirm I can reproduce this, fix it, and look at missing coverage for stuff in the 2 gig range in o.a.c.io.util.*. Integer overflow was not something that was on my mind when we did a lot of that work. was (Author: aweisberg): This looks like Integer overflow in DOB during reallocation. The response to some query is probably slightly larger than 1gb. I'll confirm I can reproduce this, fix it, and look at missing coverage for stuff in the 2 gig range in o.a.c.io.util.*. Integer overflow was not something that was really on my mind when we did a lot of that work. > IllegalArgumentException in DataOutputBuffer.reallocate > --- > > Key: CASSANDRA-10592 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10592 > Project: Cassandra > Issue Type: Bug >Reporter: Sebastian Estevez >Assignee: Ariel Weisberg > Fix For: 3.0.0 > > > The following exception appeared in my logs while running a cassandra-stress > workload on master. > {code} > WARN [SharedPool-Worker-1] 2015-10-22 12:58:20,792 > AbstractTracingAwareExecutorService.java:169 - Uncaught exception on thread > Thread[SharedPool-Worker-1,5,main]: {} > java.lang.RuntimeException: java.lang.IllegalArgumentException > at > org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2366) > ~[main/:na] > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > ~[na:1.8.0_60] > at > org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164) > ~[main/:na] > at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) > [main/:na] > at java.lang.Thread.run(Thread.java:745) [na:1.8.0_60] > Caused by: java.lang.IllegalArgumentException: null > at java.nio.ByteBuffer.allocate(ByteBuffer.java:334) ~[na:1.8.0_60] > at > org.apache.cassandra.io.util.DataOutputBuffer.reallocate(DataOutputBuffer.java:63) > ~[main/:na] > at > org.apache.cassandra.io.util.DataOutputBuffer.doFlush(DataOutputBuffer.java:57) > ~[main/:na] > at > org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.write(BufferedDataOutputStreamPlus.java:132) > ~[main/:na] > at > org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.write(BufferedDataOutputStreamPlus.java:151) > ~[main/:na] > at > org.apache.cassandra.utils.ByteBufferUtil.writeWithVIntLength(ByteBufferUtil.java:296) > ~[main/:na] > at > org.apache.cassandra.db.marshal.AbstractType.writeValue(AbstractType.java:374) > ~[main/:na] > at > org.apache.cassandra.db.rows.BufferCell$Serializer.serialize(BufferCell.java:263) > ~[main/:na] > at > org.apache.cassandra.db.rows.UnfilteredSerializer.serialize(UnfilteredSerializer.java:183) > ~[main/:na] > at > org.apache.cassandra.db.rows.UnfilteredSerializer.serialize(UnfilteredSerializer.java:108) > ~[main/:na] > at > org.apache.cassandra.db.rows.UnfilteredSerializer.serialize(UnfilteredSerializer.java:96) > ~[main/:na] > at > org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:132) > ~[main/:na] > at > org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:87) > ~[main/:na] > at > 
org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:77) > ~[main/:na] > at > org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$Serializer.serialize(UnfilteredPartitionIterators.java:381) > ~[main/:na] > at > org.apache.cassandra.db.ReadResponse$LocalDataResponse.build(ReadResponse.java:136) > ~[main/:na] > at > org.apache.cassandra.db.ReadResponse$LocalDataResponse.(ReadResponse.java:128) > ~[main/:na] > at > org.apache.cassandra.db.ReadResponse$LocalDataResponse.(ReadResponse.java:123) > ~[main/:na] > at > org.apache.cassandra.db.ReadResponse.createDataResponse(ReadResponse.java:65) > ~[main/:na] > at > org.apache.cassandra.db.ReadCommand.createResponse(ReadCommand.java:289) > ~[main/:na] > at > org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1697) > ~[main/:na] > at > org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2362) > ~[main/:na] > .
[jira] [Commented] (CASSANDRA-10592) IllegalArgumentException in DataOutputBuffer.reallocate
[ https://issues.apache.org/jira/browse/CASSANDRA-10592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14974720#comment-14974720 ] Ariel Weisberg commented on CASSANDRA-10592: This looks like Integer overflow in DOB during reallocation. The response to some query is probably slightly larger than 1gb. I'll confirm I can reproduce this, fix it, and look at missing coverage for stuff in the 2 gig range in o.a.c.io.util.*. Integer overflow was not something that was really on my mind when we did a lot of that work. > IllegalArgumentException in DataOutputBuffer.reallocate > --- > > Key: CASSANDRA-10592 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10592 > Project: Cassandra > Issue Type: Bug >Reporter: Sebastian Estevez >Assignee: Ariel Weisberg > Fix For: 3.0.0 > > > The following exception appeared in my logs while running a cassandra-stress > workload on master. > {code} > WARN [SharedPool-Worker-1] 2015-10-22 12:58:20,792 > AbstractTracingAwareExecutorService.java:169 - Uncaught exception on thread > Thread[SharedPool-Worker-1,5,main]: {} > java.lang.RuntimeException: java.lang.IllegalArgumentException > at > org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2366) > ~[main/:na] > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > ~[na:1.8.0_60] > at > org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164) > ~[main/:na] > at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) > [main/:na] > at java.lang.Thread.run(Thread.java:745) [na:1.8.0_60] > Caused by: java.lang.IllegalArgumentException: null > at java.nio.ByteBuffer.allocate(ByteBuffer.java:334) ~[na:1.8.0_60] > at > org.apache.cassandra.io.util.DataOutputBuffer.reallocate(DataOutputBuffer.java:63) > ~[main/:na] > at > org.apache.cassandra.io.util.DataOutputBuffer.doFlush(DataOutputBuffer.java:57) > ~[main/:na] > at > org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.write(BufferedDataOutputStreamPlus.java:132) > ~[main/:na] > at > org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.write(BufferedDataOutputStreamPlus.java:151) > ~[main/:na] > at > org.apache.cassandra.utils.ByteBufferUtil.writeWithVIntLength(ByteBufferUtil.java:296) > ~[main/:na] > at > org.apache.cassandra.db.marshal.AbstractType.writeValue(AbstractType.java:374) > ~[main/:na] > at > org.apache.cassandra.db.rows.BufferCell$Serializer.serialize(BufferCell.java:263) > ~[main/:na] > at > org.apache.cassandra.db.rows.UnfilteredSerializer.serialize(UnfilteredSerializer.java:183) > ~[main/:na] > at > org.apache.cassandra.db.rows.UnfilteredSerializer.serialize(UnfilteredSerializer.java:108) > ~[main/:na] > at > org.apache.cassandra.db.rows.UnfilteredSerializer.serialize(UnfilteredSerializer.java:96) > ~[main/:na] > at > org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:132) > ~[main/:na] > at > org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:87) > ~[main/:na] > at > org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:77) > ~[main/:na] > at > org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$Serializer.serialize(UnfilteredPartitionIterators.java:381) > ~[main/:na] > at > org.apache.cassandra.db.ReadResponse$LocalDataResponse.build(ReadResponse.java:136) > ~[main/:na] > at > 
org.apache.cassandra.db.ReadResponse$LocalDataResponse.(ReadResponse.java:128) > ~[main/:na] > at > org.apache.cassandra.db.ReadResponse$LocalDataResponse.(ReadResponse.java:123) > ~[main/:na] > at > org.apache.cassandra.db.ReadResponse.createDataResponse(ReadResponse.java:65) > ~[main/:na] > at > org.apache.cassandra.db.ReadCommand.createResponse(ReadCommand.java:289) > ~[main/:na] > at > org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1697) > ~[main/:na] > at > org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2362) > ~[main/:na] > ... 4 common frames omitted > {code} > I was running this command: > {code} > tools/bin/cassandra-stress user > profile=~/Desktop/startup/stress/stress.yaml n=10 ops\(insert=1\) -rate > threads=30 > {code} > Here's the stress.yaml > {code} > ### DML ### THIS IS UNDER CONSTRUCTION!!! > # Keyspace Name > keyspace: autogeneratedtest > # The CQL for creating a keyspace (optional if it already exists) > keys
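For illustration, a minimal standalone sketch (not the Cassandra code) of the suspected failure mode: doubling a buffer capacity that is already a bit over 1 GiB wraps a Java {{int}} to a negative value, and {{ByteBuffer.allocate}} rejects it with exactly this {{IllegalArgumentException}}.
{code}
import java.nio.ByteBuffer;

public class OverflowSketch
{
    public static void main(String[] args)
    {
        int currentCapacity = (1 << 30) + 1024; // a bit over 1 GiB
        int doubled = currentCapacity * 2;      // wraps to a negative int
        System.out.println("doubled capacity = " + doubled);
        ByteBuffer.allocate(doubled);           // throws IllegalArgumentException
    }
}
{code}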
[jira] [Commented] (CASSANDRA-10449) OOM on bootstrap after long GC pause
[ https://issues.apache.org/jira/browse/CASSANDRA-10449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14974716#comment-14974716 ] Robbie Strickland commented on CASSANDRA-10449: --- As a workaround I was able to simply restart the node with {{auto_bootstrap}} set to false, which allowed it to successfully join. Obviously there appear to be multiple issues here, as the behavior in 2.1.7 and 2.1.11 is different with an otherwise identical setup. > OOM on bootstrap after long GC pause > > > Key: CASSANDRA-10449 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10449 > Project: Cassandra > Issue Type: Bug > Components: Core > Environment: Ubuntu 14.04, AWS >Reporter: Robbie Strickland > Labels: gc > Fix For: 2.1.x > > Attachments: GCpath.txt, heap_dump.png, system.log.10-05, > thread_dump.log, threads.txt > > > I have a 20-node cluster (i2.4xlarge) with vnodes (default of 256) and > 500-700GB per node. SSTable counts are <10 per table. I am attempting to > provision additional nodes, but bootstrapping OOMs every time after about 10 > hours with a sudden long GC pause: > {noformat} > INFO [Service Thread] 2015-10-05 23:33:33,373 GCInspector.java:252 - G1 Old > Generation GC in 1586126ms. G1 Old Gen: 49213756976 -> 49072277176; > ... > ERROR [MemtableFlushWriter:454] 2015-10-05 23:33:33,380 > CassandraDaemon.java:223 - Exception in thread > Thread[MemtableFlushWriter:454,5,main] > java.lang.OutOfMemoryError: Java heap space > {noformat} > I have tried increasing max heap to 48G just to get through the bootstrap, to > no avail. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-10557) Streaming can throw exception when trying to retry
[ https://issues.apache.org/jira/browse/CASSANDRA-10557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14974680#comment-14974680 ] Yuki Morishita commented on CASSANDRA-10557: This may relate to CASSANDRA-10448, but I'm not sure since the log we got was not complete. bq. there was no retry and that exception was not thrown during SSTableWriter.abort() call. Could you please clarify? Streaming can try to read the next message from the middle of a failed stream, so we need to drain the unread data. I think that's the source of this "Unknown type 0" error. {{SSTableWriter.abort()}} can throw a RuntimeException which is not handled by anything, and if that happens it will leave an incomplete stream. > Streaming can throw exception when trying to retry > -- > > Key: CASSANDRA-10557 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10557 > Project: Cassandra > Issue Type: Bug >Reporter: Yuki Morishita >Assignee: Yuki Morishita >Priority: Minor > Fix For: 2.1.x, 2.2.x, 3.0.0 > > > Streaming can throw the exception below when trying to retry. > This seems to happen when the underlying cause is not caught properly. > {code} > ERROR 18:45:56 [Stream #9f95fa90-7691-11e5-931f-5b735851f84a] Streaming error > occurred > java.lang.IllegalArgumentException: Unknown type 0 > at > org.apache.cassandra.streaming.messages.StreamMessage$Type.get(StreamMessage.java:97) > ~[main/:na] > at > org.apache.cassandra.streaming.messages.StreamMessage.deserialize(StreamMessage.java:58) > ~[main/:na] > at > org.apache.cassandra.streaming.ConnectionHandler$IncomingMessageHandler.run(ConnectionHandler.java:261) > ~[main/:na] > at java.lang.Thread.run(Thread.java:745) [na:1.8.0_45] > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
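A rough sketch of the draining described above, with hypothetical names and plain {{java.io}} types rather than the actual streaming classes: the unread remainder of a failed message is skipped so the next byte read really is a message type, instead of a leftover payload byte being misread as "type 0".
{code}
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.IOException;

public class DrainSketch
{
    // Skip whatever is left of an abandoned message so the next read starts
    // at a real message-type byte instead of somewhere inside the old payload.
    static void drain(DataInputStream in, long unreadBytes) throws IOException
    {
        long remaining = unreadBytes;
        while (remaining > 0)
        {
            long skipped = in.skip(remaining);
            if (skipped <= 0)
                break; // no progress (e.g. stream closed); stop trying
            remaining -= skipped;
        }
    }

    public static void main(String[] args) throws IOException
    {
        byte[] bytes = { 0, 0, 0, 42 }; // three leftover payload bytes, then the next type byte
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(bytes));
        drain(in, 3);
        System.out.println("next message type: " + in.readByte()); // prints 42
    }
}
{code}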
[jira] [Updated] (CASSANDRA-7723) sstable2json (and possibly other command-line tools) hang if no write permission to the commitlogs
[ https://issues.apache.org/jira/browse/CASSANDRA-7723?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joshua McKenzie updated CASSANDRA-7723: --- Assignee: (was: Joshua McKenzie) > sstable2json (and possibly other command-line tools) hang if no write > permission to the commitlogs > -- > > Key: CASSANDRA-7723 > URL: https://issues.apache.org/jira/browse/CASSANDRA-7723 > Project: Cassandra > Issue Type: Bug >Reporter: J.B. Langston >Priority: Minor > > sstable2json (and potentially other command-line tools that call > DatabaseDescriptor.loadSchemas) will hang if the user running them doesn't > have write permission on the commit logs. loadSchemas calls > Schema.updateVersion, which causes a mutation to the system tables, then it > just spins forever trying to acquire a commit log segment. See this thread > dump: https://gist.github.com/markcurtis1970/837e770d1cad5200943c. The tools > should recognize this and present an understandable error message. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-2527) Add ability to snapshot data as input to hadoop jobs
[ https://issues.apache.org/jira/browse/CASSANDRA-2527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joshua McKenzie updated CASSANDRA-2527: --- Assignee: (was: Joshua McKenzie) > Add ability to snapshot data as input to hadoop jobs > > > Key: CASSANDRA-2527 > URL: https://issues.apache.org/jira/browse/CASSANDRA-2527 > Project: Cassandra > Issue Type: New Feature >Reporter: Jeremy Hanna >Priority: Minor > Labels: hadoop > Fix For: 3.x > > > It is desirable to have immutable inputs to hadoop jobs for the duration of > the job. That way re-execution of individual tasks do not alter the output. > One way to accomplish this would be to snapshot the data that is used as > input to a job. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-8465) Phase 1: Break static methods into classes
[ https://issues.apache.org/jira/browse/CASSANDRA-8465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joshua McKenzie updated CASSANDRA-8465: --- Assignee: (was: Joshua McKenzie) > Phase 1: Break static methods into classes > -- > > Key: CASSANDRA-8465 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8465 > Project: Cassandra > Issue Type: Sub-task > Components: Core >Reporter: Joshua McKenzie > Fix For: 3.x > > > 1: Writes > * Regular > * Counter > * RegularBatch > * CounterBatch > * AtomicBatch > 2: Reads > * Regular > * Range > 3: LightweightTransaction > * Write > * Read > 4: Truncate -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-8804) Startup error on Windows
[ https://issues.apache.org/jira/browse/CASSANDRA-8804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joshua McKenzie updated CASSANDRA-8804: --- Assignee: (was: Joshua McKenzie) > Startup error on Windows > > > Key: CASSANDRA-8804 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8804 > Project: Cassandra > Issue Type: Bug >Reporter: Philip Thompson > Labels: Windows > Fix For: 2.1.x > > Attachments: node1.log > > > The dtest > snapshot_test.py:TestArchiveCommitlog.test_archive_commitlog_with_active_commitlog > is failing on windows, with the following exception when starting Cassandra: > {code} > java.lang.ExceptionInInitializerError: null > at org.apache.cassandra.db.Memtable.(Memtable.java:66) ~[main/:na] > at org.apache.cassandra.db.DataTracker.init(DataTracker.java:377) > ~[main/:na] > at org.apache.cassandra.db.DataTracker.(DataTracker.java:54) > ~[main/:na] > at > org.apache.cassandra.db.ColumnFamilyStore.(ColumnFamilyStore.java:316) > ~[main/:na] > at > org.apache.cassandra.db.ColumnFamilyStore.createColumnFamilyStore(ColumnFamilyStore.java:478) > ~[main/:na] > at > org.apache.cassandra.db.ColumnFamilyStore.createColumnFamilyStore(ColumnFamilyStore.java:449) > ~[main/:na] > at org.apache.cassandra.db.Keyspace.initCf(Keyspace.java:324) > ~[main/:na] > at org.apache.cassandra.db.Keyspace.(Keyspace.java:277) > ~[main/:na] > at org.apache.cassandra.db.Keyspace.open(Keyspace.java:119) ~[main/:na] > at org.apache.cassandra.db.Keyspace.open(Keyspace.java:96) ~[main/:na] > at > org.apache.cassandra.db.SystemKeyspace.checkHealth(SystemKeyspace.java:560) > ~[main/:na] > at > org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:228) > [main/:na] > at > org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:468) > [main/:na] > at > org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:557) > [main/:na] > Caused by: java.lang.IllegalArgumentException: Malformed \u encoding. > at java.util.Properties.loadConvert(Properties.java:568) ~[na:1.7.0_60] > at java.util.Properties.load0(Properties.java:391) ~[na:1.7.0_60] > at java.util.Properties.load(Properties.java:341) ~[na:1.7.0_60] > at > org.apache.cassandra.db.commitlog.CommitLogArchiver.(CommitLogArchiver.java:81) > ~[main/:na] > at > org.apache.cassandra.db.commitlog.CommitLog.(CommitLog.java:62) > ~[main/:na] > at > org.apache.cassandra.db.commitlog.CommitLog.(CommitLog.java:55) > ~[main/:na] > ... 14 common frames omitted > {code} > Attached is the node's log file. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
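The root cause is not stated in the ticket, but the exception itself is easy to reproduce with plain {{java.util.Properties}} (this is an assumption about the trigger, not a confirmed diagnosis): a Windows-style path containing a backslash followed by the letter "u" in a properties value is parsed as the start of a unicode escape and fails in exactly this way.
{code}
import java.io.StringReader;
import java.util.Properties;

public class MalformedEscapeSketch
{
    public static void main(String[] args) throws Exception
    {
        // A value holding a Windows path: the single backslash immediately before
        // the letter 'u' (here the start of "users") is taken by Properties.load
        // as the beginning of a four-digit unicode escape, which then fails.
        String fileContents = "restore_directories=C:\\users\\backup";
        new Properties().load(new StringReader(fileContents)); // throws IllegalArgumentException
    }
}
{code}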
[jira] [Updated] (CASSANDRA-8887) Direct (de)compression of internode communication
[ https://issues.apache.org/jira/browse/CASSANDRA-8887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ariel Weisberg updated CASSANDRA-8887: -- Assignee: (was: Ariel Weisberg) > Direct (de)compression of internode communication > - > > Key: CASSANDRA-8887 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8887 > Project: Cassandra > Issue Type: Improvement > Components: Core >Reporter: Matt Stump >Priority: Minor > Fix For: 3.x > > > Internode compression is on by default. Currently we allocate one set of > buffers for the raw data, and then compress which results in another set of > buffers. This greatly increases the GC load. We can decrease the GC load by > doing direct compression/decompression of the communication buffers. This is > the same work as done in CASSANDRA-8464 but applied to internode > communication. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[09/10] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0
Merge branch 'cassandra-2.2' into cassandra-3.0 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a4f32c5a Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a4f32c5a Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a4f32c5a Branch: refs/heads/trunk Commit: a4f32c5af6974cec9d3d18bcec3c0ea683ab1045 Parents: db7feb4 32f22a4 Author: Yuki Morishita Authored: Mon Oct 26 12:19:21 2015 -0500 Committer: Yuki Morishita Committed: Mon Oct 26 12:19:21 2015 -0500 -- .../db/compaction/CompactionManager.java| 32 .../cassandra/service/StorageService.java | 3 ++ 2 files changed, 35 insertions(+) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/a4f32c5a/src/java/org/apache/cassandra/db/compaction/CompactionManager.java -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/a4f32c5a/src/java/org/apache/cassandra/service/StorageService.java -- diff --cc src/java/org/apache/cassandra/service/StorageService.java index 9153cd8,f162f7c..fb1edf6 --- a/src/java/org/apache/cassandra/service/StorageService.java +++ b/src/java/org/apache/cassandra/service/StorageService.java @@@ -3946,10 -3899,11 +3946,13 @@@ public class StorageService extends Not } FBUtilities.waitOnFutures(flushes); -BatchlogManager.shutdown(); +BatchlogManager.instance.shutdown(); + +HintsService.instance.shutdownBlocking(); + // Interrupt on going compaction and shutdown to prevent further compaction + CompactionManager.instance.forceShutdown(); + // whilst we've flushed all the CFs, which will have recycled all completed segments, we want to ensure // there are no segments to replay, so we force the recycling of any remaining (should be at most one) CommitLog.instance.forceRecycleAllSegments();
[05/10] cassandra git commit: Merge branch 'cassandra-2.1' into cassandra-2.2
Merge branch 'cassandra-2.1' into cassandra-2.2 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/32f22a4e Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/32f22a4e Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/32f22a4e Branch: refs/heads/cassandra-3.0 Commit: 32f22a4e35cd38c5a539a0bff47e1fe3edd86e61 Parents: 013ce88 17082d4 Author: Yuki Morishita Authored: Mon Oct 26 12:19:13 2015 -0500 Committer: Yuki Morishita Committed: Mon Oct 26 12:19:13 2015 -0500 -- .../db/compaction/CompactionManager.java| 50 +--- .../cassandra/service/StorageService.java | 3 ++ 2 files changed, 37 insertions(+), 16 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/32f22a4e/src/java/org/apache/cassandra/db/compaction/CompactionManager.java -- diff --cc src/java/org/apache/cassandra/db/compaction/CompactionManager.java index ea20a1f,b85eb51..0c6e24f --- a/src/java/org/apache/cassandra/db/compaction/CompactionManager.java +++ b/src/java/org/apache/cassandra/db/compaction/CompactionManager.java @@@ -20,22 -20,16 +20,8 @@@ package org.apache.cassandra.db.compact import java.io.File; import java.io.IOException; import java.lang.management.ManagementFactory; --import java.util.ArrayList; -import java.util.Arrays; --import java.util.Collection; --import java.util.Collections; --import java.util.HashSet; --import java.util.Iterator; --import java.util.List; --import java.util.Map; --import java.util.Set; - import java.util.UUID; - import java.util.concurrent.BlockingQueue; - import java.util.concurrent.Callable; - import java.util.concurrent.ExecutionException; - import java.util.concurrent.Future; - import java.util.concurrent.LinkedBlockingQueue; - import java.util.concurrent.SynchronousQueue; - import java.util.concurrent.TimeUnit; ++import java.util.*; + import java.util.concurrent.*; import javax.management.MBeanServer; import javax.management.ObjectName; import javax.management.openmbean.OpenDataException; http://git-wip-us.apache.org/repos/asf/cassandra/blob/32f22a4e/src/java/org/apache/cassandra/service/StorageService.java --
[10/10] cassandra git commit: Merge branch 'cassandra-3.0' into trunk
Merge branch 'cassandra-3.0' into trunk Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/73c48260 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/73c48260 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/73c48260 Branch: refs/heads/trunk Commit: 73c48260d74e5114ea9d2b79726b308181c2b465 Parents: 6f848db a4f32c5 Author: Yuki Morishita Authored: Mon Oct 26 12:52:22 2015 -0500 Committer: Yuki Morishita Committed: Mon Oct 26 12:52:22 2015 -0500 -- .../db/compaction/CompactionManager.java| 32 .../cassandra/service/StorageService.java | 3 ++ 2 files changed, 35 insertions(+) --
[jira] [Updated] (CASSANDRA-9512) CqlTableTest.testCqlNativeStorageCollectionColumnTable failed in trunk
[ https://issues.apache.org/jira/browse/CASSANDRA-9512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ariel Weisberg updated CASSANDRA-9512: -- Assignee: (was: Ariel Weisberg) > CqlTableTest.testCqlNativeStorageCollectionColumnTable failed in trunk > -- > > Key: CASSANDRA-9512 > URL: https://issues.apache.org/jira/browse/CASSANDRA-9512 > Project: Cassandra > Issue Type: Test > Components: Tests >Reporter: Michael Shuler > Labels: test-failure > Fix For: 3.x > > > Error: > {{expected:<1> but was:<2>}} > The trace shows: > {noformat} > java.io.IOException: java.lang.RuntimeException: failed to prepare cql query > update cql3ks.collectiontable set n = ? WHERE "m" = ? > at > org.apache.cassandra.hadoop.cql3.CqlRecordWriter$RangeClient.run(CqlRecordWriter.java:357) > ~[main/:na] > {noformat} > http://cassci.datastax.com/view/trunk/job/trunk_testall/123/testReport/junit/org.apache.cassandra.pig/CqlTableTest/testCqlNativeStorageCollectionColumnTable/ -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[08/10] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0
Merge branch 'cassandra-2.2' into cassandra-3.0 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a4f32c5a Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a4f32c5a Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a4f32c5a Branch: refs/heads/cassandra-3.0 Commit: a4f32c5af6974cec9d3d18bcec3c0ea683ab1045 Parents: db7feb4 32f22a4 Author: Yuki Morishita Authored: Mon Oct 26 12:19:21 2015 -0500 Committer: Yuki Morishita Committed: Mon Oct 26 12:19:21 2015 -0500 -- .../db/compaction/CompactionManager.java| 32 .../cassandra/service/StorageService.java | 3 ++ 2 files changed, 35 insertions(+) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/a4f32c5a/src/java/org/apache/cassandra/db/compaction/CompactionManager.java -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/a4f32c5a/src/java/org/apache/cassandra/service/StorageService.java -- diff --cc src/java/org/apache/cassandra/service/StorageService.java index 9153cd8,f162f7c..fb1edf6 --- a/src/java/org/apache/cassandra/service/StorageService.java +++ b/src/java/org/apache/cassandra/service/StorageService.java @@@ -3946,10 -3899,11 +3946,13 @@@ public class StorageService extends Not } FBUtilities.waitOnFutures(flushes); -BatchlogManager.shutdown(); +BatchlogManager.instance.shutdown(); + +HintsService.instance.shutdownBlocking(); + // Interrupt on going compaction and shutdown to prevent further compaction + CompactionManager.instance.forceShutdown(); + // whilst we've flushed all the CFs, which will have recycled all completed segments, we want to ensure // there are no segments to replay, so we force the recycling of any remaining (should be at most one) CommitLog.instance.forceRecycleAllSegments();
[01/10] cassandra git commit: Shutdown compaction in drain to prevent leak
Repository: cassandra Updated Branches: refs/heads/cassandra-2.1 34b8d8fcb -> 17082d4b5 refs/heads/cassandra-2.2 013ce8851 -> 32f22a4e3 refs/heads/cassandra-3.0 db7feb4c2 -> a4f32c5af refs/heads/trunk 6f848db4c -> 73c48260d Shutdown compaction in drain to prevent leak patch by yukim; reviewed by marcuse for CASSANDRA-10079 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/17082d4b Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/17082d4b Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/17082d4b Branch: refs/heads/cassandra-2.1 Commit: 17082d4b54c89fd34f81400e0002fff67c30f150 Parents: 34b8d8f Author: Yuki Morishita Authored: Wed Sep 2 19:36:37 2015 -0500 Committer: Yuki Morishita Committed: Mon Oct 26 12:08:35 2015 -0500 -- .../db/compaction/CompactionManager.java| 40 .../cassandra/service/StorageService.java | 3 ++ 2 files changed, 36 insertions(+), 7 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/17082d4b/src/java/org/apache/cassandra/db/compaction/CompactionManager.java -- diff --git a/src/java/org/apache/cassandra/db/compaction/CompactionManager.java b/src/java/org/apache/cassandra/db/compaction/CompactionManager.java index e60675a..b85eb51 100644 --- a/src/java/org/apache/cassandra/db/compaction/CompactionManager.java +++ b/src/java/org/apache/cassandra/db/compaction/CompactionManager.java @@ -29,13 +29,7 @@ import java.util.Iterator; import java.util.List; import java.util.Map; import java.util.Set; -import java.util.concurrent.BlockingQueue; -import java.util.concurrent.Callable; -import java.util.concurrent.ExecutionException; -import java.util.concurrent.Future; -import java.util.concurrent.LinkedBlockingQueue; -import java.util.concurrent.SynchronousQueue; -import java.util.concurrent.TimeUnit; +import java.util.concurrent.*; import javax.management.MBeanServer; import javax.management.ObjectName; import javax.management.openmbean.OpenDataException; @@ -198,6 +192,38 @@ public class CompactionManager implements CompactionManagerMBean return false; } +/** + * Shutdowns both compaction and validation executors, cancels running compaction / validation, + * and waits for tasks to complete if tasks were not cancelable. + */ +public void forceShutdown() +{ +// shutdown executors to prevent further submission +executor.shutdown(); +validationExecutor.shutdown(); + +// interrupt compactions and validations +for (Holder compactionHolder : CompactionMetrics.getCompactions()) +{ +compactionHolder.stop(); +} + +// wait for tasks to terminate +// compaction tasks are interrupted above, so it shuold be fairy quick +// until not interrupted tasks to complete. 
+for (ExecutorService exec : Arrays.asList(executor, validationExecutor)) +{ +try +{ +exec.awaitTermination(1, TimeUnit.MINUTES); +} +catch (InterruptedException e) +{ +logger.error("Interrupted while waiting for tasks to be terminated", e); +} +} +} + public void finishCompactionsAndShutdown(long timeout, TimeUnit unit) throws InterruptedException { executor.shutdown(); http://git-wip-us.apache.org/repos/asf/cassandra/blob/17082d4b/src/java/org/apache/cassandra/service/StorageService.java -- diff --git a/src/java/org/apache/cassandra/service/StorageService.java b/src/java/org/apache/cassandra/service/StorageService.java index d5730d5..7e5b67b 100644 --- a/src/java/org/apache/cassandra/service/StorageService.java +++ b/src/java/org/apache/cassandra/service/StorageService.java @@ -3891,6 +3891,9 @@ public class StorageService extends NotificationBroadcasterSupport implements IE BatchlogManager.shutdown(); +// Interrupt on going compaction and shutdown to prevent further compaction +CompactionManager.instance.forceShutdown(); + // whilst we've flushed all the CFs, which will have recycled all completed segments, we want to ensure // there are no segments to replay, so we force the recycling of any remaining (should be at most one) CommitLog.instance.forceRecycleAllSegments();
[07/10] cassandra git commit: Merge branch 'cassandra-2.1' into cassandra-2.2
Merge branch 'cassandra-2.1' into cassandra-2.2 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/32f22a4e Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/32f22a4e Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/32f22a4e Branch: refs/heads/trunk Commit: 32f22a4e35cd38c5a539a0bff47e1fe3edd86e61 Parents: 013ce88 17082d4 Author: Yuki Morishita Authored: Mon Oct 26 12:19:13 2015 -0500 Committer: Yuki Morishita Committed: Mon Oct 26 12:19:13 2015 -0500 -- .../db/compaction/CompactionManager.java| 50 +--- .../cassandra/service/StorageService.java | 3 ++ 2 files changed, 37 insertions(+), 16 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/32f22a4e/src/java/org/apache/cassandra/db/compaction/CompactionManager.java -- diff --cc src/java/org/apache/cassandra/db/compaction/CompactionManager.java index ea20a1f,b85eb51..0c6e24f --- a/src/java/org/apache/cassandra/db/compaction/CompactionManager.java +++ b/src/java/org/apache/cassandra/db/compaction/CompactionManager.java @@@ -20,22 -20,16 +20,8 @@@ package org.apache.cassandra.db.compact import java.io.File; import java.io.IOException; import java.lang.management.ManagementFactory; --import java.util.ArrayList; -import java.util.Arrays; --import java.util.Collection; --import java.util.Collections; --import java.util.HashSet; --import java.util.Iterator; --import java.util.List; --import java.util.Map; --import java.util.Set; - import java.util.UUID; - import java.util.concurrent.BlockingQueue; - import java.util.concurrent.Callable; - import java.util.concurrent.ExecutionException; - import java.util.concurrent.Future; - import java.util.concurrent.LinkedBlockingQueue; - import java.util.concurrent.SynchronousQueue; - import java.util.concurrent.TimeUnit; ++import java.util.*; + import java.util.concurrent.*; import javax.management.MBeanServer; import javax.management.ObjectName; import javax.management.openmbean.OpenDataException; http://git-wip-us.apache.org/repos/asf/cassandra/blob/32f22a4e/src/java/org/apache/cassandra/service/StorageService.java --
[jira] [Updated] (CASSANDRA-9356) SSTableRewriterTest fails infrequently
[ https://issues.apache.org/jira/browse/CASSANDRA-9356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ariel Weisberg updated CASSANDRA-9356: -- Assignee: (was: Ariel Weisberg) > SSTableRewriterTest fails infrequently > -- > > Key: CASSANDRA-9356 > URL: https://issues.apache.org/jira/browse/CASSANDRA-9356 > Project: Cassandra > Issue Type: Bug > Components: Tests >Reporter: Michael Shuler > Labels: test-failure > Fix For: 2.1.x, 2.2.x > > Attachments: system.log.gz > > > This used to complain about a timeout. I am not seeing that anymore. What I > see is a single test case failing, or, the one time it reproduced on my laptop, a > bunch of them. I am seeing different assertions fail in different tests now. > http://cassci.datastax.com/view/Dev/view/aweisberg/job/aweisberg-C-9528-testall/6/testReport/junit/org.apache.cassandra.io.sstable/SSTableRewriterTest/testAbort2/ -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-8457) nio MessagingService
[ https://issues.apache.org/jira/browse/CASSANDRA-8457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ariel Weisberg updated CASSANDRA-8457: -- Assignee: (was: Ariel Weisberg) > nio MessagingService > > > Key: CASSANDRA-8457 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8457 > Project: Cassandra > Issue Type: New Feature > Components: Core >Reporter: Jonathan Ellis >Priority: Minor > Labels: performance > Fix For: 3.x > > > Thread-per-peer (actually two per peer, one each for incoming and outbound) is a big > contributor to context switching, especially for larger clusters. Let's look > at switching to nio, possibly via Netty. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
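As an illustration of the direction proposed above, a minimal sketch of a Netty 4 based inbound endpoint is shown below. This is not the actual MessagingService code; the class name, port, and frame layout are assumptions made for the example. The point is that a small, fixed pool of event-loop threads multiplexes all peer connections instead of dedicating two threads to every peer.
{code:java}
// Hypothetical sketch only, not Cassandra code: a Netty 4 inbound endpoint where a
// fixed event-loop pool serves every peer instead of one (or two) threads per peer.
import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.SimpleChannelInboundHandler;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;
import io.netty.handler.codec.LengthFieldBasedFrameDecoder;

public final class NioMessagingSketch
{
    public static void main(String[] args) throws InterruptedException
    {
        EventLoopGroup acceptor = new NioEventLoopGroup(1);
        EventLoopGroup workers = new NioEventLoopGroup(); // defaults to 2 * cores
        try
        {
            ServerBootstrap bootstrap = new ServerBootstrap()
                .group(acceptor, workers)
                .channel(NioServerSocketChannel.class)
                .childHandler(new ChannelInitializer<SocketChannel>()
                {
                    @Override
                    protected void initChannel(SocketChannel ch)
                    {
                        // assume a 4-byte length prefix per message frame
                        ch.pipeline().addLast(new LengthFieldBasedFrameDecoder(1 << 24, 0, 4, 0, 4));
                        ch.pipeline().addLast(new SimpleChannelInboundHandler<Object>()
                        {
                            @Override
                            protected void channelRead0(ChannelHandlerContext ctx, Object frame)
                            {
                                // deserialize the frame and hand it off to a verb handler here
                            }
                        });
                    }
                });
            // 7000 stands in for the storage port; block until the channel closes
            bootstrap.bind(7000).sync().channel().closeFuture().sync();
        }
        finally
        {
            acceptor.shutdownGracefully();
            workers.shutdownGracefully();
        }
    }
}
{code}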
[jira] [Updated] (CASSANDRA-9946) use ioprio_set on compaction threads by default instead of manually throttling
[ https://issues.apache.org/jira/browse/CASSANDRA-9946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ariel Weisberg updated CASSANDRA-9946: -- Assignee: (was: Ariel Weisberg) > use ioprio_set on compaction threads by default instead of manually throttling > -- > > Key: CASSANDRA-9946 > URL: https://issues.apache.org/jira/browse/CASSANDRA-9946 > Project: Cassandra > Issue Type: New Feature > Components: Core >Reporter: Jonathan Ellis > Labels: compaction > Fix For: 3.x > > > Compaction throttling works as designed, but it has two drawbacks: > * it requires manual tuning to choose the "right" value for a given machine > * it does not allow compaction to "burst" above its limit if there is > additional i/o capacity available while there are fewer application requests > to serve > Using ioprio_set instead solves both of these problems. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
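As a rough illustration of what the ticket suggests, the sketch below uses JNA to issue the Linux ioprio_set(2) syscall for the calling thread. It is not Cassandra code: the class, the JNA binding, and the hard-coded syscall number (251, x86_64 only) are assumptions for the example, and the idle class only has an effect under I/O schedulers that honour priorities (e.g. CFQ).
{code:java}
// Hypothetical JNA sketch, not Cassandra code: put the calling (compaction) thread
// into the "idle" I/O class so it only gets disk time the application isn't using.
import com.sun.jna.Library;
import com.sun.jna.Native;

public final class IoPrioSketch
{
    private static final int SYS_IOPRIO_SET = 251;    // x86_64 syscall number; differs per architecture
    private static final int IOPRIO_WHO_PROCESS = 1;  // "who" identifies a single thread/process
    private static final int IOPRIO_CLASS_IDLE = 3;   // lowest class: only runs when the disk is otherwise idle
    private static final int IOPRIO_CLASS_SHIFT = 13; // the class lives in the top bits of the priority value

    private interface CLib extends Library
    {
        CLib INSTANCE = (CLib) Native.loadLibrary("c", CLib.class);
        int syscall(int number, Object... args);      // glibc's generic syscall(2) wrapper
    }

    /** Lower the calling thread's I/O priority; returns 0 on success, -1 on failure. */
    public static int setIdle()
    {
        int ioprio = IOPRIO_CLASS_IDLE << IOPRIO_CLASS_SHIFT;
        // who == 0 means "the calling thread" when combined with IOPRIO_WHO_PROCESS
        return CLib.INSTANCE.syscall(SYS_IOPRIO_SET, IOPRIO_WHO_PROCESS, 0, ioprio);
    }
}
{code}
Something like this could be invoked from the compaction executor's thread factory, which is what would let the manual throttle be left at its default.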
[jira] [Updated] (CASSANDRA-7919) Change timestamp representation to timeuuid
[ https://issues.apache.org/jira/browse/CASSANDRA-7919?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] T Jake Luciani updated CASSANDRA-7919: -- Assignee: (was: T Jake Luciani) > Change timestamp representation to timeuuid > --- > > Key: CASSANDRA-7919 > URL: https://issues.apache.org/jira/browse/CASSANDRA-7919 > Project: Cassandra > Issue Type: Improvement >Reporter: T Jake Luciani >Priority: Minor > Fix For: 3.x > > > In order to overcome some of the issues with timestamps (CASSANDRA-6123) we > need to migrate to a better timestamp representation for cells. > Since drivers already support timeuuid it makes sense to migrate to this > internally (see CASSANDRA-7056) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
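As a small illustration of why timeuuid is attractive here (not a sketch of the internal change itself): version-1 UUIDs embed a 100-nanosecond timestamp plus clock-sequence and node bits, so two values generated in the same millisecond still differ and order deterministically, which plain millisecond longs cannot guarantee. The example below uses the DataStax Java driver's UUIDs utility, since the ticket notes drivers already support timeuuid.
{code:java}
// Illustration only, using the DataStax Java driver's utility class.
import java.util.UUID;
import com.datastax.driver.core.utils.UUIDs;

public class TimeuuidExample
{
    public static void main(String[] args)
    {
        UUID a = UUIDs.timeBased();
        UUID b = UUIDs.timeBased();
        // the millisecond component is often identical...
        System.out.println(UUIDs.unixTimestamp(a) + " / " + UUIDs.unixTimestamp(b));
        // ...yet the two values are distinct and still have a total order
        System.out.println(a.equals(b));                    // false
        System.out.println(a.timestamp() != b.timestamp()); // distinct 100 ns timestamps within this JVM
    }
}
{code}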
[jira] [Updated] (CASSANDRA-8449) Allow zero-copy reads again
[ https://issues.apache.org/jira/browse/CASSANDRA-8449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] T Jake Luciani updated CASSANDRA-8449: -- Assignee: (was: T Jake Luciani) > Allow zero-copy reads again > --- > > Key: CASSANDRA-8449 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8449 > Project: Cassandra > Issue Type: Improvement >Reporter: T Jake Luciani >Priority: Minor > Labels: performance > Fix For: 3.x > > > We disabled zero-copy reads in CASSANDRA-3179 due to in flight reads > accessing a ByteBuffer when the data was unmapped by compaction. Currently > this code path is only used for uncompressed reads. > The actual bytes are in fact copied to the client output buffers for both > netty and thrift before being sent over the wire, so the only issue really is > the time it takes to process the read internally. > This patch adds a slow network read test and changes the tidy() method to > actually delete a sstable once the readTimeout has elapsed giving plenty of > time to serialize the read. > Removing this copy causes significantly less GC on the read path and improves > the tail latencies: > http://cstar.datastax.com/graph?stats=c0c8ce16-7fea-11e4-959d-42010af0688f&metric=gc_count&operation=2_read&smoothing=1&show_aggregates=true&xmin=0&xmax=109.34&ymin=0&ymax=5.5 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
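The "delete the sstable once the readTimeout has elapsed" idea reads roughly like the sketch below. This is not the actual tidy() change from the patch; the class and method names are invented for illustration.
{code:java}
// Illustrative sketch, not the actual tidy() implementation: defer unlinking an
// sstable file until the read timeout has passed, so an in-flight read of the
// mapped region has time to finish serializing its response.
import java.io.File;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public final class DeferredDeletion
{
    private static final ScheduledExecutorService DELETER =
            Executors.newSingleThreadScheduledExecutor();

    public static void deleteAfterReadTimeout(final File sstable, long readTimeoutMillis)
    {
        DELETER.schedule(new Runnable()
        {
            public void run()
            {
                if (!sstable.delete())
                    System.err.println("Unable to delete " + sstable);
            }
        }, readTimeoutMillis, TimeUnit.MILLISECONDS);
    }
}
{code}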
[jira] [Commented] (CASSANDRA-10579) IndexOutOfBoundsException during memtable flushing at startup (with offheap_objects)
[ https://issues.apache.org/jira/browse/CASSANDRA-10579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14974622#comment-14974622 ] Jeff Griffith commented on CASSANDRA-10579: --- Great thanks [~benedict]. i'll merge both changes in and give it a try. > IndexOutOfBoundsException during memtable flushing at startup (with > offheap_objects) > > > Key: CASSANDRA-10579 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10579 > Project: Cassandra > Issue Type: Bug > Components: Core > Environment: 2.1.10 on linux >Reporter: Jeff Griffith >Assignee: Benedict > Fix For: 2.1.x > > > Sometimes we have problems at startup where memtable flushes with an index > out of bounds exception as seen below. Cassandra is then dead in the water > until we track down the corresponding commit log via the segment ID and > remove it: > {code} > INFO [main] 2015-10-23 14:43:36,440 CommitLogReplayer.java:267 - Replaying > /home/y/var/cassandra/commitlog/CommitLog-4-1445474832692.log > INFO [main] 2015-10-23 14:43:36,440 CommitLogReplayer.java:270 - Replaying > /home/y/var/cassandra/commitlog/CommitLog-4-1445474832692.log (CL version 4, > messaging version 8) > INFO [main] 2015-10-23 14:43:36,594 CommitLogReplayer.java:478 - Finished > reading /home/y/var/cassandra/commitlog/CommitLog-4-1445474832692.log > INFO [main] 2015-10-23 14:43:36,594 CommitLogReplayer.java:267 - Replaying > /home/y/var/cassandra/commitlog/CommitLog-4-1445474832693.log > INFO [main] 2015-10-23 14:43:36,595 CommitLogReplayer.java:270 - Replaying > /home/y/var/cassandra/commitlog/CommitLog-4-1445474832693.log (CL version 4, > messaging version 8) > INFO [main] 2015-10-23 14:43:36,699 CommitLogReplayer.java:478 - Finished > reading /home/y/var/cassandra/commitlog/CommitLog-4-1445474832693.log > INFO [main] 2015-10-23 14:43:36,699 CommitLogReplayer.java:267 - Replaying > /home/y/var/cassandra/commitlog/CommitLog-4-1445474832694.log > INFO [main] 2015-10-23 14:43:36,699 CommitLogReplayer.java:270 - Replaying > /home/y/var/cassandra/commitlog/CommitLog-4-1445474832694.log (CL version 4, > messaging version 8) > WARN [SharedPool-Worker-5] 2015-10-23 14:43:36,747 > AbstractTracingAwareExecutorService.java:169 - Uncaught exception on thread > Thread[SharedPool-Worker-5,5,main]: {} > java.lang.ArrayIndexOutOfBoundsException: 6 > at > org.apache.cassandra.db.AbstractNativeCell.nametype(AbstractNativeCell.java:204) > ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT] > at > org.apache.cassandra.db.AbstractNativeCell.isStatic(AbstractNativeCell.java:199) > ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT] > at > org.apache.cassandra.db.composites.AbstractCType.compare(AbstractCType.java:166) > ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT] > at > org.apache.cassandra.db.composites.AbstractCellNameType$1.compare(AbstractCellNameType.java:61) > ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT] > at > org.apache.cassandra.db.composites.AbstractCellNameType$1.compare(AbstractCellNameType.java:58) > ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT] > at org.apache.cassandra.utils.btree.BTree.find(BTree.java:277) > ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT] > at > org.apache.cassandra.utils.btree.NodeBuilder.update(NodeBuilder.java:154) > ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT] > at org.apache.cassandra.utils.btree.Builder.update(Builder.java:74) > ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT] > at org.apache.cassandra.utils.btree.BTree.update(BTree.java:186) > ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT] > at > 
org.apache.cassandra.db.AtomicBTreeColumns.addAllWithSizeDelta(AtomicBTreeColumns.java:225) > ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT] > at org.apache.cassandra.db.Memtable.put(Memtable.java:210) > ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT] > at > org.apache.cassandra.db.ColumnFamilyStore.apply(ColumnFamilyStore.java:1225) > ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT] > at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:396) > ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT] > at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:359) > ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT] > at > org.apache.cassandra.db.commitlog.CommitLogReplayer$1.runMayThrow(CommitLogReplayer.java:455) > ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT] > at > org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) > ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT] > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > ~[na:1.8.0_31] > at > org.apache.cassandra.
[jira] [Updated] (CASSANDRA-10593) Unintended interactions between commitlog archiving and commitlog recycling
[ https://issues.apache.org/jira/browse/CASSANDRA-10593?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joshua McKenzie updated CASSANDRA-10593: Fix Version/s: 3.0.x 2.2.x 2.1.x > Unintended interactions between commitlog archiving and commitlog recycling > --- > > Key: CASSANDRA-10593 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10593 > Project: Cassandra > Issue Type: Bug >Reporter: J.B. Langston > Fix For: 2.1.x, 2.2.x, 3.0.x > > > Currently the comments in commitlog_archiving.properties suggest using either > cp or ln for the archive_command. > Using ln is problematic because commitlog recycling marks segments as > recycled once the corresponding memtables are flushed and Cassandra will no > longer replay them. This means it's only possible to do PITR on any records > that were written since the last flush. > Using cp works, and this is currently how OpsCenter does for PITR, however > [~brandon.williams] has pointed out this could have some performance impact > because of the additional I/O overhead of copying the commitlog segments. > Starting in 2.1, we can disable commit log recycling in cassandra.yaml so I > thought this would allow me to do PITR without the extra overhead of using > cp. However, when I disable commitlog recycling and try to do a PITR, > Cassandra blows up when trying to replay the restored commit logs: > {code} > ERROR 16:56:42 Exception encountered during startup > java.lang.IllegalStateException: Cannot safely construct descriptor for > segment, as name and header descriptors do not match ((4,1445878452545) vs > (4,1445876822565)): /opt/dse/backup/CommitLog-4-1445876822565.log > at > org.apache.cassandra.db.commitlog.CommitLogArchiver.maybeRestoreArchive(CommitLogArchiver.java:207) > ~[cassandra-all-2.1.9.791.jar:2.1.9.791] > at > org.apache.cassandra.db.commitlog.CommitLog.recover(CommitLog.java:116) > ~[cassandra-all-2.1.9.791.jar:2.1.9.791] > at > org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:352) > ~[cassandra-all-2.1.9.791.jar:2.1.9.791] > at com.datastax.bdp.server.DseDaemon.setup(DseDaemon.java:335) > ~[dse-core-4.8.0.jar:4.8.0] > at > org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:537) > ~[cassandra-all-2.1.9.791.jar:2.1.9.791] > at com.datastax.bdp.DseModule.main(DseModule.java:75) > [dse-core-4.8.0.jar:4.8.0] > java.lang.IllegalStateException: Cannot safely construct descriptor for > segment, as name and header descriptors do not match ((4,1445878452545) vs > (4,1445876822565)): /opt/dse/backup/CommitLog-4-1445876822565.log > at > org.apache.cassandra.db.commitlog.CommitLogArchiver.maybeRestoreArchive(CommitLogArchiver.java:207) > at > org.apache.cassandra.db.commitlog.CommitLog.recover(CommitLog.java:116) > at > org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:352) > at com.datastax.bdp.server.DseDaemon.setup(DseDaemon.java:335) > at > org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:537) > at com.datastax.bdp.DseModule.main(DseModule.java:75) > Exception encountered during startup: Cannot safely construct descriptor for > segment, as name and header descriptors do not match ((4,1445878452545) vs > (4,1445876822565)): /opt/dse/backup/CommitLog-4-1445876822565.log > INFO 16:56:42 DSE shutting down... > INFO 16:56:42 All plugins are stopped. 
> ERROR 16:56:42 Exception in thread Thread[Thread-2,5,main] > java.lang.AssertionError: null > at > org.apache.cassandra.gms.Gossiper.addLocalApplicationState(Gossiper.java:1403) > ~[cassandra-all-2.1.9.791.jar:2.1.9.791] > at com.datastax.bdp.gms.DseState.setActiveStatus(DseState.java:196) > ~[dse-core-4.8.0.jar:4.8.0] > at com.datastax.bdp.server.DseDaemon.preStop(DseDaemon.java:426) > ~[dse-core-4.8.0.jar:4.8.0] > at com.datastax.bdp.server.DseDaemon.safeStop(DseDaemon.java:436) > ~[dse-core-4.8.0.jar:4.8.0] > at com.datastax.bdp.server.DseDaemon$1.run(DseDaemon.java:676) > ~[dse-core-4.8.0.jar:4.8.0] > at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_31] > {code} > For the sake of completeness, I also tested using cp for the archive_command > and commitlog recycling disabled, and PITR works as expected, but this of > course defeats the point. > It would be good to have some guidance on what is supported here. If ln isn't > expected to work at all, it shouldn't be documented as an acceptable option > for the archive_command in commitlog_archiving.properties. If it should work > with commitlog recycling disabled, the bug causing the IllegalStateException > needs to be fixed. > It would also be good to
[jira] [Updated] (CASSANDRA-10593) Unintended interactions between commitlog archiving and commitlog recycling
[ https://issues.apache.org/jira/browse/CASSANDRA-10593?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] J.B. Langston updated CASSANDRA-10593: -- Description: Currently the comments in commitlog_archiving.properties suggest using either cp or ln for the archive_command. Using ln is problematic because commitlog recycling marks segments as recycled once the corresponding memtables are flushed and Cassandra will no longer replay them. This means it's only possible to do PITR on any records that were written since the last flush. Using cp works, and this is currently how OpsCenter does for PITR, however [~brandon.williams] has pointed out this could have some performance impact because of the additional I/O overhead of copying the commitlog segments. Starting in 2.1, we can disable commit log recycling in cassandra.yaml so I thought this would allow me to do PITR without the extra overhead of using cp. However, when I disable commitlog recycling and try to do a PITR, Cassandra blows up when trying to replay the restored commit logs: {code} ERROR 16:56:42 Exception encountered during startup java.lang.IllegalStateException: Cannot safely construct descriptor for segment, as name and header descriptors do not match ((4,1445878452545) vs (4,1445876822565)): /opt/dse/backup/CommitLog-4-1445876822565.log at org.apache.cassandra.db.commitlog.CommitLogArchiver.maybeRestoreArchive(CommitLogArchiver.java:207) ~[cassandra-all-2.1.9.791.jar:2.1.9.791] at org.apache.cassandra.db.commitlog.CommitLog.recover(CommitLog.java:116) ~[cassandra-all-2.1.9.791.jar:2.1.9.791] at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:352) ~[cassandra-all-2.1.9.791.jar:2.1.9.791] at com.datastax.bdp.server.DseDaemon.setup(DseDaemon.java:335) ~[dse-core-4.8.0.jar:4.8.0] at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:537) ~[cassandra-all-2.1.9.791.jar:2.1.9.791] at com.datastax.bdp.DseModule.main(DseModule.java:75) [dse-core-4.8.0.jar:4.8.0] java.lang.IllegalStateException: Cannot safely construct descriptor for segment, as name and header descriptors do not match ((4,1445878452545) vs (4,1445876822565)): /opt/dse/backup/CommitLog-4-1445876822565.log at org.apache.cassandra.db.commitlog.CommitLogArchiver.maybeRestoreArchive(CommitLogArchiver.java:207) at org.apache.cassandra.db.commitlog.CommitLog.recover(CommitLog.java:116) at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:352) at com.datastax.bdp.server.DseDaemon.setup(DseDaemon.java:335) at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:537) at com.datastax.bdp.DseModule.main(DseModule.java:75) Exception encountered during startup: Cannot safely construct descriptor for segment, as name and header descriptors do not match ((4,1445878452545) vs (4,1445876822565)): /opt/dse/backup/CommitLog-4-1445876822565.log INFO 16:56:42 DSE shutting down... INFO 16:56:42 All plugins are stopped. 
ERROR 16:56:42 Exception in thread Thread[Thread-2,5,main] java.lang.AssertionError: null at org.apache.cassandra.gms.Gossiper.addLocalApplicationState(Gossiper.java:1403) ~[cassandra-all-2.1.9.791.jar:2.1.9.791] at com.datastax.bdp.gms.DseState.setActiveStatus(DseState.java:196) ~[dse-core-4.8.0.jar:4.8.0] at com.datastax.bdp.server.DseDaemon.preStop(DseDaemon.java:426) ~[dse-core-4.8.0.jar:4.8.0] at com.datastax.bdp.server.DseDaemon.safeStop(DseDaemon.java:436) ~[dse-core-4.8.0.jar:4.8.0] at com.datastax.bdp.server.DseDaemon$1.run(DseDaemon.java:676) ~[dse-core-4.8.0.jar:4.8.0] at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_31] {code} For the sake of completeness, I also tested using cp for the archive_command and commitlog recycling disabled, and PITR works as expected, but this of course defeats the point. It would be good to have some guidance on what is supported here. If ln isn't expected to work at all, it shouldn't be documented as an acceptable option for the archive_command in commitlog_archiving.properties. If it should work with commitlog recycling disabled, the bug causing the IllegalStateException needs to be fixed. It would also be good to do some testing and quantify the performance impact of enabling commitlog archiving using cp as the archive_command. I realize there are several different issues described here, so maybe they should be separate JIRAs, but first I wanted to just clarify whether we want to support ln at all, and we can go from there. was: Currently the comments in commitlog_archiving.properties suggest using either cp or ln for the archive_command. Using ln is problematic because commitlog recycling marks segments as recycled once the corresponding memtables are flushed and Cassandra will n
[jira] [Created] (CASSANDRA-10593) Unintended interactions between commitlog archiving and commitlog recycling
J.B. Langston created CASSANDRA-10593: - Summary: Unintended interactions between commitlog archiving and commitlog recycling Key: CASSANDRA-10593 URL: https://issues.apache.org/jira/browse/CASSANDRA-10593 Project: Cassandra Issue Type: Bug Reporter: J.B. Langston Currently the comments in commitlog_archiving.properties suggest using either cp or ln for the archive_command. Using ln is problematic because commitlog recycling marks segments as recycled once the corresponding memtables are flushed and Cassandra will no longer be replay them. This means it's only possible to do PITR on any records that were written since the last flush. Using cp works, and this is currently how OpsCenter does for PITR, however [~brandon.williams] has pointed out this could have some performance impact because of the additional I/O overhead of copying the commitlog segments. Starting in 2.1, we can disable commit log recycling in cassandra.yaml so I thought this would allow me to do PITR without the extra overhead of using cp. However, when I disable commitlog recycling and try to do a PITR, Cassandra blows up when trying to replay the restored commit logs: {code} ERROR 16:56:42 Exception encountered during startup java.lang.IllegalStateException: Cannot safely construct descriptor for segment, as name and header descriptors do not match ((4,1445878452545) vs (4,1445876822565)): /opt/dse/backup/CommitLog-4-1445876822565.log at org.apache.cassandra.db.commitlog.CommitLogArchiver.maybeRestoreArchive(CommitLogArchiver.java:207) ~[cassandra-all-2.1.9.791.jar:2.1.9.791] at org.apache.cassandra.db.commitlog.CommitLog.recover(CommitLog.java:116) ~[cassandra-all-2.1.9.791.jar:2.1.9.791] at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:352) ~[cassandra-all-2.1.9.791.jar:2.1.9.791] at com.datastax.bdp.server.DseDaemon.setup(DseDaemon.java:335) ~[dse-core-4.8.0.jar:4.8.0] at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:537) ~[cassandra-all-2.1.9.791.jar:2.1.9.791] at com.datastax.bdp.DseModule.main(DseModule.java:75) [dse-core-4.8.0.jar:4.8.0] java.lang.IllegalStateException: Cannot safely construct descriptor for segment, as name and header descriptors do not match ((4,1445878452545) vs (4,1445876822565)): /opt/dse/backup/CommitLog-4-1445876822565.log at org.apache.cassandra.db.commitlog.CommitLogArchiver.maybeRestoreArchive(CommitLogArchiver.java:207) at org.apache.cassandra.db.commitlog.CommitLog.recover(CommitLog.java:116) at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:352) at com.datastax.bdp.server.DseDaemon.setup(DseDaemon.java:335) at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:537) at com.datastax.bdp.DseModule.main(DseModule.java:75) Exception encountered during startup: Cannot safely construct descriptor for segment, as name and header descriptors do not match ((4,1445878452545) vs (4,1445876822565)): /opt/dse/backup/CommitLog-4-1445876822565.log INFO 16:56:42 DSE shutting down... INFO 16:56:42 All plugins are stopped. 
ERROR 16:56:42 Exception in thread Thread[Thread-2,5,main] java.lang.AssertionError: null at org.apache.cassandra.gms.Gossiper.addLocalApplicationState(Gossiper.java:1403) ~[cassandra-all-2.1.9.791.jar:2.1.9.791] at com.datastax.bdp.gms.DseState.setActiveStatus(DseState.java:196) ~[dse-core-4.8.0.jar:4.8.0] at com.datastax.bdp.server.DseDaemon.preStop(DseDaemon.java:426) ~[dse-core-4.8.0.jar:4.8.0] at com.datastax.bdp.server.DseDaemon.safeStop(DseDaemon.java:436) ~[dse-core-4.8.0.jar:4.8.0] at com.datastax.bdp.server.DseDaemon$1.run(DseDaemon.java:676) ~[dse-core-4.8.0.jar:4.8.0] at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_31] {code} For the sake of completeness, I also tested using cp for the archive_command and commitlog recycling disabled, and PITR works as expected, but this of course defeats the point. It would be good to have some guidance on what is supported here. If ln isn't expected to work at all, it shouldn't be documented as an acceptable option for the archive_command in commitlog_archiving.properties. If it should work with commitlog recycling disabled, the bug causing the IllegalStateException needs to be fixed. It would also be good to do some testing and quantify the performance impact of enabling commitlog archiving using cp as the archive_command. I realize there are several different issues described here, so maybe they should be separate JIRAs, but first I wanted to just clarify whether we want to support ln at all, and we can go from there. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
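For reference, a cp-based point-in-time-recovery setup along the lines discussed in this ticket would be configured in conf/commitlog_archiving.properties roughly as below. The paths and the restore timestamp are placeholders; %path/%name and %from/%to are the substitution variables documented in that file.
{code}
# archive each segment with a copy rather than a hard link (paths are examples only)
archive_command=/bin/cp %path /backup/commitlog/%name
# copy archived segments back into the live commitlog directory on restore
restore_command=/bin/cp -f %from %to
restore_directories=/backup/commitlog
# replay mutations up to and including this GMT timestamp (format yyyy:MM:dd HH:mm:ss)
restore_point_in_time=2015:10:26 18:00:00
{code}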
[jira] [Updated] (CASSANDRA-10569) Keyspace validation errors are getting lost in system_add_keyspace
[ https://issues.apache.org/jira/browse/CASSANDRA-10569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Paulo Motta updated CASSANDRA-10569: Reviewer: Paulo Motta > Keyspace validation errors are getting lost in system_add_keyspace > -- > > Key: CASSANDRA-10569 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10569 > Project: Cassandra > Issue Type: Bug > Components: Core >Reporter: Mike Adamson >Assignee: Sam Tunnicliffe > Fix For: 3.0.0 > > > The following: > {noformat} > cassandraserver.system_add_keyspace( > new KsDef("xxx", SimpleStrategy.class.getSimpleName(), > Lists.newArrayList())); > {noformat} > used to throw an {{InvalidRequestException}} in 2.1. > In 3.0 the strategy validation has been removed from > {{KeyspaceMetadata.validate}} so the strategy errors don't get picked up > until the schema change has been announced. As a result the > {{ConfigurationError}} is swallowed in {{FBUtilities.waitOnFuture}} and > thrown on as a {{RuntimeException}}. > This possibly affects {{system_update_keyspace}} as well. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-10579) IndexOutOfBoundsException during memtable flushing at startup (with offheap_objects)
[ https://issues.apache.org/jira/browse/CASSANDRA-10579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14974535#comment-14974535 ] Benedict commented on CASSANDRA-10579: -- Thanks. That particular assertion can be explained and fixed by [this patch|https://github.com/belliottsmith/cassandra/tree/10579-fix]. I'm not certain if this explains the earlier presentation of the problem that occurs without assertions enabled, but since it involves integer overflows, it is entirely possible that we manage to overwrite the higher-order bytes of that other field as a result. If you could try the patch and confirm the problem is resolved, I'd appreciate it. if not, please post whatever error is now produced. > IndexOutOfBoundsException during memtable flushing at startup (with > offheap_objects) > > > Key: CASSANDRA-10579 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10579 > Project: Cassandra > Issue Type: Bug > Components: Core > Environment: 2.1.10 on linux >Reporter: Jeff Griffith >Assignee: Benedict > Fix For: 2.1.x > > > Sometimes we have problems at startup where memtable flushes with an index > out of bounds exception as seen below. Cassandra is then dead in the water > until we track down the corresponding commit log via the segment ID and > remove it: > {code} > INFO [main] 2015-10-23 14:43:36,440 CommitLogReplayer.java:267 - Replaying > /home/y/var/cassandra/commitlog/CommitLog-4-1445474832692.log > INFO [main] 2015-10-23 14:43:36,440 CommitLogReplayer.java:270 - Replaying > /home/y/var/cassandra/commitlog/CommitLog-4-1445474832692.log (CL version 4, > messaging version 8) > INFO [main] 2015-10-23 14:43:36,594 CommitLogReplayer.java:478 - Finished > reading /home/y/var/cassandra/commitlog/CommitLog-4-1445474832692.log > INFO [main] 2015-10-23 14:43:36,594 CommitLogReplayer.java:267 - Replaying > /home/y/var/cassandra/commitlog/CommitLog-4-1445474832693.log > INFO [main] 2015-10-23 14:43:36,595 CommitLogReplayer.java:270 - Replaying > /home/y/var/cassandra/commitlog/CommitLog-4-1445474832693.log (CL version 4, > messaging version 8) > INFO [main] 2015-10-23 14:43:36,699 CommitLogReplayer.java:478 - Finished > reading /home/y/var/cassandra/commitlog/CommitLog-4-1445474832693.log > INFO [main] 2015-10-23 14:43:36,699 CommitLogReplayer.java:267 - Replaying > /home/y/var/cassandra/commitlog/CommitLog-4-1445474832694.log > INFO [main] 2015-10-23 14:43:36,699 CommitLogReplayer.java:270 - Replaying > /home/y/var/cassandra/commitlog/CommitLog-4-1445474832694.log (CL version 4, > messaging version 8) > WARN [SharedPool-Worker-5] 2015-10-23 14:43:36,747 > AbstractTracingAwareExecutorService.java:169 - Uncaught exception on thread > Thread[SharedPool-Worker-5,5,main]: {} > java.lang.ArrayIndexOutOfBoundsException: 6 > at > org.apache.cassandra.db.AbstractNativeCell.nametype(AbstractNativeCell.java:204) > ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT] > at > org.apache.cassandra.db.AbstractNativeCell.isStatic(AbstractNativeCell.java:199) > ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT] > at > org.apache.cassandra.db.composites.AbstractCType.compare(AbstractCType.java:166) > ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT] > at > org.apache.cassandra.db.composites.AbstractCellNameType$1.compare(AbstractCellNameType.java:61) > ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT] > at > org.apache.cassandra.db.composites.AbstractCellNameType$1.compare(AbstractCellNameType.java:58) > ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT] > at 
org.apache.cassandra.utils.btree.BTree.find(BTree.java:277) > ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT] > at > org.apache.cassandra.utils.btree.NodeBuilder.update(NodeBuilder.java:154) > ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT] > at org.apache.cassandra.utils.btree.Builder.update(Builder.java:74) > ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT] > at org.apache.cassandra.utils.btree.BTree.update(BTree.java:186) > ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT] > at > org.apache.cassandra.db.AtomicBTreeColumns.addAllWithSizeDelta(AtomicBTreeColumns.java:225) > ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT] > at org.apache.cassandra.db.Memtable.put(Memtable.java:210) > ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT] > at > org.apache.cassandra.db.ColumnFamilyStore.apply(ColumnFamilyStore.java:1225) > ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT] > at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:396) > ~[apache-cassandra-2.1.10.jar:2.1.10-SNAPSHOT] > at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:359) > ~[apache-cassandra-2.1.10.j
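To make the integer-overflow remark in the comment above concrete, the toy example below (not Cassandra code, and with made-up sizes) shows how an offset computed in 32-bit arithmetic can silently wrap negative; a native write issued at such an offset can land on, and corrupt, a neighbouring field.
{code:java}
// Toy illustration of the failure mode: only the wrap-around matters, not the values.
public class OverflowExample
{
    public static void main(String[] args)
    {
        int cells = 70_000;
        int bytesPerCell = 40_000;
        int badOffset = cells * bytesPerCell;          // overflows int: -1494967296
        long goodOffset = (long) cells * bytesPerCell; // 2800000000, the intended offset
        System.out.println(badOffset + " vs " + goodOffset);
    }
}
{code}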
[jira] [Commented] (CASSANDRA-9556) Add newer data types to cassandra stress (e.g. decimal, dates, UDTs)
[ https://issues.apache.org/jira/browse/CASSANDRA-9556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14974486#comment-14974486 ] Jeremy Hanna commented on CASSANDRA-9556: - [~blerer] should we split UDTs and Tuples into a separate ticket and get all of the basic types done for this one? > Add newer data types to cassandra stress (e.g. decimal, dates, UDTs) > > > Key: CASSANDRA-9556 > URL: https://issues.apache.org/jira/browse/CASSANDRA-9556 > Project: Cassandra > Issue Type: Bug > Components: Tools >Reporter: Jeremy Hanna >Assignee: ZhaoYang > Labels: stress > Attachments: cassandra-2.1-9556.txt, trunk-9556.txt > > > Currently you can't define a data model with decimal types and use Cassandra > stress with it. Also, I imagine that holds true with other newer data types > such as the new date and time types. Besides that, now that data models are > including user defined types, we should allow users to create those > structures with stress as well. Perhaps we could split out the UDTs into a > different ticket if it holds the other types up. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-7217) Native transport performance (with cassandra-stress) drops precipitously past around 1000 threads
[ https://issues.apache.org/jira/browse/CASSANDRA-7217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jeremy Hanna updated CASSANDRA-7217: Labels: performance stress triaged (was: performance triaged) > Native transport performance (with cassandra-stress) drops precipitously past > around 1000 threads > - > > Key: CASSANDRA-7217 > URL: https://issues.apache.org/jira/browse/CASSANDRA-7217 > Project: Cassandra > Issue Type: Bug > Components: Core >Reporter: Benedict >Assignee: Ryan McGuire > Labels: performance, stress, triaged > Fix For: 2.1.x > > > This is obviously bad. Let's figure out why it's happening and put a stop to > it. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-7739) cassandra-stress: cannot handle "value-less" tables
[ https://issues.apache.org/jira/browse/CASSANDRA-7739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jeremy Hanna updated CASSANDRA-7739: Labels: lhf stress (was: lhf) > cassandra-stress: cannot handle "value-less" tables > --- > > Key: CASSANDRA-7739 > URL: https://issues.apache.org/jira/browse/CASSANDRA-7739 > Project: Cassandra > Issue Type: Improvement >Reporter: Robert Stupp > Labels: lhf, stress > Fix For: 2.1.x > > > Given a table, that only has primary-key columns, cassandra-stress fails with > this exception. > The bug is, that > https://github.com/apache/cassandra/blob/trunk/tools/stress/src/org/apache/cassandra/stress/StressProfile.java#L281 > always adds the {{SET}} even if there are no "value columns" to update. > {noformat} > Exception in thread "main" java.lang.RuntimeException: > InvalidRequestException(why:line 1:24 no viable alternative at input 'WHERE') > at > org.apache.cassandra.stress.StressProfile.getInsert(StressProfile.java:352) > at > org.apache.cassandra.stress.settings.SettingsCommandUser$1.get(SettingsCommandUser.java:66) > at > org.apache.cassandra.stress.settings.SettingsCommandUser$1.get(SettingsCommandUser.java:62) > at > org.apache.cassandra.stress.operations.SampledOpDistributionFactory$1.get(SampledOpDistributionFactory.java:76) > at > org.apache.cassandra.stress.StressAction$Consumer.(StressAction.java:248) > at org.apache.cassandra.stress.StressAction.run(StressAction.java:188) > at org.apache.cassandra.stress.StressAction.warmup(StressAction.java:92) > at org.apache.cassandra.stress.StressAction.run(StressAction.java:62) > at org.apache.cassandra.stress.Stress.main(Stress.java:109) > Caused by: InvalidRequestException(why:line 1:24 no viable alternative at > input 'WHERE') > at > org.apache.cassandra.thrift.Cassandra$prepare_cql3_query_result$prepare_cql3_query_resultStandardScheme.read(Cassandra.java:52282) > at > org.apache.cassandra.thrift.Cassandra$prepare_cql3_query_result$prepare_cql3_query_resultStandardScheme.read(Cassandra.java:52259) > at > org.apache.cassandra.thrift.Cassandra$prepare_cql3_query_result.read(Cassandra.java:52198) > at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78) > at > org.apache.cassandra.thrift.Cassandra$Client.recv_prepare_cql3_query(Cassandra.java:1797) > at > org.apache.cassandra.thrift.Cassandra$Client.prepare_cql3_query(Cassandra.java:1783) > at > org.apache.cassandra.stress.util.SimpleThriftClient.prepare_cql3_query(SimpleThriftClient.java:79) > at > org.apache.cassandra.stress.StressProfile.getInsert(StressProfile.java:348) > ... 8 more > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
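A guard of roughly the following shape is what the report is asking for; it is an illustration, not the actual StressProfile.getInsert() code, and the method and parameter names are invented. The key point is that a CQL UPDATE is only valid with a SET clause, so a table consisting solely of primary-key columns has to be written with INSERT instead.
{code:java}
// Illustrative only: build a write statement that degrades gracefully for key-only tables.
import java.util.Collections;
import java.util.List;

final class WriteStatementSketch
{
    static String buildWriteCql(String table, List<String> keyColumns, List<String> valueColumns)
    {
        if (valueColumns.isEmpty())
        {
            // no non-key columns: "UPDATE ... SET" would be illegal, so fall back to INSERT
            String cols = String.join(", ", keyColumns);
            String binds = String.join(", ", Collections.nCopies(keyColumns.size(), "?"));
            return "INSERT INTO " + table + " (" + cols + ") VALUES (" + binds + ")";
        }

        StringBuilder cql = new StringBuilder("UPDATE ").append(table).append(" SET ");
        for (int i = 0; i < valueColumns.size(); i++)
            cql.append(i == 0 ? "" : ", ").append(valueColumns.get(i)).append(" = ?");
        cql.append(" WHERE ");
        for (int i = 0; i < keyColumns.size(); i++)
            cql.append(i == 0 ? "" : " AND ").append(keyColumns.get(i)).append(" = ?");
        return cql.toString();
    }
}
{code}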
[jira] [Updated] (CASSANDRA-8629) Exceptions in cassandra-stress
[ https://issues.apache.org/jira/browse/CASSANDRA-8629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jeremy Hanna updated CASSANDRA-8629: Labels: stress (was: ) > Exceptions in cassandra-stress > -- > > Key: CASSANDRA-8629 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8629 > Project: Cassandra > Issue Type: Bug >Reporter: Aleksey Yeschenko >Priority: Trivial > Labels: stress > > cassandra-stress when run with tiny n, throws > org.apache.commons.math3.exception.NotStrictlyPositiveException. > Now, n=1 doesn't really make any sense, w/ 50k writes used just for warmup, > but an exception is still an exception. Labeled w/ priority: Trivial. > Profile used: http://pastebin.com/raw.php?i=9U5EMdVq > {noformat} > tools/bin/cassandra-stress user profile=partition.yaml ops\(insert=1\) n=1 > -rate threads=50 > INFO 18:21:59 Using data-center name 'datacenter1' for > DCAwareRoundRobinPolicy (if this is incorrect, please provide the correct > datacenter name with DCAwareRoundRobinPolicy constructor) > Connected to cluster: Test Cluster > Datatacenter: datacenter1; Host: localhost/127.0.0.1; Rack: rack1 > INFO 18:21:59 New Cassandra host localhost/127.0.0.1:9042 added > Created schema. Sleeping 1s for propagation. > Exception in thread "main" > org.apache.commons.math3.exception.NotStrictlyPositiveException: standard > deviation (0) > at > org.apache.commons.math3.distribution.NormalDistribution.(NormalDistribution.java:108) > at > org.apache.cassandra.stress.settings.OptionDistribution$GaussianFactory.get(OptionDistribution.java:418) > at > org.apache.cassandra.stress.generate.SeedManager.(SeedManager.java:59) > at > org.apache.cassandra.stress.settings.SettingsCommandUser.getFactory(SettingsCommandUser.java:78) > at org.apache.cassandra.stress.StressAction.run(StressAction.java:61) > at org.apache.cassandra.stress.Stress.main(Stress.java:109) > {noformat} > On cassandra-2.1 HEAD, I cannot reproduce it, but get a different exception, > with n=10: > {noformat} > Exception in thread "Thread-13" java.lang.AssertionError > at > org.apache.cassandra.stress.util.DynamicList.remove(DynamicList.java:156) > at org.apache.cassandra.stress.generate.Seed.remove(Seed.java:83) > at > org.apache.cassandra.stress.generate.SeedManager.markLastWrite(SeedManager.java:115) > at > org.apache.cassandra.stress.generate.PartitionIterator$MultiRowIterator.setHasNext(PartitionIterator.java:561) > at > org.apache.cassandra.stress.generate.PartitionIterator$MultiRowIterator.seek(PartitionIterator.java:333) > at > org.apache.cassandra.stress.generate.PartitionIterator$MultiRowIterator.reset(PartitionIterator.java:242) > at > org.apache.cassandra.stress.generate.PartitionIterator.reset(PartitionIterator.java:99) > at org.apache.cassandra.stress.Operation.ready(Operation.java:110) > at > org.apache.cassandra.stress.StressAction$Consumer.run(StressAction.java:288) > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
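The NotStrictlyPositiveException in the first trace comes from commons-math3's NormalDistribution constructor, which rejects a standard deviation of zero, presumably because the seed distribution derived from such a tiny n collapses to zero width. A guard of the following shape (illustrative only, not the actual OptionDistribution code, and with one plausible mean/stdev parameterization) would avoid the crash by degrading to a constant distribution.
{code:java}
// Illustrative guard, not the stress tool's actual factory code.
import org.apache.commons.math3.distribution.ConstantRealDistribution;
import org.apache.commons.math3.distribution.NormalDistribution;
import org.apache.commons.math3.distribution.RealDistribution;

final class GaussianGuard
{
    static RealDistribution gaussian(double min, double max)
    {
        double mean = (min + max) / 2;
        double stdev = (max - min) / 6; // assume +/- 3 standard deviations span [min, max]
        // NormalDistribution(mean, sd) throws NotStrictlyPositiveException when sd == 0
        return stdev > 0 ? new NormalDistribution(mean, stdev)
                         : new ConstantRealDistribution(mean);
    }
}
{code}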
[jira] [Updated] (CASSANDRA-8633) cassandra-stress writes don't respect 'select' parameter
[ https://issues.apache.org/jira/browse/CASSANDRA-8633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jeremy Hanna updated CASSANDRA-8633: Labels: stress (was: ) > cassandra-stress writes don't respect 'select' parameter > > > Key: CASSANDRA-8633 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8633 > Project: Cassandra > Issue Type: Bug >Reporter: Aleksey Yeschenko > Labels: stress > Fix For: 2.1.x > > Attachments: partition1.yaml, partition2.yaml, partition3.yaml, > partition4.yaml > > > W/ the attached profile (partition1.yaml), which has 3 clustering columns, > each w/ clustering: fixed(100), and select: fixed(1)/1M, stress generates > huge batches (whereas it should only generate single-row mutations). > {noformat} > WARN 20:37:59 Batch of prepared statements for [extreme.extreme] is of size > 269973, exceeding specified threshold of 5120 by 264853. > {noformat} > It can be better or worse depending on the distribution of those clustering-s > (w/ product fixed at 1M). > W/ 10/1000/100 (partition2.yaml): > {noformat} > WARN 20:47:22 Batch of prepared statements for [extreme.extreme] is of size > 1769445, exceeding specified threshold of 5120 by 1764325. > {noformat} > W/ 10k/100/1 (partition3.yaml): > {noformat} > WARN 20:50:19 Batch of prepared statements for [extreme.extreme] is of size > 5373, exceeding specified threshold of 5120 by 253. > {noformat}
[jira] [Updated] (CASSANDRA-8500) Improve cassandra-stress help pages
[ https://issues.apache.org/jira/browse/CASSANDRA-8500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jeremy Hanna updated CASSANDRA-8500: Labels: stress (was: ) > Improve cassandra-stress help pages > --- > > Key: CASSANDRA-8500 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8500 > Project: Cassandra > Issue Type: Improvement > Components: Tools >Reporter: Benedict > Labels: stress > > cassandra-stress is flummoxing a lot of people. As well as rewriting its > README, we should improve the help pages so that they're more legible. > We should offer an "all" option that prints every sub-page, so it can be > scanned like a README (and perhaps make the basis of said file), and we > should at least stop printing all of the distribution parameter options every > time they appear, as they're very common now. > Offering some help about how to make the best out of the help might itself be > a good idea, as well as perhaps printing what all of the options within each > subgroup are in the summary page, so there is no pecking at them to be done. > There should be a dedicated distribution help page that can explain all of > the parameters that are currently just given names we hope are sufficiently > descriptive. > Finally, we should make sure all of the descriptions of each option are clear. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-8681) cassandra-stress fails after single ReadTimeoutException
[ https://issues.apache.org/jira/browse/CASSANDRA-8681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jeremy Hanna updated CASSANDRA-8681: Labels: stress (was: ) > cassandra-stress fails after single ReadTimeoutException > > > Key: CASSANDRA-8681 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8681 > Project: Cassandra > Issue Type: Bug > Components: Tools >Reporter: Robert Stupp > Labels: stress > > After a single exception is propagated from the Java Driver, cassandra-stress > prints out some {{NoSuchElementException}}s and stops working. > {noformat} > Running [singlepost, timeline] with 36 threads 2 minutes > total ops , adj row/s,op/s,pk/s, row/s,mean, med, .95, >.99,.999, max, time, stderr, gc: #, max ms, sum ms, sdv > ms, mb > com.datastax.driver.core.exceptions.ReadTimeoutException: Cassandra timeout > during read query at consistency ONE (1 responses were required but only 0 > replica responded) > com.datastax.driver.core.exceptions.ReadTimeoutException: Cassandra timeout > during read query at consistency ONE (1 responses were required but only 0 > replica responded) > com.datastax.driver.core.exceptions.ReadTimeoutException: Cassandra timeout > during read query at consistency ONE (timeout while waiting for repair of > inconsistent replica) > com.datastax.driver.core.exceptions.ReadTimeoutException: Cassandra timeout > during read query at consistency ONE (timeout while waiting for repair of > inconsistent replica) > java.util.NoSuchElementException > com.datastax.driver.core.exceptions.ReadTimeoutException: Cassandra timeout > during read query at consistency ONE (1 responses were required but only 0 > replica responded) > java.util.NoSuchElementException > com.datastax.driver.core.exceptions.ReadTimeoutException: Cassandra timeout > during read query at consistency ONE (1 responses were required but only 0 > replica responded) > java.util.NoSuchElementException > java.util.NoSuchElementException > java.util.NoSuchElementException > java.util.NoSuchElementException > java.util.NoSuchElementException > com.datastax.driver.core.exceptions.ReadTimeoutException: Cassandra timeout > during read query at consistency ONE (1 responses were required but only 0 > replica responded) > com.datastax.driver.core.exceptions.ReadTimeoutException: Cassandra timeout > during read query at consistency ONE (1 responses were required but only 0 > replica responded) > java.util.NoSuchElementException > com.datastax.driver.core.exceptions.ReadTimeoutException: Cassandra timeout > during read query at consistency ONE (1 responses were required but only 0 > replica responded) > com.datastax.driver.core.exceptions.ReadTimeoutException: Cassandra timeout > during read query at consistency ONE (1 responses were required but only 0 > replica responded) > java.util.NoSuchElementException > com.datastax.driver.core.exceptions.ReadTimeoutException: Cassandra timeout > during read query at consistency ONE (1 responses were required but only 0 > replica responded) > com.datastax.driver.core.exceptions.ReadTimeoutException: Cassandra timeout > during read query at consistency ONE (timeout while waiting for repair of > inconsistent replica) > java.util.NoSuchElementException > java.util.NoSuchElementException > java.util.NoSuchElementException > java.util.NoSuchElementException > java.util.NoSuchElementException > java.util.NoSuchElementException > java.util.NoSuchElementException > java.util.NoSuchElementException > java.util.NoSuchElementException > java.util.NoSuchElementException > 
java.util.NoSuchElementException > java.util.NoSuchElementException > ... (the same java.util.NoSuchElementException line repeats for the remainder of the truncated output)
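The NoSuchElementExceptions read like a secondary symptom of the consumer thread dying after the first driver error. A rough sketch of the tolerant behaviour the report implies, counting per-operation failures instead of letting one timeout unwind the loop; the structure and names here are hypothetical, not the actual stress consumer:
{code}
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.concurrent.atomic.AtomicLong;

public class TolerantConsumerSketch
{
    static final AtomicLong errors = new AtomicLong();

    // Drain the work queue, recording failures rather than propagating them.
    static void consume(Queue<Runnable> work)
    {
        Runnable op;
        while ((op = work.poll()) != null)
        {
            try
            {
                op.run();
            }
            catch (RuntimeException e)   // e.g. the driver's ReadTimeoutException
            {
                errors.incrementAndGet();
            }
        }
    }

    public static void main(String[] args)
    {
        Queue<Runnable> work = new ArrayDeque<>();
        work.add(() -> { throw new RuntimeException("simulated read timeout"); });
        work.add(() -> System.out.println("the next operation still runs"));
        consume(work);
        System.out.println("errors recorded: " + errors.get());
    }
}
{code}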
[jira] [Updated] (CASSANDRA-8686) Introduce Latency Target for Stress
[ https://issues.apache.org/jira/browse/CASSANDRA-8686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jeremy Hanna updated CASSANDRA-8686: Labels: stress (was: ) > Introduce Latency Target for Stress > --- > > Key: CASSANDRA-8686 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8686 > Project: Cassandra > Issue Type: Improvement >Reporter: jonathan lacefield >Priority: Minor > Labels: stress > > This item is a request to add a latency target to the rate option for the new > stress tool. The goal of the latency target would be to provide a guideline > for SLAs to the stress tool so the stress tool can determine threads and > throughputs that can be sustained while meeting the SLA targets. > For example: > cassandra-stress [options/commands] -rate latency p90=5 p95=10 p99=100 > The outcome of this command would be a stress execution that would gradually > increase threads, and hence throughput (trans/sec), until the latency profile > can no longer be satisfied with the current workload (yaml file definition) > and/or cluster. This would provide a ceiling for throughput and connections > for the given cluster, workload, and SLA profile. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
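A sketch of how such a latency target could drive the thread ramp; the measurement model and every name below are invented for illustration, and this is not an existing stress option:
{code}
public class LatencyTargetSketch
{
    // Stand-in for "run the workload at this concurrency and measure";
    // the fake model just grows latency with thread count.
    static double[] runAndMeasure(int threads)   // returns {p90, p95, p99} in ms
    {
        double base = 2 + threads * 0.4;
        return new double[] { base, base * 1.5, base * 3 };
    }

    public static void main(String[] args)
    {
        double[] sla = { 5, 10, 100 };   // -rate latency p90=5 p95=10 p99=100 from the example above
        int threads = 1, lastGood = 0;
        while (true)
        {
            double[] measured = runAndMeasure(threads);
            boolean withinSla = measured[0] <= sla[0] && measured[1] <= sla[1] && measured[2] <= sla[2];
            if (!withinSla)
                break;
            lastGood = threads;
            threads *= 2;   // a real implementation would probably ramp more gently
        }
        System.out.println("max sustainable thread count under the SLA: " + lastGood);
    }
}
{code}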
[jira] [Updated] (CASSANDRA-8769) Extend cassandra-stress to be slightly more configurable
[ https://issues.apache.org/jira/browse/CASSANDRA-8769?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jeremy Hanna updated CASSANDRA-8769: Labels: stress (was: ) > Extend cassandra-stress to be slightly more configurable > > > Key: CASSANDRA-8769 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8769 > Project: Cassandra > Issue Type: Improvement > Components: Tools >Reporter: Anthony Cozzie >Assignee: Anthony Cozzie >Priority: Minor > Labels: stress > Fix For: 2.1.5 > > Attachments: stress-extensions-patch-v2.txt, > stress-extensions-patch.txt > > > Some simple extensions to cassandra stress: > * Configurable warm up iterations > * Output results by command type for USER (e.g. 5000 ops/sec, 1000 inserts, > 1000 reads, 3000 range reads) > * Count errors when ignore flag is set > * Configurable truncate for more consistent results > Patch attached. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-10592) IllegalArgumentException in DataOutputBuffer.reallocate
[ https://issues.apache.org/jira/browse/CASSANDRA-10592?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sylvain Lebresne updated CASSANDRA-10592: - Assignee: Ariel Weisberg > IllegalArgumentException in DataOutputBuffer.reallocate > --- > > Key: CASSANDRA-10592 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10592 > Project: Cassandra > Issue Type: Bug >Reporter: Sebastian Estevez >Assignee: Ariel Weisberg > Fix For: 3.0.0 > > > The following exception appeared in my logs while running a cassandra-stress > workload on master. > {code} > WARN [SharedPool-Worker-1] 2015-10-22 12:58:20,792 > AbstractTracingAwareExecutorService.java:169 - Uncaught exception on thread > Thread[SharedPool-Worker-1,5,main]: {} > java.lang.RuntimeException: java.lang.IllegalArgumentException > at > org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2366) > ~[main/:na] > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > ~[na:1.8.0_60] > at > org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164) > ~[main/:na] > at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) > [main/:na] > at java.lang.Thread.run(Thread.java:745) [na:1.8.0_60] > Caused by: java.lang.IllegalArgumentException: null > at java.nio.ByteBuffer.allocate(ByteBuffer.java:334) ~[na:1.8.0_60] > at > org.apache.cassandra.io.util.DataOutputBuffer.reallocate(DataOutputBuffer.java:63) > ~[main/:na] > at > org.apache.cassandra.io.util.DataOutputBuffer.doFlush(DataOutputBuffer.java:57) > ~[main/:na] > at > org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.write(BufferedDataOutputStreamPlus.java:132) > ~[main/:na] > at > org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.write(BufferedDataOutputStreamPlus.java:151) > ~[main/:na] > at > org.apache.cassandra.utils.ByteBufferUtil.writeWithVIntLength(ByteBufferUtil.java:296) > ~[main/:na] > at > org.apache.cassandra.db.marshal.AbstractType.writeValue(AbstractType.java:374) > ~[main/:na] > at > org.apache.cassandra.db.rows.BufferCell$Serializer.serialize(BufferCell.java:263) > ~[main/:na] > at > org.apache.cassandra.db.rows.UnfilteredSerializer.serialize(UnfilteredSerializer.java:183) > ~[main/:na] > at > org.apache.cassandra.db.rows.UnfilteredSerializer.serialize(UnfilteredSerializer.java:108) > ~[main/:na] > at > org.apache.cassandra.db.rows.UnfilteredSerializer.serialize(UnfilteredSerializer.java:96) > ~[main/:na] > at > org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:132) > ~[main/:na] > at > org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:87) > ~[main/:na] > at > org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:77) > ~[main/:na] > at > org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$Serializer.serialize(UnfilteredPartitionIterators.java:381) > ~[main/:na] > at > org.apache.cassandra.db.ReadResponse$LocalDataResponse.build(ReadResponse.java:136) > ~[main/:na] > at > org.apache.cassandra.db.ReadResponse$LocalDataResponse.(ReadResponse.java:128) > ~[main/:na] > at > org.apache.cassandra.db.ReadResponse$LocalDataResponse.(ReadResponse.java:123) > ~[main/:na] > at > org.apache.cassandra.db.ReadResponse.createDataResponse(ReadResponse.java:65) > ~[main/:na] > at > org.apache.cassandra.db.ReadCommand.createResponse(ReadCommand.java:289) > 
~[main/:na] > at > org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1697) > ~[main/:na] > at > org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2362) > ~[main/:na] > ... 4 common frames omitted > {code} > I was running this command: > {code} > tools/bin/cassandra-stress user > profile=~/Desktop/startup/stress/stress.yaml n=10 ops\(insert=1\) -rate > threads=30 > {code} > Here's the stress.yaml > {code} > ### DML ### THIS IS UNDER CONSTRUCTION!!! > # Keyspace Name > keyspace: autogeneratedtest > # The CQL for creating a keyspace (optional if it already exists) > keyspace_definition: | > CREATE KEYSPACE autogeneratedtest WITH replication = {'class': > 'SimpleStrategy', 'replication_factor': 1}; > # Table name > table: test > # The CQL for creating a table you wish to stress (optional if it already > exists) > table_definition: > CREATE TABLE test ( > a int, > b int, > c int, > d int, > e int, > f timestamp, > g
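One plausible route to the bottom frame of that trace: java.nio.ByteBuffer.allocate throws IllegalArgumentException for a negative capacity, and a plain "double the size" growth step overflows int once a buffer passes 1 GiB. This is an assumption about the mechanism, not a quote of the DataOutputBuffer code:
{code}
import java.nio.ByteBuffer;

public class ReallocateOverflowSketch
{
    public static void main(String[] args)
    {
        int size = 1 << 30;       // 1 GiB
        int doubled = size * 2;   // int overflow: -2147483648
        System.out.println(doubled);
        try
        {
            ByteBuffer.allocate(doubled);
        }
        catch (IllegalArgumentException e)
        {
            System.out.println("IllegalArgumentException, as in the report: " + e);
        }
    }
}
{code}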
[jira] [Updated] (CASSANDRA-8987) cassandra-stress should support a more complex client model
[ https://issues.apache.org/jira/browse/CASSANDRA-8987?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jeremy Hanna updated CASSANDRA-8987: Labels: stress (was: ) > cassandra-stress should support a more complex client model > --- > > Key: CASSANDRA-8987 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8987 > Project: Cassandra > Issue Type: Improvement > Components: Tools >Reporter: Benedict > Labels: stress > > Orthogonal to CASSANDRA-8986, but still very important, is stress' simulation > of clients: currently we assume a fixed number of clients performing infinite > synchronous work, whereas, as I > [argued|https://groups.google.com/forum/#!topic/mechanical-sympathy/icNZJejUHfE%5B101-125%5D] > on the mechanical sympathy mailing list, the correct model is to have a new > client arrival distribution and a distinct client model. Ideally, however, I > would like to expand this to support client models that can simulate > multi-table "transactions", with both synchronous and asynchronous steps. So, > let's say we have three tables T1, T2, T3, we could say something like: > A client performs: > * a registration by insert to T1 (and/or perhaps lookup in T1), multiple > inserts to T2 and T3, in parallel > * followed by a number of queries on T3 > Probably the best way to achieve this is with a tiered "transaction" > definition that can be composed, so that any single query or insert is a > "transaction" that itself may be sequentially or in parallel composed with > any other to compose a new macro transaction. This would then be combined > with a client arrival rate distribution to produce a total cluster workload. > At least one remaining question is if we want the operations to be data > dependent, in which case this may well interact with CASSANDRA-8986, and > probably requires a little thought. [~jshook] [~jeromatron] [~mstump] > [~tupshin] [~jlacefie] thoughts on this? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
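A very rough shape for the tiered, composable "transaction" idea, just to make the proposal concrete; none of these types exist in stress, and the real design would also need data dependence and an arrival-rate distribution as discussed above.
{code}
import java.util.List;
import java.util.concurrent.CompletableFuture;

// A transaction is a single operation or a sequential/parallel composition of
// other transactions; composing them bottom-up yields the macro transactions
// described above. Purely illustrative.
interface Txn
{
    CompletableFuture<Void> run();

    static Txn seq(List<Txn> steps)     // run steps one after another
    {
        return () -> {
            CompletableFuture<Void> f = CompletableFuture.completedFuture(null);
            for (Txn t : steps)
                f = f.thenCompose(ignored -> t.run());
            return f;
        };
    }

    static Txn par(List<Txn> steps)     // launch steps concurrently, wait for all
    {
        return () -> CompletableFuture.allOf(steps.stream()
                                                   .map(Txn::run)
                                                   .toArray(CompletableFuture[]::new));
    }
}
{code}
Usage for the ticket's example might read Txn.seq(List.of(registerT1, Txn.par(List.of(insertT2, insertT3)), queriesT3)), with each leaf wrapping a single statement execution; those leaf names are of course hypothetical.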
[jira] [Updated] (CASSANDRA-8756) cassandra-stress should produce work asynchronously
[ https://issues.apache.org/jira/browse/CASSANDRA-8756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jeremy Hanna updated CASSANDRA-8756: Labels: stress (was: ) > cassandra-stress should produce work asynchronously > --- > > Key: CASSANDRA-8756 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8756 > Project: Cassandra > Issue Type: Improvement > Components: Tools >Reporter: Benedict >Priority: Minor > Labels: stress > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-8780) cassandra-stress should support multiple table operations
[ https://issues.apache.org/jira/browse/CASSANDRA-8780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jeremy Hanna updated CASSANDRA-8780: Labels: stress (was: ) > cassandra-stress should support multiple table operations > - > > Key: CASSANDRA-8780 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8780 > Project: Cassandra > Issue Type: Improvement > Components: Tools >Reporter: Benedict > Labels: stress > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-9325) cassandra-stress requires keystore for SSL but provides no way to configure it
[ https://issues.apache.org/jira/browse/CASSANDRA-9325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jeremy Hanna updated CASSANDRA-9325: Labels: stress (was: ) > cassandra-stress requires keystore for SSL but provides no way to configure it > -- > > Key: CASSANDRA-9325 > URL: https://issues.apache.org/jira/browse/CASSANDRA-9325 > Project: Cassandra > Issue Type: Bug >Reporter: J.B. Langston > Labels: stress > Fix For: 2.1.x > > > Even though it shouldn't be required unless client certificate authentication > is enabled, the stress tool is looking for a keystore in the default location > of conf/.keystore with the default password of cassandra. There is no command > line option to override these defaults so you have to provide a keystore that > satisfies the default. It looks for conf/.keystore in the working directory, > so you need to create this in the directory you are running cassandra-stress > from. It doesn't really matter what's in the keystore; it just needs to exist > in the expected location and have a password of cassandra. > Since the keystore might be required if client certificate authentication is > enabled, we need to add -transport parameters for keystore and > keystore-password. Ideally, these should be optional and stress shouldn't > require the keystore unless client certificate authentication is enabled on > the server. > In case it wasn't apparent, this is for Cassandra 2.1 and later's stress > tool. I actually had even more problems getting Cassandra 2.0's stress tool > working with SSL and gave up on it. We probably don't need to fix 2.0; we > can just document that it doesn't support SSL and recommend using 2.1 instead. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
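Given the description above, a workaround sketch: stress only needs a keystore file at conf/.keystore (relative to where it is launched) whose password is cassandra, so an empty JKS store is enough when client certificate auth is not actually in play. The file name and password come straight from the report; the rest is illustrative.
{code}
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.KeyStore;

public class PlaceholderKeystore
{
    public static void main(String[] args) throws Exception
    {
        KeyStore ks = KeyStore.getInstance("JKS");
        ks.load(null, null);                          // initialise an empty store
        Path path = Path.of("conf/.keystore");        // run this from the cassandra-stress working directory
        Files.createDirectories(path.getParent());
        try (OutputStream out = Files.newOutputStream(path))
        {
            ks.store(out, "cassandra".toCharArray()); // the default password stress expects
        }
    }
}
{code}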
[jira] [Updated] (CASSANDRA-10592) IllegalArgumentException in DataOutputBuffer.reallocate
[ https://issues.apache.org/jira/browse/CASSANDRA-10592?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sylvain Lebresne updated CASSANDRA-10592: - Reviewer: Benedict > IllegalArgumentException in DataOutputBuffer.reallocate > --- > > Key: CASSANDRA-10592 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10592 > Project: Cassandra > Issue Type: Bug >Reporter: Sebastian Estevez >Assignee: Ariel Weisberg > Fix For: 3.0.0 > > > The following exception appeared in my logs while running a cassandra-stress > workload on master. > {code} > WARN [SharedPool-Worker-1] 2015-10-22 12:58:20,792 > AbstractTracingAwareExecutorService.java:169 - Uncaught exception on thread > Thread[SharedPool-Worker-1,5,main]: {} > java.lang.RuntimeException: java.lang.IllegalArgumentException > at > org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2366) > ~[main/:na] > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > ~[na:1.8.0_60] > at > org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164) > ~[main/:na] > at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) > [main/:na] > at java.lang.Thread.run(Thread.java:745) [na:1.8.0_60] > Caused by: java.lang.IllegalArgumentException: null > at java.nio.ByteBuffer.allocate(ByteBuffer.java:334) ~[na:1.8.0_60] > at > org.apache.cassandra.io.util.DataOutputBuffer.reallocate(DataOutputBuffer.java:63) > ~[main/:na] > at > org.apache.cassandra.io.util.DataOutputBuffer.doFlush(DataOutputBuffer.java:57) > ~[main/:na] > at > org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.write(BufferedDataOutputStreamPlus.java:132) > ~[main/:na] > at > org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.write(BufferedDataOutputStreamPlus.java:151) > ~[main/:na] > at > org.apache.cassandra.utils.ByteBufferUtil.writeWithVIntLength(ByteBufferUtil.java:296) > ~[main/:na] > at > org.apache.cassandra.db.marshal.AbstractType.writeValue(AbstractType.java:374) > ~[main/:na] > at > org.apache.cassandra.db.rows.BufferCell$Serializer.serialize(BufferCell.java:263) > ~[main/:na] > at > org.apache.cassandra.db.rows.UnfilteredSerializer.serialize(UnfilteredSerializer.java:183) > ~[main/:na] > at > org.apache.cassandra.db.rows.UnfilteredSerializer.serialize(UnfilteredSerializer.java:108) > ~[main/:na] > at > org.apache.cassandra.db.rows.UnfilteredSerializer.serialize(UnfilteredSerializer.java:96) > ~[main/:na] > at > org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:132) > ~[main/:na] > at > org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:87) > ~[main/:na] > at > org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:77) > ~[main/:na] > at > org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$Serializer.serialize(UnfilteredPartitionIterators.java:381) > ~[main/:na] > at > org.apache.cassandra.db.ReadResponse$LocalDataResponse.build(ReadResponse.java:136) > ~[main/:na] > at > org.apache.cassandra.db.ReadResponse$LocalDataResponse.(ReadResponse.java:128) > ~[main/:na] > at > org.apache.cassandra.db.ReadResponse$LocalDataResponse.(ReadResponse.java:123) > ~[main/:na] > at > org.apache.cassandra.db.ReadResponse.createDataResponse(ReadResponse.java:65) > ~[main/:na] > at > org.apache.cassandra.db.ReadCommand.createResponse(ReadCommand.java:289) > 
~[main/:na] > at > org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1697) > ~[main/:na] > at > org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2362) > ~[main/:na] > ... 4 common frames omitted > {code} > I was running this command: > {code} > tools/bin/cassandra-stress user > profile=~/Desktop/startup/stress/stress.yaml n=10 ops\(insert=1\) -rate > threads=30 > {code} > Here's the stress.yaml > {code} > ### DML ### THIS IS UNDER CONSTRUCTION!!! > # Keyspace Name > keyspace: autogeneratedtest > # The CQL for creating a keyspace (optional if it already exists) > keyspace_definition: | > CREATE KEYSPACE autogeneratedtest WITH replication = {'class': > 'SimpleStrategy', 'replication_factor': 1}; > # Table name > table: test > # The CQL for creating a table you wish to stress (optional if it already > exists) > table_definition: > CREATE TABLE test ( > a int, > b int, > c int, > d int, > e int, > f timestamp, > g text,
[jira] [Updated] (CASSANDRA-8986) Major cassandra-stress refactor
[ https://issues.apache.org/jira/browse/CASSANDRA-8986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jeremy Hanna updated CASSANDRA-8986: Labels: stress (was: ) > Major cassandra-stress refactor > --- > > Key: CASSANDRA-8986 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8986 > Project: Cassandra > Issue Type: Improvement > Components: Tools >Reporter: Benedict >Assignee: Benedict > Labels: stress > > We need a tool for both stressing _and_ validating more complex workloads > than stress currently supports. Stress needs a raft of changes, and I think > it would be easier to deliver many of these as a single major endeavour which > I think is justifiable given its audience. The rough behaviours I want stress > to support are: > * Ability to know exactly how many rows it will produce, for any clustering > prefix, without generating those prefixes > * Ability to generate an amount of data proportional to the amount it will > produce to the server (or consume from the server), rather than proportional > to the variation in clustering columns > * Ability to reliably produce near identical behaviour each run > * Ability to understand complex overlays of operation types (LWT, Delete, > Expiry, although perhaps not all implemented immediately, the framework for > supporting them easily) > * Ability to (with minimal internal state) understand the complete cluster > state through overlays of multiple procedural generations > * Ability to understand the in-flight state of in-progress operations (i.e. > if we're applying a delete, understand that the delete may have been applied, > and may not have been, for potentially multiple conflicting in flight > operations) > I think the necessary changes to support this would give us the _functional_ > base to support all the functionality I can currently envisage stress > needing. Before embarking on this (which I may attempt very soon), it would > be helpful to get input from others as to features missing from stress that I > haven't covered here that we will certainly want in the future, so that they > can be factored in to the overall design and hopefully avoid another refactor > one year from now, as its complexity is scaling each time, and each time it > is a higher sunk cost. [~jbellis] [~iamaleksey] [~slebresne] [~tjake] > [~enigmacurry] [~aweisberg] [~blambov] [~jshook] ... and @everyone else :) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-9091) Cassandra-stress should support map column type
[ https://issues.apache.org/jira/browse/CASSANDRA-9091?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jeremy Hanna updated CASSANDRA-9091: Labels: stress (was: ) > Cassandra-stress should support map column type > --- > > Key: CASSANDRA-9091 > URL: https://issues.apache.org/jira/browse/CASSANDRA-9091 > Project: Cassandra > Issue Type: Improvement > Components: Tools >Reporter: Asya Lisak >Priority: Minor > Labels: stress > Fix For: 2.1.5 > > > Currently cassandra-stress does not support map data type, even though > cassandra itself supports it. UnsupportedOperationException is thrown if > table under stress test has column with map type. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
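A sketch of the missing piece, a map generator composed from a size supplier plus key and value suppliers; the real stress Generator API is richer than java.util.function.Supplier, so this only illustrates the shape of the work:
{code}
import java.util.HashMap;
import java.util.Map;
import java.util.Random;
import java.util.function.Supplier;

public class MapGeneratorSketch
{
    // Build a Map supplier from a size distribution and per-entry key/value suppliers.
    static <K, V> Supplier<Map<K, V>> mapGenerator(Supplier<Integer> size, Supplier<K> keys, Supplier<V> values)
    {
        return () -> {
            Map<K, V> m = new HashMap<>();
            for (int i = 0, n = size.get(); i < n; i++)
                m.put(keys.get(), values.get());
            return m;
        };
    }

    public static void main(String[] args)
    {
        Random rng = new Random(1);
        Supplier<Map<String, Integer>> gen =
            mapGenerator(() -> 1 + rng.nextInt(4), () -> "k" + rng.nextInt(100), rng::nextInt);
        System.out.println(gen.get());
    }
}
{code}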
[jira] [Updated] (CASSANDRA-9482) SSTable leak after stress and repair
[ https://issues.apache.org/jira/browse/CASSANDRA-9482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jeremy Hanna updated CASSANDRA-9482: Labels: stress (was: ) > SSTable leak after stress and repair > > > Key: CASSANDRA-9482 > URL: https://issues.apache.org/jira/browse/CASSANDRA-9482 > Project: Cassandra > Issue Type: Bug >Reporter: Jim Witschey >Assignee: Marcus Eriksson > Labels: stress > > I have a dtest that fails intermittently because of SSTable leaks. The test > logic leading to the error is: > - create a 5-node cluster > - insert 5000 records with {{stress}}, RF=3 at CL=ONE > - run {{flush}} on all nodes > - run {{repair}} on a single node. > The leak is detected on a different node than {{repair}} was run on. > The failing test is > [here|https://github.com/mambocab/cassandra-dtest/blob/CASSANDRA-5839-squash/repair_test.py#L317]. > The relevant error is > [here|https://gist.github.com/mambocab/8aab7b03496e0b279bd3#file-node2-log-L256], > along with the errors from the entire 5-node cluster. In these logs, the > {{repair}} was run on {{node1}} and the leak was found on {{node2}}. > I can bisect, but I thought I'd get the ball rolling in case someone knows > where to look. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-10592) IllegalArgumentException in DataOutputBuffer.reallocate
[ https://issues.apache.org/jira/browse/CASSANDRA-10592?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sylvain Lebresne updated CASSANDRA-10592: - Summary: IllegalArgumentException in DataOutputBuffer.reallocate (was: IllegalArgumentException at Storage Proxy) > IllegalArgumentException in DataOutputBuffer.reallocate > --- > > Key: CASSANDRA-10592 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10592 > Project: Cassandra > Issue Type: Bug >Reporter: Sebastian Estevez > Fix For: 3.0.0 > > > The following exception appeared in my logs while running a cassandra-stress > workload on master. > {code} > WARN [SharedPool-Worker-1] 2015-10-22 12:58:20,792 > AbstractTracingAwareExecutorService.java:169 - Uncaught exception on thread > Thread[SharedPool-Worker-1,5,main]: {} > java.lang.RuntimeException: java.lang.IllegalArgumentException > at > org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2366) > ~[main/:na] > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > ~[na:1.8.0_60] > at > org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164) > ~[main/:na] > at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) > [main/:na] > at java.lang.Thread.run(Thread.java:745) [na:1.8.0_60] > Caused by: java.lang.IllegalArgumentException: null > at java.nio.ByteBuffer.allocate(ByteBuffer.java:334) ~[na:1.8.0_60] > at > org.apache.cassandra.io.util.DataOutputBuffer.reallocate(DataOutputBuffer.java:63) > ~[main/:na] > at > org.apache.cassandra.io.util.DataOutputBuffer.doFlush(DataOutputBuffer.java:57) > ~[main/:na] > at > org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.write(BufferedDataOutputStreamPlus.java:132) > ~[main/:na] > at > org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.write(BufferedDataOutputStreamPlus.java:151) > ~[main/:na] > at > org.apache.cassandra.utils.ByteBufferUtil.writeWithVIntLength(ByteBufferUtil.java:296) > ~[main/:na] > at > org.apache.cassandra.db.marshal.AbstractType.writeValue(AbstractType.java:374) > ~[main/:na] > at > org.apache.cassandra.db.rows.BufferCell$Serializer.serialize(BufferCell.java:263) > ~[main/:na] > at > org.apache.cassandra.db.rows.UnfilteredSerializer.serialize(UnfilteredSerializer.java:183) > ~[main/:na] > at > org.apache.cassandra.db.rows.UnfilteredSerializer.serialize(UnfilteredSerializer.java:108) > ~[main/:na] > at > org.apache.cassandra.db.rows.UnfilteredSerializer.serialize(UnfilteredSerializer.java:96) > ~[main/:na] > at > org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:132) > ~[main/:na] > at > org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:87) > ~[main/:na] > at > org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:77) > ~[main/:na] > at > org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$Serializer.serialize(UnfilteredPartitionIterators.java:381) > ~[main/:na] > at > org.apache.cassandra.db.ReadResponse$LocalDataResponse.build(ReadResponse.java:136) > ~[main/:na] > at > org.apache.cassandra.db.ReadResponse$LocalDataResponse.(ReadResponse.java:128) > ~[main/:na] > at > org.apache.cassandra.db.ReadResponse$LocalDataResponse.(ReadResponse.java:123) > ~[main/:na] > at > org.apache.cassandra.db.ReadResponse.createDataResponse(ReadResponse.java:65) > ~[main/:na] > at > 
org.apache.cassandra.db.ReadCommand.createResponse(ReadCommand.java:289) > ~[main/:na] > at > org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1697) > ~[main/:na] > at > org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2362) > ~[main/:na] > ... 4 common frames omitted > {code} > I was running this command: > {code} > tools/bin/cassandra-stress user > profile=~/Desktop/startup/stress/stress.yaml n=10 ops\(insert=1\) -rate > threads=30 > {code} > Here's the stress.yaml > {code} > ### DML ### THIS IS UNDER CONSTRUCTION!!! > # Keyspace Name > keyspace: autogeneratedtest > # The CQL for creating a keyspace (optional if it already exists) > keyspace_definition: | > CREATE KEYSPACE autogeneratedtest WITH replication = {'class': > 'SimpleStrategy', 'replication_factor': 1}; > # Table name > table: test > # The CQL for creating a table you wish to stress (optional if it already > exists) > table_definition: > CREATE TABLE test ( > a int, > b int, >
[jira] [Updated] (CASSANDRA-10592) IllegalArgumentException in DataOutputBuffer.reallocate
[ https://issues.apache.org/jira/browse/CASSANDRA-10592?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sylvain Lebresne updated CASSANDRA-10592: - Fix Version/s: 3.0.0 > IllegalArgumentException in DataOutputBuffer.reallocate > --- > > Key: CASSANDRA-10592 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10592 > Project: Cassandra > Issue Type: Bug >Reporter: Sebastian Estevez > Fix For: 3.0.0 > > > The following exception appeared in my logs while running a cassandra-stress > workload on master. > {code} > WARN [SharedPool-Worker-1] 2015-10-22 12:58:20,792 > AbstractTracingAwareExecutorService.java:169 - Uncaught exception on thread > Thread[SharedPool-Worker-1,5,main]: {} > java.lang.RuntimeException: java.lang.IllegalArgumentException > at > org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2366) > ~[main/:na] > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > ~[na:1.8.0_60] > at > org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164) > ~[main/:na] > at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) > [main/:na] > at java.lang.Thread.run(Thread.java:745) [na:1.8.0_60] > Caused by: java.lang.IllegalArgumentException: null > at java.nio.ByteBuffer.allocate(ByteBuffer.java:334) ~[na:1.8.0_60] > at > org.apache.cassandra.io.util.DataOutputBuffer.reallocate(DataOutputBuffer.java:63) > ~[main/:na] > at > org.apache.cassandra.io.util.DataOutputBuffer.doFlush(DataOutputBuffer.java:57) > ~[main/:na] > at > org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.write(BufferedDataOutputStreamPlus.java:132) > ~[main/:na] > at > org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.write(BufferedDataOutputStreamPlus.java:151) > ~[main/:na] > at > org.apache.cassandra.utils.ByteBufferUtil.writeWithVIntLength(ByteBufferUtil.java:296) > ~[main/:na] > at > org.apache.cassandra.db.marshal.AbstractType.writeValue(AbstractType.java:374) > ~[main/:na] > at > org.apache.cassandra.db.rows.BufferCell$Serializer.serialize(BufferCell.java:263) > ~[main/:na] > at > org.apache.cassandra.db.rows.UnfilteredSerializer.serialize(UnfilteredSerializer.java:183) > ~[main/:na] > at > org.apache.cassandra.db.rows.UnfilteredSerializer.serialize(UnfilteredSerializer.java:108) > ~[main/:na] > at > org.apache.cassandra.db.rows.UnfilteredSerializer.serialize(UnfilteredSerializer.java:96) > ~[main/:na] > at > org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:132) > ~[main/:na] > at > org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:87) > ~[main/:na] > at > org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:77) > ~[main/:na] > at > org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$Serializer.serialize(UnfilteredPartitionIterators.java:381) > ~[main/:na] > at > org.apache.cassandra.db.ReadResponse$LocalDataResponse.build(ReadResponse.java:136) > ~[main/:na] > at > org.apache.cassandra.db.ReadResponse$LocalDataResponse.(ReadResponse.java:128) > ~[main/:na] > at > org.apache.cassandra.db.ReadResponse$LocalDataResponse.(ReadResponse.java:123) > ~[main/:na] > at > org.apache.cassandra.db.ReadResponse.createDataResponse(ReadResponse.java:65) > ~[main/:na] > at > org.apache.cassandra.db.ReadCommand.createResponse(ReadCommand.java:289) > ~[main/:na] > at > 
org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1697) > ~[main/:na] > at > org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2362) > ~[main/:na] > ... 4 common frames omitted > {code} > I was running this command: > {code} > tools/bin/cassandra-stress user > profile=~/Desktop/startup/stress/stress.yaml n=10 ops\(insert=1\) -rate > threads=30 > {code} > Here's the stress.yaml > {code} > ### DML ### THIS IS UNDER CONSTRUCTION!!! > # Keyspace Name > keyspace: autogeneratedtest > # The CQL for creating a keyspace (optional if it already exists) > keyspace_definition: | > CREATE KEYSPACE autogeneratedtest WITH replication = {'class': > 'SimpleStrategy', 'replication_factor': 1}; > # Table name > table: test > # The CQL for creating a table you wish to stress (optional if it already > exists) > table_definition: > CREATE TABLE test ( > a int, > b int, > c int, > d int, > e int, > f timestamp, > g text, > h bigint, > i map, > j text
[jira] [Updated] (CASSANDRA-9870) Improve cassandra-stress graphing
[ https://issues.apache.org/jira/browse/CASSANDRA-9870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jeremy Hanna updated CASSANDRA-9870: Labels: stress (was: ) > Improve cassandra-stress graphing > - > > Key: CASSANDRA-9870 > URL: https://issues.apache.org/jira/browse/CASSANDRA-9870 > Project: Cassandra > Issue Type: Improvement > Components: Tools >Reporter: Benedict >Assignee: Ryan McGuire > Labels: stress > Attachments: reads.svg > > > CASSANDRA-7918 introduces graph output from a stress run, but these graphs > are a little limited. Attached to the ticket is an example of some improved > graphs which can serve as the *basis* for some improvements, which I will > briefly describe. They should not be taken as the exact end goal, but we > should aim for at least their functionality. Preferably with some Javascript > advantages thrown in, such as the hiding of datasets/graphs for clarity. Any > ideas for improvements are *definitely* encouraged. > Some overarching design principles: > * Display _on *one* screen_ all of the information necessary to get a good > idea of how two or more branches compare to each other. Ideally we will > reintroduce this, painting multiple graphs onto one screen, stretched to fit. > * Axes must be truncated to only the interesting dimensions, to ensure there > is no wasted space. > * Each graph displaying multiple kinds of data should use colour _and shape_ > to help easily distinguish the different datasets. > * Each graph should be tailored to the data it is representing, and we should > have multiple views of each data. > The data can roughly be partitioned into three kinds: > * throughput > * latency > * gc > These can each be viewed in different ways: > * as a continuous plot of: > ** raw data > ** scaled/compared to a "base" branch, or other metric > ** cumulatively > * as box plots > ** ideally, these will plot median, outer quartiles, outer deciles and > absolute limits of the distribution, so the shape of the data can be best > understood > Each compresses the information differently, losing different information, so > that collectively they help to understand the data. > Some basic rules for presentation that work well: > * Latency information should be plotted to a logarithmic scale, to avoid high > latencies drowning out low ones > * GC information should be plotted cumulatively, to avoid differing > throughputs giving the impression of worse GC. It should also have a line > that is rescaled by the amount of work (number of operations) completed > * Throughput should be plotted as the actual numbers > To walk the graphs top-left to bottom-right, we have: > * Spot throughput comparison of branches to the baseline branch, as an > improvement ratio (which can of course be negative, but is not in this > example) > * Raw throughput of all branches (no baseline) > * Raw throughput as a box plot > * Latency percentiles, compared to baseline. The percentage improvement at > any point in time vs baseline is calculated, and then multiplied by the > overall median for the entire run. This simply permits the non-baseline > branches to scatter their wins/loss around a relatively clustered line for > each percentile. It's probably the most "dishonest" graph but comparing > something like latency where each data point can have very high variance is > difficult, and this gives you an idea of clustering of improvements/losses. 
> * Latency percentiles, raw, each with a different shape; lowest percentiles > plotted as a solid line as they vary least, with higher percentiles each > getting their own subtly different shape to scatter. > * Latency box plots > * GC time, plotted cumulatively and also scaled by work done > * GC Mb, plotted cumulatively and also scaled by work done > * GC time, raw > * GC time as a box plot > These do mostly introduce the concept of a "baseline" branch. It may be that, > ideally, this baseline be selected by a dropdown so the javascript can > transform the output dynamically. This would permit more interesting > comparisons to be made on the fly. > There are also some complexities, such as deciding which datapoints to > compare against baseline when times get out-of-whack (due to GC, etc, causing > a lack of output for a period). The version I uploaded does a merge of the > times, permitting a small degree of variance, and ignoring those datapoints > we cannot pair. One option here might be to change stress' behaviour to > always print to a strict schedule, instead of trying to get absolutely > accurate apportionment of timings. If this makes things much simpler, it can > be done. > As previously stated, but may be lost in the wall-of-text, these should be > taken as a starting point / sign post, rather than a
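For the "latency percentiles, compared to baseline" plot, my reading of the transform described above is: take the per-sample ratio against the baseline branch, then rescale by the run-wide median so the branches scatter around a stable line. Treat the formula as an interpretation, not a spec of what the attached graphs do.
{code}
public class BaselineLatencyPlotSketch
{
    public static void main(String[] args)
    {
        // Paired p95 samples for a baseline branch and a candidate branch (made-up numbers).
        double[] baselineP95 = { 10.0, 12.0, 11.0, 13.0 };
        double[] branchP95   = {  9.0, 12.5, 10.0, 12.0 };
        double overallMedian = 11.5;   // assumed: the median for the entire baseline run

        for (int i = 0; i < baselineP95.length; i++)
        {
            double ratio = branchP95[i] / baselineP95[i];   // per-sample improvement/regression
            double plotted = ratio * overallMedian;         // rescaled so the lines cluster
            System.out.printf("t=%d plotted=%.2f%n", i, plotted);
        }
    }
}
{code}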
[jira] [Updated] (CASSANDRA-9558) Cassandra-stress regression in 2.2
[ https://issues.apache.org/jira/browse/CASSANDRA-9558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jeremy Hanna updated CASSANDRA-9558: Labels: stress (was: ) > Cassandra-stress regression in 2.2 > -- > > Key: CASSANDRA-9558 > URL: https://issues.apache.org/jira/browse/CASSANDRA-9558 > Project: Cassandra > Issue Type: Bug >Reporter: Alan Boudreault >Assignee: Andy Tolbert > Labels: stress > Fix For: 2.2.0 rc2 > > Attachments: 2.1.log, 2.2.log, CASSANDRA-9558-2.patch, > CASSANDRA-9558-ProtocolV2.patch, atolber-CASSANDRA-9558-stress.tgz, > atolber-trunk-driver-coalescing-disabled.txt, > stress-2.1-java-driver-2.0.9.2.log, stress-2.1-java-driver-2.2+PATCH.log, > stress-2.1-java-driver-2.2.log, stress-2.2-java-driver-2.2+PATCH.log, > stress-2.2-java-driver-2.2.log > > > We are seeing some regression in performance when using cassandra-stress 2.2. > You can see the difference at this url: > http://riptano.github.io/cassandra_performance/graph_v5/graph.html?stats=stress_regression.json&metric=op_rate&operation=1_write&smoothing=1&show_aggregates=true&xmin=0&xmax=108.57&ymin=0&ymax=168147.1 > The cassandra version of the cluster doesn't seem to have any impact. > //cc [~tjake] [~benedict] -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-9850) Allow optional schema statements in cassandra-stress yaml
[ https://issues.apache.org/jira/browse/CASSANDRA-9850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jeremy Hanna updated CASSANDRA-9850: Labels: stress (was: ) > Allow optional schema statements in cassandra-stress yaml > - > > Key: CASSANDRA-9850 > URL: https://issues.apache.org/jira/browse/CASSANDRA-9850 > Project: Cassandra > Issue Type: Improvement >Reporter: T Jake Luciani >Assignee: T Jake Luciani >Priority: Minor > Labels: stress > Fix For: 3.0 alpha 1 > > Attachments: users.yaml > > > We need a way to include extra optional CQL schema statements for a given > table. Like adding a secondary index or materialized view. > Attached a simple example -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-9361) Add all possible consistency levels to Cassandra-Stress and make LOCAL_ONE the default one
[ https://issues.apache.org/jira/browse/CASSANDRA-9361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jeremy Hanna updated CASSANDRA-9361: Labels: stress (was: ) > Add all possible consistency levels to Cassandra-Stress and make LOCAL_ONE > the default one > -- > > Key: CASSANDRA-9361 > URL: https://issues.apache.org/jira/browse/CASSANDRA-9361 > Project: Cassandra > Issue Type: Bug > Components: Tools >Reporter: Mario Lazaro >Assignee: Mario Lazaro >Priority: Minor > Labels: stress > Fix For: 2.1.6 > > Attachments: patch.txt > > > CASSANDRA-8253 added all of them but CASSANDRA-8769 deleted some of them from > CommandSettings.java. > Also notice the default consistency is set to ONE; I believe it'd be better > if we used LOCAL_ONE. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-10048) cassandra-stress - Decimal is a BigInt not a Double
[ https://issues.apache.org/jira/browse/CASSANDRA-10048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jeremy Hanna updated CASSANDRA-10048: - Labels: stress (was: ) > cassandra-stress - Decimal is a BigInt not a Double > --- > > Key: CASSANDRA-10048 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10048 > Project: Cassandra > Issue Type: Bug >Reporter: Sebastian Estevez > Labels: stress > Fix For: 2.1.9, 2.2.1, 3.0 beta 2 > > Attachments: CASSANDRA-10048.patch > > > Similar to CASSANDRA-8882 > I'll provide a patch. > {code} > com.datastax.driver.core.exceptions.InvalidTypeException: Invalid type for > value 26 of CQL type decimal, expecting class java.math.BigDecimal but class > java.lang.Double provided > com.datastax.driver.core.exceptions.InvalidTypeException: Invalid type for > value 26 of CQL type decimal, expecting class java.math.BigDecimal but class > java.lang.Double provided > com.datastax.driver.core.exceptions.InvalidTypeException: Invalid type for > value 26 of CQL type decimal, expecting class java.math.BigDecimal but class > java.lang.Double provided > ^Ccom.datastax.driver.core.exceptions.InvalidTypeException: Invalid type for > value 26 of CQL type decimal, expecting class java.math.BigDecimal but class > java.lang.Double provided > com.datastax.driver.core.exceptions.InvalidTypeException: Invalid type for > value 26 of CQL type decimal, expecting class java.math.BigDecimal but class > java.lang.Double provided > com.datastax.driver.core.exceptions.InvalidTypeException: Invalid type for > value 26 of CQL type decimal, expecting class java.math.BigDecimal but class > java.lang.Double provided > com.datastax.driver.core.exceptions.InvalidTypeException: Invalid type for > value 26 of CQL type decimal, expecting class java.math.BigDecimal but class > java.lang.Double provided > com.datastax.driver.core.exceptions.InvalidTypeException: Invalid type for > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
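The errors here and in CASSANDRA-8882 are both about binding the wrong Java boxed type for a CQL numeric column; a sketch of the kind of conversion the value generator has to make before handing values to the driver (the surrounding generator plumbing is omitted):
{code}
import java.math.BigDecimal;
import java.math.BigInteger;

public class CqlNumericBindings
{
    public static void main(String[] args)
    {
        double generatedDouble = 26.0;
        BigDecimal forDecimalColumn = BigDecimal.valueOf(generatedDouble); // CQL decimal wants BigDecimal
        BigInteger forVarintColumn  = BigInteger.valueOf(26L);             // CQL varint wants BigInteger
        System.out.println(forDecimalColumn + " " + forVarintColumn);
    }
}
{code}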
[jira] [Updated] (CASSANDRA-9923) stress against counters hangs
[ https://issues.apache.org/jira/browse/CASSANDRA-9923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jeremy Hanna updated CASSANDRA-9923: Labels: stress (was: ) > stress against counters hangs > - > > Key: CASSANDRA-9923 > URL: https://issues.apache.org/jira/browse/CASSANDRA-9923 > Project: Cassandra > Issue Type: Bug >Reporter: Robert Stupp > Labels: stress > > (Sorry for the vague description) > I tried some cstar tests against counter columns. But all these tests against > 2.1 and 2.2 ended (hang) during with the following output: > {noformat} > Created keyspaces. Sleeping 3s for propagation. > Sleeping 2s... > Warming up COUNTER_WRITE with 15 iterations... > Running COUNTER_WRITE with 300 threads for 1500 iteration > type, total ops,op/s,pk/s, row/s,mean, med, .95, > .99,.999, max, time, stderr, errors, gc: #, max ms, sum ms, > sdv ms, mb > total, 98073, 98054, 98054, 98054, 3.1, 1.7, 8.9, > 23.2,89.9, 107.7,1.0, 0.0, 0, 0, 0, 0, > 0, 0 > total,188586, 72492, 72492, 72492, 4.1, 1.5,10.0, > 61.4, 202.8, 214.7,2.2, 0.13101, 0, 3, 564, 564, > 6,3447 > total,363880, 137986, 137986, 137986, 2.2, 1.4, 4.1, > 9.6, 207.1, 253.3,3.5, 0.18684, 0, 0, 0, 0, > 0, 0 > total,460122, 105062, 105062, 105062, 2.8, 1.4, 4.6, > 14.7, 225.6, 236.2,4.4, 0.13969, 0, 1, 214, 214, > 0,1199 > total,600625, 111453, 111453, 111453, 2.7, 1.4, 3.8, > 10.4, 231.5, 241.6,5.7, 0.11366, 0, 2, 442, 442, > 1,2389 > total,745680, 149583, 149583, 149583, 2.0, 1.4, 3.6, > 6.7, 155.8, 159.7,6.7, 0.11318, 0, 0, 0, 0, > 0, 0 > total,828453, 63632, 63632, 63632, 4.7, 1.4, 4.8, > 261.9, 274.5, 279.3,8.0, 0.12645, 0, 3, 782, 782, > 1,3542 > total, 1009560, 172429, 172429, 172429, 1.7, 1.4, 3.7, > 6.1,16.2,29.7,9.0, 0.11629, 0, 0, 0, 0, > 0, 0 > total, 1062409, 53860, 53860, 53860, 5.5, 1.3, 8.6, > 270.3, 293.4, 324.3, 10.0, 0.12738, 0, 2, 542, 542, > 7,2354 > total, 1186672, 96540, 96540, 96540, 3.1, 1.5, 5.9, > 14.5, 266.4, 277.6, 11.3, 0.11451, 0, 1, 260, 260, > 0,1183 > {noformat} > ... > {noformat} > total, 4977251, 238, 238, 238, 0.7, 0.6, 0.7, > 1.3, 3.4, 158.5, 352.3, 0.11749, 0, 0, 0, 0, > 0, 0 > total, 4979839, 214, 214, 214, 0.6, 0.6, 0.7, > 1.3, 2.5, 2.8, 364.4, 0.11761, 0, 0, 0, 0, > 0, 0 > total, 4981729, 191, 191, 191, 0.6, 0.6, 0.7, > 1.3, 3.2, 3.3, 374.3, 0.11774, 0, 0, 0, 0, > 0, 0 > total, 4983362, 167, 167, 167, 0.8, 0.7, 1.8, > 2.7, 3.9, 5.8, 384.0, 0.11787, 0, 0, 0, 0, > 0, 0 > total, 4985171, 153, 153, 153, 0.7, 0.6, 1.2, > 1.4, 2.0, 3.3, 395.9, 0.11799, 0, 0, 0, 0, > 0, 0 > total, 4986684, 137, 137, 137, 0.7, 0.6, 0.8, > 1.3, 2.0, 2.0, 406.9, 0.11812, 0, 0, 0, 0, > 0, 0 > total, 4988410, 121, 121, 121, 0.7, 0.7, 0.8, > 1.3, 2.0, 2.8, 421.1, 0.11824, 0, 0, 0, 0, > 0, 0 > total, 4990216, 99, 99, 99, 0.7, 0.7, 0.8, > 1.4, 2.6, 2.8, 439.5, 0.11836, 0, 0, 0, 0, > 0, 0 > total, 4991765, 81, 81, 81, 0.8, 0.7, 0.8, > 1.4,30.1,81.6, 458.7, 0.11848, 0, 1, 159, 159, > 0,1179 > total, 4993731, 67, 67, 67, 0.7, 0.7, 0.8, > 1.4, 3.2, 3.2, 488.1, 0.11860, 0, 0, 0, 0, > 0, 0 > total, 4996565, 45, 45, 45, 0.9, 0.7, 0.9, > 1.5,84.7, 218.3, 551.5, 0.11872, 0, 1, 248, 248, > 0,1180 > java.lang.RuntimeException: Timed out waiting for a timer thread - seems one > got stu
[jira] [Updated] (CASSANDRA-9864) Stress leaves threads running after a fatal error
[ https://issues.apache.org/jira/browse/CASSANDRA-9864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jeremy Hanna updated CASSANDRA-9864: Labels: stress (was: ) > Stress leaves threads running after a fatal error > - > > Key: CASSANDRA-9864 > URL: https://issues.apache.org/jira/browse/CASSANDRA-9864 > Project: Cassandra > Issue Type: Bug > Components: Tools >Reporter: Ryan McGuire >Assignee: Benedict > Labels: stress > Fix For: 3.0 alpha 1 > > > For some types of error, cassandra-stress is staying alive even after it > shows an exception, and it will hang forever. > Here's an example: > {code} > [10.200.241.112] All nodes available! > INFO:benchmark:Started cassandra on 3 nodes with git SHA: > efaff1bff92fdf4cc84007a5cc1e641ebf889633 > INFO:stress_compare:Running stress operation : user > profile=https://dl.dropboxusercontent.com/u/15683245/8894_tiny.yaml > ops\(insert=1,\) n=1M -rate threads=3. > INFO:benchmark:Running stress from > '/home/ryan/fab/stress/default/tools/bin/cassandra-stress' : user > profile=https://dl.dropboxusercontent.com/u/15683245/8894a > INFO 23:50:05 Did not find Netty's native epoll transport in the classpath, > defaulting to NIO. > Exception in thread "main" java.lang.RuntimeException: > java.lang.IllegalArgumentException: clustering_key > at > org.apache.cassandra.stress.settings.StressSettings.getJavaDriverClient(StressSettings.java:198) > at > org.apache.cassandra.stress.StressProfile.maybeCreateSchema(StressProfile.java:162) > at > org.apache.cassandra.stress.settings.StressSettings.maybeCreateKeyspaces(StressSettings.java:207) > at org.apache.cassandra.stress.StressAction.run(StressAction.java:55) > at org.apache.cassandra.stress.Stress.main(Stress.java:114) > Caused by: java.lang.IllegalArgumentException: clustering_key > at > com.datastax.driver.core.ColumnMetadata$Raw$Kind.fromStringV3(ColumnMetadata.java:235) > at > com.datastax.driver.core.ColumnMetadata$Raw.fromRow(ColumnMetadata.java:263) > at > com.datastax.driver.core.SchemaParser.groupByKeyspaceAndCf(SchemaParser.java:408) > at > com.datastax.driver.core.SchemaParser$2.refresh(SchemaParser.java:246) > at > com.datastax.driver.core.ControlConnection.refreshSchema(ControlConnection.java:323) > at > com.datastax.driver.core.ControlConnection.tryConnect(ControlConnection.java:264) > at > com.datastax.driver.core.ControlConnection.reconnectInternal(ControlConnection.java:187) > at > com.datastax.driver.core.ControlConnection.connect(ControlConnection.java:75) > at com.datastax.driver.core.Cluster$Manager.init(Cluster.java:1265) > at com.datastax.driver.core.Cluster.getMetadata(Cluster.java:337) > at > org.apache.cassandra.stress.util.JavaDriverClient.connect(JavaDriverClient.java:121) > at > org.apache.cassandra.stress.settings.StressSettings.getJavaDriverClient(StressSettings.java:189) > ... 4 more > {code} > So I'd love for that bug to go away, but in general, the stress process > should exit when it encounters a fatal error. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
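A sketch of the behaviour the ticket asks for: if anything escapes the main run, terminate the JVM explicitly so lingering non-daemon worker threads cannot keep the process alive. The wrapper and the stand-in failure below are hypothetical, not the actual Stress.main:
{code}
public class StressMainSketch
{
    public static void main(String[] args)
    {
        try
        {
            runStress(args);
        }
        catch (Throwable t)
        {
            t.printStackTrace();
            System.exit(1);   // tears down any non-daemon threads still running
        }
    }

    // Stand-in for the real run, failing the way the log above does.
    static void runStress(String[] args)
    {
        throw new IllegalArgumentException("clustering_key");
    }
}
{code}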
[jira] [Updated] (CASSANDRA-9522) Specify unset column ratios in cassandra-stress write
[ https://issues.apache.org/jira/browse/CASSANDRA-9522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jeremy Hanna updated CASSANDRA-9522: Labels: stress (was: ) > Specify unset column ratios in cassandra-stress write > - > > Key: CASSANDRA-9522 > URL: https://issues.apache.org/jira/browse/CASSANDRA-9522 > Project: Cassandra > Issue Type: Improvement > Components: Tools >Reporter: Jim Witschey >Assignee: T Jake Luciani > Labels: stress > Fix For: 3.0 alpha 1 > > > I'd like to be able to use stress to generate workloads with different > distributions of unset columns -- so, for instance, you could specify that > rows will have 70% unset columns, and on average, a 100-column row would > contain only 30 values. > This would help us test the new row formats introduced in 8099. There are a 2 > different row formats, used depending on the ratio of set to unset columns, > and this feature would let us generate workloads that would be stored in each > of those formats. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
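A sketch of the requested knob: leave each non-key column unset with a fixed probability, so a 100-column row at ratio 0.7 binds about 30 values. The ratio mechanics here are an assumption, not the shape the feature eventually took:
{code}
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public class UnsetRatioSketch
{
    public static void main(String[] args)
    {
        Random rng = new Random(42);
        double unsetRatio = 0.7;   // fraction of columns to leave unset
        int columns = 100;
        List<Integer> boundColumns = new ArrayList<>();
        for (int c = 0; c < columns; c++)
            if (rng.nextDouble() >= unsetRatio)
                boundColumns.add(c);   // only these columns get a value in the insert
        System.out.println("bound " + boundColumns.size() + " of " + columns + " columns");
    }
}
{code}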
[jira] [Updated] (CASSANDRA-10182) Cassandra stress driver settings broken
[ https://issues.apache.org/jira/browse/CASSANDRA-10182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jeremy Hanna updated CASSANDRA-10182: - Labels: stress (was: ) > Cassandra stress driver settings broken > --- > > Key: CASSANDRA-10182 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10182 > Project: Cassandra > Issue Type: Bug >Reporter: T Jake Luciani >Assignee: T Jake Luciani >Priority: Minor > Labels: stress > Fix For: 3.0 beta 2 > > > Running cassandra-stress with the latest java driver breaks the metadata > lookup required for yaml profiles. We can avoid the issue by not using > protocol v2 (which is going away CASSANDRA-10146) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-8882) Wrong type mapping for varint -- Cassandra Stress 2.1
[ https://issues.apache.org/jira/browse/CASSANDRA-8882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jeremy Hanna updated CASSANDRA-8882: Labels: stress (was: ) > Wrong type mapping for varint -- Cassandra Stress 2.1 > - > > Key: CASSANDRA-8882 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8882 > Project: Cassandra > Issue Type: Bug > Components: Tools >Reporter: Sebastian Estevez > Labels: stress > Fix For: 2.1.5 > > > Run a workload with a varint type, you'll see the following error: > {code} > com.datastax.driver.core.exceptions.InvalidTypeException: Invalid type for > value 2 of CQL type varint, expecting class java.math.BigInteger but class > java.lang.Integer provided > com.datastax.driver.core.exceptions.InvalidTypeException: Invalid type for > value 2 of CQL type varint, expecting class java.math.BigInteger but class > java.lang.Integer provided > com.datastax.driver.core.exceptions.InvalidTypeException: Invalid type for > value 2 of CQL type varint, expecting class java.math.BigInteger but class > java.lang.Integer provided > com.datastax.driver.core.exceptions.InvalidTypeException: Invalid type for > value 2 of CQL type varint, expecting class java.math.BigInteger but class > java.lang.Integer provided > com.datastax.driver.core.exceptions.InvalidTypeException: Invalid type for > value 2 of CQL type varint, expecting class java.math.BigInteger but class > java.lang.Integer provided > com.datastax.driver.core.exceptions.InvalidTypeException: Invalid type for > value 2 of CQL type varint, expecting class java.math.BigInteger but class > java.lang.Integer provided > com.datastax.driver.core.exceptions.InvalidTypeException: Invalid type for > value 2 of CQL type varint, expecting class java.math.BigInteger but class > java.lang.Integer provided > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-10175) cassandra-stress should be tolerant when a remote node shutdown
[ https://issues.apache.org/jira/browse/CASSANDRA-10175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jeremy Hanna updated CASSANDRA-10175: - Labels: stress (was: ) > cassandra-stress should be tolerant when a remote node shutdown > > > Key: CASSANDRA-10175 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10175 > Project: Cassandra > Issue Type: Improvement >Reporter: Alan Boudreault > Labels: stress > Fix For: 3.x > > > Currently, if we start a stress session with 3 nodes and shut down one node, > stress will crash. It is caused by the lost JMX connection to that node, which > is used to collect some gc stats IIRC. > backtrace: https://gist.github.com/aboudreault/6cd82bb0acc681992414 > Stress should handle that lost jmx connection in a better way so the session > could continue. Ideally, it should try to *reconnect* to JMX if the node is > back online? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
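A sketch of the suggested behaviour: if the per-node JMX connection used for gc stats drops, retry in the background instead of crashing the run. The URL is the stock JMX/RMI form with placeholder host and port; everything else is illustrative rather than the actual stress JMX code:
{code}
import java.io.IOException;
import java.net.MalformedURLException;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class JmxReconnectSketch
{
    static JMXConnector connectWithRetry(String host, int port) throws InterruptedException
    {
        JMXServiceURL url;
        try
        {
            url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://" + host + ":" + port + "/jmxrmi");
        }
        catch (MalformedURLException e)
        {
            throw new IllegalArgumentException(e);
        }
        while (true)
        {
            try
            {
                return JMXConnectorFactory.connect(url, null);
            }
            catch (IOException e)
            {
                Thread.sleep(5000);   // node looks down; keep the stress session alive and retry
            }
        }
    }
}
{code}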
[jira] [Updated] (CASSANDRA-10445) Cassandra-stress throws max frame size error when SSL certification is enabled
[ https://issues.apache.org/jira/browse/CASSANDRA-10445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jeremy Hanna updated CASSANDRA-10445: - Labels: stress (was: ) > Cassandra-stress throws max frame size error when SSL certification is enabled > -- > > Key: CASSANDRA-10445 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10445 > Project: Cassandra > Issue Type: Bug >Reporter: Sam Goldberg > Labels: stress > Fix For: 2.1.x > > > Running cassandra-stress when SSL is enabled gives the following error and > does not finish executing: > {quote} > cassandra-stress write n=100 > Exception in thread "main" java.lang.RuntimeException: > org.apache.thrift.transport.TTransportException: Frame size (352518912) > larger than max length (15728640)! > at > org.apache.cassandra.stress.settings.StressSettings.getRawThriftClient(StressSettings.java:144) > at > org.apache.cassandra.stress.settings.StressSettings.getRawThriftClient(StressSettings.java:110) > at > org.apache.cassandra.stress.settings.SettingsSchema.createKeySpacesThrift(SettingsSchema.java:111) > at > org.apache.cassandra.stress.settings.SettingsSchema.createKeySpaces(SettingsSchema.java:59) > at > org.apache.cassandra.stress.settings.StressSettings.maybeCreateKeyspaces(StressSettings.java:205) > at org.apache.cassandra.stress.StressAction.run(StressAction.java:55) > at org.apache.cassandra.stress.Stress.main(Stress.java:109) > {quote} > I was able to reproduce this issue consistently via the following steps: > 1) Spin up 3 node cassandra cluster running 2.1.8 > 2) Perform cassandra-stress write n=100 > 3) Everything works! > 4) Generate keystore and truststore for each node in the cluster and > distribute appropriately > 5) Modify cassandra.yaml on each node to enable SSL: > client_encryption_options: > enabled: true > keystore: / > # require_client_auth: false > # Set trustore and truststore_password if require_client_auth is true > truststore: / > truststore_password: > # More advanced defaults below: > protocol: ssl > 6) Restart each node. > 7) Perform cassandra-stress write n=100 > 8) Get Frame Size error, cassandra-stress fails > This may be related to CASSANDRA-9325. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
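[Editor's note] For what it's worth, a frame-size value like 352518912 (0x15030300) looks like TLS record bytes being read as a Thrift frame length, which is the usual symptom of a plaintext client talking to an SSL-enabled port. The sketch below only shows the general shape of a Thrift client socket wrapped in SSL via libthrift's {{TSSLTransportFactory}}; the paths and password are placeholders, and this is not the actual stress patch.
{code:java}
import org.apache.thrift.transport.TSSLTransportFactory;
import org.apache.thrift.transport.TSSLTransportFactory.TSSLTransportParameters;
import org.apache.thrift.transport.TTransport;
import org.apache.thrift.transport.TTransportException;

public class SslThriftClientSketch
{
    // When client_encryption_options.enabled is true on the cluster, the client
    // side has to open an SSL socket too; a plain framed transport reading the
    // server's TLS bytes is what yields a bogus frame length like the one above.
    public static TTransport open(String host, int port) throws TTransportException
    {
        TSSLTransportParameters params = new TSSLTransportParameters();
        params.setTrustStore("/path/to/truststore.jks", "truststore-password"); // placeholders
        return TSSLTransportFactory.getClientSocket(host, port, 10000, params);
    }
}
{code}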
[jira] [Updated] (CASSANDRA-10229) Fix cassandra-stress gaussian behaviour for shuffling the distribution, to mitigate read perf after a major compaction
[ https://issues.apache.org/jira/browse/CASSANDRA-10229?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jeremy Hanna updated CASSANDRA-10229: - Labels: perfomance stress (was: perfomance) > Fix cassandra-stress gaussian behaviour for shuffling the distribution, to > mitigate read perf after a major compaction > -- > > Key: CASSANDRA-10229 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10229 > Project: Cassandra > Issue Type: Improvement >Reporter: Alan Boudreault >Priority: Minor > Labels: perfomance, stress > Attachments: users-caching.yaml > > > TITLE WAS: BAD READ PERFORMANCE AFTER A MAJOR COMPACTION > I am trying to understand what I am seeing. My scenario is very basic, it's a > simple users table with key cache and row cache disabled. I write 50M then > read 5M random elements. The read performance is not that bad BEFORE a major > compaction of the data. I see a ~3x performance regression AFTER I run a > major compaction. > Here's the read performance numbers for my scenario: > {code} > 3.0 before a major compaction (Key cache and row cache disabled), note that > this is the numbers from 50M, I see the same with 5M > == > Results: > op rate : 9149 [read:9149] > partition rate: 9149 [read:9149] > row rate : 9149 [read:9149] > latency mean : 32.8 [read:32.8] > latency median: 31.2 [read:31.2] > latency 95th percentile : 47.2 [read:47.2] > latency 99th percentile : 55.0 [read:55.0] > latency 99.9th percentile : 66.4 [read:66.4] > latency max : 305.4 [read:305.4] > Total partitions : 5000 [read:5000] > Total errors : 0 [read:0] > total gc count: 0 > total gc mb : 0 > total gc time (s) : 0 > avg gc time(ms) : NaN > stdev gc time(ms) : 0 > Total operation time : 01:31:05 > END > -rw-rw-r-- 1 aboudreault aboudreault 4.7G Aug 31 08:51 ma-1024-big-Data.db > -rw-rw-r-- 1 aboudreault aboudreault 4.9G Aug 31 09:08 ma-1077-big-Data.db > 3.0 after a major compaction (Key cache and row cache disabled), note that > this is the numbers from 50M, I see the same with 5M > > Results: > op rate : 3275 [read:3275] > partition rate: 3275 [read:3275] > row rate : 3275 [read:3275] > latency mean : 91.6 [read:91.6] > latency median: 88.8 [read:88.8] > latency 95th percentile : 107.2 [read:107.2] > latency 99th percentile : 116.0 [read:116.0] > latency 99.9th percentile : 125.5 [read:125.5] > latency max : 249.0 [read:249.0] > Total partitions : 5000 [read:5000] > Total errors : 0 [read:0] > total gc count: 0 > total gc mb : 0 > total gc time (s) : 0 > avg gc time(ms) : NaN > stdev gc time(ms) : 0 > Total operation time : 04:14:26 > END > -rw-rw-r-- 1 aboudreault aboudreault 9.5G Aug 31 09:40 ma-1079-big-Data.db > 2.1 before major compaction (Key cache and row cache disabled) > == > Results: > op rate : 21348 [read:21348] > partition rate: 21348 [read:21348] > row rate : 21348 [read:21348] > latency mean : 14.1 [read:14.1] > latency median: 8.0 [read:8.0] > latency 95th percentile : 38.5 [read:38.5] > latency 99th percentile : 60.8 [read:60.8] > latency 99.9th percentile : 99.2 [read:99.2] > latency max : 229.2 [read:229.2] > Total partitions : 500 [read:500] > Total errors : 0 [read:0] > total gc count: 0 > total gc mb : 0 > total gc time (s) : 0 > avg gc time(ms) : NaN > stdev gc time(ms) : 0 > Total operation time : 00:03:54 > END > 2.1 after major compaction (Key cache and row cache disabled) > = > Results: > op rate : 5262 [read:5262] > partition rate: 5262 [read:5262] > row rate : 5262 [read:5262] > latency mean : 57.0 [read:57.0] > latency median: 55.5 [read:55.5] > latency 
95th percentile : 69.4 [read:69.4] > latency 99th percentile : 83.3 [read:83.3] > latency 99.9th percentile : 197.4 [read:197.4] > latency max : 1169.0 [read:1169.0] > Total partitions : 500 [read:500] > Total errors : 0 [read:0] > total
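[Editor's note] Since the entry above is mostly benchmark output, the idea behind the retitled ticket is easy to lose: a gaussian population concentrates hits on numerically adjacent seeds, and "shuffling" the distribution presumably means keeping the gaussian hit frequencies while spreading those seeds across the key space. The sketch below is only a rough illustration of that idea under assumed names and a simple multiplicative mix; it is not the cassandra-stress implementation.
{code:java}
import java.util.Random;

public class ShuffledGaussianSampler
{
    private final Random random = new Random();
    private final long population;

    public ShuffledGaussianSampler(long population)
    {
        this.population = population;
    }

    // Raw gaussian seed: hits cluster around the middle of [1, population].
    public long nextRawSeed()
    {
        double g = random.nextGaussian(); // mean 0, stddev 1
        long seed = Math.round(population / 2.0 + g * (population / 6.0));
        return Math.min(population, Math.max(1, seed));
    }

    // Roughly the same per-seed hit frequencies, but the hot seeds are spread over
    // the whole range by a cheap stateless mix instead of sitting next to each other.
    public long nextShuffledSeed()
    {
        long mixed = nextRawSeed() * 0x9E3779B97F4A7C15L;
        return Math.floorMod(mixed, population) + 1;
    }
}
{code}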
[jira] [Updated] (CASSANDRA-10399) Create default Stress tables without compact storage
[ https://issues.apache.org/jira/browse/CASSANDRA-10399?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jeremy Hanna updated CASSANDRA-10399: - Labels: stress (was: ) > Create default Stress tables without compact storage > - > > Key: CASSANDRA-10399 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10399 > Project: Cassandra > Issue Type: Bug >Reporter: Sebastian Estevez >Priority: Minor > Labels: stress > > ~$ cassandra-stress write > {code} > cqlsh> desc TABLE keyspace1.standard1 > CREATE TABLE keyspace1.standard1 ( > key blob PRIMARY KEY, > "C0" blob, > "C1" blob, > "C2" blob, > "C3" blob, > "C4" blob > ) WITH COMPACT STORAGE > AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}' > AND comment = '' > AND compaction = {'class': > 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy'} > AND compression = {} > AND dclocal_read_repair_chance = 0.1 > AND default_time_to_live = 0 > AND gc_grace_seconds = 864000 > AND max_index_interval = 2048 > AND memtable_flush_period_in_ms = 0 > AND min_index_interval = 128 > AND read_repair_chance = 0.0 > AND speculative_retry = 'NONE'; > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-10578) bootstrap_test.py:TestBootstrap.simultaneous_bootstrap_test dtest failing
[ https://issues.apache.org/jira/browse/CASSANDRA-10578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1497#comment-1497 ] Jim Witschey commented on CASSANDRA-10578: -- +1 from me if Yuki's happy with it. > bootstrap_test.py:TestBootstrap.simultaneous_bootstrap_test dtest failing > - > > Key: CASSANDRA-10578 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10578 > Project: Cassandra > Issue Type: Sub-task >Reporter: Jim Witschey >Assignee: Yuki Morishita > Fix For: 2.1.x, 2.2.x, 3.0.0 > > > This test fails on 2.1, 2.2, and 3.0 versions tested on CassCI: > http://cassci.datastax.com/view/cassandra-2.1/job/cassandra-2.1_dtest/lastCompletedBuild/testReport/bootstrap_test/TestBootstrap/simultaneous_bootstrap_test/ > http://cassci.datastax.com/view/cassandra-2.2/job/cassandra-2.2_dtest/350/testReport/junit/bootstrap_test/TestBootstrap/simultaneous_bootstrap_test/ > http://cassci.datastax.com/view/cassandra-3.0/job/cassandra-3.0_dtest/lastCompletedBuild/testReport/bootstrap_test/TestBootstrap/simultaneous_bootstrap_test/ > It fails with the same error, indicating that the third node, which should > not start while another node is bootstrapping, started. Oddly, the assertion > just before it, looking for a particular error in the logs, succeeds. > This could be a race condition, where one node successfully completes > bootstrapping before the third node is started. However, I don't know how > likely that is, since it fails consistently. Unfortunately, we don't have > enough history on CassCI to show when the test failure started. > I'm assigning [~yukim] for now, feel free to reassign. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-10462) Fix failing test_failure_threshold_deletions upgrade test
[ https://issues.apache.org/jira/browse/CASSANDRA-10462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14974440#comment-14974440 ] Jim Witschey commented on CASSANDRA-10462: -- The test has been renamed. Sorry for the trouble, but these tests were changed to run in two different test classes so they would run on multiple cluster topologies and catch bugs like [CASSANDRA-10470|https://issues.apache.org/jira/browse/CASSANDRA-10470?focusedCommentId=14960746&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14960746] that fail differently on RF=1 and RF>1. The tests ran here and passed: http://cassci.datastax.com/view/cassandra-3.0/job/cassandra-3.0_dtest/lastCompletedBuild/testReport/upgrade_tests.paging_test/TestPagingWithDeletionsNodes3RF3/test_failure_threshold_deletions/ http://cassci.datastax.com/view/cassandra-3.0/job/cassandra-3.0_dtest/lastCompletedBuild/testReport/upgrade_tests.paging_test/TestPagingWithDeletionsNodes2RF1/ > Fix failing test_failure_threshold_deletions upgrade test > - > > Key: CASSANDRA-10462 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10462 > Project: Cassandra > Issue Type: Sub-task >Reporter: Jim Witschey >Assignee: Sylvain Lebresne > Fix For: 3.0.0 > > > The > {{upgrade_tests/paging_test.py:TestPagingWithDeletions.test_failure_threshold_deletions}} > dtest fails on CassCI: > http://cassci.datastax.com/view/cassandra-3.0/job/cassandra-3.0_dtest/lastCompletedBuild/testReport/upgrade_tests.paging_test/TestPagingWithDeletions/test_failure_threshold_deletions/ > and has failed for a while: > http://cassci.datastax.com/view/cassandra-3.0/job/cassandra-3.0_dtest/lastCompletedBuild/testReport/upgrade_tests.paging_test/TestPagingWithDeletions/test_failure_threshold_deletions/history/ > It fails identically when I run it manually on OpenStack, so I don't think > it's a CassCI problem. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-7645) cqlsh: show partial trace if incomplete after max_trace_wait
[ https://issues.apache.org/jira/browse/CASSANDRA-7645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joshua McKenzie updated CASSANDRA-7645: --- Fix Version/s: (was: 3.0.0 rc2) 3.0.0 > cqlsh: show partial trace if incomplete after max_trace_wait > > > Key: CASSANDRA-7645 > URL: https://issues.apache.org/jira/browse/CASSANDRA-7645 > Project: Cassandra > Issue Type: Improvement > Components: Tools >Reporter: Tyler Hobbs >Assignee: Carl Yeksigian >Priority: Trivial > Fix For: 2.2.4, 2.1.12, 3.0.0 > > > If a trace hasn't completed within {{max_trace_wait}}, cqlsh will say the > trace is unavailable and not show anything. It (and the underlying python > driver) determines when the trace is complete by checking if the {{duration}} > column in {{system_traces.sessions}} is non-null. If {{duration}} is null > but we still have some trace events when the timeout is hit, cqlsh should > print whatever trace events we have along with a warning about it being > incomplete. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
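[Editor's note] The requested behaviour is easy to sketch: poll {{system_traces.sessions}} until {{duration}} is set or the wait runs out, then dump whatever is already in {{system_traces.events}} along with a warning. cqlsh itself is Python, so the Java-driver sketch below only mirrors the logic; the class and method names are placeholders, not the cqlsh patch.
{code:java}
import java.util.UUID;

import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;

public class PartialTraceSketch
{
    public static void printTrace(Session session, UUID traceId, long maxWaitMs) throws InterruptedException
    {
        long deadline = System.currentTimeMillis() + maxWaitMs;
        boolean complete = false;
        while (System.currentTimeMillis() < deadline)
        {
            // duration is only set once the trace session has completed
            Row r = session.execute("SELECT duration FROM system_traces.sessions WHERE session_id = ?", traceId).one();
            if (r != null && !r.isNull("duration"))
            {
                complete = true;
                break;
            }
            Thread.sleep(500); // poll interval, arbitrary for the sketch
        }
        if (!complete)
            System.out.println("WARNING: trace did not complete within " + maxWaitMs + " ms; events below may be partial");
        ResultSet events = session.execute("SELECT activity, source, source_elapsed FROM system_traces.events WHERE session_id = ?", traceId);
        for (Row e : events)
            System.out.printf("%10d | %-15s | %s%n", e.getInt("source_elapsed"), e.getInet("source"), e.getString("activity"));
    }
}
{code}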
[jira] [Commented] (CASSANDRA-10578) bootstrap_test.py:TestBootstrap.simultaneous_bootstrap_test dtest failing
[ https://issues.apache.org/jira/browse/CASSANDRA-10578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14974398#comment-14974398 ] Sylvain Lebresne commented on CASSANDRA-10578: -- Seems that the last 3 runs (on 3.0 at least) of that test have succeeded with the dtest PR above merged. [~yukim] happy to call it a day? > bootstrap_test.py:TestBootstrap.simultaneous_bootstrap_test dtest failing > - > > Key: CASSANDRA-10578 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10578 > Project: Cassandra > Issue Type: Sub-task >Reporter: Jim Witschey >Assignee: Yuki Morishita > Fix For: 2.1.x, 2.2.x, 3.0.0 > > > This test fails on 2.1, 2.2, and 3.0 versions tested on CassCI: > http://cassci.datastax.com/view/cassandra-2.1/job/cassandra-2.1_dtest/lastCompletedBuild/testReport/bootstrap_test/TestBootstrap/simultaneous_bootstrap_test/ > http://cassci.datastax.com/view/cassandra-2.2/job/cassandra-2.2_dtest/350/testReport/junit/bootstrap_test/TestBootstrap/simultaneous_bootstrap_test/ > http://cassci.datastax.com/view/cassandra-3.0/job/cassandra-3.0_dtest/lastCompletedBuild/testReport/bootstrap_test/TestBootstrap/simultaneous_bootstrap_test/ > It fails with the same error, indicating that the third node, which should > not start while another node is bootstrapping, started. Oddly, the assertion > just before it, looking for a particular error in the logs, succeeds. > This could be a race condition, where one node successfully completes > bootstrapping before the third node is started. However, I don't know how > likely that is, since it fails consistently. Unfortunately, we don't have > enough history on CassCI to show when the test failure started. > I'm assigning [~yukim] for now, feel free to reassign. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-10462) Fix failing test_failure_threshold_deletions upgrade test
[ https://issues.apache.org/jira/browse/CASSANDRA-10462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14974390#comment-14974390 ] Sylvain Lebresne commented on CASSANDRA-10462: -- Hum, it appears the test has been renamed (probably not the most helpful thing to do at this point, for the record), so I'm not sure how to check whether it has passed on the last builds or not. But the last CI build doesn't have any failure that looks like this test, so I'm closing it. > Fix failing test_failure_threshold_deletions upgrade test > - > > Key: CASSANDRA-10462 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10462 > Project: Cassandra > Issue Type: Sub-task >Reporter: Jim Witschey >Assignee: Sylvain Lebresne > Fix For: 3.0.0 > > > The > {{upgrade_tests/paging_test.py:TestPagingWithDeletions.test_failure_threshold_deletions}} > dtest fails on CassCI: > http://cassci.datastax.com/view/cassandra-3.0/job/cassandra-3.0_dtest/lastCompletedBuild/testReport/upgrade_tests.paging_test/TestPagingWithDeletions/test_failure_threshold_deletions/ > and has failed for a while: > http://cassci.datastax.com/view/cassandra-3.0/job/cassandra-3.0_dtest/lastCompletedBuild/testReport/upgrade_tests.paging_test/TestPagingWithDeletions/test_failure_threshold_deletions/history/ > It fails identically when I run it manually on OpenStack, so I don't think > it's a CassCI problem. -- This message was sent by Atlassian JIRA (v6.3.4#6332)