[jira] [Updated] (CASSANDRA-8282) org.apache.cassandra.exceptions.ReadTimeoutException: Operation timed out while selecting query

2019-03-15 Thread Anonymous (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anonymous updated CASSANDRA-8282:
-
Status: Open  (was: Resolved)

> org.apache.cassandra.exceptions.ReadTimeoutException: Operation timed out 
> while selecting query
> ---
>
> Key: CASSANDRA-8282
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8282
> Project: Cassandra
>  Issue Type: Bug
> Environment: Ubuntu 12
> AWS network
> [cqlsh 5.0.1 | Cassandra 2.1.0 | CQL spec 3.2.0 | Native protocol v3]  
> 4 cores and 15GB ram with 6 nodes.
>Reporter: murali
>Priority: Low
>  Labels: performance
>
> Hi there,
> We get the error below when running {{select count(*) from demo.songs}} in 
> CQLSH ( [cqlsh 5.0.1 | Cassandra 2.1.0 | CQL spec 3.2.0 | Native protocol v3] 
> ), on AWS m3.large instances with 6 nodes in a single DC.
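A full-table {{count(*)}} makes the coordinator scan every partition within a single read_request_timeout_in_ms window, so it times out long before ordinary queries do. A common client-side workaround (a sketch, not an official tool) is to split the token ring and count each subrange with its own small query:

```python
# Sketch (assumption: Murmur3Partitioner's signed 64-bit token range):
# split the ring into n contiguous subranges so a full-table count can be
# issued as many small token-range queries, e.g.
#   SELECT count(*) FROM demo.songs WHERE token(pk) > ? AND token(pk) <= ?
# instead of one cluster-wide scan that trips the read timeout.
MIN_TOKEN = -2**63
MAX_TOKEN = 2**63 - 1

def token_subranges(n):
    """Return n (start, end] token intervals covering the whole ring."""
    span = (MAX_TOKEN - MIN_TOKEN) // n
    edges = [MIN_TOKEN + i * span for i in range(n)] + [MAX_TOKEN]
    return list(zip(edges[:-1], edges[1:]))

ranges = token_subranges(4)
print(len(ranges))  # 4
```

Each subrange query touches only a slice of the ring, so no single request has to finish the whole scan inside the timeout.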
> {noformat}
> ERROR [Thrift:15] 2014-11-10 09:27:52,613 CustomTThreadPoolServer.java:219 - 
> Error occurred during processing of message.
> com.google.common.util.concurrent.UncheckedExecutionException: 
> java.lang.RuntimeException: 
> org.apache.cassandra.exceptions.ReadTimeoutException: Operation timed out - 
> received only 0 responses.
> at 
> com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2201) 
> ~[guava-16.0.jar:na]
> at com.google.common.cache.LocalCache.get(LocalCache.java:3934) 
> ~[guava-16.0.jar:na]
> at com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3938) 
> ~[guava-16.0.jar:na]
> at 
> com.google.common.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4821)
>  ~[guava-16.0.jar:na]
> at 
> org.apache.cassandra.service.ClientState.authorize(ClientState.java:353) 
> ~[apache-cassandra-2.1.0.jar:2.1.0]
> at 
> org.apache.cassandra.service.ClientState.ensureHasPermission(ClientState.java:225)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
> at 
> org.apache.cassandra.service.ClientState.hasAccess(ClientState.java:219) 
> ~[apache-cassandra-2.1.0.jar:2.1.0]
> at 
> org.apache.cassandra.service.ClientState.hasColumnFamilyAccess(ClientState.java:203)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
> at 
> org.apache.cassandra.thrift.CassandraServer.multiget_slice(CassandraServer.java:370)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
> at 
> org.apache.cassandra.thrift.Cassandra$Processor$multiget_slice.getResult(Cassandra.java:3716)
>  ~[apache-cassandra-thrift-2.1.0.jar:2.1.0]
> at 
> org.apache.cassandra.thrift.Cassandra$Processor$multiget_slice.getResult(Cassandra.java:3700)
>  ~[apache-cassandra-thrift-2.1.0.jar:2.1.0]
> at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) 
> ~[libthrift-0.9.1.jar:0.9.1]
> at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39) 
> ~[libthrift-0.9.1.jar:0.9.1]
> at 
> org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:201)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  [na:1.7.0_51]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  [na:1.7.0_51]
> at java.lang.Thread.run(Thread.java:744) [na:1.7.0_51]
> Caused by: java.lang.RuntimeException: 
> org.apache.cassandra.exceptions.ReadTimeoutException: Operation timed out - 
> received only 0 responses.
> at org.apache.cassandra.auth.Auth.selectUser(Auth.java:271) 
> ~[apache-cassandra-2.1.0.jar:2.1.0]
> at org.apache.cassandra.auth.Auth.isSuperuser(Auth.java:88) 
> ~[apache-cassandra-2.1.0.jar:2.1.0]
> at 
> org.apache.cassandra.auth.AuthenticatedUser.isSuper(AuthenticatedUser.java:50)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
> at 
> org.apache.cassandra.auth.CassandraAuthorizer.authorize(CassandraAuthorizer.java:67)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
> at 
> org.apache.cassandra.service.ClientState$1.load(ClientState.java:339) 
> ~[apache-cassandra-2.1.0.jar:2.1.0]
> at 
> org.apache.cassandra.service.ClientState$1.load(ClientState.java:336) 
> ~[apache-cassandra-2.1.0.jar:2.1.0]
> at 
> com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3524)
>  ~[guava-16.0.jar:na]
> at 
> com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2317) 
> ~[guava-16.0.jar:na]
> at 
> com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2280)
>  ~[guava-16.0.jar:na]
> at 
> com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2195) 
> ~[guava-16.0.jar:na]
> ... 16 common frames omitted
> Caused by: org.apache.cassandra.exceptions.ReadTimeout

[jira] [Updated] (CASSANDRA-7396) Allow selecting Map values and Set elements

2019-03-15 Thread Anonymous (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anonymous updated CASSANDRA-7396:
-
Status: Open  (was: Resolved)

> Allow selecting Map values and Set elements
> ---
>
> Key: CASSANDRA-7396
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7396
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Legacy/CQL
>Reporter: Jonathan Ellis
>Assignee: Sylvain Lebresne
>Priority: Normal
>  Labels: cql, docs-impacting
> Fix For: 4.0
>
> Attachments: 7396_unit_tests.txt
>
>
> Allow "SELECT map['key']" and "SELECT list[index]". (Selecting a UDT subfield 
> is already supported.)
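The requested syntax, sketched in CQL (the table and column names are illustrative, not from the ticket):

```sql
-- Map value, list element, and (already-supported) UDT subfield selection;
-- ks.users and its columns are hypothetical examples.
SELECT settings['theme'],     -- value under a map key
       recent_logins[0],      -- element of a list by index
       address.city           -- UDT subfield (supported before this ticket)
FROM ks.users
WHERE user_id = 42;
```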



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-7423) Allow updating individual subfields of UDT

2019-03-15 Thread Anonymous (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anonymous updated CASSANDRA-7423:
-
Status: Open  (was: Resolved)

> Allow updating individual subfields of UDT
> --
>
> Key: CASSANDRA-7423
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7423
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Legacy/CQL
>Reporter: Tupshin Harper
>Assignee: Tyler Hobbs
>Priority: Normal
>  Labels: LWT, client-impacting, cql, docs-impacting
> Fix For: 3.6
>
>
> Since user defined types were implemented in CASSANDRA-5590 as blobs (you 
> have to rewrite the entire type in order to make any modifications), they 
> can't be safely used without LWT for any operation that wants to modify a 
> subset of the UDT's fields by any client process that is not authoritative 
> for the entire blob. 
> When trying to use UDTs to model complex records (particularly with nesting), 
> this is not an exceptional circumstance; it is the totally expected normal 
> situation. 
> The use of UDTs for anything non-trivial is harmful to performance, 
> consistency, or both.
> Edit: to clarify, I believe that most potential uses of UDTs should be 
> considered anti-patterns until/unless we have field-level r/w access to 
> individual elements of the UDT, with individual timestamps and standard LWW 
> semantics.
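With the field-level access this ticket asks for (delivered in 3.6 per the fix version), a single UDT field can be written without rewriting the whole value; a CQL sketch with hypothetical table and field names:

```sql
-- Field-level update of a non-frozen UDT column (3.6+):
UPDATE ks.users SET address.street = '1 Main St' WHERE user_id = 42;

-- Pre-3.6, the whole value had to be rewritten, needing LWT to be safe:
UPDATE ks.users
SET address = {street: '1 Main St', city: 'Oslo'}
WHERE user_id = 42
IF address = {street: '2 Old Rd', city: 'Oslo'};
```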






[jira] [Updated] (CASSANDRA-13433) RPM distribution improvements and known issues

2019-03-06 Thread Anonymous (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anonymous updated CASSANDRA-13433:
--
Status: Testing  (was: Awaiting Feedback)

> RPM distribution improvements and known issues
> --
>
> Key: CASSANDRA-13433
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13433
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Packaging
>Reporter: Stefan Podkowinski
>Assignee: Stefan Podkowinski
>Priority: Major
> Attachments: cassandra-3.9-centos6.patch
>
>
> Starting with CASSANDRA-13252, new releases will be provided as both official 
> RPM and Debian packages.  While the Debian packages are already well 
> established with our user base, the RPMs have just been released for the 
> first time and still require some attention. 
> Feel free to discuss RPM-related issues in this ticket and open a sub-task to 
> file a bug report. 
> Please note that native systemd support will be implemented with 
> CASSANDRA-13148 and this is not strictly an RPM-specific issue. We still 
> intend to offer non-systemd support based on the already working init scripts 
> that we ship. Therefore the first step is to make use of systemd backward 
> compatibility for SysV/LSB scripts, so we can provide RPMs for both systemd 
> and non-systemd environments.
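The backward-compatibility plan above hinges on knowing which init system is present. A minimal sketch of that check, assuming the conventional `/run/systemd/system` marker directory (this is systemd's documented detection convention, not anything from the Cassandra packages):

```python
import os

def init_system(marker="/run/systemd/system"):
    """Report which service-management path an install script should take.

    Assumption: the marker directory exists only when systemd is PID 1,
    per systemd's own detection convention.
    """
    # "systemd" -> native unit, or the SysV script via systemd's LSB shim;
    # "sysv"    -> plain init script shipped in the package.
    return "systemd" if os.path.isdir(marker) else "sysv"

print(init_system("/definitely/not/there"))  # sysv
```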






[jira] [Updated] (CASSANDRA-15015) Cassandra metrics documentation is not correct for Hint_delays metric

2019-03-03 Thread Anonymous (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anonymous updated CASSANDRA-15015:
--
Status: Ready to Commit  (was: Patch Available)

> Cassandra metrics documentation is not correct for Hint_delays metric
> -
>
> Key: CASSANDRA-15015
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15015
> Project: Cassandra
>  Issue Type: Bug
>  Components: Documentation/Blog
>Reporter: Anup Shirolkar
>Assignee: Anup Shirolkar
>Priority: Minor
>  Labels: documentation, easyfix, low-hanging-fruit
> Fix For: 3.0.19, 3.11.5, 4.0
>
> Attachments: 150150-trunk.txt
>
>
> The Cassandra metrics for hint delays are not referenced correctly on the 
> documentation web page: 
> [http://cassandra.apache.org/doc/latest/operating/metrics.html#hintsservice-metrics]
>  
> The metrics are defined in the 
> [code|https://github.com/apache/cassandra/blob/06209037ea56b5a2a49615a99f1542d6ea1b2947/src/java/org/apache/cassandra/metrics/HintsServiceMetrics.java#L45-L52]
>  as 'Hint_delays' and 'Hint_delays-' but those are listed on the 
> website as 'Hints_delays' and 'Hints_delays-'.
> The documentation should be fixed by removing the extra 's' in Hints to match 
> the code.
> The Jira for adding hint_delays: CASSANDRA-13234 
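A throwaway check like the following (entirely hypothetical, not part of the build) would catch this kind of drift by diffing metric names mentioned in the docs against the names the code registers:

```python
import re

# Metric name as registered in HintsServiceMetrics.java (per this ticket).
CODE_METRICS = {"Hint_delays"}

def misnamed_metrics(doc_text):
    """Return doc-mentioned variants of the hint-delay metric that the
    code does not actually register."""
    mentioned = set(re.findall(r"Hints?_delays", doc_text))
    return sorted(mentioned - CODE_METRICS)

print(misnamed_metrics("Hints_delays: histogram of hint delivery delays"))
# ['Hints_delays']
```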






[jira] [Updated] (CASSANDRA-15030) Add support for SSL and bindable address to sidecar

2019-02-22 Thread Anonymous (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anonymous updated CASSANDRA-15030:
--
Status: Ready to Commit  (was: Patch Available)

> Add support for SSL and bindable address to sidecar
> ---
>
> Key: CASSANDRA-15030
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15030
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Sidecar
>Reporter: Dinesh Joshi
>Assignee: Dinesh Joshi
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> We need to support SSL for the sidecar's REST interface. We should also have 
> the ability to bind the sidecar's API to a specific network interface. This 
> patch adds support for both.
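A configuration sketch of what the two features might look like together (the option names are assumptions for illustration, not the committed sidecar schema):

```yaml
sidecar:
  host: 10.0.0.5            # bind the REST API to one interface, not 0.0.0.0
  port: 9043
  ssl:
    enabled: true
    keystore_path: /etc/cassandra-sidecar/keystore.p12
    keystore_password: changeit
```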






[jira] [Updated] (CASSANDRA-15030) Add support for SSL and bindable address to sidecar

2019-02-22 Thread Anonymous (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anonymous updated CASSANDRA-15030:
--
Resolution: Fixed
Status: Resolved  (was: Ready to Commit)

> Add support for SSL and bindable address to sidecar
> ---
>
> Key: CASSANDRA-15030
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15030
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Sidecar
>Reporter: Dinesh Joshi
>Assignee: Dinesh Joshi
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> We need to support SSL for the sidecar's REST interface. We should also have 
> the ability to bind the sidecar's API to a specific network interface. This 
> patch adds support for both.






[jira] [Updated] (CASSANDRA-14848) When upgrading 3.11.3->4.0 using SSL 4.0 nodes does not connect to old non seed nodes

2019-02-19 Thread Anonymous (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anonymous updated CASSANDRA-14848:
--
Status: Ready to Commit  (was: Patch Available)

> When upgrading 3.11.3->4.0 using SSL 4.0 nodes does not connect to old non 
> seed nodes
> -
>
> Key: CASSANDRA-14848
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14848
> Project: Cassandra
>  Issue Type: Bug
>  Components: Messaging/Internode
>Reporter: Tommy Stendahl
>Assignee: Tommy Stendahl
>Priority: Blocker
>  Labels: security
> Fix For: 4.0
>
>
> When upgrading from 3.11.3 to 4.0 with server encryption enabled, the new 4.0 
> node only connects to the 3.11.3 seed node; no connections are established 
> to non-seed nodes on the old version.
> I have four nodes: *.242 is upgraded to 4.0, *.243 and *.244 are 3.11.3 
> non-seeds, and *.246 is the 3.11.3 seed. After starting the 4.0 node I get 
> this nodetool status on the different nodes:
> {noformat}
> *.242
> -- Address Load Tokens Owns (effective) Host ID Rack
> UN 10.216.193.242 1017.77 KiB 256 75,1% 7d278e14-d549-42f3-840d-77cfd852fbf4 
> RAC1
> DN 10.216.193.243 743.32 KiB 256 74,8% 5586243a-ca74-4125-8e7e-09e82e23c4e5 
> RAC1
> DN 10.216.193.244 711.54 KiB 256 75,2% c155e262-b898-4e86-9e1d-d4d0f97e88f6 
> RAC1
> UN 10.216.193.246 659.81 KiB 256 74,9% 502dd00f-fc02-4024-b65f-b98ba3808291 
> RAC1
> *.243 and *.244
> -- Address Load Tokens Owns (effective) Host ID Rack
> DN 10.216.193.242 657.4 KiB 256 75,1% 7d278e14-d549-42f3-840d-77cfd852fbf4 
> RAC1
> UN 10.216.193.243 471 KiB 256 74,8% 5586243a-ca74-4125-8e7e-09e82e23c4e5 RAC1
> UN 10.216.193.244 471.71 KiB 256 75,2% c155e262-b898-4e86-9e1d-d4d0f97e88f6 
> RAC1
> UN 10.216.193.246 388.54 KiB 256 74,9% 502dd00f-fc02-4024-b65f-b98ba3808291 
> RAC1
> *.246
> -- Address Load Tokens Owns (effective) Host ID Rack
> UN 10.216.193.242 657.4 KiB 256 75,1% 7d278e14-d549-42f3-840d-77cfd852fbf4 
> RAC1
> UN 10.216.193.243 471 KiB 256 74,8% 5586243a-ca74-4125-8e7e-09e82e23c4e5 RAC1
> UN 10.216.193.244 471.71 KiB 256 75,2% c155e262-b898-4e86-9e1d-d4d0f97e88f6 
> RAC1
> UN 10.216.193.246 388.54 KiB 256 74,9% 502dd00f-fc02-4024-b65f-b98ba3808291 
> RAC1
> {noformat}
>  
> I have built 4.0 with wire tracing activated, and in my config 
> storage_port=12700 and ssl_storage_port=12701. In the log I can see that the 
> 4.0 node starts to connect to the 3.11.3 seed node on the storage_port but 
> quickly switches to the ssl_storage_port; when connecting to the non-seed 
> nodes it never switches to the ssl_storage_port.
> {noformat}
> >grep 193.246 system.log | grep Outbound
> 2018-10-25T10:57:36.799+0200 [MessagingService-NettyOutbound-Thread-4-1] INFO 
> i.n.u.internal.logging.Slf4JLogger:101 info [id: 0x2f0e5e55] CONNECT: 
> /10.216.193.246:12700
> 2018-10-25T10:57:36.902+0200 [MessagingService-NettyOutbound-Thread-4-2] INFO 
> i.n.u.internal.logging.Slf4JLogger:101 info [id: 0x9e81f62c] CONNECT: 
> /10.216.193.246:12701
> 2018-10-25T10:57:36.905+0200 [MessagingService-NettyOutbound-Thread-4-2] INFO 
> i.n.u.internal.logging.Slf4JLogger:101 info [id: 0x9e81f62c, 
> L:/10.216.193.242:37252 - R:10.216.193.246/10.216.193.246:12701] ACTIVE
> 2018-10-25T10:57:36.906+0200 [MessagingService-NettyOutbound-Thread-4-2] INFO 
> i.n.u.internal.logging.Slf4JLogger:101 info [id: 0x9e81f62c, 
> L:/10.216.193.242:37252 - R:10.216.193.246/10.216.193.246:12701] WRITE: 8B
> >grep 193.243 system.log | grep Outbound
> 2018-10-25T10:57:38.438+0200 [MessagingService-NettyOutbound-Thread-4-3] INFO 
> i.n.u.internal.logging.Slf4JLogger:101 info [id: 0xd8f1d6c4] CONNECT: 
> /10.216.193.243:12700
> 2018-10-25T10:57:38.540+0200 [MessagingService-NettyOutbound-Thread-4-4] INFO 
> i.n.u.internal.logging.Slf4JLogger:101 info [id: 0xfde6cc9f] CONNECT: 
> /10.216.193.243:12700
> 2018-10-25T10:57:38.694+0200 [MessagingService-NettyOutbound-Thread-4-5] INFO 
> i.n.u.internal.logging.Slf4JLogger:101 info [id: 0x7e87fc4e] CONNECT: 
> /10.216.193.243:12700
> 2018-10-25T10:57:38.741+0200 [MessagingService-NettyOutbound-Thread-4-7] INFO 
> i.n.u.internal.logging.Slf4JLogger:101 info [id: 0x39395296] CONNECT: 
> /10.216.193.243:12700{noformat}
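The asymmetry in the logs above comes down to a per-peer port-selection decision. A simplified sketch of the intended rule (the version cutoff of 11 for pre-4.0 peers is an assumption taken from the "peer version is 11" debug line; the function is illustrative, not the actual OutboundMessagingConnection code):

```python
def internode_port(peer_messaging_version, ssl_enabled,
                   storage_port=12700, ssl_storage_port=12701):
    # Pre-4.0 peers expect encrypted traffic on a dedicated
    # ssl_storage_port; the reported bug is that this switch happened
    # when talking to the seed node but not to the non-seed peers.
    if ssl_enabled and peer_messaging_version <= 11:
        return ssl_storage_port
    return storage_port

print(internode_port(11, True))   # 12701 -- what *.246 correctly got
print(internode_port(12, True))   # 12700 -- 4.0+ peers stay on storage_port
```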
>  
> When I had the debug log activated and started the 4.0 node, I can see that 
> it switches ports for *.246 but not for *.243 and *.244.
> {noformat}
> >grep DEBUG system.log| grep OutboundMessagingConnection | grep 
> >maybeUpdateConnectionId
> 2018-10-25T13:12:56.095+0200 [ScheduledFastTasks:1] DEBUG 
> o.a.c.n.a.OutboundMessagingConnection:314 maybeUpdateConnectionId changing 
> connectionId to 10.216.193.246:12701 (GOSSIP), with a different port for 
> secure communication, because peer version is 11
> 2018-10-25T13:12:58.100+0200 [ReadStage-1] DEB

[jira] [Updated] (CASSANDRA-14298) cqlshlib tests broken on b.a.o

2019-01-27 Thread Anonymous (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anonymous updated CASSANDRA-14298:
--
Status: Ready to Commit  (was: Patch Available)

> cqlshlib tests broken on b.a.o
> --
>
> Key: CASSANDRA-14298
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14298
> Project: Cassandra
>  Issue Type: Bug
>  Components: Build, Legacy/Testing
>Reporter: Stefan Podkowinski
>Assignee: Patrick Bannister
>Priority: Major
>  Labels: cqlsh, dtest
> Attachments: CASSANDRA-14298-old.txt, CASSANDRA-14298.txt, 
> cqlsh_tests_notes.md
>
>
> It appears that cqlsh-tests on builds.apache.org on all branches stopped 
> working since we removed nosetests from the system environment. See e.g. 
> [here|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-trunk-cqlsh-tests/458/cython=no,jdk=JDK%201.8%20(latest),label=cassandra/console].
>  Looks like we either have to make nosetests available again or migrate to 
> pytest as we did with dtests. Giving pytest a quick try resulted in many 
> errors locally, but I haven't inspected them in detail yet. 
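The migration mostly amounts to dropping nose-specific helpers in favor of plain assertions that pytest collects natively; a toy sketch (the `parse` helper is hypothetical, not a cqlshlib function):

```python
# nose style, broken once nosetests is gone:
#     from nose.tools import assert_equal
#     def test_parse():
#         assert_equal(parse("a;b"), ["a", "b"])

def parse(stmt):
    """Toy stand-in for a cqlshlib helper (hypothetical)."""
    return [s for s in stmt.split(";") if s]

# pytest style: a plain test_* function with a bare assert,
# collected automatically with no test-framework imports.
def test_parse():
    assert parse("a;b") == ["a", "b"]

test_parse()  # passes silently
```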






[jira] [Updated] (CASSANDRA-14298) cqlshlib tests broken on b.a.o

2019-01-27 Thread Anonymous (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anonymous updated CASSANDRA-14298:
--
Status: In Progress  (was: Ready to Commit)

> cqlshlib tests broken on b.a.o
> --
>
> Key: CASSANDRA-14298
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14298
> Project: Cassandra
>  Issue Type: Bug
>  Components: Build, Legacy/Testing
>Reporter: Stefan Podkowinski
>Assignee: Patrick Bannister
>Priority: Major
>  Labels: cqlsh, dtest
> Attachments: CASSANDRA-14298-old.txt, CASSANDRA-14298.txt, 
> cqlsh_tests_notes.md
>
>
> It appears that cqlsh-tests on builds.apache.org on all branches stopped 
> working since we removed nosetests from the system environment. See e.g. 
> [here|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-trunk-cqlsh-tests/458/cython=no,jdk=JDK%201.8%20(latest),label=cassandra/console].
>  Looks like we either have to make nosetests available again or migrate to 
> pytest as we did with dtests. Giving pytest a quick try resulted in many 
> errors locally, but I haven't inspected them in detail yet. 






[jira] [Updated] (CASSANDRA-13545) Exception in CompactionExecutor leading to tmplink files not being removed

2019-01-24 Thread Anonymous (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anonymous updated CASSANDRA-13545:
--
Resolution: Fixed
Status: Resolved  (was: Awaiting Feedback)

> Exception in CompactionExecutor leading to tmplink files not being removed
> --
>
> Key: CASSANDRA-13545
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13545
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local/Compaction
>Reporter: Dmitry Erokhin
>Priority: Critical
>
> We are facing an issue where compactions fail on a few nodes with the 
> following message
> {code}
> ERROR [CompactionExecutor:1248] 2017-05-22 15:32:55,390 
> CassandraDaemon.java:185 - Exception in thread 
> Thread[CompactionExecutor:1248,1,main]
> java.lang.AssertionError: null
>   at 
> org.apache.cassandra.io.sstable.IndexSummary.(IndexSummary.java:86) 
> ~[apache-cassandra-2.2.5.jar:2.2.5]
>   at 
> org.apache.cassandra.io.sstable.IndexSummaryBuilder.build(IndexSummaryBuilder.java:235)
>  ~[apache-cassandra-2.2.5.jar:2.2.5]
>   at 
> org.apache.cassandra.io.sstable.format.big.BigTableWriter.openEarly(BigTableWriter.java:316)
>  ~[apache-cassandra-2.2.5.jar:2.2.5]
>   at 
> org.apache.cassandra.io.sstable.SSTableRewriter.maybeReopenEarly(SSTableRewriter.java:170)
>  ~[apache-cassandra-2.2.5.jar:2.2.5]
>   at 
> org.apache.cassandra.io.sstable.SSTableRewriter.append(SSTableRewriter.java:115)
>  ~[apache-cassandra-2.2.5.jar:2.2.5]
>   at 
> org.apache.cassandra.db.compaction.writers.DefaultCompactionWriter.append(DefaultCompactionWriter.java:64)
>  ~[apache-cassandra-2.2.5.jar:2.2.5]
>   at 
> org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:184)
>  ~[apache-cassandra-2.2.5.jar:2.2.5]
>   at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> ~[apache-cassandra-2.2.5.jar:2.2.5]
>   at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:74)
>  ~[apache-cassandra-2.2.5.jar:2.2.5]
>   at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
>  ~[apache-cassandra-2.2.5.jar:2.2.5]
>   at 
> org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionCandidate.run(CompactionManager.java:256)
>  ~[apache-cassandra-2.2.5.jar:2.2.5]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_121]
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> ~[na:1.8.0_121]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  ~[na:1.8.0_121]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_121]
>   at java.lang.Thread.run(Thread.java:745) [na:1.8.0_121]
> {code}
> Also, the number of tmplink files in /var/lib/cassandra/data/<keyspace 
> name>/blocks/tmplink* grows constantly until the node runs out of space. 
> Restarting Cassandra removes all tmplink files, but the issue continues.
> We are using Cassandra 2.2.5 on Debian 8 with Oracle Java 8.
> {code}
> root@cassandra-p10:/var/lib/cassandra/data/mugenstorage/blocks-33167ef0447a11e68f3e5b42fc45b62f#
>  dpkg -l | grep -E "java|cassandra"
> ii  cassandra  2.2.5all  
> distributed storage system for structured data
> ii  cassandra-tools2.2.5all  
> distributed storage system for structured data
> ii  java-common0.52 all  
> Base of all Java packages
> ii  javascript-common  11   all  
> Base support for JavaScript library packages
> ii  oracle-java8-installer 8u121-1~webupd8~0all  
> Oracle Java(TM) Development Kit (JDK) 8
> ii  oracle-java8-set-default   8u121-1~webupd8~0all  
> Set Oracle JDK 8 as default Java
> {code}
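Until the leak is fixed, the disk pressure can be watched with a read-only sweep for the leftover link files; a sketch based on the path layout in the report above:

```python
from pathlib import Path

def count_tmplink(data_dir):
    """Count leftover tmplink-* files under a Cassandra data directory.

    'tmplink-' is the prefix 2.2-era Cassandra gives the hard links created
    when compaction results are opened early; they should normally be
    cleaned up when the compaction finishes.
    """
    return sum(1 for _ in Path(data_dir).rglob("tmplink-*"))

# e.g. count_tmplink("/var/lib/cassandra/data")
```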






[jira] [Updated] (CASSANDRA-13545) Exception in CompactionExecutor leading to tmplink files not being removed

2019-01-24 Thread Anonymous (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anonymous updated CASSANDRA-13545:
--
Status: Awaiting Feedback  (was: Open)

> Exception in CompactionExecutor leading to tmplink files not being removed
> --
>
> Key: CASSANDRA-13545
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13545
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local/Compaction
>Reporter: Dmitry Erokhin
>Priority: Critical
>
> We are facing an issue where compactions fail on a few nodes with the 
> following message
> {code}
> ERROR [CompactionExecutor:1248] 2017-05-22 15:32:55,390 
> CassandraDaemon.java:185 - Exception in thread 
> Thread[CompactionExecutor:1248,1,main]
> java.lang.AssertionError: null
>   at 
> org.apache.cassandra.io.sstable.IndexSummary.(IndexSummary.java:86) 
> ~[apache-cassandra-2.2.5.jar:2.2.5]
>   at 
> org.apache.cassandra.io.sstable.IndexSummaryBuilder.build(IndexSummaryBuilder.java:235)
>  ~[apache-cassandra-2.2.5.jar:2.2.5]
>   at 
> org.apache.cassandra.io.sstable.format.big.BigTableWriter.openEarly(BigTableWriter.java:316)
>  ~[apache-cassandra-2.2.5.jar:2.2.5]
>   at 
> org.apache.cassandra.io.sstable.SSTableRewriter.maybeReopenEarly(SSTableRewriter.java:170)
>  ~[apache-cassandra-2.2.5.jar:2.2.5]
>   at 
> org.apache.cassandra.io.sstable.SSTableRewriter.append(SSTableRewriter.java:115)
>  ~[apache-cassandra-2.2.5.jar:2.2.5]
>   at 
> org.apache.cassandra.db.compaction.writers.DefaultCompactionWriter.append(DefaultCompactionWriter.java:64)
>  ~[apache-cassandra-2.2.5.jar:2.2.5]
>   at 
> org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:184)
>  ~[apache-cassandra-2.2.5.jar:2.2.5]
>   at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> ~[apache-cassandra-2.2.5.jar:2.2.5]
>   at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:74)
>  ~[apache-cassandra-2.2.5.jar:2.2.5]
>   at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
>  ~[apache-cassandra-2.2.5.jar:2.2.5]
>   at 
> org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionCandidate.run(CompactionManager.java:256)
>  ~[apache-cassandra-2.2.5.jar:2.2.5]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_121]
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> ~[na:1.8.0_121]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  ~[na:1.8.0_121]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_121]
>   at java.lang.Thread.run(Thread.java:745) [na:1.8.0_121]
> {code}
> Also, the number of tmplink files in /var/lib/cassandra/data/<keyspace 
> name>/blocks/tmplink* grows constantly until the node runs out of space. 
> Restarting Cassandra removes all tmplink files, but the issue continues.
> We are using Cassandra 2.2.5 on Debian 8 with Oracle Java 8.
> {code}
> root@cassandra-p10:/var/lib/cassandra/data/mugenstorage/blocks-33167ef0447a11e68f3e5b42fc45b62f#
>  dpkg -l | grep -E "java|cassandra"
> ii  cassandra  2.2.5all  
> distributed storage system for structured data
> ii  cassandra-tools2.2.5all  
> distributed storage system for structured data
> ii  java-common0.52 all  
> Base of all Java packages
> ii  javascript-common  11   all  
> Base support for JavaScript library packages
> ii  oracle-java8-installer 8u121-1~webupd8~0all  
> Oracle Java(TM) Development Kit (JDK) 8
> ii  oracle-java8-set-default   8u121-1~webupd8~0all  
> Set Oracle JDK 8 as default Java
> {code}






[jira] [Updated] (CASSANDRA-11105) cassandra-stress tool - InvalidQueryException: Batch too large

2018-12-31 Thread Anonymous (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anonymous updated CASSANDRA-11105:
--
Status: Ready to Commit  (was: Patch Available)

> cassandra-stress tool - InvalidQueryException: Batch too large
> --
>
> Key: CASSANDRA-11105
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11105
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
> Environment: Cassandra 2.2.4, Java 8, CentOS 6.5
>Reporter: Ralf Steppacher
>Priority: Major
> Fix For: 4.0
>
> Attachments: 11105-trunk.txt, batch_too_large.yaml
>
>
> I am using Cassandra 2.2.4 and I am struggling to get the cassandra-stress 
> tool to work for my test scenario. I have followed the example on 
> http://www.datastax.com/dev/blog/improved-cassandra-2-1-stress-tool-benchmark-any-schema
>  to create a yaml file describing my test (attached).
> I am collecting events per user id (text, partition key). Events have a 
> session type (text), event type (text), and creation time (timestamp) 
> (clustering keys, in that order). Plus some more attributes required for 
> rendering the events in a UI. For testing purposes I ended up with the 
> following column spec and insert distribution:
> {noformat}
> columnspec:
>   - name: created_at
> cluster: uniform(10..1)
>   - name: event_type
> size: uniform(5..10)
> population: uniform(1..30)
> cluster: uniform(1..30)
>   - name: session_type
> size: fixed(5)
> population: uniform(1..4)
> cluster: uniform(1..4)
>   - name: user_id
> size: fixed(15)
> population: uniform(1..100)
>   - name: message
> size: uniform(10..100)
> population: uniform(1..100B)
> insert:
>   partitions: fixed(1)
>   batchtype: UNLOGGED
>   select: fixed(1)/120
> {noformat}
> Running the stress tool for just the insert prints 
> {noformat}
> Generating batches with [1..1] partitions and [0..1] rows (of [10..120] 
> total rows in the partitions)
> {noformat}
> and then immediately starts flooding me with 
> {{com.datastax.driver.core.exceptions.InvalidQueryException: Batch too 
> large}}. 
> I do not understand why I should be exceeding the 
> {{batch_size_fail_threshold_in_kb: 50}} in {{cassandra.yaml}}. My 
> understanding is that the stress tool should generate one row per batch. 
> The size of a single row should not exceed {{8+10*3+5*3+15*3+100*3 = 398 
> bytes}}, assuming a worst case of all text characters being 3-byte Unicode 
> characters. 
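The worst-case arithmetic above can be spelled out term by term (assumption: every text character costs 3 UTF-8 bytes, with sizes taken from the columnspec):

```python
TIMESTAMP_BYTES  = 8        # created_at (timestamp)
EVENT_TYPE_MAX   = 10 * 3   # size: uniform(5..10)
SESSION_TYPE_MAX = 5 * 3    # size: fixed(5)
USER_ID_MAX      = 15 * 3   # size: fixed(15)
MESSAGE_MAX      = 100 * 3  # size: uniform(10..100)

row_bytes = (TIMESTAMP_BYTES + EVENT_TYPE_MAX + SESSION_TYPE_MAX
             + USER_ID_MAX + MESSAGE_MAX)
print(row_bytes)  # 398 -- far below batch_size_fail_threshold_in_kb: 50
```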
> This is how I start the attached user scenario:
> {noformat}
> [rsteppac@centos bin]$ ./cassandra-stress user 
> profile=../batch_too_large.yaml ops\(insert=1\) -log level=verbose 
> file=~/centos_event_by_patient_session_event_timestamp_insert_only.log -node 
> 10.211.55.8
> INFO  08:00:07 Did not find Netty's native epoll transport in the classpath, 
> defaulting to NIO.
> INFO  08:00:08 Using data-center name 'datacenter1' for 
> DCAwareRoundRobinPolicy (if this is incorrect, please provide the correct 
> datacenter name with DCAwareRoundRobinPolicy constructor)
> INFO  08:00:08 New Cassandra host /10.211.55.8:9042 added
> Connected to cluster: Titan_DEV
> Datatacenter: datacenter1; Host: /10.211.55.8; Rack: rack1
> Created schema. Sleeping 1s for propagation.
> Generating batches with [1..1] partitions and [0..1] rows (of [10..120] 
> total rows in the partitions)
> com.datastax.driver.core.exceptions.InvalidQueryException: Batch too large
>   at 
> com.datastax.driver.core.exceptions.InvalidQueryException.copy(InvalidQueryException.java:35)
>   at 
> com.datastax.driver.core.DefaultResultSetFuture.extractCauseFromExecutionException(DefaultResultSetFuture.java:271)
>   at 
> com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:185)
>   at 
> com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:55)
>   at 
> org.apache.cassandra.stress.operations.userdefined.SchemaInsert$JavaDriverRun.run(SchemaInsert.java:87)
>   at 
> org.apache.cassandra.stress.Operation.timeWithRetry(Operation.java:159)
>   at 
> org.apache.cassandra.stress.operations.userdefined.SchemaInsert.run(SchemaInsert.java:119)
>   at 
> org.apache.cassandra.stress.StressAction$Consumer.run(StressAction.java:309)
> Caused by: com.datastax.driver.core.exceptions.InvalidQueryException: Batch 
> too large
>   at 
> com.datastax.driver.core.Responses$Error.asException(Responses.java:125)
>   at 
> com.datastax.driver.core.DefaultResultSetFuture.onSet(DefaultResultSetFuture.java:120)
>   at 
> com.datastax.driver.core.RequestHandler.setFinalResult(RequestHandler.java:186)
>   at 
> com.datastax.driver.core.RequestHandler.access$2300(RequestHandler.java:45)
>   at 
> com.

[jira] [Updated] (CASSANDRA-14934) Cassandra Too many open files .

2018-12-22 Thread Anonymous (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anonymous updated CASSANDRA-14934:
--
Since Version: 3.11.3
   Status: Testing  (was: Awaiting Feedback)

> Cassandra Too many open files .
> ---
>
> Key: CASSANDRA-14934
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14934
> Project: Cassandra
>  Issue Type: Bug
>  Components: Configuration
> Environment: This is in the production environment.
> We have observed 70k open files for the process in non-prod. 
>  
> Attached is the full log of the open files. 
>Reporter: Umashankar
>Assignee: Umashankar
>Priority: Major
>  Labels: performance
> Fix For: 3.10
>
>
> We have a 4-node cluster spanning 2 DCs. 
> We are seeing 
> Caused by: java.nio.file.FileSystemException: 
> /apps/cassandra/data/commitlog/CommitLog-6-1544453835569.log: Too many open 
> files
> We have increased the ulimit to 24k. Running 
> lsof | grep  
> shows roughly 3.3 million records. 
>  
> About 2.5 million of those records are TCP connections with socket 
> connection status ESTABLISHED or LISTEN. The connections are to the data 
> source on port 9042. 
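A tally like the one described above can be reproduced from raw lsof output with a short script (a sketch; the sample lines and the `(STATE)` suffix format are illustrative of default lsof formatting):

```python
import re
from collections import Counter

def count_tcp_states(lsof_lines):
    """Tally TCP connection states (ESTABLISHED, LISTEN, ...) from lsof output."""
    states = Counter()
    for line in lsof_lines:
        # lsof prints the TCP state in parentheses at the end of the NAME column.
        m = re.search(r'TCP .*\((\w+)\)\s*$', line)
        if m:
            states[m.group(1)] += 1
    return states

# Illustrative sample lines, not taken from the attached log.
sample = [
    "java 1234 cass 55u IPv4 0t0 TCP 10.0.0.1:51000->10.0.0.2:9042 (ESTABLISHED)",
    "java 1234 cass 56u IPv4 0t0 TCP *:9042 (LISTEN)",
    "java 1234 cass 57r REG  8,1 4096 /apps/cassandra/data/commitlog/CommitLog-6-1.log",
]
print(count_tcp_states(sample))  # Counter({'ESTABLISHED': 1, 'LISTEN': 1})
```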
>  
> Thanks
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-14900) DigestMismatchException log messages should be at TRACE

2018-12-22 Thread Anonymous (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anonymous updated CASSANDRA-14900:
--
Status: Ready to Commit  (was: Patch Available)

> DigestMismatchException log messages should be at TRACE
> ---
>
> Key: CASSANDRA-14900
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14900
> Project: Cassandra
>  Issue Type: Bug
>  Components: Configuration, Observability
>Reporter: Aleksandr Sorokoumov
>Assignee: Aleksandr Sorokoumov
>Priority: Minor
> Fix For: 3.0.x, 3.11.x
>
>
> DigestMismatchException log messages should probably be logged at TRACE. 
> These messages report normal digest mismatches, yet they include scary 
> stack traces:
> {noformat}
> DEBUG [ReadRepairStage:40] 2017-10-24 19:45:50,349  ReadCallback.java:242 - 
> Digest mismatch:
> org.apache.cassandra.service.DigestMismatchException: Mismatch for key 
> DecoratedKey(-786225366477494582, 
> 31302e33322e37382e31332d6765744469736b5574696c50657263656e742d736463) 
> (943070f62d72259e3c25be0c6f76e489 vs f4c7c7675c803e0028992e11e0bbc5a0)
> at 
> org.apache.cassandra.service.DigestResolver.compareResponses(DigestResolver.java:92)
>  ~[cassandra-all-3.11.0.1855.jar:3.11.0.1855]
> at 
> org.apache.cassandra.service.ReadCallback$AsyncRepairRunner.run(ReadCallback.java:233)
>  ~[cassandra-all-3.11.0.1855.jar:3.11.0.1855]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  [na:1.8.0_121]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_121]
> at 
> org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:81)
>  [cassandra-all-3.11.0.1855.jar:3.11.0.1855]
> at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_121]
> {noformat}






[jira] [Updated] (CASSANDRA-13365) Nodes entering GC loop, does not recover

2018-11-30 Thread Anonymous (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anonymous updated CASSANDRA-13365:
--
Reproduced In: 3.11.1, 3.10  (was: 3.10, 3.11.1)
   Tester: Marc Ende 
Since Version: 3.11.3  (was: 3.9)
   Status: Awaiting Feedback  (was: Open)

> Nodes entering GC loop, does not recover
> 
>
> Key: CASSANDRA-13365
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13365
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: 34-node cluster over 4 DCs
> Linux CentOS 7.2 x86
> Mix of 64GB/128GB RAM / node
> Mix of 32/40 hardware threads / node, Xeon ~2.4Ghz
> High read volume, low write volume, occasional sstable bulk loading
>Reporter: Mina Naguib
>Priority: Major
>
> Over the last week we've been observing two related problems affecting our 
> Cassandra cluster.
> Problem 1: one to a few nodes per DC enter a GC loop and do not recover.
> Checking the heap usage stats, there's a sudden jump of 1-3GB. Some nodes 
> recover, but some don't and log this:
> {noformat}
> 2017-03-21T11:23:02.957-0400: 54099.519: [Full GC (Allocation Failure)  
> 13G->11G(14G), 29.4127307 secs]
> 2017-03-21T11:23:45.270-0400: 54141.833: [Full GC (Allocation Failure)  
> 13G->12G(14G), 28.1561881 secs]
> 2017-03-21T11:24:20.307-0400: 54176.869: [Full GC (Allocation Failure)  
> 13G->13G(14G), 27.7019501 secs]
> 2017-03-21T11:24:50.528-0400: 54207.090: [Full GC (Allocation Failure)  
> 13G->13G(14G), 27.1372267 secs]
> 2017-03-21T11:25:19.190-0400: 54235.752: [Full GC (Allocation Failure)  
> 13G->13G(14G), 27.0703975 secs]
> 2017-03-21T11:25:46.711-0400: 54263.273: [Full GC (Allocation Failure)  
> 13G->13G(14G), 27.3187768 secs]
> 2017-03-21T11:26:15.419-0400: 54291.981: [Full GC (Allocation Failure)  
> 13G->13G(14G), 26.9493405 secs]
> 2017-03-21T11:26:43.399-0400: 54319.961: [Full GC (Allocation Failure)  
> 13G->13G(14G), 27.5222085 secs]
> 2017-03-21T11:27:11.383-0400: 54347.945: [Full GC (Allocation Failure)  
> 13G->13G(14G), 27.1769581 secs]
> 2017-03-21T11:27:40.174-0400: 54376.737: [Full GC (Allocation Failure)  
> 13G->13G(14G), 27.4639031 secs]
> 2017-03-21T11:28:08.946-0400: 54405.508: [Full GC (Allocation Failure)  
> 13G->13G(14G), 30.3480523 secs]
> 2017-03-21T11:28:40.117-0400: 54436.680: [Full GC (Allocation Failure)  
> 13G->13G(14G), 27.8220513 secs]
> 2017-03-21T11:29:08.459-0400: 54465.022: [Full GC (Allocation Failure)  
> 13G->13G(14G), 27.4691271 secs]
> 2017-03-21T11:29:37.114-0400: 54493.676: [Full GC (Allocation Failure)  
> 13G->13G(14G), 27.0275733 secs]
> 2017-03-21T11:30:04.635-0400: 54521.198: [Full GC (Allocation Failure)  
> 13G->13G(14G), 27.1902627 secs]
> 2017-03-21T11:30:32.114-0400: 54548.676: [Full GC (Allocation Failure)  
> 13G->13G(14G), 27.8872850 secs]
> 2017-03-21T11:31:01.430-0400: 54577.993: [Full GC (Allocation Failure)  
> 13G->13G(14G), 27.1609706 secs]
> 2017-03-21T11:31:29.024-0400: 54605.587: [Full GC (Allocation Failure)  
> 13G->13G(14G), 27.3635138 secs]
> 2017-03-21T11:31:57.303-0400: 54633.865: [Full GC (Allocation Failure)  
> 13G->13G(14G), 27.4143510 secs]
> 2017-03-21T11:32:25.110-0400: 54661.672: [Full GC (Allocation Failure)  
> 13G->13G(14G), 27.8595986 secs]
> 2017-03-21T11:32:53.922-0400: 54690.485: [Full GC (Allocation Failure)  
> 13G->13G(14G), 27.5242543 secs]
> 2017-03-21T11:33:21.867-0400: 54718.429: [Full GC (Allocation Failure)  
> 13G->13G(14G), 30.8930130 secs]
> 2017-03-21T11:33:53.712-0400: 54750.275: [Full GC (Allocation Failure)  
> 13G->13G(14G), 27.6523013 secs]
> 2017-03-21T11:34:21.760-0400: 54778.322: [Full GC (Allocation Failure)  
> 13G->13G(14G), 27.3030198 secs]
> 2017-03-21T11:34:50.073-0400: 54806.635: [Full GC (Allocation Failure)  
> 13G->13G(14G), 27.1594154 secs]
> 2017-03-21T11:35:17.743-0400: 54834.306: [Full GC (Allocation Failure)  
> 13G->13G(14G), 27.3766949 secs]
> 2017-03-21T11:35:45.797-0400: 54862.360: [Full GC (Allocation Failure)  
> 13G->13G(14G), 27.5756770 secs]
> 2017-03-21T11:36:13.816-0400: 54890.378: [Full GC (Allocation Failure)  
> 13G->13G(14G), 27.5541813 secs]
> 2017-03-21T11:36:41.926-0400: 54918.488: [Full GC (Allocation Failure)  
> 13G->13G(14G), 33.7510103 secs]
> 2017-03-21T11:37:16.132-0400: 54952.695: [Full GC (Allocation Failure)  
> 13G->13G(14G), 27.4856611 secs]
> 2017-03-21T11:37:44.454-0400: 54981.017: [Full GC (Allocation Failure)  
> 13G->13G(14G), 28.1269335 secs]
> 2017-03-21T11:38:12.774-0400: 55009.337: [Full GC (Allocation Failure)  
> 13G->13G(14G), 27.7830448 secs]
> 2017-03-21T11:38:40.840-0400: 55037.402: [Full GC (Allocation Failure)  
> 13G->13G(14G), 27.3527326 secs]
> 2017-03-21T11:39:08.610-0400: 55065.173: [Full GC (Allocation Failure)  
> 13G->13G(14G), 27.5828941 secs]
> 2017-03-21T11:39:36.833-0400: 55093.3

[jira] [Updated] (CASSANDRA-11194) materialized views - support explode() on collections

2018-11-19 Thread Anonymous (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anonymous updated CASSANDRA-11194:
--
Status: Open  (was: Awaiting Feedback)

> materialized views - support explode() on collections
> -
>
> Key: CASSANDRA-11194
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11194
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Materialized Views
>Reporter: Jon Haddad
>Priority: Major
>
> I'm working on a database design to model a product catalog.  Products can 
> belong to categories.  Categories can belong to multiple sub categories 
> (think about Amazon's complex taxonomies).
> My category table would look like this, giving me individual categories & 
> their parents:
> {code}
> CREATE TABLE category (
> category_id uuid primary key,
> name text,
> parents set<uuid>
> );
> {code}
> To get a list of all the children of a particular category, I need a table 
> that looks like the following:
> {code}
> CREATE TABLE categories_by_parent (
> parent_id uuid,
> category_id uuid,
> name text,
> primary key (parent_id, category_id)
> );
> {code}
> The important thing to note here is that a single category can have multiple 
> parents.
> I'd like to propose support for collections in materialized views via an 
> explode() function that would create 1 row per item in the collection.  For 
> instance, I'll insert the following 3 rows (2 parents, 1 child) into the 
> category table:
> {code}
> insert into category (category_id, name, parents) values 
> (009fe0e1-5b09-4efc-a92d-c03720324a4f, 'Parent', null);
> insert into category (category_id, name, parents) values 
> (1f2914de-0adf-4afc-b7ad-ddd8dc876ab1, 'Parent2', null);
> insert into category (category_id, name, parents) values 
> (1f93bc07-9874-42a5-a7d1-b741dc9c509c, 'Child', 
> {009fe0e1-5b09-4efc-a92d-c03720324a4f, 1f2914de-0adf-4afc-b7ad-ddd8dc876ab1 
> });
> cqlsh:test> select * from category;
>  category_id  | name| parents
> --+-+--
>  009fe0e1-5b09-4efc-a92d-c03720324a4f |  Parent | 
> null
>  1f2914de-0adf-4afc-b7ad-ddd8dc876ab1 | Parent2 | 
> null
>  1f93bc07-9874-42a5-a7d1-b741dc9c509c |   Child | 
> {009fe0e1-5b09-4efc-a92d-c03720324a4f, 1f2914de-0adf-4afc-b7ad-ddd8dc876ab1}
> (3 rows)
> {code}
> Given the following CQL to select the child category, utilizing an explode 
> function, I would expect to get back 2 rows, 1 for each parent:
> {code}
> select category_id, name, explode(parents) as parent_id from category where 
> category_id = 1f93bc07-9874-42a5-a7d1-b741dc9c509c;
> category_id  | name  | parent_id
> --+---+--
> 1f93bc07-9874-42a5-a7d1-b741dc9c509c | Child | 
> 009fe0e1-5b09-4efc-a92d-c03720324a4f
> 1f93bc07-9874-42a5-a7d1-b741dc9c509c | Child | 
> 1f2914de-0adf-4afc-b7ad-ddd8dc876ab1
> (2 rows)
> {code}
> This functionality would ideally apply to materialized views, since the 
> ability to control partitioning here would allow us to efficiently query our 
> MV for all categories belonging to a parent in a complex taxonomy.
> {code}
> CREATE MATERIALIZED VIEW categories_by_parent as
> SELECT explode(parents) as parent_id,
> category_id, name FROM category WHERE parents IS NOT NULL
> {code}
> The explode() function is available in Spark Dataframes and my proposed 
> function has the same behavior: 
> http://spark.apache.org/docs/latest/api/python/pyspark.sql.html#pyspark.sql.functions.explode
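The proposed semantics can be sketched in a few lines: one output row per element of the collection column (a hypothetical helper, not Cassandra code; the UUIDs are the ones from the example above):

```python
def explode(rows, collection_col, exploded_col):
    """Yield one output row per element of each row's collection column.

    Rows whose collection is NULL or empty produce no output, matching the
    'WHERE parents IS NOT NULL' filter in the proposed materialized view.
    """
    for row in rows:
        for element in (row.get(collection_col) or ()):
            # Copy the row, dropping the collection and adding the element.
            out = {k: v for k, v in row.items() if k != collection_col}
            out[exploded_col] = element
            yield out

child = {
    "category_id": "1f93bc07-9874-42a5-a7d1-b741dc9c509c",
    "name": "Child",
    "parents": {"009fe0e1-5b09-4efc-a92d-c03720324a4f",
                "1f2914de-0adf-4afc-b7ad-ddd8dc876ab1"},
}
rows = list(explode([child], "parents", "parent_id"))
print(len(rows))  # 2 -- one row per parent, as in the expected result above
```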






[jira] [Updated] (CASSANDRA-11194) materialized views - support explode() on collections

2018-11-19 Thread Anonymous (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anonymous updated CASSANDRA-11194:
--
Status: Awaiting Feedback  (was: Open)







[jira] [Updated] (CASSANDRA-13433) RPM distribution improvements and known issues

2018-11-12 Thread Anonymous (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anonymous updated CASSANDRA-13433:
--
Reproduced In: 3.5
Since Version: 3.10
   Status: Awaiting Feedback  (was: Open)

> RPM distribution improvements and known issues
> --
>
> Key: CASSANDRA-13433
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13433
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Packaging
>Reporter: Stefan Podkowinski
>Assignee: Stefan Podkowinski
>Priority: Major
> Attachments: cassandra-3.9-centos6.patch
>
>
> Starting with CASSANDRA-13252, new releases will be provided as both official 
> RPM and Debian packages. While the Debian packages are already well 
> established with our user base, the RPMs have just been released for the 
> first time and still require some attention. 
> Feel free to discuss RPM-related issues in this ticket and open a sub-task to 
> file a bug report. 
> Please note that native systemd support will be implemented with 
> CASSANDRA-13148 and this is not strictly an RPM-specific issue. We still 
> intend to offer non-systemd support based on the already working init scripts 
> that we ship. Therefore the first step is to make use of systemd backward 
> compatibility for SysV/LSB scripts, so we can provide RPMs for both systemd 
> and non-systemd environments.






[jira] [Updated] (CASSANDRA-14810) Upgrade dtests to pytest-3.8

2018-11-07 Thread Anonymous (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anonymous updated CASSANDRA-14810:
--
Status: Ready to Commit  (was: Patch Available)

> Upgrade dtests to pytest-3.8
> 
>
> Key: CASSANDRA-14810
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14810
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Testing
>Reporter: Stefan Podkowinski
>Assignee: Thomas Pouget-Abadie
>Priority: Minor
>  Labels: lhf
> Attachments: 14810-master.txt
>
>
> The [dtest project|https://github.com/apache/cassandra-dtest] uses pytest as 
> its test runner of choice for executing tests on builds.apache.org or 
> CircleCI. The pytest dependency was recently upgraded to 3.6, but could not 
> be upgraded to the most recent 3.8 version, due to the issues described below.
> Before test execution, the {{run_dtests.py}} script gathers a list of all 
> tests:
>  {{./run_dtests.py --dtest-print-tests-only}}
> Afterwards pytest can be started with any of the output lines as an argument. 
> With pytest-3.8, however, the output format changed, preventing pytest from 
> finding the test:
>  {{pytest 
> upgrade_tests/upgrade_supercolumns_test.py::TestSCUpgrade::test_upgrade_super_columns_through_limited_versions}}
>  vs
>  {{pytest 
> upgrade_supercolumns_test.py::TestSCUpgrade::test_upgrade_super_columns_through_limited_versions}}
> The underlying issue appears to be a change in the {{pytest 
> --collect-only}} output, consumed by {{run_dtests.py}}, which now includes an 
> additional element that needs to be parsed as well to arrive at the path as 
> we did before. We would have to parse the new output and reassemble the 
> correct paths, so we can use run_dtests.py as we did before with pytest 3.6.






[jira] [Updated] (CASSANDRA-14810) Upgrade dtests to pytest-3.8

2018-11-07 Thread Anonymous (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anonymous updated CASSANDRA-14810:
--
Status: Open  (was: Ready to Commit)







[jira] [Updated] (CASSANDRA-14566) Cache CSM.onlyPurgeRepairedTombstones()

2018-10-24 Thread Anonymous (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14566?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anonymous updated CASSANDRA-14566:
--
Status: Ready to Commit  (was: Patch Available)

> Cache CSM.onlyPurgeRepairedTombstones()
> ---
>
> Key: CASSANDRA-14566
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14566
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Compaction
>Reporter: Stefan Podkowinski
>Assignee: Thomas Pouget-Abadie
>Priority: Minor
>  Labels: lhf, performance
> Attachments: 14566-3.11.txt
>
>
> We currently call {{CompactionStrategyManager.onlyPurgeRepairedTombstones()}} 
> *a lot* during compaction; I think at least once for every key. So we should 
> probably cache the value instead of constantly reading from a volatile and 
> calling parseBoolean.
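A minimal sketch of the suggested shape of the fix (hypothetical stand-ins, not the actual patch): read the option once when the compaction starts, rather than re-reading the volatile and re-parsing the boolean for every key:

```python
class CompactionStrategyManagerStub:
    """Stand-in for CSM: each call simulates a volatile read plus parseBoolean."""
    def __init__(self, raw_value):
        self.raw_value = raw_value
        self.reads = 0

    def only_purge_repaired_tombstones(self):
        self.reads += 1
        return self.raw_value.lower() == "true"  # parseBoolean analogue

def compact(csm, keys):
    # Hoist the flag out of the per-key loop: one read per compaction
    # instead of one read per key.
    purge_repaired_only = csm.only_purge_repaired_tombstones()
    for _ in keys:
        pass  # per-key purge logic would consult purge_repaired_only here

csm = CompactionStrategyManagerStub("true")
compact(csm, range(100000))
print(csm.reads)  # 1 instead of 100000
```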






[jira] [Updated] (CASSANDRA-13010) nodetool compactionstats should say which disk a compaction is writing to

2018-10-23 Thread Anonymous (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anonymous updated CASSANDRA-13010:
--
Status: Open  (was: Ready to Commit)

> nodetool compactionstats should say which disk a compaction is writing to
> -
>
> Key: CASSANDRA-13010
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13010
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Compaction, Tools
>Reporter: Jon Haddad
>Assignee: Alex Lourie
>Priority: Major
>  Labels: 4.0-feature-freeze-review-requested, lhf
> Attachments: cleanup.png, multiple operations.png
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-13010) nodetool compactionstats should say which disk a compaction is writing to

2018-10-23 Thread Anonymous (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anonymous updated CASSANDRA-13010:
--
Status: Ready to Commit  (was: Patch Available)








[jira] [Updated] (CASSANDRA-14361) Allow SimpleSeedProvider to resolve multiple IPs per DNS name

2018-10-13 Thread Anonymous (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anonymous updated CASSANDRA-14361:
--
Status: Ready to Commit  (was: Patch Available)

> Allow SimpleSeedProvider to resolve multiple IPs per DNS name
> -
>
> Key: CASSANDRA-14361
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14361
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Configuration
>Reporter: Ben Bromhead
>Assignee: Ben Bromhead
>Priority: Minor
> Fix For: 4.0
>
>
> Currently SimpleSeedProvider can accept a comma-separated string of IPs or 
> hostnames as the set of Cassandra seeds. Hostnames are resolved via 
> InetAddress.getByName, which will only return the first IP address associated 
> with an A or CNAME record.
> By changing to InetAddress.getAllByName, existing behavior is preserved, but 
> now Cassandra can discover multiple IP addresses per record, allowing seed 
> discovery by DNS to be a little easier.
> Some examples of improved workflows with this change include: 
>  * specifying the DNS name of a headless service in Kubernetes, which will 
> resolve to all IP addresses of pods within that service 
>  * seed discovery for multi-region clusters via AWS Route 53, Azure DNS, etc.
>  * other common DNS service discovery mechanisms
> The only behavior this is likely to impact would be where users are relying 
> on the fact that getByName only returns a single IP address.
> I can't imagine any scenario where that is a sane choice. Even when that 
> choice has been made, it only impacts the first startup of Cassandra and 
> would not be on any critical path.
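The behavioural difference can be illustrated with Python's resolver (a sketch: `socket.gethostbyname` plays the role of `InetAddress.getByName`, returning a single address, while `socket.getaddrinfo` plays the role of `getAllByName`, returning every address behind the name):

```python
import socket

def resolve_one(name):
    # First IPv4 address only -- analogous to InetAddress.getByName.
    return socket.gethostbyname(name)

def resolve_all(name):
    # Every address behind the name -- analogous to InetAddress.getAllByName.
    infos = socket.getaddrinfo(name, None, proto=socket.IPPROTO_TCP)
    return sorted({info[4][0] for info in infos})

# "localhost" resolves without network access; a seed hostname backed by
# several A records would return several entries from resolve_all().
seeds = resolve_all("localhost")
print(resolve_one("localhost") in seeds)  # True: the single result is a subset
```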






[jira] [Updated] (CASSANDRA-14346) Scheduled Repair in Cassandra

2018-09-10 Thread Anonymous (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14346?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anonymous updated CASSANDRA-14346:
--
Status: Ready to Commit  (was: Patch Available)

> Scheduled Repair in Cassandra
> -
>
> Key: CASSANDRA-14346
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14346
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Repair
>Reporter: Joseph Lynch
>Assignee: Joseph Lynch
>Priority: Major
>  Labels: 4.0-feature-freeze-review-requested, 
> CommunityFeedbackRequested
> Fix For: 4.x
>
> Attachments: ScheduledRepairV1_20180327.pdf
>
>
> There have been many attempts to automate repair in Cassandra, which makes 
> sense given that it is necessary to give our users eventual consistency. Most 
> recently CASSANDRA-10070, CASSANDRA-8911 and CASSANDRA-13924 have all looked 
> for ways to solve this problem.
> At Netflix we've built a scheduled repair service within Priam (our sidecar), 
> which we spoke about last year at NGCC. Given the positive feedback at NGCC 
> we focussed on getting it production ready and have now been using it in 
> production to repair hundreds of clusters, tens of thousands of nodes, and 
> petabytes of data for the past six months. Also based on feedback at NGCC we 
> have invested effort in figuring out how to integrate this natively into 
> Cassandra rather than open sourcing it as an external service (e.g. in Priam).
> As such, [~vinaykumarcse] and I would like to re-work and merge our 
> implementation into Cassandra, and have created a [design 
> document|https://docs.google.com/document/d/1RV4rOrG1gwlD5IljmrIq_t45rz7H3xs9GbFSEyGzEtM/edit?usp=sharing]
>  showing how we plan to make it happen, including the user interface.
> As we work on the code migration from Priam to Cassandra, any feedback would 
> be greatly appreciated about the interface or v1 implementation features. I 
> have tried to call out in the document features which we explicitly consider 
> future work (as well as a path forward to implement them in the future) 
> because I would very much like to get this done before the 4.0 merge window 
> closes, and to do that I think aggressively pruning scope is going to be a 
> necessity.






[jira] [Updated] (CASSANDRA-14098) Potential Integer Overflow

2018-08-31 Thread Anonymous (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anonymous updated CASSANDRA-14098:
--
Status: Awaiting Feedback  (was: Open)

> Potential Integer Overflow
> --
>
> Key: CASSANDRA-14098
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14098
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: songwanging
>Priority: Trivial
>  Labels: lhf
> Attachments: 14098-3.0.txt
>
>
> Our tool DeepTect has detected a potential integer overflow: 
> Path: cassandra/src/java/org/apache/cassandra/service/StorageService.java
> {code:java}
> ...
> long totalRowCountEstimate = cfs.estimatedKeysForRange(range);
> ...
>  int splitCount = Math.max(1, Math.min(maxSplitCount, 
> (int)(totalRowCountEstimate / keysPerSplit)));
> {code}
> In the above code snippet, "totalRowCountEstimate" is a long variable and 
> "keysPerSplit" is an int variable. If "totalRowCountEstimate" is 
> sufficiently large, casting "(totalRowCountEstimate / keysPerSplit)" to an 
> int overflows, yielding a wrapped-around (possibly negative) split count.
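As a standalone illustration (hypothetical class and method names, not the actual patch in 14098-3.0.txt), the wrap-around and one possible fix, performing the min in long arithmetic before the narrowing cast, can be sketched as:

```java
public class SplitCountDemo {
    // Buggy shape: the long is cast to int *before* clamping, so a huge
    // estimate wraps around (Long.MAX_VALUE -> -1) and the clamp sees garbage.
    static int buggySplitCount(long totalRowCountEstimate, int keysPerSplit, int maxSplitCount) {
        return Math.max(1, Math.min(maxSplitCount, (int) (totalRowCountEstimate / keysPerSplit)));
    }

    // Safer shape: clamp in long arithmetic first; the cast is then applied
    // to a value already bounded by maxSplitCount, so it cannot overflow.
    static int safeSplitCount(long totalRowCountEstimate, int keysPerSplit, int maxSplitCount) {
        return (int) Math.max(1, Math.min(maxSplitCount, totalRowCountEstimate / keysPerSplit));
    }

    public static void main(String[] args) {
        System.out.println(buggySplitCount(Long.MAX_VALUE, 1, 128)); // prints 1 (wrapped to -1, then clamped)
        System.out.println(safeSplitCount(Long.MAX_VALUE, 1, 128));  // prints 128
    }
}
```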






[jira] [Updated] (CASSANDRA-14098) Potential Integer Overflow

2018-08-31 Thread Anonymous (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anonymous updated CASSANDRA-14098:
--
Status: Open  (was: Ready to Commit)

> Potential Integer Overflow
> --
>
> Key: CASSANDRA-14098
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14098
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: songwanging
>Priority: Trivial
>  Labels: lhf
> Attachments: 14098-3.0.txt
>
>
> Our tool DeepTect has detected a potential integer overflow: 
> Path: cassandra/src/java/org/apache/cassandra/service/StorageService.java
> {code:java}
> ...
> long totalRowCountEstimate = cfs.estimatedKeysForRange(range);
> ...
>  int splitCount = Math.max(1, Math.min(maxSplitCount, 
> (int)(totalRowCountEstimate / keysPerSplit)));
> {code}
> In the above code snippet, "totalRowCountEstimate" is a long variable and 
> "keysPerSplit" is an int variable. If "totalRowCountEstimate" is 
> sufficiently large, casting "(totalRowCountEstimate / keysPerSplit)" to an 
> int overflows, yielding a wrapped-around (possibly negative) split count.






[jira] [Updated] (CASSANDRA-14098) Potential Integer Overflow

2018-08-31 Thread Anonymous (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anonymous updated CASSANDRA-14098:
--
Status: Ready to Commit  (was: Patch Available)

> Potential Integer Overflow
> --
>
> Key: CASSANDRA-14098
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14098
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: songwanging
>Priority: Trivial
>  Labels: lhf
> Attachments: 14098-3.0.txt
>
>
> Our tool DeepTect has detected a potential integer overflow: 
> Path: cassandra/src/java/org/apache/cassandra/service/StorageService.java
> {code:java}
> ...
> long totalRowCountEstimate = cfs.estimatedKeysForRange(range);
> ...
>  int splitCount = Math.max(1, Math.min(maxSplitCount, 
> (int)(totalRowCountEstimate / keysPerSplit)));
> {code}
> In the above code snippet, "totalRowCountEstimate" is a long variable and 
> "keysPerSplit" is an int variable. If "totalRowCountEstimate" is 
> sufficiently large, casting "(totalRowCountEstimate / keysPerSplit)" to an 
> int overflows, yielding a wrapped-around (possibly negative) split count.






[jira] [Updated] (CASSANDRA-14681) SafeMemoryWriterTest doesn't compile on trunk

2018-08-30 Thread Anonymous (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anonymous updated CASSANDRA-14681:
--
Status: Ready to Commit  (was: Patch Available)

> SafeMemoryWriterTest doesn't compile on trunk
> -
>
> Key: CASSANDRA-14681
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14681
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Robert Stupp
>Assignee: Robert Stupp
>Priority: Trivial
> Fix For: 4.0
>
>
> {{SafeMemoryWriterTest}} references {{sun.misc.VM}}, which doesn't exist in 
> Java 11, so the build fails.
> Proposed patch makes the test work against Java 8 + 11.
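One common way to make such code run on both Java 8 and 11 (a hedged sketch, not necessarily what the attached patch does; the class name here is invented) is to look up {{sun.misc.VM}} reflectively and fall back when the class is absent:

```java
public class MaxDirectMemory {
    // On Java 8, sun.misc.VM.maxDirectMemory() reports the direct-memory cap.
    // On Java 11 the class is gone, so reflection fails and we fall back to
    // Runtime.maxMemory(), which by default equals the direct-memory cap.
    static long maxDirectMemory() {
        try {
            Class<?> vm = Class.forName("sun.misc.VM");
            return (Long) vm.getMethod("maxDirectMemory").invoke(null);
        } catch (ReflectiveOperationException | LinkageError e) {
            return Runtime.getRuntime().maxMemory();
        }
    }

    public static void main(String[] args) {
        System.out.println(maxDirectMemory() > 0); // prints true on both 8 and 11
    }
}
```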






[jira] [Updated] (CASSANDRA-9289) Recover from unknown table when deserializing internode messages

2018-08-08 Thread Anonymous (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anonymous updated CASSANDRA-9289:
-
Status: Ready to Commit  (was: Patch Available)

> Recover from unknown table when deserializing internode messages
> 
>
> Key: CASSANDRA-9289
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9289
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Tyler Hobbs
>Assignee: ZhaoYang
>Priority: Minor
>  Labels: lhf
> Fix For: 3.11.x
>
>
> As discussed in CASSANDRA-9136, it would be nice to gracefully recover from 
> seeing an unknown table in a message from another node.  Currently, we close 
> the connection and reconnect, which can cause other concurrent queries to 
> fail.






[jira] [Updated] (CASSANDRA-14419) Resume compressed hints delivery broken

2018-06-16 Thread Anonymous (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14419?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anonymous updated CASSANDRA-14419:
--
Status: Ready to Commit  (was: Patch Available)

> Resume compressed hints delivery broken
> --
>
> Key: CASSANDRA-14419
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14419
> Project: Cassandra
>  Issue Type: Bug
>  Components: Hints
>Reporter: Tommy Stendahl
>Assignee: Tommy Stendahl
>Priority: Blocker
> Fix For: 3.0.17
>
>
> We are using Cassandra 3.0.15 with compressed hints, but if hint delivery 
> is interrupted, resuming it fails.
> {code}
> 2018-04-04T13:27:48.948+0200 ERROR [HintsDispatcher:14] 
> CassandraDaemon.java:207 Exception in thread Thread[HintsDispatcher:14,1,main]
> java.lang.IllegalArgumentException: Unable to seek to position 1789149057 in 
> /var/lib/cassandra/hints/9592c860-1054-4c60-b3b8-faa9adc6d769-1522838912649-1.hints
>  (118259682 bytes) in read-only mode
>     at 
> org.apache.cassandra.io.util.RandomAccessReader.seek(RandomAccessReader.java:287)
>  ~[apache-cassandra-clientutil-3.0.15.jar:3.0.15]
>     at org.apache.cassandra.hints.HintsReader.seek(HintsReader.java:114) 
> ~[apache-cassandra-3.0.15.jar:3.0.15]
>     at 
> org.apache.cassandra.hints.HintsDispatcher.seek(HintsDispatcher.java:83) 
> ~[apache-cassandra-3.0.15.jar:3.0.15]
>     at 
> org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.deliver(HintsDispatchExecutor.java:263)
>  ~[apache-cassandra-3.0.15.jar:3.0.15]
>     at 
> org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.dispatch(HintsDispatchExecutor.java:248)
>  ~[apache-cassandra-3.0.15.jar:3.0.15]
>     at 
> org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.dispatch(HintsDispatchExecutor.java:226)
>  ~[apache-cassandra-3.0.15.jar:3.0.15]
>     at 
> org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.run(HintsDispatchExecutor.java:205)
>  ~[apache-cassandra-3.0.15.jar:3.0.15]
>     at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_152]
>     at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> ~[na:1.8.0_152]
>     at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  ~[na:1.8.0_152]
>     at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  [na:1.8.0_152]
>     at 
> org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:79)
>  [apache-cassandra-3.0.15.jar:3.0.15]
>     at java.lang.Thread.run(Thread.java:748) ~[na:1.8.0_152]
> {code}
>  I think the problem is similar to CASSANDRA-11960.






[jira] [Updated] (CASSANDRA-14344) Support filtering using IN restrictions

2018-05-23 Thread Anonymous (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anonymous updated CASSANDRA-14344:
--
Status: Ready to Commit  (was: Patch Available)

> Support filtering using IN restrictions
> ---
>
> Key: CASSANDRA-14344
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14344
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Dikang Gu
>Assignee: Venkata Harikrishna Nukala
>Priority: Major
> Attachments: 14344-trunk.txt
>
>
> Support IN filter query like this:
>  
> CREATE TABLE ks1.t1 (
>     key int,
>     col1 int,
>     col2 int,
>     value int,
>     PRIMARY KEY (key, col1, col2)
> ) WITH CLUSTERING ORDER BY (col1 ASC, col2 ASC)
>  
> cqlsh:ks1> select * from t1 where key = 1 and col2 in (1) allow filtering;
>  
>  key | col1 | col2 | value
> -+--+--+---
>    1 |    1 |    1 |     1
>    1 |    2 |    1 |     3
>  
> (2 rows)
> cqlsh:ks1> select * from t1 where key = 1 and col2 in (1, 2) allow filtering;
> *{color:#ff}InvalidRequest: Error from server: code=2200 [Invalid query] 
> message="IN restrictions are not supported on indexed columns"{color}*
> cqlsh:ks1>






[jira] [Updated] (CASSANDRA-14464) stop-server.bat -p ../pid.txt -f command not working on windows 2016

2018-05-23 Thread Anonymous (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anonymous updated CASSANDRA-14464:
--
Reproduced In: 3.11.2
Since Version: 3.11.2
   Status: Awaiting Feedback  (was: Open)

> stop-server.bat -p ../pid.txt -f command not working on windows 2016
> 
>
> Key: CASSANDRA-14464
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14464
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Shyam Phirke
>Priority: Critical
>
> Steps to reproduce:
> 1. Copy and extract cassandra binaries on windows 2016 machine
> 2. Start cassandra in non-legacy mode
> 3. Check pid of cassandra in task manager and compare it with in pid.txt
> 4. Now stop cassandra using command stop-server.bat -p ../pid.txt -f
> Expected:
> After executing \bin:\> stop-server.bat -p 
> ../pid.txt -f, the Cassandra 
> process listed in pid.txt should be killed.
>  
> Actual:
> After executing the above stop command, the Cassandra process listed in 
> pid.txt is killed, but a new process is created with a new pid, and pid.txt 
> is not updated with the new pid. This new process should not be created.
>  
> Please comment on this issue if more details required.
> I am using cassandra 3.11.2.
>  
> This issue impacts me significantly: because of the new process being 
> created, my application's uninstallation is affected.
>  






[jira] [Updated] (CASSANDRA-14398) Client TOPOLOGY_CHANGE messages have wrong port.

2018-04-19 Thread Anonymous (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anonymous updated CASSANDRA-14398:
--
Status: In Progress  (was: Ready to Commit)

> Client TOPOLOGY_CHANGE  messages have wrong port.
> -
>
> Key: CASSANDRA-14398
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14398
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Greg Bestland
>Assignee: Ariel Weisberg
>Priority: Blocker
>  Labels: client-impacting
> Fix For: 4.0
>
>
> Summary:
> TOPOLOGY_CHANGE events received by the client (driver) on C* 4.0 (trunk) 
> contain the storage port (7000) rather than the native port (9042). I 
> believe this is due to changes that went in for CASSANDRA-7544.
> I was able to track it down to this specific call here.
>  
> [https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/service/StorageService.java#L1703]
> We need an accurate port number from this Topology change event, otherwise we 
> won't be able to correlate node up event against the system.peers_v2 table 
> accurately.
> C* version I'm testing against : 4.0.x (trunk)
> Steps to reproduce:
> 1. create a single node, cluster, in this case I'm using ccm.
> {code:java}
> ccm create 400-1 --install-dir=/Users/gregbestland/git/cassandra
> ccm populate -n 1
> ccm start
> {code}
> 2. Connect the java driver, and check metadata.
>  see that there is one node with the correct native port (9042).
> 3. Add a brand new node:
> {code:java}
> ccm add node2 -i 127.0.0.2
> ccm node2 start
> {code}
> 4. The incoming topology change message to the client will have the storage 
> port rather than the native port.






[jira] [Updated] (CASSANDRA-14398) Client TOPOLOGY_CHANGE messages have wrong port.

2018-04-19 Thread Anonymous (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anonymous updated CASSANDRA-14398:
--
Status: Ready to Commit  (was: Patch Available)

> Client TOPOLOGY_CHANGE  messages have wrong port.
> -
>
> Key: CASSANDRA-14398
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14398
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Greg Bestland
>Assignee: Ariel Weisberg
>Priority: Blocker
>  Labels: client-impacting
> Fix For: 4.0
>
>
> Summary:
> TOPOLOGY_CHANGE events received by the client (driver) on C* 4.0 (trunk) 
> contain the storage port (7000) rather than the native port (9042). I 
> believe this is due to changes that went in for CASSANDRA-7544.
> I was able to track it down to this specific call here.
>  
> [https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/service/StorageService.java#L1703]
> We need an accurate port number from this Topology change event, otherwise we 
> won't be able to correlate node up event against the system.peers_v2 table 
> accurately.
> C* version I'm testing against : 4.0.x (trunk)
> Steps to reproduce:
> 1. create a single node, cluster, in this case I'm using ccm.
> {code:java}
> ccm create 400-1 --install-dir=/Users/gregbestland/git/cassandra
> ccm populate -n 1
> ccm start
> {code}
> 2. Connect the java driver, and check metadata.
>  see that there is one node with the correct native port (9042).
> 3. Add a brand new node:
> {code:java}
> ccm add node2 -i 127.0.0.2
> ccm node2 start
> {code}
> 4. The incoming topology change message to the client will have the storage 
> port rather than the native port.






[jira] [Updated] (CASSANDRA-13304) Add checksumming to the native protocol

2018-04-19 Thread Anonymous (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anonymous updated CASSANDRA-13304:
--
Status: In Progress  (was: Awaiting Feedback)

> Add checksumming to the native protocol
> ---
>
> Key: CASSANDRA-13304
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13304
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Michael Kjellman
>Assignee: Michael Kjellman
>Priority: Blocker
>  Labels: client-impacting
> Fix For: 4.x
>
> Attachments: 13304_v1.diff, boxplot-read-throughput.png, 
> boxplot-write-throughput.png
>
>
> The native binary transport implementation doesn't include checksums. This 
> makes it highly susceptible to silently inserting corrupted data either due 
> to hardware issues causing bit flips on the sender/client side, C*/receiver 
> side, or network in between.
> Attaching an implementation that makes checksum'ing mandatory (assuming both 
> client and server know about a protocol version that supports checksums) -- 
> and also adds checksumming to clients that request compression.
> The serialized format looks something like this:
> {noformat}
>  *  1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 3 3
>  *  0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
>  * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>  * |  Number of Compressed Chunks  | Compressed Length (e1)/
>  * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>  * /  Compressed Length cont. (e1) |Uncompressed Length (e1)   /
>  * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>  * | Uncompressed Length cont. (e1)| CRC32 Checksum of Lengths (e1)|
>  * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>  * | Checksum of Lengths cont. (e1)|Compressed Bytes (e1)+//
>  * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>  * |  CRC32 Checksum (e1) ||
>  * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>  * |Compressed Length (e2) |
>  * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>  * |   Uncompressed Length (e2)|
>  * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>  * |CRC32 Checksum of Lengths (e2) |
>  * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>  * | Compressed Bytes (e2)   +//
>  * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>  * |  CRC32 Checksum (e2) ||
>  * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>  * |Compressed Length (en) |
>  * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>  * |   Uncompressed Length (en)|
>  * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>  * |CRC32 Checksum of Lengths (en) |
>  * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>  * |  Compressed Bytes (en)  +//
>  * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>  * |  CRC32 Checksum (en) ||
>  * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ 
> {noformat}
> The first pass here adds checksums only to the actual contents of the frame 
> body itself (and doesn't actually checksum lengths and headers). While it 
> would be great to fully add checksuming across the entire protocol, the 
> proposed implementation will ensure we at least catch corrupted data and 
> likely protect ourselves pretty well anyways.
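The per-chunk CRC32 scheme above can be sketched in isolation (illustrative names only; this is not the code in 13304_v1.diff): checksum the compressed payload on the sender, and have the receiver recompute and compare before trusting the bytes:

```java
import java.nio.charset.StandardCharsets;
import java.util.zip.CRC32;

public class ChunkChecksumDemo {
    // Compute the CRC32 of a (compressed) chunk, as in the frame layout above.
    static long checksum(byte[] payload) {
        CRC32 crc = new CRC32();
        crc.update(payload, 0, payload.length);
        return crc.getValue();
    }

    // The receiver recomputes the CRC and compares before using the chunk.
    static boolean verify(byte[] payload, long expectedCrc) {
        return checksum(payload) == expectedCrc;
    }

    public static void main(String[] args) {
        byte[] chunk = "frame body chunk".getBytes(StandardCharsets.UTF_8);
        long crc = checksum(chunk);
        System.out.println(verify(chunk, crc)); // prints true
        chunk[0] ^= 0x01;                       // simulate a single bit flip in transit
        System.out.println(verify(chunk, crc)); // prints false
    }
}
```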
> I didn't go to the trouble of implementing a Snappy checksummed compressor 
> as it's been deprecated for a while -- it's really slow and crappy compared 
> to LZ4 -- and we should do everything in our power to make sure no one in 
> the community is still using it. I left the old implementation in (for 
> obvious backwards-compatibility reasons) for clients that don't know about 
> the new protocol.
> The current protocol has a 256MB (max) frame body -- where the serialized 
> contents are simply written in to the frame body.
> If the client sends a compression option in the startup, we will install a 
> FrameCompressor inline. Unfortunately, we went with a decision to treat the 
> frame body separately from the header bits etc in a given message. So, 
> instead we put a compressor implementation in the options and then if it's 
> not null, we push the serialized bytes for the frame bo

[jira] [Updated] (CASSANDRA-13403) nodetool repair breaks SASI index

2018-04-05 Thread Anonymous (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anonymous updated CASSANDRA-13403:
--
Status: Ready to Commit  (was: Patch Available)

> nodetool repair breaks SASI index
> -
>
> Key: CASSANDRA-13403
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13403
> Project: Cassandra
>  Issue Type: Bug
>  Components: sasi
> Environment: 3.10
>Reporter: Igor Novgorodov
>Assignee: Alex Petrov
>Priority: Major
>  Labels: patch
> Attachments: 3_nodes_compaction.log, 4_nodes_compaction.log, 
> CASSANDRA-13403.patch, testSASIRepair.patch
>
>
> I've got table:
> {code}
> CREATE TABLE cservice.bulks_recipients (
> recipient text,
> bulk_id uuid,
> datetime_final timestamp,
> datetime_sent timestamp,
> request_id uuid,
> status int,
> PRIMARY KEY (recipient, bulk_id)
> ) WITH CLUSTERING ORDER BY (bulk_id ASC)
> AND bloom_filter_fp_chance = 0.01
> AND caching = {'keys': 'ALL', 'rows_per_partition': 'ALL'}
> AND comment = ''
> AND compaction = {'class': 
> 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
> 'max_threshold': '32', 'min_threshold': '4'}
> AND compression = {'chunk_length_in_kb': '64', 'class': 
> 'org.apache.cassandra.io.compress.LZ4Compressor'}
> AND crc_check_chance = 1.0
> AND dclocal_read_repair_chance = 0.1
> AND default_time_to_live = 0
> AND gc_grace_seconds = 864000
> AND max_index_interval = 2048
> AND memtable_flush_period_in_ms = 0
> AND min_index_interval = 128
> AND read_repair_chance = 0.0
> AND speculative_retry = '99PERCENTILE';
> CREATE CUSTOM INDEX bulk_recipients_bulk_id ON cservice.bulks_recipients 
> (bulk_id) USING 'org.apache.cassandra.index.sasi.SASIIndex';
> {code}
> There are 11 rows in it:
> {code}
> > select * from bulks_recipients;
> ...
> (11 rows)
> {code}
> Let's query by index (all rows have the same *bulk_id*):
> {code}
> > select * from bulks_recipients where bulk_id = 
> > baa94815-e276-4ca4-adda-5b9734e6c4a5;   
> >   
> ...
> (11 rows)
> {code}
> Ok, everything is fine.
> Now i'm doing *nodetool repair --partitioner-range --job-threads 4 --full* on 
> each node in cluster sequentially.
> After it finished:
> {code}
> > select * from bulks_recipients where bulk_id = 
> > baa94815-e276-4ca4-adda-5b9734e6c4a5;
> ...
> (2 rows)
> {code}
> Only two rows.
> While the rows are actually there:
> {code}
> > select * from bulks_recipients;
> ...
> (11 rows)
> {code}
> If I issue an incremental repair on a random node, I can get about 7 rows 
> from the index query.
> Dropping the index and recreating it fixes the issue. Is this a bug, or am 
> I doing the repair the wrong way?






[jira] [Updated] (CASSANDRA-13981) Enable Cassandra for Persistent Memory

2018-04-01 Thread Anonymous (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13981?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anonymous updated CASSANDRA-13981:
--
Status: Ready to Commit  (was: Patch Available)

> Enable Cassandra for Persistent Memory 
> ---
>
> Key: CASSANDRA-13981
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13981
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Core
>Reporter: Preetika Tyagi
>Assignee: Preetika Tyagi
>Priority: Major
> Fix For: 4.0
>
> Attachments: in-mem-cassandra-1.0.patch, readme.txt
>
>
> Currently, Cassandra relies on disks for data storage and hence it needs data 
> serialization, compaction, bloom filters and partition summary/index for 
> speedy access of the data. However, with persistent memory, data can be 
> stored directly in the form of Java objects and collections, which can 
> greatly simplify the retrieval mechanism of the data. What we are proposing 
> is to make use of faster and scalable B+ tree-based data collections built 
> for persistent memory in Java (PCJ: https://github.com/pmem/pcj) and enable a 
> complete in-memory version of Cassandra, while still keeping the data 
> persistent.






[jira] [Updated] (CASSANDRA-14284) Chunk checksum test needs to occur before uncompress to avoid JVM crash

2018-03-28 Thread Anonymous (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anonymous updated CASSANDRA-14284:
--
Status: Ready to Commit  (was: Patch Available)

> Chunk checksum test needs to occur before uncompress to avoid JVM crash
> ---
>
> Key: CASSANDRA-14284
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14284
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: The check-only-after-doing-the-decompress logic appears 
> to be in all current releases.
> Here are some samples at different evolution points :
> 3.11.2:
> [https://github.com/apache/cassandra/blob/cassandra-3.11.2/src/java/org/apache/cassandra/io/util/CompressedChunkReader.java#L146]
> https://github.com/apache/cassandra/blob/cassandra-3.11.2/src/java/org/apache/cassandra/io/util/CompressedChunkReader.java#L207
>  
> 3.5:
>  
> [https://github.com/apache/cassandra/blob/cassandra-3.5/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java#L135]
> [https://github.com/apache/cassandra/blob/cassandra-3.5/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java#L196]
> 2.1.17:
>  
> [https://github.com/apache/cassandra/blob/cassandra-2.1.17/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java#L122]
>  
>Reporter: Gil Tene
>Assignee: Benjamin Lerer
>Priority: Major
>
> While checksums are (generally) performed on compressed data, the checksum 
> test when reading is currently (in all variants of C* 2.x, 3.x I've looked 
> at) done [on the compressed data] only after the uncompress operation has 
> completed. 
> The issue here is that LZ4_decompress_fast (as documented in e.g. 
> [https://github.com/lz4/lz4/blob/dev/lib/lz4.h#L214)] can result in memory 
> overruns when provided with malformed source data. This in turn can (and 
> does, e.g. in CASSANDRA-13757) lead to JVM crashes during the uncompress of 
> corrupted chunks. The checksum operation would obviously detect the issue, 
> but we'd never get to it if the JVM crashes first.
> Moving the checksum test of the compressed data to before the uncompress 
> operation (in cases where the checksum is done on compressed data) will 
> resolve this issue.
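The proposed ordering can be sketched as follows (a minimal illustration with a stubbed decompressor, not the actual patch): verify the CRC of the compressed chunk first, so malformed bytes never reach the unsafe native decompress:

```java
import java.util.zip.CRC32;

public class SafeReadOrder {
    // Verify the checksum of the *compressed* chunk first, so malformed
    // input never reaches the (potentially unsafe) native decompressor.
    static byte[] readChunk(byte[] compressed, long expectedCrc) {
        CRC32 crc = new CRC32();
        crc.update(compressed, 0, compressed.length);
        if (crc.getValue() != expectedCrc)
            throw new RuntimeException("Corrupted chunk: checksum mismatch");
        return decompress(compressed); // only reached with verified input
    }

    // Stand-in for LZ4_decompress_fast; the real decompressor can overrun
    // memory on malformed input, which is exactly what the check prevents.
    static byte[] decompress(byte[] in) { return in.clone(); }

    public static void main(String[] args) {
        byte[] chunk = {1, 2, 3, 4};
        CRC32 crc = new CRC32();
        crc.update(chunk, 0, chunk.length);
        System.out.println(readChunk(chunk, crc.getValue()).length); // prints 4
        try {
            readChunk(chunk, crc.getValue() + 1); // wrong CRC: rejected up front
        } catch (RuntimeException e) {
            System.out.println(e.getMessage());
        }
    }
}
```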






[jira] [Updated] (CASSANDRA-13396) Cassandra 3.10: ClassCastException in ThreadAwareSecurityManager

2018-03-25 Thread Anonymous (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anonymous updated CASSANDRA-13396:
--
Status: Ready to Commit  (was: Patch Available)

> Cassandra 3.10: ClassCastException in ThreadAwareSecurityManager
> 
>
> Key: CASSANDRA-13396
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13396
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Edward Capriolo
>Assignee: Eugene Fedotov
>Priority: Minor
> Attachments: CASSANDRA-13396_ehubert_1.patch, 
> CASSANDRA-13396_ehubert_2.patch, CASSANDRA-13396_ehubert_3.patch
>
>
> https://www.mail-archive.com/user@cassandra.apache.org/msg51603.html






[jira] [Updated] (CASSANDRA-14281) Improve LatencyMetrics performance by reducing write path processing

2018-03-08 Thread Anonymous (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anonymous updated CASSANDRA-14281:
--
Status: Ready to Commit  (was: Patch Available)

> Improve LatencyMetrics performance by reducing write path processing
> 
>
> Key: CASSANDRA-14281
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14281
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Michael Burman
>Assignee: Michael Burman
>Priority: Major
>
> Currently for each write/read/rangequery/CAS touching the CFS we write a 
> latency metric which takes a lot of processing time (up to 66% of the total 
> processing time if the update was empty). 
> Latencies are recorded using both a Dropwizard "Timer" and a "Counter". The 
> latter is used for totalLatency and the former is a decaying metric for 
> rates and certain percentile metrics. We then replicate all of these CFS 
> writes to the KeyspaceMetrics and globalWriteLatencies. 
> Instead of doing this in the write phase, we should merge the metrics when 
> they're read. Reads are much less common, so we save a lot of CPU time in 
> total. This also speeds up the write path.
> Currently, the DecayingEstimatedHistogramReservoir acquires a lock for each 
> update operation, which causes a contention if there are more than one thread 
> updating the histogram. This impacts scalability when using larger machines. 
> We should make it lock-free as much as possible and also avoid a single 
> CAS-update from blocking all the concurrent threads from making an update.
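One lock-free shape for the reservoir update (a sketch of the general idea, with invented names, not the ticket's actual patch) replaces the locked update with a single atomic increment per bucket, so concurrent writers never block each other:

```java
import java.util.concurrent.atomic.AtomicLongArray;

public class LockFreeHistogram {
    private final AtomicLongArray buckets; // one counter per latency bucket
    private final long[] offsets;          // bucket upper bounds, ascending

    LockFreeHistogram(long[] offsets) {
        this.offsets = offsets;
        this.buckets = new AtomicLongArray(offsets.length);
    }

    // Find the bucket for the value, then do one atomic increment: no lock,
    // and a contended CAS on one bucket never blocks updates to the others.
    void update(long value) {
        int i = 0;
        while (i < offsets.length - 1 && value > offsets[i])
            i++;
        buckets.incrementAndGet(i);
    }

    long count(int bucket) { return buckets.get(bucket); }

    public static void main(String[] args) {
        LockFreeHistogram h = new LockFreeHistogram(new long[] {10, 100, 1000});
        h.update(5);   // bucket 0 (<= 10)
        h.update(50);  // bucket 1 (<= 100)
        h.update(50);  // bucket 1
        System.out.println(h.count(1)); // prints 2
    }
}
```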






[jira] [Updated] (CASSANDRA-14280) Fix timeout test - org.apache.cassandra.cql3.ViewTest

2018-02-27 Thread Anonymous (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anonymous updated CASSANDRA-14280:
--
Status: Ready to Commit  (was: Patch Available)

> Fix timeout test - org.apache.cassandra.cql3.ViewTest
> -
>
> Key: CASSANDRA-14280
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14280
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Dikang Gu
>Assignee: Dikang Gu
>Priority: Major
> Fix For: 4.0
>
>
> The test times out very often; it seems too big. Try to split it into 
> multiple tests.






[jira] [Updated] (CASSANDRA-14200) NullPointerException when dumping sstable with null value for timestamp column

2018-02-14 Thread Anonymous (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anonymous updated CASSANDRA-14200:
--
Status: In Progress  (was: Ready to Commit)

> NullPointerException when dumping sstable with null value for timestamp column
> --
>
> Key: CASSANDRA-14200
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14200
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Simon Zhou
>Assignee: Simon Zhou
>Priority: Major
> Fix For: 3.0.x
>
>
> We have an sstable whose schema has a column of type timestamp that is not 
> part of the primary key. When dumping the sstable using sstabledump there is 
> an NPE like this:
> {code:java}
> Exception in thread "main" java.lang.NullPointerException
> at java.util.Calendar.setTime(Calendar.java:1770)
> at java.text.SimpleDateFormat.format(SimpleDateFormat.java:943)
> at java.text.SimpleDateFormat.format(SimpleDateFormat.java:936)
> at java.text.DateFormat.format(DateFormat.java:345)
> at 
> org.apache.cassandra.db.marshal.TimestampType.toJSONString(TimestampType.java:93)
> at 
> org.apache.cassandra.tools.JsonTransformer.serializeCell(JsonTransformer.java:442)
> at 
> org.apache.cassandra.tools.JsonTransformer.serializeColumnData(JsonTransformer.java:376)
> at 
> org.apache.cassandra.tools.JsonTransformer.serializeRow(JsonTransformer.java:280)
> at 
> org.apache.cassandra.tools.JsonTransformer.serializePartition(JsonTransformer.java:215)
> at java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:184)
> at java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:175)
> at java.util.Iterator.forEachRemaining(Iterator.java:116)
> at 
> java.util.Spliterators$IteratorSpliterator.forEachRemaining(Spliterators.java:1801)
> at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
> at 
> java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
> at 
> java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:151)
> at 
> java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:174)
> at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
> at java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:418)
> at org.apache.cassandra.tools.JsonTransformer.toJson(JsonTransformer.java:104)
> at org.apache.cassandra.tools.SSTableExport.main(SSTableExport.java:242){code}
> The reason is that we use a null Date when there is no value for this column:
> {code}
> public Date deserialize(ByteBuffer bytes)
> {
>     return bytes.remaining() == 0 ? null : new Date(ByteBufferUtil.toLong(bytes));
> }
> {code}
> It seems that we should not deserialize columns with null values.
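A guard of the kind that would avoid the NPE can be sketched as below. This is a hypothetical helper using plain ByteBuffer.getLong rather than Cassandra's ByteBufferUtil, and it is not the actual patch: it simply emits a JSON null for the empty-buffer case instead of handing a null Date to SimpleDateFormat.

```java
import java.nio.ByteBuffer;
import java.text.SimpleDateFormat;
import java.util.Date;

public class TimestampJson
{
    // Hypothetical helper: check for the empty buffer (null value) before
    // formatting, since SimpleDateFormat.format(null) throws the NPE above.
    public static String toJson(ByteBuffer bytes)
    {
        if (bytes == null || bytes.remaining() == 0)
            return "null";  // JSON null instead of a crash
        Date d = new Date(bytes.getLong(bytes.position()));
        return '"' + new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss.SSS").format(d) + '"';
    }
}
```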






[jira] [Updated] (CASSANDRA-14173) JDK 8u161 breaks JMX integration

2018-01-28 Thread Anonymous (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anonymous updated CASSANDRA-14173:
--
Status: Ready to Commit  (was: Patch Available)

> JDK 8u161 breaks JMX integration
> 
>
> Key: CASSANDRA-14173
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14173
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sam Tunnicliffe
>Assignee: Sam Tunnicliffe
>Priority: Critical
> Fix For: 3.11.2
>
>
> {{org.apache.cassandra.utils.JMXServerUtils}}, which is used to 
> programmatically configure the JMX server and RMI registry (CASSANDRA-2967, 
> CASSANDRA-10091), depends on some JDK-internal classes/interfaces. A change to 
> one of these, introduced in Oracle JDK 1.8.0_162, is incompatible, which means 
> we cannot build using that JDK version. Upgrading the JVM on a node running 
> 3.6+ will result in Cassandra being unable to start.
> {noformat}
> ERROR [main] 2018-01-18 07:33:18,804 CassandraDaemon.java:706 - Exception 
> encountered during startup
> java.lang.AbstractMethodError: 
> org.apache.cassandra.utils.JMXServerUtils$Exporter.exportObject(Ljava/rmi/Remote;ILjava/rmi/server/RMIClientSocketFactory;Ljava/rmi/server/RMIServerSocketFactory;Lsun/misc/ObjectInputFilter;)Ljava/rmi/Remote;
>     at 
> javax.management.remote.rmi.RMIJRMPServerImpl.export(RMIJRMPServerImpl.java:150)
>  ~[na:1.8.0_162]
>     at 
> javax.management.remote.rmi.RMIJRMPServerImpl.export(RMIJRMPServerImpl.java:135)
>  ~[na:1.8.0_162]
>     at 
> javax.management.remote.rmi.RMIConnectorServer.start(RMIConnectorServer.java:405)
>  ~[na:1.8.0_162]
>     at 
> org.apache.cassandra.utils.JMXServerUtils.createJMXServer(JMXServerUtils.java:104)
>  ~[apache-cassandra-3.11.2-SNAPSHOT.jar:3.11.2-SNAPSHOT]
>     at 
> org.apache.cassandra.service.CassandraDaemon.maybeInitJmx(CassandraDaemon.java:143)
>  [apache-cassandra-3.11.2-SNAPSHOT.jar:3.11.2-SNAPSHOT]
>     at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:188) 
> [apache-cassandra-3.11.2-SNAPSHOT.jar:3.11.2-SNAPSHOT]
>     at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:600)
>  [apache-cassandra-3.11.2-SNAPSHOT.jar:3.11.2-SNAPSHOT]
>     at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:689) 
> [apache-cassandra-3.11.2-SNAPSHOT.jar:3.11.2-SNAPSHOT]{noformat}
> This is also a problem for CASSANDRA-9608, as the internals are completely 
> re-organised in JDK9, so a more stable solution that can be applied to both 
> JDK8 & JDK9 is required.






[jira] [Updated] (CASSANDRA-14055) Index redistribution breaks SASI index

2018-01-17 Thread Anonymous (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anonymous updated CASSANDRA-14055:
--
Status: Ready to Commit  (was: Patch Available)

> Index redistribution breaks SASI index
> --
>
> Key: CASSANDRA-14055
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14055
> Project: Cassandra
>  Issue Type: Bug
>  Components: sasi
>Reporter: Ludovic Boutros
>Assignee: Ludovic Boutros
>Priority: Major
>  Labels: patch
> Fix For: 3.11.2
>
> Attachments: CASSANDRA-14055.patch, CASSANDRA-14055.patch, 
> CASSANDRA-14055.patch
>
>
> During the index redistribution process, a new view is created.
> During this creation, old indexes should be released.
> But new indexes are "attached" to the same SSTable as the old indexes.
> This leads to the deletion of the last SASI index file and breaks the index.
> The issue is in this function: 
> [https://github.com/apache/cassandra/blob/9ee44db49b13d4b4c91c9d6332ce06a6e2abf944/src/java/org/apache/cassandra/index/sasi/conf/view/View.java#L62]






[jira] [Updated] (CASSANDRA-9608) Support Java 9

2018-01-13 Thread Anonymous (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anonymous updated CASSANDRA-9608:
-
Status: Ready to Commit  (was: Patch Available)

> Support Java 9
> --
>
> Key: CASSANDRA-9608
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9608
> Project: Cassandra
>  Issue Type: Task
>Reporter: Robert Stupp
>Assignee: Robert Stupp
>Priority: Minor
>
> This ticket is intended to group all issues found to support Java 9 in the 
> future.
> From what I've found out so far:
> * Maven dependency {{com.sun:tools:jar:0}} via cobertura cannot be resolved. 
> It can be easily solved using this patch:
> {code}
> - <dependency groupId="net.sourceforge.cobertura" artifactId="cobertura"/>
> + <dependency groupId="net.sourceforge.cobertura" artifactId="cobertura">
> +   <exclusion groupId="com.sun" artifactId="tools"/>
> + </dependency>
> {code}
> * Another issue is that {{sun.misc.Unsafe}} no longer contains the methods 
> {{monitorEnter}} + {{monitorExit}}. These methods are used by 
> {{o.a.c.utils.concurrent.Locks}}, which is only used by 
> {{o.a.c.db.AtomicBTreeColumns}}.
> I don't intend to start working on this yet, since Java 9 is at too early a 
> development phase.
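The portable equivalent of {{Unsafe.monitorEnter}}/{{monitorExit}} on an arbitrary object is a plain {{synchronized}} block. The sketch below is illustrative only (a hypothetical helper, not the Cassandra patch) and shows that the monitor-locking behavior survives without any JDK-internal API:

```java
public class PortableLocks
{
    // sun.misc.Unsafe.monitorEnter/monitorExit are gone in newer JDKs;
    // locking an arbitrary object's monitor is expressible as a plain
    // synchronized block, which works on every Java version.
    public static void withMonitor(Object o, Runnable body)
    {
        synchronized (o)
        {
            body.run();
        }
    }
}
```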






[jira] [Updated] (CASSANDRA-13396) Cassandra 3.10: ClassCastException in ThreadAwareSecurityManager

2018-01-09 Thread Anonymous (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anonymous updated CASSANDRA-13396:
--
Status: Ready to Commit  (was: Patch Available)

> Cassandra 3.10: ClassCastException in ThreadAwareSecurityManager
> 
>
> Key: CASSANDRA-13396
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13396
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Edward Capriolo
>Assignee: Eugene Fedotov
>Priority: Minor
>
> https://www.mail-archive.com/user@cassandra.apache.org/msg51603.html






[jira] [Updated] (CASSANDRA-13087) Not enough bytes exception during compaction

2017-10-23 Thread Anonymous (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anonymous updated CASSANDRA-13087:
--
Status: In Progress  (was: Ready to Commit)

> Not enough bytes exception during compaction
> 
>
> Key: CASSANDRA-13087
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13087
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
> Environment: Ubuntu 14.04.3 LTS, Cassandra 2.1.14
>Reporter: FACORAT
>Assignee: Cyril Scetbon
> Attachments: CASSANDRA-13087.patch
>
>
> After a repair we have compaction exceptions on some nodes and it's spreading
> {noformat}
> ERROR [CompactionExecutor:14065] 2016-12-30 14:45:07,245 
> CassandraDaemon.java:229 - Exception in thread 
> Thread[CompactionExecutor:14065,1,main]
> java.lang.IllegalArgumentException: Not enough bytes. Offset: 5. Length: 
> 20275. Buffer size: 12594
> at 
> org.apache.cassandra.db.composites.AbstractCType.checkRemaining(AbstractCType.java:378)
>  ~[apache-cassandra-2.1.14.jar:2.1.14]
> at 
> org.apache.cassandra.db.composites.AbstractCompoundCellNameType.fromByteBuffer(AbstractCompoundCellNameType.java:100)
>  ~[apache-cassandra-2.1.14.ja
> r:2.1.14]
> at 
> org.apache.cassandra.db.composites.AbstractCType$Serializer.deserialize(AbstractCType.java:398)
>  ~[apache-cassandra-2.1.14.jar:2.1.14]
> at 
> org.apache.cassandra.db.composites.AbstractCType$Serializer.deserialize(AbstractCType.java:382)
>  ~[apache-cassandra-2.1.14.jar:2.1.14]
> at 
> org.apache.cassandra.db.OnDiskAtom$Serializer.deserializeFromSSTable(OnDiskAtom.java:75)
>  ~[apache-cassandra-2.1.14.jar:2.1.14]
> at 
> org.apache.cassandra.db.AbstractCell$1.computeNext(AbstractCell.java:52) 
> ~[apache-cassandra-2.1.14.jar:2.1.14]
> at 
> org.apache.cassandra.db.AbstractCell$1.computeNext(AbstractCell.java:46) 
> ~[apache-cassandra-2.1.14.jar:2.1.14]
> at 
> com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
>  ~[guava-16.0.jar:na]
> at 
> com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138) 
> ~[guava-16.0.jar:na]
> at 
> org.apache.cassandra.io.sstable.SSTableIdentityIterator.hasNext(SSTableIdentityIterator.java:171)
>  ~[apache-cassandra-2.1.14.jar:2.1.14]
> at 
> org.apache.cassandra.utils.MergeIterator$OneToOne.computeNext(MergeIterator.java:202)
>  ~[apache-cassandra-2.1.14.jar:2.1.14]
> at 
> com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
>  ~[guava-16.0.jar:na]
> at 
> com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138) 
> ~[guava-16.0.jar:na]
> at 
> com.google.common.collect.Iterators$7.computeNext(Iterators.java:645) 
> ~[guava-16.0.jar:na]
> at 
> com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
>  ~[guava-16.0.jar:na]
> at 
> com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138) 
> ~[guava-16.0.jar:na]
> at 
> org.apache.cassandra.db.ColumnIndex$Builder.buildForCompaction(ColumnIndex.java:166)
>  ~[apache-cassandra-2.1.14.jar:2.1.14]
> at 
> org.apache.cassandra.db.compaction.LazilyCompactedRow.write(LazilyCompactedRow.java:121)
>  ~[apache-cassandra-2.1.14.jar:2.1.14]
> at 
> org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:193) 
> ~[apache-cassandra-2.1.14.jar:2.1.14]
> at 
> org.apache.cassandra.io.sstable.SSTableRewriter.append(SSTableRewriter.java:127)
>  ~[apache-cassandra-2.1.14.jar:2.1.14]
> at 
> org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:197)
>  ~[apache-cassandra-2.1.14.jar:2.1.14]
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> ~[apache-cassandra-2.1.14.jar:2.1.14]
> at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:73)
>  ~[apache-cassandra-2.1.14.jar:2.1.14]
> at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
>  ~[apache-cassandra-2.1.14.jar:2.1.14]
> at 
> org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionCandidate.run(CompactionManager.java:264)
>  ~[apache-cassandra-2.1.14.jar:2
> .1.14]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_60]
> at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> ~[na:1.8.0_60]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  ~[na:1.8.0_60]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_60]
>  

[jira] [Updated] (CASSANDRA-13087) Not enough bytes exception during compaction

2017-10-23 Thread Anonymous (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anonymous updated CASSANDRA-13087:
--
Status: Ready to Commit  (was: Patch Available)

> Not enough bytes exception during compaction
> 
>
> Key: CASSANDRA-13087
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13087
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
> Environment: Ubuntu 14.04.3 LTS, Cassandra 2.1.14
>Reporter: FACORAT
>Assignee: Cyril Scetbon
> Attachments: CASSANDRA-13087.patch
>
>
> After a repair we have compaction exceptions on some nodes and it's spreading
> {noformat}
> ERROR [CompactionExecutor:14065] 2016-12-30 14:45:07,245 
> CassandraDaemon.java:229 - Exception in thread 
> Thread[CompactionExecutor:14065,1,main]
> java.lang.IllegalArgumentException: Not enough bytes. Offset: 5. Length: 
> 20275. Buffer size: 12594
> at 
> org.apache.cassandra.db.composites.AbstractCType.checkRemaining(AbstractCType.java:378)
>  ~[apache-cassandra-2.1.14.jar:2.1.14]
> at 
> org.apache.cassandra.db.composites.AbstractCompoundCellNameType.fromByteBuffer(AbstractCompoundCellNameType.java:100)
>  ~[apache-cassandra-2.1.14.ja
> r:2.1.14]
> at 
> org.apache.cassandra.db.composites.AbstractCType$Serializer.deserialize(AbstractCType.java:398)
>  ~[apache-cassandra-2.1.14.jar:2.1.14]
> at 
> org.apache.cassandra.db.composites.AbstractCType$Serializer.deserialize(AbstractCType.java:382)
>  ~[apache-cassandra-2.1.14.jar:2.1.14]
> at 
> org.apache.cassandra.db.OnDiskAtom$Serializer.deserializeFromSSTable(OnDiskAtom.java:75)
>  ~[apache-cassandra-2.1.14.jar:2.1.14]
> at 
> org.apache.cassandra.db.AbstractCell$1.computeNext(AbstractCell.java:52) 
> ~[apache-cassandra-2.1.14.jar:2.1.14]
> at 
> org.apache.cassandra.db.AbstractCell$1.computeNext(AbstractCell.java:46) 
> ~[apache-cassandra-2.1.14.jar:2.1.14]
> at 
> com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
>  ~[guava-16.0.jar:na]
> at 
> com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138) 
> ~[guava-16.0.jar:na]
> at 
> org.apache.cassandra.io.sstable.SSTableIdentityIterator.hasNext(SSTableIdentityIterator.java:171)
>  ~[apache-cassandra-2.1.14.jar:2.1.14]
> at 
> org.apache.cassandra.utils.MergeIterator$OneToOne.computeNext(MergeIterator.java:202)
>  ~[apache-cassandra-2.1.14.jar:2.1.14]
> at 
> com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
>  ~[guava-16.0.jar:na]
> at 
> com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138) 
> ~[guava-16.0.jar:na]
> at 
> com.google.common.collect.Iterators$7.computeNext(Iterators.java:645) 
> ~[guava-16.0.jar:na]
> at 
> com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
>  ~[guava-16.0.jar:na]
> at 
> com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138) 
> ~[guava-16.0.jar:na]
> at 
> org.apache.cassandra.db.ColumnIndex$Builder.buildForCompaction(ColumnIndex.java:166)
>  ~[apache-cassandra-2.1.14.jar:2.1.14]
> at 
> org.apache.cassandra.db.compaction.LazilyCompactedRow.write(LazilyCompactedRow.java:121)
>  ~[apache-cassandra-2.1.14.jar:2.1.14]
> at 
> org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:193) 
> ~[apache-cassandra-2.1.14.jar:2.1.14]
> at 
> org.apache.cassandra.io.sstable.SSTableRewriter.append(SSTableRewriter.java:127)
>  ~[apache-cassandra-2.1.14.jar:2.1.14]
> at 
> org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:197)
>  ~[apache-cassandra-2.1.14.jar:2.1.14]
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> ~[apache-cassandra-2.1.14.jar:2.1.14]
> at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:73)
>  ~[apache-cassandra-2.1.14.jar:2.1.14]
> at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
>  ~[apache-cassandra-2.1.14.jar:2.1.14]
> at 
> org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionCandidate.run(CompactionManager.java:264)
>  ~[apache-cassandra-2.1.14.jar:2
> .1.14]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_60]
> at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> ~[na:1.8.0_60]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  ~[na:1.8.0_60]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_60]
>  

[jira] [Updated] (CASSANDRA-13808) Integer overflows with Amazon Elastic File System (EFS)

2017-09-26 Thread Anonymous (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13808?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anonymous updated CASSANDRA-13808:
--
Status: Ready to Commit  (was: Patch Available)

> Integer overflows with Amazon Elastic File System (EFS)
> ---
>
> Key: CASSANDRA-13808
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13808
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Vitalii Ishchenko
>Assignee: Benjamin Lerer
> Fix For: 3.11.1
>
>
> The integer overflow issue was fixed for Cassandra 2.2, but in 3.8 a new 
> property was introduced in the config that also derives from the disk size, 
> {{cdc_total_space_in_mb}}; see CASSANDRA-8844.
> It should be updated too: 
> https://github.com/apache/cassandra/blob/6b7d73a49695c0ceb78bc7a003ace606a806c13a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java#L484






[jira] [Updated] (CASSANDRA-13787) RangeTombstoneMarker and PartitionDeletion is not properly included in MV

2017-09-20 Thread Anonymous (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anonymous updated CASSANDRA-13787:
--
Status: Ready to Commit  (was: Patch Available)

> RangeTombstoneMarker and PartitionDeletion is not properly included in MV
> -
>
> Key: CASSANDRA-13787
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13787
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths, Materialized Views
>Reporter: ZhaoYang
>Assignee: ZhaoYang
>
> Found two problems related to MV tombstones. 
> 1. A Range-Tombstone-Marker is ignored after shadowing the first row, so 
> subsequent base rows are not shadowed in TableViews.
> If the range tombstone was not flushed, it was used as a deleted row to 
> shadow new updates. That works correctly.
> After the range tombstone was flushed, it was used as a RangeTombstoneMarker 
> and skipped after shadowing the first update. The bound of the 
> RangeTombstoneMarker seems wrong: it contained a full clustering, but it should 
> contain a range, or there should be multiple RangeTombstoneMarkers for multiple 
> slices (i.e. the new updates).
> -2. Partition tombstone is not used when there is no existing live data, so it 
> will resurrect deleted cells. It was found in 11500 and included in that patch.- 
> (Merged in CASSANDRA-11500)
> In order not to make the 11500 patch more complicated, I will try to fix the 
> range/partition tombstone issue here.
> {code:title=Tests to reproduce}
> @Test
> public void testExistingRangeTombstoneWithFlush() throws Throwable
> {
>     testExistingRangeTombstone(true);
> }
> @Test
> public void testExistingRangeTombstoneWithoutFlush() throws Throwable
> {
>     testExistingRangeTombstone(false);
> }
> public void testExistingRangeTombstone(boolean flush) throws Throwable
> {
>     createTable("CREATE TABLE %s (k1 int, c1 int, c2 int, v1 int, v2 int, PRIMARY KEY (k1, c1, c2))");
>     execute("USE " + keyspace());
>     executeNet(protocolVersion, "USE " + keyspace());
>     createView("view1",
>                "CREATE MATERIALIZED VIEW view1 AS SELECT * FROM %%s WHERE k1 IS NOT NULL AND c1 IS NOT NULL AND c2 IS NOT NULL PRIMARY KEY (k1, c2, c1)");
>     updateView("DELETE FROM %s USING TIMESTAMP 10 WHERE k1 = 1 and c1=1");
>     if (flush)
>         Keyspace.open(keyspace()).getColumnFamilyStore(currentTable()).forceBlockingFlush();
>     String table = KEYSPACE + "." + currentTable();
>     updateView("BEGIN BATCH " +
>                "INSERT INTO " + table + " (k1, c1, c2, v1, v2) VALUES (1, 0, 0, 0, 0) USING TIMESTAMP 5; " +
>                "INSERT INTO " + table + " (k1, c1, c2, v1, v2) VALUES (1, 0, 1, 0, 1) USING TIMESTAMP 5; " +
>                "INSERT INTO " + table + " (k1, c1, c2, v1, v2) VALUES (1, 1, 0, 1, 0) USING TIMESTAMP 5; " +
>                "INSERT INTO " + table + " (k1, c1, c2, v1, v2) VALUES (1, 1, 1, 1, 1) USING TIMESTAMP 5; " +
>                "INSERT INTO " + table + " (k1, c1, c2, v1, v2) VALUES (1, 1, 2, 1, 2) USING TIMESTAMP 5; " +
>                "INSERT INTO " + table + " (k1, c1, c2, v1, v2) VALUES (1, 1, 3, 1, 3) USING TIMESTAMP 5; " +
>                "INSERT INTO " + table + " (k1, c1, c2, v1, v2) VALUES (1, 2, 0, 2, 0) USING TIMESTAMP 5; " +
>                "APPLY BATCH");
>     assertRowsIgnoringOrder(execute("select * from %s"),
>                             row(1, 0, 0, 0, 0),
>                             row(1, 0, 1, 0, 1),
>                             row(1, 2, 0, 2, 0));
>     assertRowsIgnoringOrder(execute("select k1,c1,c2,v1,v2 from view1"),
>                             row(1, 0, 0, 0, 0),
>                             row(1, 0, 1, 0, 1),
>                             row(1, 2, 0, 2, 0));
> }
> @Test
> public void testExistingParitionDeletionWithFlush() throws Throwable
> {
>     testExistingParitionDeletion(true);
> }
> @Test
> public void testExistingParitionDeletionWithoutFlush() throws Throwable
> {
>     testExistingParitionDeletion(false);
> }
> public void testExistingParitionDeletion(boolean flush) throws Throwable
> {
>     // for partition range deletion, need to know that existing row is shadowed instead of not existed.
>     createTable("CREATE TABLE %s (a int, b int, c int, d int, PRIMARY KEY (a))");
>     execute("USE " + keyspace());
>     executeNet(protocolVersion, "USE " + keyspace());
>     createView("mv_test1",
>                "CREATE MATERIALIZED VIEW %s AS SELECT * FROM %%s WHERE a IS NOT NULL AND b IS NOT NULL PRIMARY KEY (a, b)");
>     Keyspace ks = Keyspace.open(keyspace());
>     ks.getColumn

[jira] [Updated] (CASSANDRA-9954) Improve Java-UDF timeout detection

2017-09-17 Thread Anonymous (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anonymous updated CASSANDRA-9954:
-
Status: Ready to Commit  (was: Patch Available)

> Improve Java-UDF timeout detection
> --
>
> Key: CASSANDRA-9954
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9954
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Robert Stupp
>Assignee: Robert Stupp
> Fix For: 3.11.x
>
>
> CASSANDRA-9402 introduced a sandbox using a thread pool to enforce security 
> constraints and to detect "amok UDFs" - i.e. UDFs that essentially never 
> return (e.g. {{while (true)}}).
> Currently the safest way to react to such an "amok UDF" is to _fail fast_ - 
> to stop the C* daemon, since stopping a thread (in Java) is simply not a solution.
> CASSANDRA-9890 introduced further protection by inspecting the byte-code. The 
> same mechanism can also be used to manipulate the Java-UDF byte-code.
> By manipulating the byte-code I mean adding regular "is-amok-UDF" checks to 
> the compiled code.
> EDIT: These "is-amok-UDF" checks would also work for _UNFENCED_ Java-UDFs.
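The kind of check that byte-code instrumentation could inject into a UDF loop can be sketched in plain source. This is an illustrative stand-in (hypothetical class and exception, not the actual instrumentation): a periodic, cheap deadline test inside the loop body lets a runaway UDF abort itself instead of forcing the daemon to stop.

```java
public class AmokGuard
{
    // Sketch of an injected "is-amok-UDF" check: every 65536 iterations,
    // compare against a deadline and bail out of the (otherwise unbounded)
    // loop. The mask keeps the common-case cost to one AND and one branch.
    public static long countDown(long n, long deadlineNanos)
    {
        long i = 0;
        while (i < n)
        {
            if ((i & 0xFFFF) == 0 && System.nanoTime() > deadlineNanos)
                throw new RuntimeException("UDF timeout");
            i++;
        }
        return i;
    }
}
```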






[jira] [Updated] (CASSANDRA-11263) Repair scheduling - Polling and monitoring module

2017-08-30 Thread Anonymous (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11263?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anonymous updated CASSANDRA-11263:
--
Status: Awaiting Feedback  (was: Open)

> Repair scheduling - Polling and monitoring module
> -
>
> Key: CASSANDRA-11263
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11263
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Marcus Olsson
>Assignee: Marcus Olsson
>Priority: Minor
>
> Create a module that keeps track of currently running repairs and makes sure 
> that they are either associated with a resource lock or are user-created. 
> Otherwise they should be aborted and the ongoing validation/streaming 
> sessions should be stopped. A log warning should be issued as well.






[jira] [Updated] (CASSANDRA-10786) Include hash of result set metadata in prepared statement id

2017-07-22 Thread Anonymous (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anonymous updated CASSANDRA-10786:
--
Status: Ready to Commit  (was: Patch Available)

> Include hash of result set metadata in prepared statement id
> 
>
> Key: CASSANDRA-10786
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10786
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: CQL
>Reporter: Olivier Michallat
>Assignee: Alex Petrov
>Priority: Minor
>  Labels: client-impacting, doc-impacting, protocolv5
> Fix For: 4.x
>
>
> *_Initial description:_*
> This is a follow-up to CASSANDRA-7910, which was about invalidating a 
> prepared statement when the table is altered, to force clients to update 
> their local copy of the metadata.
> There's still an issue if multiple clients are connected to the same host. 
> The first client to execute the query after the cache was invalidated will 
> receive an UNPREPARED response, re-prepare, and update its local metadata. 
> But other clients might miss it entirely (the MD5 hasn't changed), and they 
> will keep using their old metadata. For example:
> # {{SELECT * ...}} statement is prepared in Cassandra with md5 abc123, 
> clientA and clientB both have a cache of the metadata (columns b and c) 
> locally
> # column a gets added to the table, C* invalidates its cache entry
> # clientA sends an EXECUTE request for md5 abc123, gets UNPREPARED response, 
> re-prepares on the fly and updates its local metadata to (a, b, c)
> # prepared statement is now in C*’s cache again, with the same md5 abc123
> # clientB sends an EXECUTE request for id abc123. Because the cache has been 
> populated again, the query succeeds. But clientB still has not updated its 
> metadata, it’s still (b,c)
> One solution that was suggested is to include a hash of the result set 
> metadata in the md5. This way the md5 would change at step 3, and any client 
> using the old md5 would get an UNPREPARED, regardless of whether another 
> client already reprepared.
> -
> *_Resolution (2017/02/13):_*
> The following changes were made to native protocol v5:
> - the PREPARED response includes {{result_metadata_id}}, a hash of the result 
> set metadata.
> - every EXECUTE message must provide {{result_metadata_id}} in addition to 
> the prepared statement id. If it doesn't match the current one on the server, 
> it means the client is operating on a stale schema.
> - to notify the client, the server returns a ROWS response with a new 
> {{Metadata_changed}} flag, the new {{result_metadata_id}} and the updated 
> result metadata (this overrides the {{No_metadata}} flag, even if the client 
> had requested it)
> - the client updates its copy of the result metadata before it decodes the 
> results.
> So the scenario above would now look like:
> # {{SELECT * ...}} statement is prepared in Cassandra with md5 abc123, and 
> result set (b, c) that hashes to cde456
> # column a gets added to the table, C* does not invalidate its cache entry, 
> but only updates the result set to (a, b, c) which hashes to fff789
> # client sends an EXECUTE request for (statementId=abc123, resultId=cde456) 
> and skip_metadata flag
> # cde456!=fff789, so C* responds with ROWS(..., no_metadata=false, 
> metadata_changed=true, new_metadata_id=fff789,col specs for (a,b,c))
> # client updates its column specifications, and will send the next execute 
> queries with (statementId=abc123, resultId=fff789)
> This works the same with multiple clients.
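The core of the resolution above is that the result-set metadata gets its own hash, so schema changes alter {{result_metadata_id}} even when the statement id is unchanged. A minimal sketch of that idea (hypothetical helper; the real protocol hashes serialized column specs, not comma-joined names):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class PreparedIds
{
    // Illustrative only: hashing the column list means adding column "a"
    // changes result_metadata_id, while md5(query) -- the statement id --
    // stays the same, which is exactly why both ids are needed.
    public static String md5Hex(String s)
    {
        try
        {
            MessageDigest md = MessageDigest.getInstance("MD5");
            StringBuilder sb = new StringBuilder();
            for (byte b : md.digest(s.getBytes(StandardCharsets.UTF_8)))
                sb.append(String.format("%02x", b));
            return sb.toString();
        }
        catch (NoSuchAlgorithmException e)
        {
            throw new AssertionError(e);  // MD5 is always present in the JDK
        }
    }
}
```

In the scenario above, the server would compare the client-supplied hash of (b, c) against its current hash of (a, b, c), see a mismatch, and attach the fresh column specs to the ROWS response.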






[jira] [Updated] (CASSANDRA-13067) Integer overflows with file system size reported by Amazon Elastic File System (EFS)

2017-07-09 Thread Anonymous (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anonymous updated CASSANDRA-13067:
--
Status: In Progress  (was: Ready to Commit)

> Integer overflows with file system size reported by Amazon Elastic File 
> System (EFS)
> 
>
> Key: CASSANDRA-13067
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13067
> Project: Cassandra
>  Issue Type: Bug
> Environment: Cassandra in OpenShift running on Amazon EC2 instance 
> with EFS mounted for data
>Reporter: Michael Hanselmann
>Assignee: Benjamin Lerer
> Attachments: 0001-Handle-exabyte-sized-filesystems.patch
>
>
> When not explicitly configured, Cassandra uses 
> [{{nio.FileStore.getTotalSpace}}|https://docs.oracle.com/javase/7/docs/api/java/nio/file/FileStore.html]
>  to determine the total amount of available space in order to [calculate the 
> preferred commit log 
> size|https://github.com/apache/cassandra/blob/cassandra-3.9/src/java/org/apache/cassandra/config/DatabaseDescriptor.java#L553].
>  [Amazon EFS|https://aws.amazon.com/efs/] instances report a filesystem size 
> of 8 EiB when empty. [{{getTotalSpace}} causes an integer overflow 
> (JDK-8162520)|https://bugs.openjdk.java.net/browse/JDK-8162520] and returns a 
> negative number, resulting in a negative preferred size and causing the 
> checked integer conversion to throw.
> Overriding {{commitlog_total_space_in_mb}} is not sufficient as 
> [{{DataDirectory.getAvailableSpace}}|https://github.com/apache/cassandra/blob/cassandra-3.9/src/java/org/apache/cassandra/db/Directories.java#L550]
>  makes use of {{nio.FileStore.getUsableSpace}}.
> [AMQ-6441] is a comparable issue in ActiveMQ.
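The failure mode reduces to a signed overflow in the space-reporting call. The guard below is a minimal sketch of one possible workaround, not the attached patch: the helper name safeTotalSpace is hypothetical, and treating a negative report as "effectively unlimited" is an assumption about the desired behaviour.

```java
import java.io.IOException;
import java.nio.file.FileStore;
import java.nio.file.Files;
import java.nio.file.Paths;

public class TotalSpaceGuard {
    // JDK-8162520: FileStore.getTotalSpace() can wrap to a negative value on
    // filesystems reporting more than Long.MAX_VALUE bytes (e.g. 8 EiB EFS),
    // so clamp a negative report to "effectively unlimited".
    static long safeTotalSpace(long reported) {
        return reported < 0 ? Long.MAX_VALUE : reported;
    }

    public static void main(String[] args) throws IOException {
        FileStore store = Files.getFileStore(Paths.get("."));
        // The guarded value is never negative, whatever the filesystem reports.
        System.out.println(safeTotalSpace(store.getTotalSpace()) >= 0); // true
    }
}
```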






[jira] [Updated] (CASSANDRA-13067) Integer overflows with file system size reported by Amazon Elastic File System (EFS)

2017-07-09 Thread Anonymous (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anonymous updated CASSANDRA-13067:
--
Status: Ready to Commit  (was: Patch Available)

> Integer overflows with file system size reported by Amazon Elastic File 
> System (EFS)
> 
>
> Key: CASSANDRA-13067
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13067
> Project: Cassandra
>  Issue Type: Bug
> Environment: Cassandra in OpenShift running on Amazon EC2 instance 
> with EFS mounted for data
>Reporter: Michael Hanselmann
>Assignee: Benjamin Lerer
> Attachments: 0001-Handle-exabyte-sized-filesystems.patch
>
>
> When not explicitly configured Cassandra uses 
> [{{nio.FileStore.getTotalSpace}}|https://docs.oracle.com/javase/7/docs/api/java/nio/file/FileStore.html]
>  to determine the total amount of available space in order to [calculate the 
> preferred commit log 
> size|https://github.com/apache/cassandra/blob/cassandra-3.9/src/java/org/apache/cassandra/config/DatabaseDescriptor.java#L553].
>  [Amazon EFS|https://aws.amazon.com/efs/] instances report a filesystem size 
> of 8 EiB when empty. [{{getTotalSpace}} causes an integer overflow 
> (JDK-8162520)|https://bugs.openjdk.java.net/browse/JDK-8162520] and returns a 
> negative number, resulting in a negative preferred size and causing the 
> checked integer conversion to throw.
> Overriding {{commitlog_total_space_in_mb}} is not sufficient as 
> [{{DataDirectory.getAvailableSpace}}|https://github.com/apache/cassandra/blob/cassandra-3.9/src/java/org/apache/cassandra/db/Directories.java#L550]
>  makes use of {{nio.FileStore.getUsableSpace}}.
> [AMQ-6441] is a comparable issue in ActiveMQ.






[jira] [Updated] (CASSANDRA-13557) allow different NUMACTL_ARGS to be passed in

2017-06-29 Thread Anonymous (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13557?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anonymous updated CASSANDRA-13557:
--
Resolution: Fixed
Status: Resolved  (was: Ready to Commit)

> allow different NUMACTL_ARGS to be passed in
> 
>
> Key: CASSANDRA-13557
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13557
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Configuration
>Reporter: Matt Byrd
>Assignee: Matt Byrd
>Priority: Minor
> Fix For: 4.x
>
>
> Currently in bin/cassandra the following is hardcoded:
> NUMACTL_ARGS="--interleave=all"
> Ideally users of cassandra/bin could pass in a different set of NUMACTL_ARGS 
> if they wanted to, say, bind the process to a socket for cpu/memory reasons, 
> rather than having to comment out or modify this line in the deployed 
> cassandra/bin, e.g. as described in:
> https://tobert.github.io/pages/als-cassandra-21-tuning-guide.html
> This could be done by keeping "--interleave=all" as the default while picking 
> up any value already set for the variable NUMACTL_ARGS.
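The default-with-override suggested above is a one-line change in the script's own shell idiom, sketched here outside the real bin/cassandra:

```shell
#!/bin/sh
# Keep any NUMACTL_ARGS already set in the environment; otherwise fall back to
# the historical default. "${VAR:-word}" substitutes word only when VAR is
# unset or empty.
NUMACTL_ARGS="${NUMACTL_ARGS:---interleave=all}"
echo "$NUMACTL_ARGS"
```

A user could then run e.g. `NUMACTL_ARGS="--cpunodebind=0 --membind=0" bin/cassandra` without editing the deployed script.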






[jira] [Updated] (CASSANDRA-13535) Error decoding JSON for timestamp smaller than Integer.MAX_VALUE

2017-06-27 Thread Anonymous (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13535?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anonymous updated CASSANDRA-13535:
--
Reproduced In: 3.10, 3.8, 3.7  (was: 3.7, 3.8, 3.10)
   Status: Open  (was: Awaiting Feedback)

> Error decoding JSON for timestamp smaller than Integer.MAX_VALUE
> 
>
> Key: CASSANDRA-13535
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13535
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Jeremy Nguyen Xuan
>
> When trying to insert JSON with a field of type timestamp, the field is 
> decoded as an Integer instead of as a Long.
> {code}
> CREATE TABLE foo.bar (
> myfield timestamp,
> PRIMARY KEY (myfield)
> );
> cqlsh:foo> INSERT INTO bar JSON '{"myfield":0}';
> InvalidRequest: Error from server: code=2200 [Invalid query] message="Error 
> decoding JSON value for myfield: Expected a long or a datestring 
> representation of a timestamp value, but got a Integer: 0"
> cqlsh:foo> INSERT INTO bar JSON '{"myfield":2147483647}';
> InvalidRequest: Error from server: code=2200 [Invalid query] message="Error 
> decoding JSON value for myfield: Expected a long or a datestring 
> representation of a timestamp value, but got a Integer: 2147483647"
> cqlsh:foo> INSERT INTO bar JSON '{"myfield":2147483648}';
> cqlsh:foo> 
> {code}
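The cutoff at 2147483648 suggests the JSON parser hands back an Integer for values that fit in 32 bits and a Long only above that, while the timestamp decoder accepts only Long. A tolerant decoder would widen any integral Number; the sketch below is illustrative (decodeTimestamp is a hypothetical helper, not the server's code path):

```java
public class TimestampJson {
    // Accept any integral Number (Integer or Long) as a timestamp, widening
    // to long, instead of rejecting Integer outright.
    static long decodeTimestamp(Object json) {
        if (json instanceof Integer || json instanceof Long) {
            return ((Number) json).longValue();
        }
        throw new IllegalArgumentException(
            "Expected a long or a date string, got " + json.getClass().getSimpleName());
    }

    public static void main(String[] args) {
        System.out.println(decodeTimestamp(0));           // 0 (previously rejected)
        System.out.println(decodeTimestamp(2147483648L)); // 2147483648
    }
}
```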






[jira] [Updated] (CASSANDRA-13535) Error decoding JSON for timestamp smaller than Integer.MAX_VALUE

2017-06-27 Thread Anonymous (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13535?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anonymous updated CASSANDRA-13535:
--
Reproduced In: 3.10, 3.8, 3.7  (was: 3.7, 3.8, 3.10)
   Status: Awaiting Feedback  (was: Open)

> Error decoding JSON for timestamp smaller than Integer.MAX_VALUE
> 
>
> Key: CASSANDRA-13535
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13535
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Jeremy Nguyen Xuan
>
> When trying to insert JSON with a field of type timestamp, the field is 
> decoded as an Integer instead of as a Long.
> {code}
> CREATE TABLE foo.bar (
> myfield timestamp,
> PRIMARY KEY (myfield)
> );
> cqlsh:foo> INSERT INTO bar JSON '{"myfield":0}';
> InvalidRequest: Error from server: code=2200 [Invalid query] message="Error 
> decoding JSON value for myfield: Expected a long or a datestring 
> representation of a timestamp value, but got a Integer: 0"
> cqlsh:foo> INSERT INTO bar JSON '{"myfield":2147483647}';
> InvalidRequest: Error from server: code=2200 [Invalid query] message="Error 
> decoding JSON value for myfield: Expected a long or a datestring 
> representation of a timestamp value, but got a Integer: 2147483647"
> cqlsh:foo> INSERT INTO bar JSON '{"myfield":2147483648}';
> cqlsh:foo> 
> {code}






[jira] [Updated] (CASSANDRA-7396) Allow selecting Map key, List index

2017-05-23 Thread Anonymous (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anonymous updated CASSANDRA-7396:
-
Since Version: 4.x
   Status: Awaiting Feedback  (was: Open)

> Allow selecting Map key, List index
> ---
>
> Key: CASSANDRA-7396
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7396
> Project: Cassandra
>  Issue Type: New Feature
>  Components: CQL
>Reporter: Jonathan Ellis
>Assignee: Sylvain Lebresne
>  Labels: cql, docs-impacting
> Fix For: 4.x
>
> Attachments: 7396_unit_tests.txt
>
>
> Allow "SELECT map['key']" and "SELECT list[index]."  (Selecting a UDT subfield 
> is already supported.)






[jira] [Updated] (CASSANDRA-7396) Allow selecting Map key, List index

2017-05-23 Thread Anonymous (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anonymous updated CASSANDRA-7396:
-
Status: Patch Available  (was: Awaiting Feedback)

> Allow selecting Map key, List index
> ---
>
> Key: CASSANDRA-7396
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7396
> Project: Cassandra
>  Issue Type: New Feature
>  Components: CQL
>Reporter: Jonathan Ellis
>Assignee: Sylvain Lebresne
>  Labels: cql, docs-impacting
> Fix For: 4.x
>
> Attachments: 7396_unit_tests.txt
>
>
> Allow "SELECT map['key']" and "SELECT list[index]."  (Selecting a UDT subfield 
> is already supported.)






[jira] [Updated] (CASSANDRA-7396) Allow selecting Map key, List index

2017-05-23 Thread Anonymous (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anonymous updated CASSANDRA-7396:
-
Status: Open  (was: Ready to Commit)

> Allow selecting Map key, List index
> ---
>
> Key: CASSANDRA-7396
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7396
> Project: Cassandra
>  Issue Type: New Feature
>  Components: CQL
>Reporter: Jonathan Ellis
>Assignee: Sylvain Lebresne
>  Labels: cql, docs-impacting
> Fix For: 4.x
>
> Attachments: 7396_unit_tests.txt
>
>
> Allow "SELECT map['key']" and "SELECT list[index]."  (Selecting a UDT subfield 
> is already supported.)






[jira] [Updated] (CASSANDRA-7396) Allow selecting Map key, List index

2017-05-23 Thread Anonymous (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anonymous updated CASSANDRA-7396:
-
Status: Ready to Commit  (was: Patch Available)

> Allow selecting Map key, List index
> ---
>
> Key: CASSANDRA-7396
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7396
> Project: Cassandra
>  Issue Type: New Feature
>  Components: CQL
>Reporter: Jonathan Ellis
>Assignee: Sylvain Lebresne
>  Labels: cql, docs-impacting
> Fix For: 4.x
>
> Attachments: 7396_unit_tests.txt
>
>
> Allow "SELECT map['key']" and "SELECT list[index]."  (Selecting a UDT subfield 
> is already supported.)






[jira] [Updated] (CASSANDRA-13001) pluggable slow query logging / handling

2017-05-17 Thread Anonymous (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anonymous updated CASSANDRA-13001:
--
Status: Ready to Commit  (was: Patch Available)

> pluggable slow query logging / handling
> ---
>
> Key: CASSANDRA-13001
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13001
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Jon Haddad
>Assignee: Murukesh Mohanan
> Fix For: 4.0
>
> Attachments: 
> 0001-Add-basic-pluggable-logging-to-debug.log-and-table.patch, 
> 0001-Add-multiple-logging-methods-for-slow-queries-CASSAN.patch
>
>
> Currently CASSANDRA-12403 logs slow queries as DEBUG to a file.  It would be 
> better to have this behind an interface through which we can log to 
> alternative locations, such as a table on the cluster or a remote system 
> (statsd, graphite, etc).  






[jira] [Updated] (CASSANDRA-13366) Possible AssertionError in UnfilteredRowIteratorWithLowerBound

2017-03-23 Thread Anonymous (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anonymous updated CASSANDRA-13366:
--
Resolution: Fixed
Status: Resolved  (was: Ready to Commit)

> Possible AssertionError in UnfilteredRowIteratorWithLowerBound
> --
>
> Key: CASSANDRA-13366
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13366
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sylvain Lebresne
>Assignee: Sylvain Lebresne
> Fix For: 3.11.x
>
>
> In the code introduced by CASSANDRA-8180, we build a lower bound for a 
> partition (sometimes) based on the min clustering values of the stats file. 
> We can't do that if the sstable has a range tombstone marker, and the code 
> does check for this, but unfortunately the check is done using the stats 
> {{minLocalDeletionTime}}, a value that isn't populated properly in pre-3.0 
> sstables. This means that if you upgrade from 2.1/2.2 to 3.4+, you may end up 
> getting an exception like
> {noformat}
> WARN  [ReadStage-2] 2017-03-20 13:29:39,165  
> AbstractLocalAwareExecutorService.java:167 - Uncaught exception on thread 
> Thread[ReadStage-2,5,main]: {}
> java.lang.AssertionError: Lower bound [INCL_START_BOUND(Foo, 
> -9223372036854775808, -9223372036854775808) ]is bigger than first returned 
> value [Marker INCL_START_BOUND(Foo)@1490013810540999] for sstable 
> /var/lib/cassandra/data/system/size_estimates-618f817b005f3678b8a453f3930b8e86/system-size_estimates-ka-1-Data.db
> at 
> org.apache.cassandra.db.rows.UnfilteredRowIteratorWithLowerBound.computeNext(UnfilteredRowIteratorWithLowerBound.java:122)
> {noformat}
> and this persists until the sstable is upgraded.





[jira] [Updated] (CASSANDRA-12653) In-flight shadow round requests

2017-03-22 Thread Anonymous (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12653?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anonymous updated CASSANDRA-12653:
--
Resolution: Fixed
Status: Resolved  (was: Ready to Commit)

> In-flight shadow round requests
> ---
>
> Key: CASSANDRA-12653
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12653
> Project: Cassandra
>  Issue Type: Bug
>  Components: Distributed Metadata
>Reporter: Stefan Podkowinski
>Assignee: Stefan Podkowinski
>Priority: Minor
> Fix For: 2.2.x, 3.0.x, 3.11.x, 4.x
>
>
> Bootstrapping or replacing a node in the cluster requires gathering and 
> checking some host IDs or tokens by doing a gossip "shadow round" once before 
> joining the cluster. This is done by sending a gossip SYN to all seeds until 
> we receive a response with the cluster state, from where we can move on in 
> the bootstrap process. Receiving a response marks the shadow round as done 
> and calls {{Gossiper.resetEndpointStateMap}} to clean up the received state 
> again.
> The issue here is that at this point there might be other in-flight requests 
> and it's very likely that shadow round responses from other seeds will be 
> received afterwards, while the current state of the bootstrap process doesn't 
> expect this to happen (e.g. gossiper may or may not be enabled). 
> One side effect is that MigrationTasks are spawned for each shadow round 
> reply except the first. Tasks may or may not execute depending on whether 
> {{Gossiper.resetEndpointStateMap}} had been called by execution time, which 
> affects the outcome of {{FailureDetector.instance.isAlive(endpoint)}} at the 
> start of the task. You'll see error log messages such as the following when 
> this happens:
> {noformat}
> INFO  [SharedPool-Worker-1] 2016-09-08 08:36:39,255 Gossiper.java:993 - 
> InetAddress /xx.xx.xx.xx is now UP
> ERROR [MigrationStage:1]2016-09-08 08:36:39,255 FailureDetector.java:223 
> - unknown endpoint /xx.xx.xx.xx
> {noformat}
> Although it isn't pretty, I currently don't see any serious harm from this, 
> but it would be good to get a second opinion (feel free to close as "won't 
> fix").
> /cc [~Stefania] [~thobbs]





[jira] [Updated] (CASSANDRA-9633) Add ability to encrypt sstables

2017-03-16 Thread Anonymous (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anonymous updated CASSANDRA-9633:
-
Status: Ready to Commit  (was: Patch Available)

> Add ability to encrypt sstables
> ---
>
> Key: CASSANDRA-9633
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9633
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Jason Brown
>Assignee: Jason Brown
>  Labels: encryption, security, sstable
> Fix For: 4.x
>
>
> Add option to allow encrypting of sstables.
> I have a version of this functionality built on cassandra 2.0 that 
> piggy-backs on the existing sstable compression functionality and ICompressor 
> interface (similar in nature to what DataStax Enterprise does). However, if 
> we're adding the feature to the main OSS product, I'm not sure if we want to 
> use the pluggable compression framework or if it's worth investigating a 
> different path. I think there's a lot of upside in reusing the sstable 
> compression scheme, but perhaps add a new component in cqlsh for table 
> encryption and a corresponding field in CFMD.
> Encryption configuration in the yaml can use the same mechanism as 
> CASSANDRA-6018 (which is currently pending internal review).





[jira] [Updated] (CASSANDRA-13289) Make it possible to monitor an ideal consistency level separate from actual consistency level

2017-03-13 Thread Anonymous (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anonymous updated CASSANDRA-13289:
--
Status: Ready to Commit  (was: Patch Available)

> Make it possible to monitor an ideal consistency level separate from actual 
> consistency level
> -
>
> Key: CASSANDRA-13289
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13289
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Core
>Reporter: Ariel Weisberg
>Assignee: Ariel Weisberg
> Fix For: 4.0
>
>
> As an operator there are several issues related to multi-datacenter 
> replication and consistency you may want to have more information on from 
> your production database.
> For instance: if your application writes at LOCAL_QUORUM, how often are those 
> writes failing to achieve EACH_QUORUM at other data centers? If you failed 
> your application over to one of those data centers, roughly how inconsistent 
> might it be, given the number of writes that didn't propagate since the last 
> incremental repair?
> You might also want to know roughly what the latency of writes would be if 
> you switched to a different consistency level. For instance you are writing 
> at LOCAL_QUORUM and want to know what would happen if you switched to 
> EACH_QUORUM.
> The proposed change is to allow an ideal_consistency_level to be specified in 
> cassandra.yaml as well as get/set via JMX. If no ideal consistency level is 
> specified no additional tracking is done.
> If an ideal consistency level is specified, then the 
> {{AbstractWriteResponseHandler}} will contain a delegate WriteResponseHandler 
> that tracks whether the ideal consistency level is met before a write times 
> out. It also tracks the latency of successful writes in achieving the ideal 
> CL.
> These two metrics would be reported on a per keyspace basis.
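In configuration terms the proposal reduces to one optional knob; a sketch of the cassandra.yaml fragment (the key name follows the proposal text above and could differ in the final patch):

```
# Track, but do not enforce, whether writes also satisfy this consistency
# level, and report per-keyspace metrics. Leave unset to disable tracking.
ideal_consistency_level: EACH_QUORUM
```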





[jira] [Updated] (CASSANDRA-13289) Make it possible to monitor an ideal consistency level separate from actual consistency level

2017-03-13 Thread Anonymous (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anonymous updated CASSANDRA-13289:
--
Status: In Progress  (was: Ready to Commit)

> Make it possible to monitor an ideal consistency level separate from actual 
> consistency level
> -
>
> Key: CASSANDRA-13289
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13289
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Core
>Reporter: Ariel Weisberg
>Assignee: Ariel Weisberg
> Fix For: 4.0
>
>
> As an operator there are several issues related to multi-datacenter 
> replication and consistency you may want to have more information on from 
> your production database.
> For instance: if your application writes at LOCAL_QUORUM, how often are those 
> writes failing to achieve EACH_QUORUM at other data centers? If you failed 
> your application over to one of those data centers, roughly how inconsistent 
> might it be, given the number of writes that didn't propagate since the last 
> incremental repair?
> You might also want to know roughly what the latency of writes would be if 
> you switched to a different consistency level. For instance you are writing 
> at LOCAL_QUORUM and want to know what would happen if you switched to 
> EACH_QUORUM.
> The proposed change is to allow an ideal_consistency_level to be specified in 
> cassandra.yaml as well as get/set via JMX. If no ideal consistency level is 
> specified no additional tracking is done.
> If an ideal consistency level is specified, then the 
> {{AbstractWriteResponseHandler}} will contain a delegate WriteResponseHandler 
> that tracks whether the ideal consistency level is met before a write times 
> out. It also tracks the latency of successful writes in achieving the ideal 
> CL.
> These two metrics would be reported on a per keyspace basis.





[jira] [Updated] (CASSANDRA-12233) Cassandra stress should obfuscate password in cmd in graph

2017-02-10 Thread Anonymous (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anonymous updated CASSANDRA-12233:
--
Status: Ready to Commit  (was: Patch Available)

> Cassandra stress should obfuscate password in cmd in graph
> --
>
> Key: CASSANDRA-12233
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12233
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Christopher Batey
>Assignee: Christopher Batey
>Priority: Minor
> Fix For: 3.11.x
>
> Attachments: 0001-Obfuscate-password-in-stress-graphs.patch
>
>
> The graph currently includes the entire command, which could contain a user / 
> password.





[jira] [Updated] (CASSANDRA-12121) CommitLogReplayException on Start Up

2017-02-06 Thread Anonymous (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anonymous updated CASSANDRA-12121:
--
Reproduced In: 3.9, 3.7  (was: 3.7)
   Status: Awaiting Feedback  (was: Open)

> CommitLogReplayException on Start Up
> 
>
> Key: CASSANDRA-12121
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12121
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Tom Burdick
> Attachments: 000_epoch.cql, mutation7038154871517187161dat, 
> sane_distribution.cql
>
>
> Using cassandra 3.7, executing the attached .cql schema change files, and 
> then restarting one of the cassandra nodes in a cluster, I get this traceback:
> I had to change the 000_epoch.cql file slightly to remove some important 
> pieces, apologies if that makes this more difficult to verify.
> {noformat}
> ERROR [main] 2016-06-30 09:25:23,089 JVMStabilityInspector.java:82 - Exiting 
> due to error while processing commit log during initialization.
> org.apache.cassandra.db.commitlog.CommitLogReplayer$CommitLogReplayException: 
> Unexpected error deserializing mutation; saved to 
> /tmp/mutation7038154871517187161dat.  This may be caused by replaying a 
> mutation against a table with the same name but incompatible schema.  
> Exception follows: org.apache.cassandra.serializers.MarshalException: Not 
> enough bytes to read 0th field java.nio.HeapByteBuffer[pos=0 lim=4 cap=4]
> at 
> org.apache.cassandra.db.commitlog.CommitLogReplayer.handleReplayError(CommitLogReplayer.java:616)
>  [apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.db.commitlog.CommitLogReplayer.replayMutation(CommitLogReplayer.java:573)
>  [apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.db.commitlog.CommitLogReplayer.replaySyncSection(CommitLogReplayer.java:526)
>  [apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.db.commitlog.CommitLogReplayer.recover(CommitLogReplayer.java:412)
>  [apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.db.commitlog.CommitLogReplayer.recover(CommitLogReplayer.java:228)
>  [apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.db.commitlog.CommitLog.recover(CommitLog.java:185) 
> [apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.db.commitlog.CommitLog.recover(CommitLog.java:165) 
> [apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:314) 
> [apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:585)
>  [apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:714) 
> [apache-cassandra-3.7.0.jar:3.7.0]
> {noformat}





[jira] [Updated] (CASSANDRA-12928) Assert error, 3.9 mutation stage

2016-12-22 Thread Anonymous (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anonymous updated CASSANDRA-12928:
--
Status: Patch Available  (was: Awaiting Feedback)

> Assert error, 3.9 mutation stage
> 
>
> Key: CASSANDRA-12928
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12928
> Project: Cassandra
>  Issue Type: Bug
> Environment: 3.9 
>Reporter: Jeff Jirsa
> Attachments: 0001-Bump-version-of-netty-all-to-4.1.6.patch
>
>
> {code}
> WARN  [MutationStage-341] 2016-11-17 18:39:18,781 
> AbstractLocalAwareExecutorService.java:169 - Uncaught exception on thread 
> Thread[MutationStage-341,5,main]: {}
> java.lang.AssertionError: null
>   at 
> io.netty.util.Recycler$WeakOrderQueue.(Recycler.java:225) 
> ~[netty-all-4.0.39.Final.jar:4.0.39.Final]
>   at 
> io.netty.util.Recycler$DefaultHandle.recycle(Recycler.java:180) 
> ~[netty-all-4.0.39.Final.jar:4.0.39.Final]
>   at io.netty.util.Recycler.recycle(Recycler.java:141) 
> ~[netty-all-4.0.39.Final.jar:4.0.39.Final]
>   at 
> org.apache.cassandra.utils.btree.BTree$Builder.recycle(BTree.java:836) 
> ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.utils.btree.BTree$Builder.build(BTree.java:1089) 
> ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.db.partitions.PartitionUpdate.build(PartitionUpdate.java:587)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.db.partitions.PartitionUpdate.maybeBuild(PartitionUpdate.java:577)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.db.partitions.PartitionUpdate.holder(PartitionUpdate.java:388)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.db.partitions.AbstractBTreePartition.unfilteredIterator(AbstractBTreePartition.java:177)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.db.partitions.AbstractBTreePartition.unfilteredIterator(AbstractBTreePartition.java:172)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.db.partitions.PartitionUpdate$PartitionUpdateSerializer.serializedSize(PartitionUpdate.java:868)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.db.Mutation$MutationSerializer.serializedSize(Mutation.java:456)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.db.commitlog.CommitLog.add(CommitLog.java:257) 
> ~[apache-cassandra-3.9.jar:3.9]
>   at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:493) 
> ~[apache-cassandra-3.9.jar:3.9]
>   at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:396) 
> ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.db.Mutation.applyFuture(Mutation.java:215) 
> ~[apache-cassandra-3.9.jar:3.9]
>   at org.apache.cassandra.db.Mutation.apply(Mutation.java:227) 
> ~[apache-cassandra-3.9.jar:3.9]
>   at org.apache.cassandra.db.Mutation.apply(Mutation.java:241) 
> ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.service.StorageProxy$8.runMayThrow(StorageProxy.java:1347)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.service.StorageProxy$LocalMutationRunnable.run(StorageProxy.java:2539)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_91]
>   at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$LocalSessionFutureTask.run(AbstractLocalAwareExecutorService.java:136)
>  [apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:109) 
> [apache-cassandra-3.9.jar:3.9]
>   at java.lang.Thread.run(Thread.java:745) [na:1.8.0_91]
> {code}





[jira] [Updated] (CASSANDRA-8780) cassandra-stress should support multiple table operations

2016-12-20 Thread Anonymous (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anonymous updated CASSANDRA-8780:
-
Status: Ready to Commit  (was: Patch Available)

> cassandra-stress should support multiple table operations
> -
>
> Key: CASSANDRA-8780
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8780
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Benedict
>Assignee: Ben Slater
>  Labels: stress
> Fix For: 3.x
>
> Attachments: 8780-trunk.patch
>
>






[jira] [Updated] (CASSANDRA-10857) Allow dropping COMPACT STORAGE flag from tables in 3.X

2016-11-03 Thread Anonymous (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anonymous updated CASSANDRA-10857:
--
Status: Open  (was: Ready to Commit)

> Allow dropping COMPACT STORAGE flag from tables in 3.X
> --
>
> Key: CASSANDRA-10857
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10857
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL, Distributed Metadata
>Reporter: Aleksey Yeschenko
>Assignee: Alex Petrov
> Fix For: 3.x
>
>
> Thrift allows users to define flexible mixed column families - where certain 
> columns have explicitly pre-defined names and potentially non-default 
> validation types, and can be indexed.
> Example:
> {code}
> create column family foo
> and default_validation_class = UTF8Type
> and column_metadata = [
> {column_name: bar, validation_class: Int32Type, index_type: KEYS},
> {column_name: baz, validation_class: UUIDType, index_type: KEYS}
> ];
> {code}
> Columns named {{bar}} and {{baz}} will be validated as {{Int32Type}} and 
> {{UUIDType}}, respectively, and be indexed. Columns with any other name will 
> be validated by {{UTF8Type}} and will not be indexed.
> With CASSANDRA-8099, {{bar}} and {{baz}} would be mapped to static columns 
> internally. However, being {{WITH COMPACT STORAGE}}, the table will only 
> expose {{bar}} and {{baz}} columns. Accessing any dynamic columns (any column 
> not named {{bar}} and {{baz}}) right now requires going through Thrift.
> This is blocking Thrift -> CQL migration for users who have mixed 
> dynamic/static column families. That said, it *shouldn't* be hard to allow 
> users to drop the {{compact}} flag to expose the table as it is internally 
> now, and be able to access all columns.
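A sketch of what dropping the flag could look like at the CQL level (the syntax below is illustrative of the proposal, not a committed interface):

```sql
-- Hypothetical one-line schema change: remove the compact flag so the
-- table exposes all internal columns (including the previously hidden
-- dynamic ones) through regular CQL.
ALTER TABLE foo DROP COMPACT STORAGE;
```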





[jira] [Updated] (CASSANDRA-10857) Allow dropping COMPACT STORAGE flag from tables in 3.X

2016-11-03 Thread Anonymous (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anonymous updated CASSANDRA-10857:
--
Status: Ready to Commit  (was: Patch Available)

> Allow dropping COMPACT STORAGE flag from tables in 3.X
> --
>
> Key: CASSANDRA-10857
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10857
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL, Distributed Metadata
>Reporter: Aleksey Yeschenko
>Assignee: Alex Petrov
> Fix For: 3.x
>
>
> Thrift allows users to define flexible mixed column families - where certain 
> columns would have explicitly pre-defined names, potentially non-default 
> validation types, and be indexed.
> Example:
> {code}
> create column family foo
> and default_validation_class = UTF8Type
> and column_metadata = [
> {column_name: bar, validation_class: Int32Type, index_type: KEYS},
> {column_name: baz, validation_class: UUIDType, index_type: KEYS}
> ];
> {code}
> Columns named {{bar}} and {{baz}} will be validated as {{Int32Type}} and 
> {{UUIDType}}, respectively, and be indexed. Columns with any other name will 
> be validated by {{UTF8Type}} and will not be indexed.
> With CASSANDRA-8099, {{bar}} and {{baz}} would be mapped to static columns 
> internally. However, being {{WITH COMPACT STORAGE}}, the table will only 
> expose {{bar}} and {{baz}} columns. Accessing any dynamic columns (any column 
> not named {{bar}} and {{baz}}) right now requires going through Thrift.
> This is blocking Thrift -> CQL migration for users who have mixed 
> dynamic/static column families. That said, it *shouldn't* be hard to allow 
> users to drop the {{compact}} flag to expose the table as it is internally 
> now, and be able to access all columns.





[jira] [Updated] (CASSANDRA-12365) Document cassandra stress

2016-10-17 Thread Anonymous (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anonymous updated CASSANDRA-12365:
--
Status: Ready to Commit  (was: Patch Available)

> Document cassandra stress
> -
>
> Key: CASSANDRA-12365
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12365
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Documentation and Website
>Reporter: Christopher Batey
>Assignee: Christopher Batey
>Priority: Minor
>  Labels: stress
> Fix For: 3.x
>
>
> I've started on this so creating a ticket to avoid duplicate work.





[jira] [Updated] (CASSANDRA-12698) add json/yaml format option to nodetool status

2016-10-06 Thread Anonymous (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anonymous updated CASSANDRA-12698:
--
Status: Ready to Commit  (was: Patch Available)

> add json/yaml format option to nodetool status
> --
>
> Key: CASSANDRA-12698
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12698
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Shogo Hoshii
>Assignee: Shogo Hoshii
> Attachments: ntstatus_json.patch, sample.json, sample.yaml
>
>
> Hello,
> This patch enables nodetool status to be output in json/yaml format.
> I think this format could be a useful interface for tools that operate or 
> deploy Cassandra.
> It could free such tools from having to parse the plain-text output in their 
> own way.
> It would be great if someone would review this patch.
> Thank you.
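As a sketch of how structured output like this could be consumed (the field names below are hypothetical; the actual schema is whatever the attached sample.json defines), a deployment tool might do:

```python
import json

# Hypothetical shape of JSON-formatted `nodetool status` output; the real
# field names come from the attached sample.json and may differ.
sample = json.loads("""
{"datacenters": {"dc1": {"nodes": [
  {"address": "10.0.0.1", "state": "UN", "load": "1.2 GB"},
  {"address": "10.0.0.2", "state": "DN", "load": "1.1 GB"}
]}}}
""")

# Find nodes that are not Up/Normal without any text-column parsing.
down = [n["address"]
        for dc in sample["datacenters"].values()
        for n in dc["nodes"] if n["state"] != "UN"]
print(down)
```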





[jira] [Updated] (CASSANDRA-12729) Cassandra-Stress: Use single seed in UUID generation

2016-10-05 Thread Anonymous (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anonymous updated CASSANDRA-12729:
--
Status: Ready to Commit  (was: Patch Available)

> Cassandra-Stress: Use single seed in UUID generation
> 
>
> Key: CASSANDRA-12729
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12729
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Chris Splinter
>Priority: Minor
> Fix For: 3.x
>
> Attachments: CASSANDRA-12729-trunk.patch
>
>
> While testing the [new sequence 
> distribution|https://issues.apache.org/jira/browse/CASSANDRA-12490] for the 
> user module of cassandra-stress I noticed that half of the expected rows (848 
> / 1696) were produced when using a single uuid primary key.
> {code}
> table: player_info_by_uuid
> table_definition: |
>   CREATE TABLE player_info_by_uuid (
> player_uuid uuid,
> player_full_name text,
> team_name text,
> weight double,
> height double,
> position text,
> PRIMARY KEY (player_uuid)
>   )
> columnspec:
>   - name: player_uuid
> size: fixed(32) # no. of chars of UUID
> population: seq(1..1696)  # 53 active players per team, 32 teams = 1696 
> players
> insert:
>   partitions: fixed(1)  # 1 partition per batch
>   batchtype: UNLOGGED   # use unlogged batches
>   select: fixed(1)/1 # no chance of skipping a row when generating inserts
> {code}
> The following debug output showed that we were over-incrementing the seed
> {code}
> SeedManager.next.index: 341824
> SeriesGenerator.Seed.next: 0
> SeriesGenerator.Seed.start: 1
> SeriesGenerator.Seed.totalCount: 20
> SeriesGenerator.Seed.next % totalCount: 0
> SeriesGenerator.Seed.start + (next % totalCount): 1
> PartitionOperation.ready.seed: org.apache.cassandra.stress.generate.Seed@1
> DistributionSequence.nextWithWrap.next: 0
> DistributionSequence.nextWithWrap.start: 1
> DistributionSequence.nextWithWrap.totalCount: 20
> DistributionSequence.nextWithWrap.next % totalCount: 0
> DistributionSequence.nextWithWrap.start + (next % totalCount): 1
> DistributionSequence.nextWithWrap.next: 1
> DistributionSequence.nextWithWrap.start: 1
> DistributionSequence.nextWithWrap.totalCount: 20
> DistributionSequence.nextWithWrap.next % totalCount: 1
> DistributionSequence.nextWithWrap.start + (next % totalCount): 2
> Generated uuid: --0001--0002
> {code}
> This patch fixes the issue by calling {{identityDistribution.next()}} once 
> [instead of 
> twice|https://github.com/apache/cassandra/blob/trunk/tools/stress/src/org/apache/cassandra/stress/generate/values/UUIDs.java/#L37]
>  when generating UUIDs.
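The arithmetic of the bug can be reproduced in a few lines (a sketch, not the stress code itself): drawing from a bounded sequence twice per generated UUID consumes the population twice as fast, so a seq(1..1696) population yields only 848 distinct keys.

```python
# Sketch: each extra draw per generated value halves the number of rows.
def generate(population, draws_per_uuid):
    it = iter(population)
    keys = []
    while True:
        try:
            draws = [next(it) for _ in range(draws_per_uuid)]
        except StopIteration:
            break
        keys.append(draws[-1])
    return keys

print(len(generate(range(1, 1697), 2)))  # buggy behaviour: 848 rows
print(len(generate(range(1, 1697), 1)))  # fixed behaviour: 1696 rows
```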





[jira] [Updated] (CASSANDRA-8457) nio MessagingService

2016-09-26 Thread Anonymous (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anonymous updated CASSANDRA-8457:
-
Status: Ready to Commit  (was: Patch Available)

> nio MessagingService
> 
>
> Key: CASSANDRA-8457
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8457
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Jonathan Ellis
>Assignee: Jason Brown
>Priority: Minor
>  Labels: netty, performance
> Fix For: 4.x
>
>
> Thread-per-peer (actually two each incoming and outbound) is a big 
> contributor to context switching, especially for larger clusters.  Let's look 
> at switching to nio, possibly via Netty.
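The core idea can be sketched with Python's selectors module for brevity (the ticket itself targets Java nio, possibly via Netty): a single event-loop thread monitors many peer channels, instead of dedicating two threads to each peer.

```python
import selectors
import socket

# One selector-driven thread multiplexes many connections.
sel = selectors.DefaultSelector()
pairs = [socket.socketpair() for _ in range(3)]
for sender, receiver in pairs:
    receiver.setblocking(False)
    sel.register(receiver, selectors.EVENT_READ)

pairs[0][0].send(b"ping")          # make exactly one channel readable
events = sel.select(timeout=1)     # wakes once, for the readable channel only
print("readable channels:", len(events))
for a, b in pairs:
    a.close(); b.close()
```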





[jira] [Updated] (CASSANDRA-8457) nio MessagingService

2016-09-26 Thread Anonymous (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anonymous updated CASSANDRA-8457:
-
Status: Open  (was: Ready to Commit)

> nio MessagingService
> 
>
> Key: CASSANDRA-8457
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8457
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Jonathan Ellis
>Assignee: Jason Brown
>Priority: Minor
>  Labels: netty, performance
> Fix For: 4.x
>
>
> Thread-per-peer (actually two each incoming and outbound) is a big 
> contributor to context switching, especially for larger clusters.  Let's look 
> at switching to nio, possibly via Netty.





[jira] [Updated] (CASSANDRA-11850) cannot use cql since upgrading python to 2.7.11+

2016-07-20 Thread Anonymous (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anonymous updated CASSANDRA-11850:
--
Status: Ready to Commit  (was: Patch Available)

> cannot use cql since upgrading python to 2.7.11+
> 
>
> Key: CASSANDRA-11850
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11850
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
> Environment: Development
>Reporter: Andrew Madison
>Assignee: Stefania
>  Labels: cqlsh
> Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x
>
>
> OS: Debian GNU/Linux stretch/sid 
> Kernel: 4.5.0-2-amd64 #1 SMP Debian 4.5.4-1 (2016-05-16) x86_64 GNU/Linux
> Python version: 2.7.11+ (default, May  9 2016, 15:54:33)
> [GCC 5.3.1 20160429]
> cqlsh --version: cqlsh 5.0.1
> cassandra -v: 3.5 (also occurs with 3.0.6)
> Issue:
> when running cqlsh, it returns the following error:
> cqlsh -u dbarpt_usr01
> Password: *
> Connection error: ('Unable to connect to any servers', {'odbasandbox1': 
> TypeError('ref() does not take keyword arguments',)})
> I cleared PYTHONPATH:
> python -c "import json; print dir(json); print json.__version__"
> ['JSONDecoder', 'JSONEncoder', '__all__', '__author__', '__builtins__', 
> '__doc__', '__file__', '__name__', '__package__', '__path__', '__version__', 
> '_default_decoder', '_default_encoder', 'decoder', 'dump', 'dumps', 
> 'encoder', 'load', 'loads', 'scanner']
> 2.0.9
> Java-based clients can connect to Cassandra with no issue; only cqlsh and 
> Python clients cannot.
> nodetool status also works.
> Thank you for your help.
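The root cause can be demonstrated in isolation (a minimal reproduction, independent of cqlsh and the driver): CPython's C-implemented {{weakref.ref}} accepts its callback only positionally, so code that passes it by keyword fails with the reported TypeError.

```python
import weakref

class Session(object):
    pass

s = Session()
weakref.ref(s, lambda r: None)               # positional callback: works

try:
    weakref.ref(s, callback=lambda r: None)  # keyword form: TypeError
    raised = False
except TypeError:
    raised = True
print("keyword form raised TypeError:", raised)
```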





[jira] [Updated] (CASSANDRA-11850) cannot use cql since upgrading python to 2.7.11+

2016-07-05 Thread Anonymous (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anonymous updated CASSANDRA-11850:
--
Status: Ready to Commit  (was: Patch Available)

> cannot use cql since upgrading python to 2.7.11+
> 
>
> Key: CASSANDRA-11850
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11850
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
> Environment: Development
>Reporter: Andrew Madison
>Assignee: Stefania
>  Labels: cqlsh
> Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x
>
>
> OS: Debian GNU/Linux stretch/sid 
> Kernel: 4.5.0-2-amd64 #1 SMP Debian 4.5.4-1 (2016-05-16) x86_64 GNU/Linux
> Python version: 2.7.11+ (default, May  9 2016, 15:54:33)
> [GCC 5.3.1 20160429]
> cqlsh --version: cqlsh 5.0.1
> cassandra -v: 3.5 (also occurs with 3.0.6)
> Issue:
> when running cqlsh, it returns the following error:
> cqlsh -u dbarpt_usr01
> Password: *
> Connection error: ('Unable to connect to any servers', {'odbasandbox1': 
> TypeError('ref() does not take keyword arguments',)})
> I cleared PYTHONPATH:
> python -c "import json; print dir(json); print json.__version__"
> ['JSONDecoder', 'JSONEncoder', '__all__', '__author__', '__builtins__', 
> '__doc__', '__file__', '__name__', '__package__', '__path__', '__version__', 
> '_default_decoder', '_default_encoder', 'decoder', 'dump', 'dumps', 
> 'encoder', 'load', 'loads', 'scanner']
> 2.0.9
> Java-based clients can connect to Cassandra with no issue; only cqlsh and 
> Python clients cannot.
> nodetool status also works.
> Thank you for your help.





[jira] [Updated] (CASSANDRA-11638) Add cassandra-stress test source

2016-07-01 Thread Anonymous (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anonymous updated CASSANDRA-11638:
--
Status: Ready to Commit  (was: Patch Available)

> Add cassandra-stress test source
> 
>
> Key: CASSANDRA-11638
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11638
> Project: Cassandra
>  Issue Type: Test
>  Components: Tools
>Reporter: Christopher Batey
>Assignee: Christopher Batey
>Priority: Minor
>  Labels: stress
> Fix For: 3.x
>
> Attachments: 
> 0001-Add-a-test-source-directory-for-Cassandra-stress.patch
>
>
> This adds a test root for cassandra-stress and a couple of noddy tests for a 
> JIRA I did last week, to prove it works / fails the build if they fail.
> I put the source in {{tools/stress/test/unit}} and the classes in 
> {{build/test/stress-classes}} (rather than putting them in with the main test 
> classes).
> Patch attached or at: https://github.com/chbatey/cassandra-1/tree/stress-tests





[jira] [Updated] (CASSANDRA-11537) Give clear error when certain nodetool commands are issued before server is ready

2016-06-19 Thread Anonymous (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anonymous updated CASSANDRA-11537:
--
Status: Ready to Commit  (was: Patch Available)

> Give clear error when certain nodetool commands are issued before server is 
> ready
> -
>
> Key: CASSANDRA-11537
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11537
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Edward Capriolo
>Assignee: Edward Capriolo
>Priority: Minor
>  Labels: lhf
>
> As an ops person upgrading and servicing Cassandra servers, I require a 
> clearer message when I issue a nodetool command that the server is not ready 
> for, so that I am not confused.
> Technical description:
> If you deploy a new binary, restart, and issue nodetool 
> scrub/compact/upgradesstables etc., you get an unfriendly assertion. An 
> exception would be easier to understand. Also, if a user has turned assertions 
> off, it is unclear what might happen. 
> {noformat}
> EC1: Throw exception to make it clear server is still in start up process. 
> :~# nodetool upgradesstables
> error: null
> -- StackTrace --
> java.lang.AssertionError
> at org.apache.cassandra.db.Keyspace.open(Keyspace.java:97)
> at 
> org.apache.cassandra.service.StorageService.getValidKeyspace(StorageService.java:2573)
> at 
> org.apache.cassandra.service.StorageService.getValidColumnFamilies(StorageService.java:2661)
> at 
> org.apache.cassandra.service.StorageService.upgradeSSTables(StorageService.java:2421)
> {noformat}
> EC1: 
> Patch against 2.1 (branch)
> https://github.com/apache/cassandra/compare/trunk...edwardcapriolo:exception-on-startup?expand=1
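The shape of the proposed fix can be sketched language-agnostically (shown in Python here; the actual patch is Java): replace the bare assertion with a descriptive exception, so the failure stays meaningful even when assertions are disabled.

```python
# Sketch: a guard that fails with an operator-friendly message instead of
# a bare AssertionError when the server has not finished starting up.
class ServerNotReadyError(RuntimeError):
    pass

server_initialized = False  # stands in for real startup state

def open_keyspace(name):
    if not server_initialized:
        raise ServerNotReadyError(
            "Cannot open keyspace '%s': server is still starting up" % name)

try:
    open_keyspace("system")
except ServerNotReadyError as e:
    message = str(e)
print(message)
```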





[jira] [Updated] (CASSANDRA-7396) Allow selecting Map key, List index

2016-06-16 Thread Anonymous (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anonymous updated CASSANDRA-7396:
-
Status: Ready to Commit  (was: Patch Available)

> Allow selecting Map key, List index
> ---
>
> Key: CASSANDRA-7396
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7396
> Project: Cassandra
>  Issue Type: New Feature
>  Components: CQL
>Reporter: Jonathan Ellis
>Assignee: Robert Stupp
>  Labels: cql, docs-impacting
> Fix For: 3.x
>
> Attachments: 7396_unit_tests.txt
>
>
> Allow "SELECT map['key']" and "SELECT list[index]".  (Selecting a UDT subfield 
> is already supported.)





[jira] [Updated] (CASSANDRA-8844) Change Data Capture (CDC)

2016-05-26 Thread Anonymous (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8844?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anonymous updated CASSANDRA-8844:
-
Status: Ready to Commit  (was: Patch Available)

> Change Data Capture (CDC)
> -
>
> Key: CASSANDRA-8844
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8844
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Coordination, Local Write-Read Paths
>Reporter: Tupshin Harper
>Assignee: Joshua McKenzie
>Priority: Critical
> Fix For: 3.x
>
>
> "In databases, change data capture (CDC) is a set of software design patterns 
> used to determine (and track) the data that has changed so that action can be 
> taken using the changed data. Also, Change data capture (CDC) is an approach 
> to data integration that is based on the identification, capture and delivery 
> of the changes made to enterprise data sources."
> -Wikipedia
> As Cassandra is increasingly being used as the Source of Record (SoR) for 
> mission critical data in large enterprises, it is increasingly being called 
> upon to act as the central hub of traffic and data flow to other systems. In 
> order to try to address the general need, we (cc [~brianmhess]), propose 
> implementing a simple data logging mechanism to enable per-table CDC patterns.
> h2. The goals:
> # Use CQL as the primary ingestion mechanism, in order to leverage its 
> Consistency Level semantics, and in order to treat it as the single 
> reliable/durable SoR for the data.
> # To provide a mechanism for implementing good and reliable 
> (deliver-at-least-once with possible mechanisms for deliver-exactly-once ) 
> continuous semi-realtime feeds of mutations going into a Cassandra cluster.
> # To eliminate the developmental and operational burden of users so that they 
> don't have to do dual writes to other systems.
> # For users that are currently doing batch export from a Cassandra system, 
> give them the opportunity to make that realtime with a minimum of coding.
> h2. The mechanism:
> We propose a durable logging mechanism that functions similar to a commitlog, 
> with the following nuances:
> - Takes place on every node, not just the coordinator, so RF number of copies 
> are logged.
> - Separate log per table.
> - Per-table configuration. Only tables that are specified as CDC_LOG would do 
> any logging.
> - Per DC. We are trying to keep the complexity to a minimum to make this an 
> easy enhancement, but most likely use cases would prefer to only implement 
> CDC logging in one (or a subset) of the DCs that are being replicated to
> - In the critical path of ConsistencyLevel acknowledgment. Just as with the 
> commitlog, failure to write to the CDC log should fail that node's write. If 
> that means the requested consistency level was not met, then clients *should* 
> experience UnavailableExceptions.
> - Be written in a Row-centric manner such that it is easy for consumers to 
> reconstitute rows atomically.
> - Written in a simple format designed to be consumed *directly* by daemons 
> written in non-JVM languages
> h2. Nice-to-haves
> I strongly suspect that the following features will be asked for, but I also 
> believe that they can be deferred for a subsequent release, and to gauge 
> actual interest.
> - Multiple logs per table. This would make it easy to have multiple 
> "subscribers" to a single table's changes. A workaround would be to create a 
> forking daemon listener, but that's not a great answer.
> - Log filtering. Being able to apply filters, including UDF-based filters, 
> would make Cassandra a much more versatile feeder into other systems, and 
> again, reduce complexity that would otherwise need to be built into the 
> daemons.
> h2. Format and Consumption
> - Cassandra would only write to the CDC log, and never delete from it. 
> - Cleaning up consumed logfiles would be the client daemon's responsibility
> - Logfile size should probably be configurable.
> - Logfiles should be named with a predictable naming schema, making it 
> trivial to process them in order.
> - Daemons should be able to checkpoint their work, and resume from where they 
> left off. This means they would have to leave some file artifact in the CDC 
> log's directory.
> - A sophisticated daemon should be able to be written that could 
> -- Catch up, in written-order, even when it is multiple logfiles behind in 
> processing
> -- Be able to continuously "tail" the most recent logfile and get 
> low-latency(ms?) access to the data as it is written.
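A consumer daemon following the format rules above could checkpoint as simply as this sketch (file naming and the checkpoint convention are illustrative assumptions, not part of the proposal):

```python
import os
import tempfile

CHECKPOINT = "consumer.checkpoint"

def unprocessed_logs(log_dir):
    # Predictably named logfiles sort into write order lexically.
    logs = sorted(f for f in os.listdir(log_dir) if f.endswith(".cdclog"))
    marker = os.path.join(log_dir, CHECKPOINT)
    done = open(marker).read().strip() if os.path.exists(marker) else ""
    return [f for f in logs if f > done]

def commit(log_dir, filename):
    # Record the last fully processed logfile as the resume point.
    with open(os.path.join(log_dir, CHECKPOINT), "w") as fh:
        fh.write(filename)

d = tempfile.mkdtemp()
for i in range(3):
    open(os.path.join(d, "cdc-%06d.cdclog" % i), "w").close()
commit(d, "cdc-000000.cdclog")     # daemon finished the first logfile
print(unprocessed_logs(d))         # resumes from the second one
```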
> h2. Alternate approach
> In order to make consuming a change log easy and efficient to do with low 
> latency, the following could supplement the approach outlined above
> - Instead of writing to a logfile, by default, Cassandra could expose a 
> socket for a daemon to connect to, and from which 

[jira] [Updated] (CASSANDRA-7396) Allow selecting Map key, List index

2016-05-22 Thread Anonymous (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anonymous updated CASSANDRA-7396:
-
Status: Ready to Commit  (was: Patch Available)

> Allow selecting Map key, List index
> ---
>
> Key: CASSANDRA-7396
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7396
> Project: Cassandra
>  Issue Type: New Feature
>  Components: CQL
>Reporter: Jonathan Ellis
>Assignee: Robert Stupp
>  Labels: cql, docs-impacting
> Fix For: 3.x
>
> Attachments: 7396_unit_tests.txt
>
>
> Allow "SELECT map['key']" and "SELECT list[index]".  (Selecting a UDT subfield 
> is already supported.)





[jira] [Updated] (CASSANDRA-7396) Allow selecting Map key, List index

2016-05-22 Thread Anonymous (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anonymous updated CASSANDRA-7396:
-
Status: Open  (was: Ready to Commit)

> Allow selecting Map key, List index
> ---
>
> Key: CASSANDRA-7396
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7396
> Project: Cassandra
>  Issue Type: New Feature
>  Components: CQL
>Reporter: Jonathan Ellis
>Assignee: Robert Stupp
>  Labels: cql, docs-impacting
> Fix For: 3.x
>
> Attachments: 7396_unit_tests.txt
>
>
> Allow "SELECT map['key']" and "SELECT list[index]".  (Selecting a UDT subfield 
> is already supported.)





[jira] [Updated] (CASSANDRA-11703) To extend Cassandra CI for Linux on z(s390x).

2016-05-10 Thread Anonymous (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11703?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anonymous updated CASSANDRA-11703:
--
Status: Awaiting Feedback  (was: Open)

> To extend Cassandra CI for Linux on z(s390x).
> -
>
> Key: CASSANDRA-11703
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11703
> Project: Cassandra
>  Issue Type: Improvement
> Environment: Linux on z
>Reporter: Meerabo Shah
>Priority: Minor
>
> Hi Team, I am not sure if this is the right channel to discuss this topic, but 
> we would like to know if the Cassandra CI can be extended to support the z 
> system (s390x). Please let us know what support would be required from the IBM 
> side (we are aware of the H/W support needed).
>  Let me know if you want to start this discussion on some other channel.





[jira] [Updated] (CASSANDRA-11703) To extend Cassandra CI for Linux on z(s390x).

2016-05-10 Thread Anonymous (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11703?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anonymous updated CASSANDRA-11703:
--
Status: Open  (was: Awaiting Feedback)

> To extend Cassandra CI for Linux on z(s390x).
> -
>
> Key: CASSANDRA-11703
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11703
> Project: Cassandra
>  Issue Type: Improvement
> Environment: Linux on z
>Reporter: Meerabo Shah
>Priority: Minor
>
> Hi Team, I am not sure if this is the right channel to discuss this topic, but 
> we would like to know if the Cassandra CI can be extended to support the z 
> system (s390x). Please let us know what support would be required from the IBM 
> side (we are aware of the H/W support needed).
>  Let me know if you want to start this discussion on some other channel.





[jira] [Updated] (CASSANDRA-11011) DateTieredCompactionStrategy not compacting sstables in 2.1.12

2016-05-03 Thread Anonymous (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11011?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anonymous updated CASSANDRA-11011:
--
Resolution: Duplicate
Status: Resolved  (was: Awaiting Feedback)

> DateTieredCompactionStrategy not compacting  sstables in 2.1.12
> ---
>
> Key: CASSANDRA-11011
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11011
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
> Environment: Ubuntu 14.04.3 LTS
> 2.1.12
>Reporter: Alexander Piavlo
>Assignee: Marcus Eriksson
>  Labels: dtcs
>
> The following CF is never compacting from day one
> CREATE TABLE globaldb."DynamicParameter" (
> dp_id bigint PRIMARY KEY,
> dp_advertiser_id int,
> dp_application_id int,
> dp_application_user_id bigint,
> dp_banner_id int,
> dp_campaign_id int,
> dp_click_timestamp timestamp,
> dp_country text,
> dp_custom_parameters text,
> dp_flags bigint,
> dp_ip int,
> dp_machine_id text,
> dp_string text
> ) WITH bloom_filter_fp_chance = 0.01
> AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
> AND comment = ''
> AND compaction = {'max_sstable_age_days': '30', 'base_time_seconds': 
> '3600', 'timestamp_resolution': 'MILLISECONDS', 'enabled': 'true', 
> 'min_threshold': '2', 'class': 
> 'org.apache.cassandra.db.compaction.DateTieredCompactionStrategy'}
> AND compression = {'sstable_compression': 
> 'org.apache.cassandra.io.compress.LZ4Compressor'}
> AND dclocal_read_repair_chance = 0.2
> AND default_time_to_live = 10713600
> AND gc_grace_seconds = 1209600
> AND max_index_interval = 2048
> AND memtable_flush_period_in_ms = 0
> AND min_index_interval = 128
> AND read_repair_chance = 0.0
> AND speculative_retry = '99.0PERCENTILE';





[jira] [Updated] (CASSANDRA-11206) Support large partitions on the 3.0 sstable format

2016-04-15 Thread Anonymous (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anonymous updated CASSANDRA-11206:
--
Status: Open  (was: Ready to Commit)

> Support large partitions on the 3.0 sstable format
> --
>
> Key: CASSANDRA-11206
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11206
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Jonathan Ellis
>Assignee: Robert Stupp
> Fix For: 3.x
>
> Attachments: 11206-gc.png, trunk-gc.png
>
>
> Cassandra saves a sample of IndexInfo objects that store the offset within 
> each partition of every 64KB (by default) range of rows.  To find a row, we 
> binary search this sample, then scan the partition of the appropriate range.
> The problem is that this scales poorly as partitions grow: on a cache miss, 
> we deserialize the entire set of IndexInfo, which both creates a lot of GC 
> overhead (as noted in CASSANDRA-9754) but is also non-negligible i/o activity 
> (relative to reading a single 64KB row range) as partitions get truly large.
> We introduced an "offset map" in CASSANDRA-10314 that allows us to perform 
> the IndexInfo bsearch while only deserializing IndexInfo that we need to 
> compare against, i.e. log(N) deserializations.
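The offset-map access pattern can be illustrated with a toy binary search that decodes only the entries it actually compares (a sketch of the idea, not the sstable format):

```python
import math
import struct

# Serialized sample: one fixed-width offset per 64KB row range.
ROW_RANGE = 65536
entries = [struct.pack(">q", off)
           for off in range(0, 64 * 1024 * 1024, ROW_RANGE)]
deserialized = 0

def entry(i):
    global deserialized
    deserialized += 1          # count how many IndexInfo we actually decode
    return struct.unpack(">q", entries[i])[0]

def find_range(target):
    # Largest entry whose offset is <= target, decoding log(N) entries.
    lo, hi = 0, len(entries) - 1
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if entry(mid) <= target:
            lo = mid
        else:
            hi = mid - 1
    return lo

idx = find_range(1_000_000)
print(idx, deserialized)       # far fewer decodes than len(entries)
```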





[jira] [Updated] (CASSANDRA-11206) Support large partitions on the 3.0 sstable format

2016-04-15 Thread Anonymous (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anonymous updated CASSANDRA-11206:
--
Status: Ready to Commit  (was: Patch Available)

> Support large partitions on the 3.0 sstable format
> --
>
> Key: CASSANDRA-11206
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11206
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Jonathan Ellis
>Assignee: Robert Stupp
> Fix For: 3.x
>
> Attachments: 11206-gc.png, trunk-gc.png
>
>
> Cassandra saves a sample of IndexInfo objects that store the offset within 
> each partition of every 64KB (by default) range of rows.  To find a row, we 
> binary search this sample, then scan the partition of the appropriate range.
> The problem is that this scales poorly as partitions grow: on a cache miss, 
> we deserialize the entire set of IndexInfo, which both creates a lot of GC 
> overhead (as noted in CASSANDRA-9754) but is also non-negligible i/o activity 
> (relative to reading a single 64KB row range) as partitions get truly large.
> We introduced an "offset map" in CASSANDRA-10314 that allows us to perform 
> the IndexInfo bsearch while only deserializing IndexInfo that we need to 
> compare against, i.e. log(N) deserializations.





[jira] [Updated] (CASSANDRA-10805) Additional Compaction Logging

2016-04-05 Thread Anonymous (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10805?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anonymous updated CASSANDRA-10805:
--
Status: In Progress  (was: Ready to Commit)

> Additional Compaction Logging
> -
>
> Key: CASSANDRA-10805
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10805
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Compaction, Observability
>Reporter: Carl Yeksigian
>Assignee: Carl Yeksigian
>Priority: Minor
> Fix For: 3.x
>
>
> Currently, viewing the results of past compactions requires parsing the log 
> and looking at the compaction history system table, which doesn't have 
> information about, for example, flushed sstables not previously compacted.
> This is a proposal to extend the information captured for compaction. 
> Initially, this would be done through a JMX call, but if it proves to be 
> useful and not much overhead, it might be a feature that could be enabled for 
> the compaction strategy all the time.
> Initial log information would include:
> - The compaction strategy type controlling each column family
> - The set of sstables included in each compaction strategy
> - Information about flushes and compactions, including times and all involved 
> sstables
> - Information about sstables, including generation, size, and tokens
> - Any additional metadata the strategy wishes to add to a compaction or an 
> sstable, like the level of an sstable or the type of compaction being 
> performed





[jira] [Updated] (CASSANDRA-10805) Additional Compaction Logging

2016-04-05 Thread Anonymous (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10805?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anonymous updated CASSANDRA-10805:
--
Status: Ready to Commit  (was: Patch Available)

> Additional Compaction Logging
> -
>
> Key: CASSANDRA-10805
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10805
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Compaction, Observability
>Reporter: Carl Yeksigian
>Assignee: Carl Yeksigian
>Priority: Minor
> Fix For: 3.x
>
>
> Currently, viewing the results of past compactions requires parsing the log 
> and looking at the compaction history system table, which doesn't have 
> information about, for example, flushed sstables not previously compacted.
> This is a proposal to extend the information captured for compaction. 
> Initially, this would be exposed through a JMX call, but if it proves useful 
> and does not add much overhead, it could become a feature that is enabled for 
> the compaction strategy all the time.
> Initial log information would include:
> - The compaction strategy type controlling each column family
> - The set of sstables included in each compaction strategy
> - Information about flushes and compactions, including times and all involved 
> sstables
> - Information about sstables, including generation, size, and tokens
> - Any additional metadata the strategy wishes to add to a compaction or an 
> sstable, like the level of an sstable or the type of compaction being 
> performed
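As a sketch, the per-event information proposed above might be modelled as a record serialized to one log line per flush or compaction. All class, field, and sstable names below are hypothetical illustrations, not Cassandra APIs:

```java
import java.util.List;

// Hypothetical model of one compaction-log event as proposed in the ticket:
// event type, controlling strategy, timing, and the sstables involved.
public class CompactionEvent {
    final String type;            // "flush" or "compaction"
    final String strategy;        // e.g. "LeveledCompactionStrategy"
    final long startMillis, endMillis;
    final List<String> inputSstables, outputSstables;

    CompactionEvent(String type, String strategy, long start, long end,
                    List<String> in, List<String> out) {
        this.type = type; this.strategy = strategy;
        this.startMillis = start; this.endMillis = end;
        this.inputSstables = in; this.outputSstables = out;
    }

    // One human- and machine-readable line per event.
    String toLogLine() {
        return String.format("%s strategy=%s durationMs=%d inputs=%s outputs=%s",
                type, strategy, endMillis - startMillis,
                inputSstables, outputSstables);
    }

    public static void main(String[] args) {
        CompactionEvent e = new CompactionEvent("compaction",
                "LeveledCompactionStrategy", 1000L, 4000L,
                List.of("la-1-big", "la-2-big"), List.of("la-3-big"));
        System.out.println(e.toLogLine());
    }
}
```

Per-sstable metadata (generation, size, tokens, level) could be attached to each entry in the same way.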



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9348) Nodetool move output should be more user friendly if bad token is supplied

2016-03-30 Thread Anonymous (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anonymous updated CASSANDRA-9348:
-
Status: Ready to Commit  (was: Patch Available)

> Nodetool move output should be more user friendly if bad token is supplied
> --
>
> Key: CASSANDRA-9348
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9348
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: sequoyha pelletier
>Assignee: Abhishek Verma
>Priority: Trivial
>  Labels: lhf
> Attachments: CASSANDRA-9348.txt
>
>
> If you put a token into nodetool move that is out of range for the 
> partitioner you get the following error:
> {noformat}
> [architect@md03-gcsarch-lapp33 11:01:06 ]$ nodetool -h 10.11.48.229 -u 
> cassandra -pw cassandra move \\-9223372036854775809 
> Exception in thread "main" java.io.IOException: For input string: 
> "-9223372036854775809" 
> at org.apache.cassandra.service.StorageService.move(StorageService.java:3104) 
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  
> at java.lang.reflect.Method.invoke(Method.java:606) 
> at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:75) 
> at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  
> at java.lang.reflect.Method.invoke(Method.java:606) 
> at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:279) 
> at 
> com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:112)
>  
> at 
> com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:46)
>  
> at 
> com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237) 
> at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138) 
> at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:252) 
> at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
>  
> at com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:801) 
> at 
> com.sun.jmx.remote.security.MBeanServerAccessController.invoke(MBeanServerAccessController.java:468)
>  
> at 
> javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1487)
>  
> at 
> javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:97)
>  
> at 
> javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1328)
>  
> at java.security.AccessController.doPrivileged(Native Method) 
> at 
> javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1427)
>  
> at 
> javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:848)
>  
> at sun.reflect.GeneratedMethodAccessor52.invoke(Unknown Source) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  
> at java.lang.reflect.Method.invoke(Method.java:606) 
> at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:322) 
> at sun.rmi.transport.Transport$1.run(Transport.java:177) 
> at sun.rmi.transport.Transport$1.run(Transport.java:174) 
> at java.security.AccessController.doPrivileged(Native Method) 
> at sun.rmi.transport.Transport.serviceCall(Transport.java:173) 
> at sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:556) 
> at 
> sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:811)
>  
> at 
> sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:670)
>  
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  
> at java.lang.Thread.run(Thread.java:745) 
> {noformat}
> This ticket is just requesting that we catch the exception and output 
> something along the lines of "Token supplied is outside of the acceptable 
> range" for those who are still on the Cassandra learning curve.
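The requested change amounts to validating the token argument up front and printing a friendly message instead of surfacing the raw exception. A minimal sketch, assuming the Murmur3 partitioner whose tokens are 64-bit signed integers (the method name and message text are illustrative, not Cassandra's actual code):

```java
public class TokenCheck {
    // For Murmur3Partitioner a token is any Java long, so a value like
    // -9223372036854775809 (one below Long.MIN_VALUE) fails to parse.
    static String validateMoveToken(String tokenArg) {
        try {
            Long.parseLong(tokenArg);
            return "OK";
        } catch (NumberFormatException e) {
            return "Token supplied is outside of the acceptable range for the "
                 + "partitioner (-2^63 to 2^63 - 1): " + tokenArg;
        }
    }

    public static void main(String[] args) {
        // The exact value from the ticket's error report.
        System.out.println(validateMoveToken("-9223372036854775809"));
    }
}
```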



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-6246) EPaxos

2016-03-25 Thread Anonymous (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6246?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anonymous updated CASSANDRA-6246:
-
Status: Ready to Commit  (was: Patch Available)

> EPaxos
> --
>
> Key: CASSANDRA-6246
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6246
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Jonathan Ellis
>Assignee: Blake Eggleston
>  Labels: messaging-service-bump-required
> Fix For: 3.x
>
>
> One reason we haven't optimized our Paxos implementation with Multi-Paxos is 
> that Multi-Paxos requires leader election and hence a period of 
> unavailability when the leader dies.
> EPaxos is a Paxos variant that (1) requires fewer messages than Multi-Paxos, 
> (2) is particularly useful across multiple datacenters, and (3) allows any 
> node to act as coordinator: 
> http://sigops.org/sosp/sosp13/papers/p358-moraru.pdf
> However, there is substantial additional complexity involved if we choose to 
> implement it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11387) If "using ttl" not present in an update statement, ttl shouldn't be updated to null

2016-03-24 Thread Anonymous (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anonymous updated CASSANDRA-11387:
--
Reproduced In: 3.3
   Status: Awaiting Feedback  (was: Open)

> If "using ttl" not present in an update statement, ttl shouldn't be updated 
> to null
> ---
>
> Key: CASSANDRA-11387
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11387
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core, CQL
>Reporter: Çelebi Murat
>Priority: Minor
>
> When I update a value without a "using ttl = ?" clause, the ttl value becomes 
> null instead of staying untouched, causing unexpected behaviour. Selecting 
> the ttl of the affected values before the actual update operation hinders 
> both performance and development speed, and makes the software prone to bugs.
> Instead, it would be very helpful if the behaviour were changed as follows: 
> if the "using ttl" clause is not present in an update statement, the ttl 
> value should stay unchanged.
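The behaviour described can be reproduced in cqlsh along these lines (keyspace and table names are illustrative):

```sql
-- TTL in Cassandra is a per-cell property: each write carries its own TTL.
INSERT INTO ks.t (id, val) VALUES (1, 'a') USING TTL 3600;
SELECT ttl(val) FROM ks.t WHERE id = 1;   -- close to 3600

-- An UPDATE without USING TTL writes a new cell with no TTL at all,
-- so the remaining TTL from the earlier write is lost:
UPDATE ks.t SET val = 'b' WHERE id = 1;
SELECT ttl(val) FROM ks.t WHERE id = 1;   -- null
```

The ticket proposes that the second UPDATE preserve the remaining TTL instead of dropping it.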



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10508) Remove hard-coded SSL cipher suites and protocols

2016-03-19 Thread Anonymous (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10508?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anonymous updated CASSANDRA-10508:
--
Status: Ready to Commit  (was: Patch Available)

> Remove hard-coded SSL cipher suites and protocols
> -
>
> Key: CASSANDRA-10508
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10508
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Stefan Podkowinski
>Assignee: Stefan Podkowinski
>  Labels: docs-impacting, lhf
> Fix For: 3.x
>
> Attachments: 10508-trunk.patch
>
>
> Currently each SSL connection is initialized using a hard-coded list of 
> protocols ("SSLv2Hello", "TLSv1", "TLSv1.1", "TLSv1.2") and cipher suites. We 
> now require Java 8, which comes with solid defaults for these kinds of SSL 
> settings, and I'm wondering whether the current behavior shouldn't be 
> re-evaluated. In my impression the way cipher suites are currently 
> whitelisted is problematic, as it prevents the JVM from using more recent and 
> more secure suites that haven't been added to the hard-coded list. JVM 
> updates may also cause issues in case none of the whitelisted ciphers can be 
> used, e.g. see CASSANDRA-6613.
> -Looking at the source I've also stumbled upon a bug in the 
> {{filterCipherSuites()}} method that would return the filtered list of 
> ciphers in undetermined order where the result is passed to 
> {{setEnabledCipherSuites()}}. However, the list of ciphers will reflect the 
> order of preference 
> ([source|https://bugs.openjdk.java.net/browse/JDK-8087311]) and therefore you 
> may end up with weaker algorithms on the top. Currently it's not that 
> critical, as we only whitelist a couple of ciphers anyway. But it adds to the 
> question if it still really makes sense to work with the cipher list at all 
> in the Cassandra code base.- (fixed in CASSANDRA-11164)
> Another way to affect the ciphers used is by changing the security 
> properties. This is a more versatile way to work with cipher lists than 
> relying on hard-coded values; see 
> [here|https://docs.oracle.com/javase/8/docs/technotes/guides/security/jsse/JSSERefGuide.html#DisabledAlgorithms]
>  for details.
> The same applies to the protocols. The hard-coded list was introduced in 
> CASSANDRA-8265 to prevent SSLv3 attacks, but it is not necessary anymore, as 
> SSLv3 is now blacklisted anyway, and pinning the list prevents picking up 
> safer protocol sets from new JVM releases or user configuration. Again, we 
> should stick with the JVM defaults. The {{jdk.tls.client.protocols}} system 
> property will always allow restricting the set of protocols in case another 
> emergency fix is needed.
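A minimal sketch of the contrast between the pinned list and relying on the JVM defaults; only standard JSSE calls are used, and the class name is illustrative:

```java
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLEngine;
import java.util.Arrays;

// Compare the hard-coded protocol whitelist with what the runtime's
// default SSLContext actually enables (which tracks JVM updates and the
// jdk.tls.* security properties automatically).
public class SslDefaults {
    public static void main(String[] args) throws Exception {
        // The hard-coded list the ticket proposes to remove:
        String[] pinned = { "SSLv2Hello", "TLSv1", "TLSv1.1", "TLSv1.2" };
        System.out.println("pinned: " + Arrays.toString(pinned));

        // JVM defaults: whatever the current runtime considers safe.
        SSLEngine engine = SSLContext.getDefault().createSSLEngine();
        System.out.println("jvm default protocols: "
                + Arrays.toString(engine.getEnabledProtocols()));
    }
}
```

On a modern JVM the default set excludes SSLv3 and older TLS versions without any code change, which is the ticket's point.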



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

