[jira] [Comment Edited] (CASSANDRA-15154) Add console log to indicate the node is ready to accept requests

2019-06-11 Thread Abhijit Sarkar (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16861734#comment-16861734
 ] 

Abhijit Sarkar edited comment on CASSANDRA-15154 at 6/12/19 3:58 AM:
-

[~mshuler] I'm sure something in the logs can be dug out that'd tell an 
experienced Cassandra user that the node is ready; however, the point of this 
ticket is to make that very obvious. Even in the snippet you posted, "Starting 
listening for CQL clients" isn't necessarily the same as "I'm ready". It can 
easily be presumed that the node might have more to do after it has started 
listening for CQL clients.
BTW, this ticket is filed as an improvement, not a bug. "Not a bug" doesn't 
apply; "Won't do because we don't feel like it" might, but isn't one of the 
goals of good software to be more intuitive to use?


was (Author: socialguy):
[~mshuler] I'm sure something in the logs can be dug out that'd tell an 
experienced Cassandra user that the node is ready; however, the point of this 
ticket is to make that very obvious. Even in the snippet you posted, "Starting 
listening for CQL clients" isn't necessarily the same as "I'm ready". It can 
easily be presumed that the node might have more to do after it has started 
listening for CQL clients.

> Add console log to indicate the node is ready to accept requests
> 
>
> Key: CASSANDRA-15154
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15154
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Cluster/Membership, Local/Startup and Shutdown, 
> Observability/Logging
>Reporter: Abhijit Sarkar
>Priority: Normal
>
> Depending on whether a cluster is initialized the first time, or a node is 
> restarted, the last message on the console varies. In either case, there's no 
> indication that the cluster/node is ready to accept requests.
> For example, when I create a new Cassandra Docker container locally:
> {code}
> $ docker run --name cas -p 9042:9042 -p 9091:9091 -e CASSANDRA_DC=dev 
> cassandra
> ...
> INFO  [OptionalTasks:1] 2019-06-11 23:31:35,527 CassandraRoleManager.java:356 
> - Created default superuser role 'cassandra'
> {code}
> After shutting it down (CTRL + C), and restarting:
> {code}
> $ docker start cas
> ...
> INFO  [main] 2019-06-11 23:32:57,980 CassandraDaemon.java:556 - Not starting 
> RPC server as requested. Use JMX (StorageService->startRPCServer()) or 
> nodetool (enablethrift) to start it
> {code}
> In either of the above cases, how is a regular user, whose full time job is 
> not working with Cassandra, expected to know whether the server is ready? We 
> have a new member in the team who previously was an iOS developer. He left 
> the server running overnight, assuming the node hadn't finished 
> initialization; the next morning, the last message was still "Created default 
> superuser role 'cassandra'".
> Please add a simple log statement with basic information like node IPs in the 
> cluster indicating the node is ready. For example, this is what Spring Boot 
> does:
> {code}
> 2019-06-11 16:37:28.295  INFO [my-app,,,] 17392 --- [   main] 
> o.s.b.w.embedded.tomcat.TomcatWebServer  : Tomcat started on port(s): 11900 
> (http) with context path ''
> 2019-06-11 16:37:28.299  INFO [my-app,,,] 17392 --- [   main] 
> mypackage.MyApp   : Started MyApp in 5.279 seconds (JVM running for 5.916)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-15154) Add console log to indicate the node is ready to accept requests

2019-06-11 Thread Abhijit Sarkar (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16861734#comment-16861734
 ] 

Abhijit Sarkar commented on CASSANDRA-15154:


[~mshuler] I'm sure something in the logs can be dug out that'd tell an 
experienced Cassandra user that the node is ready; however, the point of this 
ticket is to make that very obvious. Even in the snippet you posted, "Starting 
listening for CQL clients" isn't necessarily the same as "I'm ready". It can 
easily be presumed that the node might have more to do after it has started 
listening for CQL clients.

> Add console log to indicate the node is ready to accept requests
> 
>
> Key: CASSANDRA-15154
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15154
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Cluster/Membership, Local/Startup and Shutdown, 
> Observability/Logging
>Reporter: Abhijit Sarkar
>Priority: Normal
>
> Depending on whether a cluster is initialized the first time, or a node is 
> restarted, the last message on the console varies. In either case, there's no 
> indication that the cluster/node is ready to accept requests.
> For example, when I create a new Cassandra Docker container locally:
> {code}
> $ docker run --name cas -p 9042:9042 -p 9091:9091 -e CASSANDRA_DC=dev 
> cassandra
> ...
> INFO  [OptionalTasks:1] 2019-06-11 23:31:35,527 CassandraRoleManager.java:356 
> - Created default superuser role 'cassandra'
> {code}
> After shutting it down (CTRL + C), and restarting:
> {code}
> $ docker start cas
> ...
> INFO  [main] 2019-06-11 23:32:57,980 CassandraDaemon.java:556 - Not starting 
> RPC server as requested. Use JMX (StorageService->startRPCServer()) or 
> nodetool (enablethrift) to start it
> {code}
> In either of the above cases, how is a regular user, whose full time job is 
> not working with Cassandra, expected to know whether the server is ready? We 
> have a new member in the team who previously was an iOS developer. He left 
> the server running overnight, assuming the node hadn't finished 
> initialization; the next morning, the last message was still "Created default 
> superuser role 'cassandra'".
> Please add a simple log statement with basic information like node IPs in the 
> cluster indicating the node is ready. For example, this is what Spring Boot 
> does:
> {code}
> 2019-06-11 16:37:28.295  INFO [my-app,,,] 17392 --- [   main] 
> o.s.b.w.embedded.tomcat.TomcatWebServer  : Tomcat started on port(s): 11900 
> (http) with context path ''
> 2019-06-11 16:37:28.299  INFO [my-app,,,] 17392 --- [   main] 
> mypackage.MyApp   : Started MyApp in 5.279 seconds (JVM running for 5.916)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-15154) Add console log to indicate the node is ready to accept requests

2019-06-11 Thread Michael Shuler (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16861732#comment-16861732
 ] 

Michael Shuler commented on CASSANDRA-15154:


Close - not a bug?

Generally, one or two lines before the ones you listed is the line you're 
looking for. The "listening for CQL clients" line is the equivalent of "I've 
started and I'm ready for connections":
{noformat}
INFO  [main] 2019-06-11 22:42:46,373 Server.java:156 - Starting listening for 
CQL clients on localhost/127.0.0.1:9042 (unencrypted)...
{noformat}

Longer sample on fresh start including lines you listed:
{noformat}
...
INFO  [MigrationStage:1] 2019-06-11 22:42:45,977 ColumnFamilyStore.java:430 - 
Initializing system_auth.resource_role_permissons_index
INFO  [MigrationStage:1] 2019-06-11 22:42:45,984 ColumnFamilyStore.java:430 - 
Initializing system_auth.role_members
INFO  [MigrationStage:1] 2019-06-11 22:42:45,993 ColumnFamilyStore.java:430 - 
Initializing system_auth.role_permissions
INFO  [MigrationStage:1] 2019-06-11 22:42:46,000 ColumnFamilyStore.java:430 - 
Initializing system_auth.roles
INFO  [main] 2019-06-11 22:42:46,297 NativeTransportService.java:70 - Netty 
using native Epoll event loop
INFO  [main] 2019-06-11 22:42:46,373 Server.java:155 - Using Netty Version: [netty-buffer=netty-buffer-4.0.44.Final.452812a, netty-codec=netty-codec-4.0.44.Final.452812a, netty-codec-haproxy=netty-codec-haproxy-4.0.44.Final.452812a, netty-codec-http=netty-codec-http-4.0.44.Final.452812a, netty-codec-socks=netty-codec-socks-4.0.44.Final.452812a, netty-common=netty-common-4.0.44.Final.452812a, netty-handler=netty-handler-4.0.44.Final.452812a, netty-tcnative=netty-tcnative-1.1.33.Fork26.142ecbb, netty-transport=netty-transport-4.0.44.Final.452812a, netty-transport-native-epoll=netty-transport-native-epoll-4.0.44.Final.452812a, netty-transport-rxtx=netty-transport-rxtx-4.0.44.Final.452812a, netty-transport-sctp=netty-transport-sctp-4.0.44.Final.452812a, netty-transport-udt=netty-transport-udt-4.0.44.Final.452812a]
INFO  [main] 2019-06-11 22:42:46,373 Server.java:156 - Starting listening for CQL clients on localhost/127.0.0.1:9042 (unencrypted)...
INFO  [main] 2019-06-11 22:42:46,429 CassandraDaemon.java:556 - Not starting RPC server as requested. Use JMX (StorageService->startRPCServer()) or nodetool (enablethrift) to start it
INFO  [OptionalTasks:1] 2019-06-11 22:42:56,162 CassandraRoleManager.java:356 - 
Created default superuser role 'cassandra'
...
{noformat}

> Add console log to indicate the node is ready to accept requests
> 
>
> Key: CASSANDRA-15154
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15154
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Cluster/Membership, Local/Startup and Shutdown, 
> Observability/Logging
>Reporter: Abhijit Sarkar
>Priority: Normal
>
> Depending on whether a cluster is initialized the first time, or a node is 
> restarted, the last message on the console varies. In either case, there's no 
> indication that the cluster/node is ready to accept requests.
> For example, when I create a new Cassandra Docker container locally:
> {code}
> $ docker run --name cas -p 9042:9042 -p 9091:9091 -e CASSANDRA_DC=dev 
> cassandra
> ...
> INFO  [OptionalTasks:1] 2019-06-11 23:31:35,527 CassandraRoleManager.java:356 
> - Created default superuser role 'cassandra'
> {code}
> After shutting it down (CTRL + C), and restarting:
> {code}
> $ docker start cas
> ...
> INFO  [main] 2019-06-11 23:32:57,980 CassandraDaemon.java:556 - Not starting 
> RPC server as requested. Use JMX (StorageService->startRPCServer()) or 
> nodetool (enablethrift) to start it
> {code}
> In either of the above cases, how is a regular user, whose full time job is 
> not working with Cassandra, expected to know whether the server is ready? We 
> have a new member in the team who previously was an iOS developer. He left 
> the server running overnight, assuming the node hadn't finished 
> initialization; the next morning, the last message was still "Created default 
> superuser role 'cassandra'".
> Please add a simple log statement with basic information like node IPs in the 
> cluster indicating the node is ready. For example, this is what Spring Boot 
> does:
> {code}
> 2019-06-11 16:37:28.295  INFO [my-app,,,] 17392 --- [   main] 
> o.s.b.w.embedded.tomcat.TomcatWebServer  : Tomcat started on port(s): 11900 
> (http) with context path ''
> 2019-06-11 16:37:28.299  INFO [my-app,,,] 17392 --- [   main] 
> mypackage.MyApp   : Started MyApp in 5.279 seconds (JVM running for 5.916)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org

[jira] [Updated] (CASSANDRA-14539) cql2 insert/update/batch statements don't function unless the keyspace is specified in the statement

2019-06-11 Thread Jon Haddad (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14539?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jon Haddad updated CASSANDRA-14539:
---
Resolution: Won't Fix
Status: Resolved  (was: Open)

It's been a while, closing this out.

> cql2 insert/update/batch statements don't function unless the keyspace is 
> specified in the statement
> 
>
> Key: CASSANDRA-14539
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14539
> Project: Cassandra
>  Issue Type: Bug
>  Components: Legacy/CQL
>Reporter: Michael Theroux
>Priority: Normal
> Fix For: 2.1.x
>
> Attachments: cql2.diff
>
>
> If you perform a cql2 insert/update or batch statement without a keyspace, 
> the following assertion will occur:
> java.lang.AssertionError: null
>  at org.apache.cassandra.config.Schema.getCFMetaData(Schema.java:243) 
> ~[apache-cassandra-2.1.20.jar:2.1.20]
>  at 
> org.apache.cassandra.cql.Attributes.maybeApplyExpirationDateOverflowPolicy(Attributes.java:81)
>  ~[apache-cassandra-2.1.20.jar:2.1.20]
>  at 
> org.apache.cassandra.cql.AbstractModification.getTimeToLive(AbstractModification.java:95)
>  ~[apache-cassandra-2.1.20.jar:2.1.20]
>  at 
> org.apache.cassandra.cql.UpdateStatement.mutationForKey(UpdateStatement.java:201)
>  ~[apache-cassandra-2.1.20.jar:2.1.20]
>  at 
> org.apache.cassandra.cql.UpdateStatement.prepareRowMutations(UpdateStatement.java:154)
>  ~[apache-cassandra-2.1.20.jar:2.1.20]
>  at 
> org.apache.cassandra.cql.UpdateStatement.prepareRowMutations(UpdateStatement.java:125)
>  ~[apache-cassandra-2.1.20.jar:2.1.20]
>  at 
> org.apache.cassandra.cql.QueryProcessor.processStatement(QueryProcessor.java:544)
>  ~[apache-cassandra-2.1.20.jar:2.1.20]
>  at org.apache.cassandra.cql.QueryProcessor.process(QueryProcessor.java:802) 
> ~[apache-cassandra-2.1.20.jar:2.1.20]
>  at 
> org.apache.cassandra.thrift.CassandraServer.execute_cql_query(CassandraServer.java:1962)
>  ~[apache-cassandra-2.1.20.jar:2.1.20]
>  at 
> org.apache.cassandra.thrift.Cassandra$Processor$execute_cql_query.getResult(Cassandra.java:4558)
>  ~[apache-cassandra-thrift-2.1.20.jar:2.1.20]
>  at 
> org.apache.cassandra.thrift.Cassandra$Processor$execute_cql_query.getResult(Cassandra.java:4542)
>  ~[apache-cassandra-thrift-2.1.20.jar:2.1.20]
>  at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) 
> ~[libthrift-0.9.2.jar:0.9.2]
>  at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39) 
> ~[libthrift-0.9.2.jar:0.9.2]
>  at 
> org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:206)
>  ~[apache-cassandra-2.1.20.jar:2.1.20]
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  ~[na:1.8.0_151]
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  ~[na:1.8.0_151]
>  
> It will fail with:
>     use test;
>     update users set 'test'='\{"d":1529683115340}' where 
> key='c426f519100da4cb24417bc87c5bfbd6' ;
> But will work fine with:
>     update test.users set 'test'='\{"d":1529683115340}' where 
> key='c426f519100da4cb24417bc87c5bfbd6' ;
>  
> Going through the code, looks like this was introduced with 
> https://issues.apache.org/jira/browse/CASSANDRA-14092 in February 2018.
> In org.apache.cassandra.cql.AbstractModification.getTimeToLive() and 
> org.apache.cassandra.cql.BatchStatement.getTimeToLive(), Cassandra uses the 
> keyspace associated with the update statement, which is set to null if it's 
> not in the query itself.
> I resolved this myself locally by changing the getTimeToLive() methods to 
> take a default keyspace and use it when the keyspace is unavailable on the 
> statement. The fix looked fairly simple. I've attached my diff.
> P.S. Yes, I realize that cql2 is deprecated and no longer supported; however, 
> I wanted to get this regression on record in case someone else hits it, as I 
> was unable to find any other reports of this issue.
>  
>  
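
For illustration, here is a minimal, self-contained sketch of the fallback idea described in the report above: resolve the statement's keyspace against the session's default (set by a prior USE) before anything like getTimeToLive() needs it. The class and method names are assumptions for illustration only, not the contents of the attached cql2.diff.

{code:java}
// Hypothetical sketch of "fall back to the session's default keyspace";
// not the actual org.apache.cassandra.cql classes.
final class KeyspaceFallbackSketch
{
    // May be null when the statement does not qualify the table with a keyspace.
    private final String statementKeyspace;

    KeyspaceFallbackSketch(String statementKeyspace)
    {
        this.statementKeyspace = statementKeyspace;
    }

    // Prefer the keyspace embedded in the statement; otherwise use the one
    // established by a prior USE, so downstream lookups never see null.
    String resolveKeyspace(String sessionDefaultKeyspace)
    {
        return statementKeyspace != null ? statementKeyspace : sessionDefaultKeyspace;
    }

    public static void main(String[] args)
    {
        KeyspaceFallbackSketch unqualified = new KeyspaceFallbackSketch(null);
        System.out.println(unqualified.resolveKeyspace("test"));   // prints "test"

        KeyspaceFallbackSketch qualified = new KeyspaceFallbackSketch("other_ks");
        System.out.println(qualified.resolveKeyspace("test"));     // prints "other_ks"
    }
}
{code}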



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-15154) Add console log to indicate the node is ready to accept requests

2019-06-11 Thread Abhijit Sarkar (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhijit Sarkar updated CASSANDRA-15154:
---
Description: 
Depending on whether a cluster is initialized the first time, or a node is 
restarted, the last message on the console varies. In either case, there's no 
indication that the cluster/node is ready to accept requests.

For example, when I create a new Cassandra Docker container locally:

{code}
$ docker run --name cas -p 9042:9042 -p 9091:9091 -e CASSANDRA_DC=dev cassandra
...
INFO  [OptionalTasks:1] 2019-06-11 23:31:35,527 CassandraRoleManager.java:356 - 
Created default superuser role 'cassandra'
{code}

After shutting it down (CTRL + C), and restarting:
{code}
$ docker start cas
...
INFO  [main] 2019-06-11 23:32:57,980 CassandraDaemon.java:556 - Not starting 
RPC server as requested. Use JMX (StorageService->startRPCServer()) or nodetool 
(enablethrift) to start it
{code}

In either of the above cases, how is a regular user, whose full time job is not 
working with Cassandra, expected to know whether the server is ready? We have a 
new member in the team who previously was an iOS developer. He left the server 
running overnight, assuming the node hadn't finished initialization; the next 
morning, the last message was still "Created default superuser role 
'cassandra'".

Please add a simple log statement with basic information like node IPs in the 
cluster indicating the node is ready. For example, this is what Spring Boot 
does:
{code}
2019-06-11 16:37:28.295  INFO [my-app,,,] 17392 --- [   main] 
o.s.b.w.embedded.tomcat.TomcatWebServer  : Tomcat started on port(s): 11900 
(http) with context path ''
2019-06-11 16:37:28.299  INFO [my-app,,,] 17392 --- [   main] 
mypackage.MyApp   : Started MyApp in 5.279 seconds (JVM running for 5.916)
{code}

  was:
Depending on whether a cluster is initialized the first time, or a node is 
restarted, the last message on the console varies. In either case, there's no 
indication that the cluster/node is ready to accept requests.

For example, when I create a new Cassandra Docker container locally:

{code}
$ docker run --name cas -p 9042:9042 -p 9091:9091 -e CASSANDRA_DC=dev cassandra
...
INFO  [OptionalTasks:1] 2019-06-11 23:31:35,527 CassandraRoleManager.java:356 - 
Created default superuser role 'cassandra'
{code}

After shutting it down (CTRL + C), and restarting:
{code}
$ docker start cas
...
INFO  [main] 2019-06-11 23:32:57,980 CassandraDaemon.java:556 - Not starting 
RPC server as requested. Use JMX (StorageService->startRPCServer()) or nodetool 
(enablethrift) to start it
{code}

In either of the above cases, how is a regular user, whose full time job is not 
working with Cassandra, expected to know whether the server is ready? We have a 
new member in the team who previously was an iOS developer. He left the server 
running overnight, assuming the node hadn't finished initialization; the next 
morning, the last message was still "Created default superuser role 
'cassandra'".

Please add a simple log statement with basic information like node IPs in the 
cluster indicating the node is ready. For example, this is what Spring Boot 
does:
{code}
2019-06-11 16:37:28.295  INFO [place-mapping,,,] 17392 --- [   main] 
o.s.b.w.embedded.tomcat.TomcatWebServer  : Tomcat started on port(s): 11900 
(http) with context path ''
2019-06-11 16:37:28.299  INFO [place-mapping,,,] 17392 --- [   main] 
c.n.dcs.content.placemapping.PlacesApp   : Started PlacesApp in 5.279 seconds 
(JVM running for 5.916)
{code}


> Add console log to indicate the node is ready to accept requests
> 
>
> Key: CASSANDRA-15154
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15154
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Cluster/Membership, Local/Startup and Shutdown, 
> Observability/Logging
>Reporter: Abhijit Sarkar
>Priority: Normal
>
> Depending on whether a cluster is initialized the first time, or a node is 
> restarted, the last message on the console varies. In either case, there's no 
> indication that the cluster/node is ready to accept requests.
> For example, when I create a new Cassandra Docker container locally:
> {code}
> $ docker run --name cas -p 9042:9042 -p 9091:9091 -e CASSANDRA_DC=dev 
> cassandra
> ...
> INFO  [OptionalTasks:1] 2019-06-11 23:31:35,527 CassandraRoleManager.java:356 
> - Created default superuser role 'cassandra'
> {code}
> After shutting it down (CTRL + C), and restarting:
> {code}
> $ docker start cas
> ...
> INFO  [main] 2019-06-11 23:32:57,980 CassandraDaemon.java:556 - Not starting 
> RPC server as requested. Use JMX (StorageService->startRPCServer()) or 
> nodetool (enablethrift) to start it
> {code}
> In either of the above cases

[jira] [Created] (CASSANDRA-15154) Add console log to indicate the node is ready to accept requests

2019-06-11 Thread Abhijit Sarkar (JIRA)
Abhijit Sarkar created CASSANDRA-15154:
--

 Summary: Add console log to indicate the node is ready to accept 
requests
 Key: CASSANDRA-15154
 URL: https://issues.apache.org/jira/browse/CASSANDRA-15154
 Project: Cassandra
  Issue Type: Improvement
  Components: Cluster/Membership, Local/Startup and Shutdown, 
Observability/Logging
Reporter: Abhijit Sarkar


Depending on whether a cluster is initialized the first time, or a node is 
restarted, the last message on the console varies. In either case, there's no 
indication that the cluster/node is ready to accept requests.

For example, when I create a new Cassandra Docker container locally:

{code}
$ docker run --name cas -p 9042:9042 -p 9091:9091 -e CASSANDRA_DC=dev cassandra
...
INFO  [OptionalTasks:1] 2019-06-11 23:31:35,527 CassandraRoleManager.java:356 - 
Created default superuser role 'cassandra'
{code}

After shutting it down (CTRL + C), and restarting:
{code}
$ docker start cas
...
INFO  [main] 2019-06-11 23:32:57,980 CassandraDaemon.java:556 - Not starting 
RPC server as requested. Use JMX (StorageService->startRPCServer()) or nodetool 
(enablethrift) to start it
{code}

In either of the above cases, how is a regular user, whose full time job is not 
working with Cassandra, expected to know whether the server is ready? We have a 
new member in the team who previously was an iOS developer. He left the server 
running overnight, assuming the node hadn't finished initialization; the next 
morning, the last message was still "Created default superuser role 
'cassandra'".

Please add a simple log statement with basic information like node IPs in the 
cluster indicating the node is ready. For example, this is what Spring Boot 
does:
{code}
2019-06-11 16:37:28.295  INFO [place-mapping,,,] 17392 --- [   main] 
o.s.b.w.embedded.tomcat.TomcatWebServer  : Tomcat started on port(s): 11900 
(http) with context path ''
2019-06-11 16:37:28.299  INFO [place-mapping,,,] 17392 --- [   main] 
c.n.dcs.content.placemapping.PlacesApp   : Started PlacesApp in 5.279 seconds 
(JVM running for 5.916)
{code}
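
For illustration, a minimal sketch of what such a readiness line could look like if emitted at the very end of startup. The class, method, and parameter names below are assumptions for illustration, not existing Cassandra code; plain JDK logging is used so the example runs standalone, whereas Cassandra itself logs via slf4j.

{code:java}
// Hypothetical sketch only: emit one unambiguous "ready" line once the node
// has finished startup and is accepting client connections.
import java.net.InetAddress;
import java.net.UnknownHostException;
import java.util.List;
import java.util.logging.Logger;

final class StartupReadinessLogSketch
{
    private static final Logger LOG = Logger.getLogger(StartupReadinessLogSketch.class.getName());

    // Imagined hook, called once startup work (gossip, CQL transport, role setup, ...) is done.
    static void logNodeReady(InetAddress listenAddress, int cqlPort, List<InetAddress> liveNodes)
    {
        LOG.info(String.format("Node %s:%d is ready to accept client requests; live nodes: %s",
                               listenAddress.getHostAddress(), cqlPort, liveNodes));
    }

    public static void main(String[] args) throws UnknownHostException
    {
        logNodeReady(InetAddress.getByName("127.0.0.1"), 9042,
                     List.of(InetAddress.getByName("127.0.0.1")));
    }
}
{code}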



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[cassandra-dtest] branch master updated: Drop token_generator_test from 2.1 branch

2019-06-11 Thread mshuler
This is an automated email from the ASF dual-hosted git repository.

mshuler pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/cassandra-dtest.git


The following commit(s) were added to refs/heads/master by this push:
 new baeefb6  Drop token_generator_test from 2.1 branch
baeefb6 is described below

commit baeefb6a658665a840bd0afd5f5f37d9aafb20db
Author: Michael Shuler 
AuthorDate: Tue Jun 11 18:38:46 2019 -0500

Drop token_generator_test from 2.1 branch

2.2+ got commits for python2/3 changes and we now run dtest in python3.
The print statements in 2.1 token_generator just error out, so skip the
branch:
```
  File "/home/automaton/cassandra/tools/bin/token-generator", line 160
print "%sDC #%d:" % (indentstr, dcnum + 1)
^
SyntaxError: invalid syntax
```
---
 token_generator_test.py | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/token_generator_test.py b/token_generator_test.py
index 4c0ce05..3d1afbd 100644
--- a/token_generator_test.py
+++ b/token_generator_test.py
@@ -15,7 +15,7 @@ since = pytest.mark.since
 logger = logging.getLogger(__name__)
 
 
-@since('2.0.16', max_version='3.0.0')
+@since('2.2', max_version='3.0.0')
 class TestTokenGenerator(Tester):
 """
 Basic tools/bin/token-generator test.


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[cassandra-dtest] branch master updated: Run auth_join_ring_false_test (CASSANDRA-11381) on 2.2+

2019-06-11 Thread mshuler
This is an automated email from the ASF dual-hosted git repository.

mshuler pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/cassandra-dtest.git


The following commit(s) were added to refs/heads/master by this push:
 new 2225061  Run auth_join_ring_false_test (CASSANDRA-11381) on 2.2+
2225061 is described below

commit 2225061506fcef662bf64c69759fa6ddd4c1c9dc
Author: Michael Shuler 
AuthorDate: Tue Jun 11 18:28:03 2019 -0500

Run auth_join_ring_false_test (CASSANDRA-11381) on 2.2+

These dtests fail on 2.1 for misconfiguration of cassandra.yaml and it
appears that the commit was only made for 2.2+, so skip on 2.1 branch.
---
 auth_join_ring_false_test.py | 1 +
 1 file changed, 1 insertion(+)

diff --git a/auth_join_ring_false_test.py b/auth_join_ring_false_test.py
index 34e2b4b..71bc001 100644
--- a/auth_join_ring_false_test.py
+++ b/auth_join_ring_false_test.py
@@ -6,6 +6,7 @@ from cassandra.cluster import NoHostAvailable
 from dtest import Tester
 
 
+@since('2.2')
 class TestAuth(Tester):
 
 


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-15066) Improvements to Internode Messaging

2019-06-11 Thread Joseph Lynch (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16861604#comment-16861604
 ] 

Joseph Lynch commented on CASSANDRA-15066:
--

[~vinaykumarcse] and I have been updating the patch authors and reviewers 
continually on IRC and now ASF Slack as we run tests, but since this is 
getting close to merge I just want to chime in that our small-scale (12-node) 
testing is showing excellent results so far. We've been working to validate 
this patch from a real-world-deployment/scalability/performance perspective, and 
at this time the patchset appears more stable and more performant than 3.0. The 
testing methodology and results are being recorded in an open 
[spreadsheet|https://docs.google.com/spreadsheets/d/1Vq_wC2q-rcG7UWim-t2leZZ4GgcuAjSREMFbG0QGy20/edit#gid=0]
 that we are updating as we test; once this is merged we can resume 
our formal tests as part of CASSANDRA-14746.

A summary of the results so far from our (Netflix) testing:
 * The unbounded hints that we used to see under load on trunk are no longer 
there.
 * Better process-level stability (thread CPU distribution, JVM allocation, etc.)
 * Excellent CPU flamegraphs and profiles; messaging is almost never the 
dominant CPU factor.
 * Performance in this patch appears superior to 3.0.x across the board.

So far we have very thoroughly tested the following on the small cluster:
 * LOCAL_ONE with variable read <-> write balance of 4kb multi-column partitions
 * LOCAL_QUORUM with variable read <-> write balance of 4kb multi-column 
partitions
 * QUORUM with variable read <-> write balance of 4kb multi-column partitions

We have also begun verification of the following combinations of messaging:
 * Compression on, Encryption on
 * Compression on, Encryption off
 * Cross datacenter setups with 80ms of delay

> Improvements to Internode Messaging
> ---
>
> Key: CASSANDRA-15066
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15066
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Messaging/Internode
>Reporter: Benedict
>Assignee: Benedict
>Priority: High
> Fix For: 4.0
>
> Attachments: 20k_backfill.png, 60k_RPS.png, 
> 60k_RPS_CPU_bottleneck.png, backfill_cass_perf_ft_msg_tst.svg, 
> baseline_patch_vs_30x.png, increasing_reads_latency.png, 
> many_reads_cass_perf_ft_msg_tst.svg
>
>
> CASSANDRA-8457 introduced asynchronous networking to internode messaging, but 
> there have been several follow-up endeavours to improve some semantic issues. 
>  CASSANDRA-14503 and CASSANDRA-13630 are the latest such efforts, and were 
> combined some months ago into a single overarching refactor of the original 
> work, to address some of the issues that have been discovered.  Given the 
> criticality of this work to the project, we wanted to bring some more eyes to 
> bear to ensure the release goes ahead smoothly.  In doing so, we uncovered a 
> number of issues with messaging, some of which long standing, that we felt 
> needed to be addressed.  This patch widens the scope of CASSANDRA-14503 and 
> CASSANDRA-13630 in an effort to close the book on the messaging service, at 
> least for the foreseeable future.
> The patch includes a number of clarifying refactors that touch outside of the 
> {{net.async}} package, and a number of semantic changes to the {{net.async}} 
> packages itself.  We believe it clarifies the intent and behaviour of the 
> code while improving system stability, which we will outline in comments 
> below.
> https://github.com/belliottsmith/cassandra/tree/messaging-improvements



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-15066) Improvements to Internode Messaging

2019-06-11 Thread Aleksey Yeschenko (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16861509#comment-16861509
 ] 

Aleksey Yeschenko commented on CASSANDRA-15066:
---

[~ifesdjeen] Thanks for more feedback. Pushed a commit that addressed the 
remaining bits, I believe.

> Improvements to Internode Messaging
> ---
>
> Key: CASSANDRA-15066
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15066
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Messaging/Internode
>Reporter: Benedict
>Assignee: Benedict
>Priority: High
> Fix For: 4.0
>
> Attachments: 20k_backfill.png, 60k_RPS.png, 
> 60k_RPS_CPU_bottleneck.png, backfill_cass_perf_ft_msg_tst.svg, 
> baseline_patch_vs_30x.png, increasing_reads_latency.png, 
> many_reads_cass_perf_ft_msg_tst.svg
>
>
> CASSANDRA-8457 introduced asynchronous networking to internode messaging, but 
> there have been several follow-up endeavours to improve some semantic issues. 
>  CASSANDRA-14503 and CASSANDRA-13630 are the latest such efforts, and were 
> combined some months ago into a single overarching refactor of the original 
> work, to address some of the issues that have been discovered.  Given the 
> criticality of this work to the project, we wanted to bring some more eyes to 
> bear to ensure the release goes ahead smoothly.  In doing so, we uncovered a 
> number of issues with messaging, some of which long standing, that we felt 
> needed to be addressed.  This patch widens the scope of CASSANDRA-14503 and 
> CASSANDRA-13630 in an effort to close the book on the messaging service, at 
> least for the foreseeable future.
> The patch includes a number of clarifying refactors that touch outside of the 
> {{net.async}} package, and a number of semantic changes to the {{net.async}} 
> packages itself.  We believe it clarifies the intent and behaviour of the 
> code while improving system stability, which we will outline in comments 
> below.
> https://github.com/belliottsmith/cassandra/tree/messaging-improvements



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-15066) Improvements to Internode Messaging

2019-06-11 Thread Benedict (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16861370#comment-16861370
 ] 

Benedict commented on CASSANDRA-15066:
--

Thanks for even more review feedback!  I've pushed some modifications that you 
can take a look at, addressing most of your concerns.  Some comments / 
questions:

bq. in releaseSpace i’d leave a comment why thread is unparked but waiting is 
not set to null (parkUntilFlushed releases it)
Rather than this, I have made it absolutely explicit in the class description, and 
on each variable, that every variable is updated by only one thread, and which 
thread that is.  Does this work for you?

bq. in resumeEpochSampling and resumeNowSampling we can use 
scheduleWithFixedDelay(task, 0, … ) instead of running a task upfront
I don't think so, because we want to be certain that after the method is 
invoked (particularly in the constructor) the relevant value has been updated - 
particularly for {{now}} and for the very first translation call

bq. looks like monotonic clock implementation might race. each operation is 
synchronized but their combination isn’t:
Good catch. This method should probably have been marked 
{{@VisibleForTesting}}, but you're right that it could have been misused in 
future.
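
To make the interleaving concrete, here is a hypothetical sketch of the pattern being discussed; the names are illustrative, not the patch's actual clock class. Each operation is synchronized on its own, but unless the lock is held across the pair, another thread can observe the clock between the pause and the resume.

{code:java}
// Hypothetical illustration of the pause/resume race and one way to close it.
final class SamplingClockSketch
{
    private volatile long now = System.nanoTime();
    private boolean sampling = true;

    synchronized void pauseNowSampling()
    {
        sampling = false;
    }

    synchronized void resumeNowSampling()
    {
        sampling = true;
        now = System.nanoTime();
    }

    // Unsafe variant: a concurrent caller can run between the two calls and
    // observe sampling == false.
    void refreshNowRacy()
    {
        pauseNowSampling();
        resumeNowSampling();
    }

    // Holding the monitor across both calls (Java locks are reentrant) makes
    // the composite operation atomic with respect to the other methods.
    synchronized void refreshNow()
    {
        pauseNowSampling();
        resumeNowSampling();
    }

    // Hot-path read; volatile so unsynchronized readers see the latest sample.
    long nowNanos()
    {
        return now;
    }

    public static void main(String[] args)
    {
        SamplingClockSketch clock = new SamplingClockSketch();
        clock.refreshNow();
        System.out.println(clock.nowNanos());
    }
}
{code}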

bq. In OutboundConnection, there seems to be a race condition between 
promiseToExecuteLater(); requestConnect().addListener(f -> executeAgain()); and 
maybeExecuteAgain();. 
Really excellent catch, but I think this is much simpler than you think - it's 
a (pretty serious) typo / mistake in {{executeAgain}}, which has its condition 
inverted.  I'm a bit distracted today, but I'm pretty sure this fixes it (and I 
think it's caused because the condition is inverted from the outer condition, 
so I've inverted the inner condition to match, instead of the order of ternary 
operands)

bq. Short question: How is the approximate clock implementation with sampling 
better than just using a regular clock? 

Just to absolutely clarify, this feature existed already, I have just packaged 
it to permit disabling its use as others have also raised questions about its 
purpose.  The only point is to reduce the cost of the very frequent clock 
accesses we perform.  Ideally we would probably replace it with direct use of 
RDTSC.  Either way, this simply means that this patch's usage of clocks is 
cleanly separated into those we require to be precise, and those we permit to 
be approximate, and lets us override the implementation.  We can use 
{{System.nanoTime}} in all cases, and in fact see if there is even any 
measurable impact.
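
As a rough illustration of the trade-off described above, an "approximate" clock can be sketched as a periodically refreshed volatile value: readers on the hot path pay only for a volatile read, and the error is bounded by the sampling period. This is a hypothetical sketch under those assumptions, not the implementation in the patch.

{code:java}
// Hypothetical sketch of a sampled ("approximate") nanosecond clock.
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

final class ApproximateNanoClockSketch
{
    private volatile long lastSampleNanos = System.nanoTime();

    private final ScheduledExecutorService sampler =
        Executors.newSingleThreadScheduledExecutor(runnable -> {
            Thread t = new Thread(runnable, "approx-clock-sampler");
            t.setDaemon(true);
            return t;
        });

    ApproximateNanoClockSketch(long samplePeriodMillis)
    {
        // The field is initialised eagerly above, so callers see a sane value
        // even before the first scheduled refresh fires.
        sampler.scheduleWithFixedDelay(() -> lastSampleNanos = System.nanoTime(),
                                       samplePeriodMillis, samplePeriodMillis,
                                       TimeUnit.MILLISECONDS);
    }

    // Hot path: a single volatile read instead of a System.nanoTime() call;
    // the result may be stale by up to roughly one sampling period.
    long nowNanos()
    {
        return lastSampleNanos;
    }

    public static void main(String[] args) throws InterruptedException
    {
        ApproximateNanoClockSketch clock = new ApproximateNanoClockSketch(10);
        Thread.sleep(50);
        System.out.println((System.nanoTime() - clock.nowNanos()) + " ns behind the precise clock");
    }
}
{code}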

bq. Is the intention to normalize clock calls to yield the epoch timestamp or 
the intention is to improve performance by doing so periodically?
bq. Should error in approximate time be an absolute value? Especially since we 
seem to compare two error values later. However, it seems it has to be the case 
anyways

Could you expand a little on these questions?

bq. Also, what are we going to do with all the TODOs?.. Should we create 
follow-up tickets for them?

I've been periodically auditing them, and yes I will file follow-up Jira for 
any that seem to warrant it (in general I think it's OK to leave some TODOs in 
place without Jira, for the next maintainer)

Thanks again for this and all of your other feedback over the past few months!

> Improvements to Internode Messaging
> ---
>
> Key: CASSANDRA-15066
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15066
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Messaging/Internode
>Reporter: Benedict
>Assignee: Benedict
>Priority: High
> Fix For: 4.0
>
> Attachments: 20k_backfill.png, 60k_RPS.png, 
> 60k_RPS_CPU_bottleneck.png, backfill_cass_perf_ft_msg_tst.svg, 
> baseline_patch_vs_30x.png, increasing_reads_latency.png, 
> many_reads_cass_perf_ft_msg_tst.svg
>
>
> CASSANDRA-8457 introduced asynchronous networking to internode messaging, but 
> there have been several follow-up endeavours to improve some semantic issues. 
>  CASSANDRA-14503 and CASSANDRA-13630 are the latest such efforts, and were 
> combined some months ago into a single overarching refactor of the original 
> work, to address some of the issues that have been discovered.  Given the 
> criticality of this work to the project, we wanted to bring some more eyes to 
> bear to ensure the release goes ahead smoothly.  In doing so, we uncovered a 
> number of issues with messaging, some of which long standing, that we felt 
> needed to be addressed.  This patch widens the scope of CASSANDRA-14503 and 
> CASSANDRA-13630 in an effort to close the book on the messaging service, at 
> least for the foreseeable future.
> The patch includes a number of clarifying refactors that

[jira] [Commented] (CASSANDRA-15066) Improvements to Internode Messaging

2019-06-11 Thread Alex Petrov (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16861303#comment-16861303
 ] 

Alex Petrov commented on CASSANDRA-15066:
-

Thank you for the patch.

Patch looks good and the latest additions are great. I'm happy to +1 it; I just 
have several comments, most of them minor:

  * looks like the monotonic clock implementation might race: each operation is 
synchronized, but their combination isn't:

{code}
public void refreshNow()
{
pauseNowSampling();
resumeNowSampling();
}
{code}

  * in {{waitUntilFlushed}} I’d swap {{int signalWhenExcessBytesWritten, int 
wakeUpWhenExcessBytesWritten}}, since other methods (like 
{{parkUntilFlushed(long wakeUpWhenFlushed, long signalWhenFlushed)}}) put the 
wake-up parameter first and the signal parameter second
  * in {{releaseSpace}} I’d leave a comment explaining why the thread is unparked 
but {{waiting}} is not set to {{null}} ({{parkUntilFlushed}} releases it)
  * in {{resumeEpochSampling}} and {{resumeNowSampling}} we can use 
{{scheduleWithFixedDelay(task, 0, … )}} instead of running a task upfront
  * Not sure if it’s necessary, but we can save the results of 
{{chunk0.free()}} and {{chunk1.free()}}. But since the array was inlined, I assume 
performance was important.
  * second switch in {{BufferPool}} microqueue {{removeIf}} inverts the logic 
in case 1. I’d add a small comment that the only two cases we need to shift are 
{{null, chunk, null}} and {{null, null, chunk}}.
  * Should the error in approximate time be an absolute value? Especially since we 
seem to compare two error values later. However, it seems it has to be the case 
anyway.
  * In {{OutboundConnection}}, there seems to be a race condition between 
{{promiseToExecuteLater(); requestConnect().addListener(f -> 
executeAgain());}} and {{maybeExecuteAgain();}}. The thing is, {{executeAgain}} will 
run _only_ if {{maybeExecuteAgain}} was executed _before_ it. It works fine for 
small messages since we have strict ordering: {{maybeExecuteAgain}} runs on the 
event loop, and {{executeAgain}} will execute on the event loop, too. However, 
for large messages, {{maybeExecuteAgain}} is called from the large-messages 
thread, while {{executeAgain}} is called from the normal thread. If 
{{executeAgain}} runs before {{maybeExecuteAgain}}, we’ll wait indefinitely. 
You can reproduce it by running {{testCloseIfEndpointDown}} enough times 
(it triggers extremely rarely, so better to run only with large messages). I wasn’t 
able to reproduce this outside of closing.
  * it’d be great to add some comments to {{FrameDecoderLegacy}}
  * in IMH, we have the same code in {{onCorruptFrame}}, {{abort}}, and 
{{onIntactFrame}}. I'm definitely not advocating against duplication (at least 
not in this particular case), but it might be good to comment the places where 
this code is used (see below), i.e., explain that we don't want to 
double-release on corrupt+expired or other combinations of frames.

{code}
if (!isExpired && !isCorrupt)
{
releaseBuffers();
releaseCapacity(size);
}
{code}

Short question: How is the approximate clock implementation with sampling 
better than just using a regular clock? Is the intention to normalize clock 
calls to yield the epoch timestamp, or is the intention to improve performance 
by doing so periodically?

Also, what are we going to do with all the TODOs?.. Should we create follow-up 
tickets for them?

The new {{OutboundConnection}} state machine looks great. Also, I wanted to give 
special props & thanks for the Verifier and some of the Queue tests. It is 
great to see more tests that exercise behaviours through randomization and 
verification, not just unit testing. I think there's a lot for everyone in the 
community to learn from these examples.

> Improvements to Internode Messaging
> ---
>
> Key: CASSANDRA-15066
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15066
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Messaging/Internode
>Reporter: Benedict
>Assignee: Benedict
>Priority: High
> Fix For: 4.0
>
> Attachments: 20k_backfill.png, 60k_RPS.png, 
> 60k_RPS_CPU_bottleneck.png, backfill_cass_perf_ft_msg_tst.svg, 
> baseline_patch_vs_30x.png, increasing_reads_latency.png, 
> many_reads_cass_perf_ft_msg_tst.svg
>
>
> CASSANDRA-8457 introduced asynchronous networking to internode messaging, but 
> there have been several follow-up endeavours to improve some semantic issues. 
>  CASSANDRA-14503 and CASSANDRA-13630 are the latest such efforts, and were 
> combined some months ago into a single overarching refactor of the original 
> work, to address some of the issues that have been discovered.  Given the 
> criticality

[jira] [Commented] (CASSANDRA-15131) Data Race between force remove and remove

2019-06-11 Thread lujie (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16861001#comment-16861001
 ] 

lujie commented on CASSANDRA-15131:
---

ping->

> Data Race between force remove and remove
> -
>
> Key: CASSANDRA-15131
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15131
> Project: Cassandra
>  Issue Type: Bug
>  Components: Consistency/Bootstrap and Decommission
>Reporter: lujie
>Assignee: lujie
>Priority: Normal
>  Labels: pull-request-available
> Attachments: 0001-fix-CASSANDRA-15131.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Reproduce:
>  # Start a three-node cluster (A, B, C) by: ./bin/cassandra -f
>  # Shut down node A
>  # In node B, remove node A by: ./bin/nodetool removenode 
> 2331c0c1-f799-4f35-9323-c57ad020732b
>  # This process is too slow, so in node B we force remove A by: 
> ./bin/nodetool removenode force
>  # We hit an NPE:
> {code:java}
> RemovalStatus: Removing token (-9206149340638432876). Waiting for replication 
> confirmation from [/10.3.1.11,/10.3.1.14].
> error: null
> -- StackTrace --
> java.lang.NullPointerException
> at 
> org.apache.cassandra.gms.VersionedValue$VersionedValueFactory.removedNonlocal(VersionedValue.java:214)
> at org.apache.cassandra.gms.Gossiper.advertiseTokenRemoved(Gossiper.java:556)
> at 
> org.apache.cassandra.service.StorageService.forceRemoveCompletion(StorageService.java:4353)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:497)
> at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:71)
> at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:497)
> at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:275)
> at 
> com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:112)
> at 
> com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:46)
> at 
> com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237)
> at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138)
> at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:252)
> at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
> at com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:801)
> at 
> javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1471)
> at 
> javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:76)
> at 
> javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1312)
> at 
> javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1404)
> at 
> javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:832)
> at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:497)
> at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:323)
> at sun.rmi.transport.Transport$1.run(Transport.java:200)
> at sun.rmi.transport.Transport$1.run(Transport.java:197)
> at java.security.AccessController.doPrivileged(Native Method)
> at sun.rmi.transport.Transport.serviceCall(Transport.java:196)
> at sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:568)
> at 
> sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:826)
> at 
> sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.lambda$run$81(TCPTransport.java:683)
> at java.security.AccessController.doPrivileged(Native Method)
> at 
> sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:682)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> Code Analysis 
> 1. removeNode will mark the node as Leaving
> {code:java}
> tokenMetadata.addLeavingEndpoint(endpoint);
> {code}
> 2. so forceRemoveNode can step into remove (lines 3-12):
> {code:java}
> 1. if (!replicatingNodes.isEmpty() || 
> !tokenMetadata.getLeavingEndpoints().isEmpty())
> 2. {
> 3.   logger.warn("Removal not confirmed for for {}", 
> StringUtils.join(this.replicatingNodes, ",

[jira] [Updated] (CASSANDRA-14772) Fix issues in audit / full query log interactions

2019-06-11 Thread Marcus Eriksson (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson updated CASSANDRA-14772:

Reviewers: Aleksey Yeschenko, Vinay Chella

> Fix issues in audit / full query log interactions
> -
>
> Key: CASSANDRA-14772
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14772
> Project: Cassandra
>  Issue Type: Bug
>  Components: Legacy/CQL, Legacy/Tools
>Reporter: Marcus Eriksson
>Assignee: Marcus Eriksson
>Priority: Normal
> Fix For: 4.0
>
>
> There are some problems with the audit + full query log code that need to be 
> resolved before 4.0 is released:
> * Fix performance regression in FQL that makes it less usable than it should 
> be.
> * move full query log specific code to a separate package 
> * do some audit log class renames (I keep reading {{BinLogAuditLogger}} vs 
> {{BinAuditLogger}} wrong for example)
> * avoid parsing the CQL queries twice in {{QueryMessage}} when audit log is 
> enabled.
> * add a new tool to dump audit logs (ie, let fqltool be full query log 
> specific). fqltool crashes when pointed to them.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-14772) Fix issues in audit / full query log interactions

2019-06-11 Thread Marcus Eriksson (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson updated CASSANDRA-14772:

Test and Documentation Plan: unit tests + existing dtests
 Status: Patch Available  (was: Open)

Patch 
[here|https://github.com/apache/cassandra/compare/trunk...krummas:marcuse/14772?expand=1]
 and tests running 
[here|https://circleci.com/gh/krummas/workflows/cassandra/tree/marcuse%2F14772]

* Separates full query logging and audit logging - fql and audit logging now 
have no dependencies on each other, but both use {{BinLog}} to write log events 
to disk
* Introduces {{QueryEvents}} + {{AuthEvents}}, where it is possible to register 
listeners for these events (see the sketch after this list)
* {{AuditLogManager}} and {{FullQueryLogger}} now get log events through 
these listeners
* Introduces a small change in the {{QueryHandler}} interface - the {{process}} 
method now takes a parsed {{CQLStatement}}; this was done to avoid re-parsing 
the statement when logging, and a {{parse}} method was added to create this 
{{CQLStatement}}
* Moves the {{enableFullQueryLogger}} etc. methods from {{StorageProxy}} to 
{{StorageService}}

> Fix issues in audit / full query log interactions
> -
>
> Key: CASSANDRA-14772
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14772
> Project: Cassandra
>  Issue Type: Bug
>  Components: Legacy/CQL, Legacy/Tools
>Reporter: Marcus Eriksson
>Assignee: Marcus Eriksson
>Priority: Normal
> Fix For: 4.0
>
>
> There are some problems with the audit + full query log code that need to be 
> resolved before 4.0 is released:
> * Fix performance regression in FQL that makes it less usable than it should 
> be.
> * move full query log specific code to a separate package 
> * do some audit log class renames (I keep reading {{BinLogAuditLogger}} vs 
> {{BinAuditLogger}} wrong for example)
> * avoid parsing the CQL queries twice in {{QueryMessage}} when audit log is 
> enabled.
> * add a new tool to dump audit logs (ie, let fqltool be full query log 
> specific). fqltool crashes when pointed to them.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org