[jira] [Commented] (CASSANDRA-16465) Increased Read Latency With Cassandra >= 3.11.7

2021-04-21 Thread Cyril Scetbon (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-16465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17327082#comment-17327082
 ] 

Cyril Scetbon commented on CASSANDRA-16465:
---

Any news on this ticket? [~AhmedElJAMI] confirmed that it doesn't happen with 
3.11.7 and [~skandyla] said the same for 3.11.8, so I think the title is 
misleading.

> Increased Read Latency With Cassandra >= 3.11.7
> ---
>
> Key: CASSANDRA-16465
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16465
> Project: Cassandra
>  Issue Type: Bug
>  Components: Legacy/Local Write-Read Paths
>Reporter: Ahmed ELJAMI
>Priority: Normal
> Fix For: 3.11.11
>
>
> After upgrading Cassandra from 3.11.3 to 3.11.9, the p99 read latency 
> increased significantly. Going back to 3.11.3 immediately fixed the issue.
> I have observed an increase in "SSTable reads" after upgrading to 3.11.9.
> The same behavior was observed by some other users: 
> [https://www.mail-archive.com/user@cassandra.apache.org/msg61247.html]
> According to Paulo Motta's comment, this behavior may be caused by 
> https://issues.apache.org/jira/browse/CASSANDRA-15690, which was introduced 
> in 3.11.7 and removed an optimization because it could cause a correctness 
> issue when there are partition deletions.
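
The "SSTable reads" figure mentioned above can be checked per table through 
nodetool's histograms. A minimal check, where {{ks}} and {{tbl}} are 
placeholders for the affected keyspace and table:
{code:bash}
# The SSTables column reports how many sstables are touched per read at each
# percentile; comparing it before and after the upgrade makes the regression
# described above visible. 'ks' and 'tbl' are placeholder names.
nodetool tablehistograms ks tbl
{code}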






[jira] [Comment Edited] (CASSANDRA-16465) Increased Read Latency With Cassandra >= 3.11.7

2021-04-21 Thread Cyril Scetbon (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-16465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17327082#comment-17327082
 ] 

Cyril Scetbon edited comment on CASSANDRA-16465 at 4/22/21, 4:03 AM:
-

Any news on this ticket? [~AhmedElJAMI] confirmed that it doesn't happen with 
3.11.7 and [~skandyla] said the same for 3.11.8, so I think the title is 
misleading.


was (Author: cscetbon):
Any news on that ticket. [~AhmedElJAMI] confirmed that it doesn't happen with 
3.11.7 and [~skandyla] said the same for 3.11.8, so the title is misleading I 
think

> Increased Read Latency With Cassandra >= 3.11.7
> ---
>
> Key: CASSANDRA-16465
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16465
> Project: Cassandra
>  Issue Type: Bug
>  Components: Legacy/Local Write-Read Paths
>Reporter: Ahmed ELJAMI
>Priority: Normal
> Fix For: 3.11.11
>
>
> After upgrading Cassandra from 3.11.3 to 3.11.9, the p99 read latency 
> increased significantly. Going back to 3.11.3 immediately fixed the issue.
> I have observed an increase in "SSTable reads" after upgrading to 3.11.9.
> The same behavior was observed by some other users: 
> [https://www.mail-archive.com/user@cassandra.apache.org/msg61247.html]
> According to Paulo Motta's comment, this behavior may be caused by 
> https://issues.apache.org/jira/browse/CASSANDRA-15690, which was introduced 
> in 3.11.7 and removed an optimization because it could cause a correctness 
> issue when there are partition deletions.






[jira] [Updated] (CASSANDRA-16199) cassandra.logdir undefined when CASSANDRA_LOG_DIR

2020-10-07 Thread Cyril Scetbon (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-16199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cyril Scetbon updated CASSANDRA-16199:
--
Description: 
When ${cassandra.logdir} is used in logback.xml, nodetool doesn't use the env 
variable CASSANDRA_LOG_DIR or the default value, and complains:
{noformat}
03:07:27,387 |-ERROR in ch.qos.logback.core.rolling.RollingFileAppender[DEBUGLOG] - Failed to create parent directories for [/cassandra.logdir_IS_UNDEFINED/debug.log]
03:07:27,387 |-ERROR in ch.qos.logback.core.rolling.RollingFileAppender[DEBUGLOG] - Failed to create parent directories for [/cassandra.logdir_IS_UNDEFINED/debug.log]
03:07:27,388 |-ERROR in ch.qos.logback.core.rolling.RollingFileAppender[DEBUGLOG] - openFile(cassandra.logdir_IS_UNDEFINED/debug.log,true) call failed. java.io.FileNotFoundException: cassandra.logdir_IS_UNDEFINED/debug.log (No such file or directory)
    at java.io.FileNotFoundException: cassandra.logdir_IS_UNDEFINED/debug.log (No such file or directory)
...{noformat}
It's different for the cassandra script, see for instance 
[https://github.com/apache/cassandra/blob/324267b3c0676ad31bd4f2fac0e2e673a9257a37/bin/cassandra#L186].
I feel like the same handling should be added to 
[https://github.com/apache/cassandra/blob/06209037ea56b5a2a49615a99f1542d6ea1b2947/bin/nodetool],
or nodetool should call cassandra-env.sh.

Seen on 3.11 and 4.0-beta1

  was:
When ${cassandra.logdir} is used in logback.xml, nodetool doesn't use the env 
variable CASSANDRA_LOG_DIR, and complains:
{noformat}
03:07:27,387 |-ERROR in ch.qos.logback.core.rolling.RollingFileAppender[DEBUGLOG] - Failed to create parent directories for [/cassandra.logdir_IS_UNDEFINED/debug.log]
03:07:27,387 |-ERROR in ch.qos.logback.core.rolling.RollingFileAppender[DEBUGLOG] - Failed to create parent directories for [/cassandra.logdir_IS_UNDEFINED/debug.log]
03:07:27,388 |-ERROR in ch.qos.logback.core.rolling.RollingFileAppender[DEBUGLOG] - openFile(cassandra.logdir_IS_UNDEFINED/debug.log,true) call failed. java.io.FileNotFoundException: cassandra.logdir_IS_UNDEFINED/debug.log (No such file or directory)
    at java.io.FileNotFoundException: cassandra.logdir_IS_UNDEFINED/debug.log (No such file or directory)
...{noformat}
It's different for the cassandra script, see for instance 
[https://github.com/apache/cassandra/blob/324267b3c0676ad31bd4f2fac0e2e673a9257a37/bin/cassandra#L186].
I feel like the same handling should be added to 
[https://github.com/apache/cassandra/blob/06209037ea56b5a2a49615a99f1542d6ea1b2947/bin/nodetool],
or nodetool should call cassandra-env.sh.

Seen on 3.11 and 4.0-beta1


> cassandra.logdir undefined when CASSANDRA_LOG_DIR
> -
>
> Key: CASSANDRA-16199
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16199
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local/Config
>Reporter: Cyril Scetbon
>Priority: Normal
>
> When ${cassandra.logdir} is used in logback.xml, nodetool doesn't use the env 
> variable CASSANDRA_LOG_DIR or the default value, and complains:
> {noformat}
> 03:07:27,387 |-ERROR in ch.qos.logback.core.rolling.RollingFileAppender[DEBUGLOG] - Failed to create parent directories for [/cassandra.logdir_IS_UNDEFINED/debug.log]
> 03:07:27,387 |-ERROR in ch.qos.logback.core.rolling.RollingFileAppender[DEBUGLOG] - Failed to create parent directories for [/cassandra.logdir_IS_UNDEFINED/debug.log]
> 03:07:27,388 |-ERROR in ch.qos.logback.core.rolling.RollingFileAppender[DEBUGLOG] - openFile(cassandra.logdir_IS_UNDEFINED/debug.log,true) call failed. java.io.FileNotFoundException: cassandra.logdir_IS_UNDEFINED/debug.log (No such file or directory)
>     at java.io.FileNotFoundException: cassandra.logdir_IS_UNDEFINED/debug.log (No such file or directory)
> ...{noformat}
> It's different for the cassandra script, see for instance 
> [https://github.com/apache/cassandra/blob/324267b3c0676ad31bd4f2fac0e2e673a9257a37/bin/cassandra#L186].
> I feel like the same handling should be added to 
> [https://github.com/apache/cassandra/blob/06209037ea56b5a2a49615a99f1542d6ea1b2947/bin/nodetool],
> or nodetool should call cassandra-env.sh.
>  
> Seen on 3.11 and 4.0-beta1
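
The handling referenced above in bin/cassandra amounts to defaulting the log 
directory and passing it to the JVM as a system property. Below is a minimal 
sketch of what the suggested addition to bin/nodetool could look like; the 
variable names follow the cassandra script, and the exact integration point 
in nodetool's java invocation is an assumption:
{code:bash}
# Hypothetical snippet for bin/nodetool: fall back to $CASSANDRA_HOME/logs
# when CASSANDRA_LOG_DIR is unset, then expose it as the cassandra.logdir
# system property so that ${cassandra.logdir} resolves in logback.xml.
if [ -z "$CASSANDRA_LOG_DIR" ]; then
    CASSANDRA_LOG_DIR="$CASSANDRA_HOME/logs"
fi
JVM_OPTS="$JVM_OPTS -Dcassandra.logdir=$CASSANDRA_LOG_DIR"
{code}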






[jira] [Updated] (CASSANDRA-16199) cassandra.logdir undefined when CASSANDRA_LOG_DIR

2020-10-07 Thread Cyril Scetbon (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-16199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cyril Scetbon updated CASSANDRA-16199:
--
Description: 
When ${cassandra.logdir} is used in logback.xml, nodetool doesn't use the env 
variable CASSANDRA_LOG_DIR, and complains:
{noformat}
03:07:27,387 |-ERROR in ch.qos.logback.core.rolling.RollingFileAppender[DEBUGLOG] - Failed to create parent directories for [/cassandra.logdir_IS_UNDEFINED/debug.log]
03:07:27,387 |-ERROR in ch.qos.logback.core.rolling.RollingFileAppender[DEBUGLOG] - Failed to create parent directories for [/cassandra.logdir_IS_UNDEFINED/debug.log]
03:07:27,388 |-ERROR in ch.qos.logback.core.rolling.RollingFileAppender[DEBUGLOG] - openFile(cassandra.logdir_IS_UNDEFINED/debug.log,true) call failed. java.io.FileNotFoundException: cassandra.logdir_IS_UNDEFINED/debug.log (No such file or directory)
    at java.io.FileNotFoundException: cassandra.logdir_IS_UNDEFINED/debug.log (No such file or directory)
...{noformat}
It's different for the cassandra script, see for instance 
[https://github.com/apache/cassandra/blob/324267b3c0676ad31bd4f2fac0e2e673a9257a37/bin/cassandra#L186].
I feel like the same handling should be added to 
[https://github.com/apache/cassandra/blob/06209037ea56b5a2a49615a99f1542d6ea1b2947/bin/nodetool],
or nodetool should call cassandra-env.sh.

Seen on 3.11 and 4.0-beta1

  was:
When ${cassandra.logdir} is used in logback.xml, nodetool doesn't use the env 
variable CASSANDRA_LOG_DIR, and complains:
{noformat}
03:07:27,387 |-ERROR in ch.qos.logback.core.rolling.RollingFileAppender[DEBUGLOG] - Failed to create parent directories for [/cassandra.logdir_IS_UNDEFINED/debug.log]
03:07:27,387 |-ERROR in ch.qos.logback.core.rolling.RollingFileAppender[DEBUGLOG] - Failed to create parent directories for [/cassandra.logdir_IS_UNDEFINED/debug.log]
03:07:27,388 |-ERROR in ch.qos.logback.core.rolling.RollingFileAppender[DEBUGLOG] - openFile(cassandra.logdir_IS_UNDEFINED/debug.log,true) call failed. java.io.FileNotFoundException: cassandra.logdir_IS_UNDEFINED/debug.log (No such file or directory)
    at java.io.FileNotFoundException: cassandra.logdir_IS_UNDEFINED/debug.log (No such file or directory)
...{noformat}
It's different for the cassandra script, see for instance 
[https://github.com/apache/cassandra/blob/324267b3c0676ad31bd4f2fac0e2e673a9257a37/bin/cassandra#L186].
I feel like the same handling should be added to 
[https://github.com/apache/cassandra/blob/06209037ea56b5a2a49615a99f1542d6ea1b2947/bin/nodetool],
or nodetool should call cassandra-env.sh.


> cassandra.logdir undefined when CASSANDRA_LOG_DIR
> -
>
> Key: CASSANDRA-16199
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16199
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local/Config
>Reporter: Cyril Scetbon
>Priority: Normal
>
> When ${cassandra.logdir} is used in logback.xml, nodetool doesn't use the env 
> variable CASSANDRA_LOG_DIR, and complains:
> {noformat}
> 03:07:27,387 |-ERROR in ch.qos.logback.core.rolling.RollingFileAppender[DEBUGLOG] - Failed to create parent directories for [/cassandra.logdir_IS_UNDEFINED/debug.log]
> 03:07:27,387 |-ERROR in ch.qos.logback.core.rolling.RollingFileAppender[DEBUGLOG] - Failed to create parent directories for [/cassandra.logdir_IS_UNDEFINED/debug.log]
> 03:07:27,388 |-ERROR in ch.qos.logback.core.rolling.RollingFileAppender[DEBUGLOG] - openFile(cassandra.logdir_IS_UNDEFINED/debug.log,true) call failed. java.io.FileNotFoundException: cassandra.logdir_IS_UNDEFINED/debug.log (No such file or directory)
>     at java.io.FileNotFoundException: cassandra.logdir_IS_UNDEFINED/debug.log (No such file or directory)
> ...{noformat}
> It's different for the cassandra script, see for instance 
> [https://github.com/apache/cassandra/blob/324267b3c0676ad31bd4f2fac0e2e673a9257a37/bin/cassandra#L186].
> I feel like the same handling should be added to 
> [https://github.com/apache/cassandra/blob/06209037ea56b5a2a49615a99f1542d6ea1b2947/bin/nodetool],
> or nodetool should call cassandra-env.sh.
>  
> Seen on 3.11 and 4.0-beta1






[jira] [Created] (CASSANDRA-16199) cassandra.logdir undefined when CASSANDRA_LOG_DIR

2020-10-07 Thread Cyril Scetbon (Jira)
Cyril Scetbon created CASSANDRA-16199:
-

 Summary: cassandra.logdir undefined when CASSANDRA_LOG_DIR
 Key: CASSANDRA-16199
 URL: https://issues.apache.org/jira/browse/CASSANDRA-16199
 Project: Cassandra
  Issue Type: Bug
  Components: Local/Config
Reporter: Cyril Scetbon


When ${cassandra.logdir} is used in logback.xml, nodetool doesn't use the env 
variable CASSANDRA_LOG_DIR, and complains:
{noformat}
03:07:27,387 |-ERROR in ch.qos.logback.core.rolling.RollingFileAppender[DEBUGLOG] - Failed to create parent directories for [/cassandra.logdir_IS_UNDEFINED/debug.log]
03:07:27,387 |-ERROR in ch.qos.logback.core.rolling.RollingFileAppender[DEBUGLOG] - Failed to create parent directories for [/cassandra.logdir_IS_UNDEFINED/debug.log]
03:07:27,388 |-ERROR in ch.qos.logback.core.rolling.RollingFileAppender[DEBUGLOG] - openFile(cassandra.logdir_IS_UNDEFINED/debug.log,true) call failed. java.io.FileNotFoundException: cassandra.logdir_IS_UNDEFINED/debug.log (No such file or directory)
    at java.io.FileNotFoundException: cassandra.logdir_IS_UNDEFINED/debug.log (No such file or directory)
...{noformat}
It's different for the cassandra script, see for instance 
[https://github.com/apache/cassandra/blob/324267b3c0676ad31bd4f2fac0e2e673a9257a37/bin/cassandra#L186].
I feel like the same handling should be added to 
[https://github.com/apache/cassandra/blob/06209037ea56b5a2a49615a99f1542d6ea1b2947/bin/nodetool],
or nodetool should call cassandra-env.sh.
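
For reference, the errors above are triggered by an appender that references 
the property, along these lines; a minimal sketch of the relevant logback.xml 
fragment, with the appender name DEBUGLOG taken from the error messages:
{noformat}
<appender name="DEBUGLOG" class="ch.qos.logback.core.rolling.RollingFileAppender">
  <file>${cassandra.logdir}/debug.log</file>
  ...
</appender>
{noformat}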






[jira] [Updated] (CASSANDRA-16199) cassandra.logdir undefined when CASSANDRA_LOG_DIR

2020-10-07 Thread Cyril Scetbon (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-16199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cyril Scetbon updated CASSANDRA-16199:
--
Impacts:   (was: None)

> cassandra.logdir undefined when CASSANDRA_LOG_DIR
> -
>
> Key: CASSANDRA-16199
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16199
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local/Config
>Reporter: Cyril Scetbon
>Priority: Normal
>
> When ${cassandra.logdir} is used in logback.xml, nodetool doesn't use the env 
> variable CASSANDRA_LOG_DIR, and complains:
> {noformat}
> 03:07:27,387 |-ERROR in ch.qos.logback.core.rolling.RollingFileAppender[DEBUGLOG] - Failed to create parent directories for [/cassandra.logdir_IS_UNDEFINED/debug.log]
> 03:07:27,387 |-ERROR in ch.qos.logback.core.rolling.RollingFileAppender[DEBUGLOG] - Failed to create parent directories for [/cassandra.logdir_IS_UNDEFINED/debug.log]
> 03:07:27,388 |-ERROR in ch.qos.logback.core.rolling.RollingFileAppender[DEBUGLOG] - openFile(cassandra.logdir_IS_UNDEFINED/debug.log,true) call failed. java.io.FileNotFoundException: cassandra.logdir_IS_UNDEFINED/debug.log (No such file or directory)
>     at java.io.FileNotFoundException: cassandra.logdir_IS_UNDEFINED/debug.log (No such file or directory)
> ...{noformat}
> It's different for the cassandra script, see for instance 
> [https://github.com/apache/cassandra/blob/324267b3c0676ad31bd4f2fac0e2e673a9257a37/bin/cassandra#L186].
> I feel like the same handling should be added to 
> [https://github.com/apache/cassandra/blob/06209037ea56b5a2a49615a99f1542d6ea1b2947/bin/nodetool],
> or nodetool should call cassandra-env.sh.






[jira] [Updated] (CASSANDRA-14992) Authenticating Jolokia using Cassandra

2019-01-21 Thread Cyril Scetbon (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14992?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cyril Scetbon updated CASSANDRA-14992:
--
Reproduced In: 3.11.4  (was: 3.11.2)

> Authenticating Jolokia using Cassandra
> --
>
> Key: CASSANDRA-14992
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14992
> Project: Cassandra
>  Issue Type: Bug
>  Components: Legacy/Core
> Environment: Cassandra 3.11.3
> Ubuntu Xenial
> Jolokia 1.3.7
>Reporter: Cyril Scetbon
>Assignee: Cyril Scetbon
>Priority: Major
>
> Following the 
> [guide|https://docs.datastax.com/en/cassandra/3.0/cassandra/configuration/secureJmxAuthentication.html]
>  (AUTHENTICATION AND AUTHORIZATION WITH CASSANDRA INTERNALS - CASSANDRA 3.6 
> AND LATER) does not work. I also don't understand why the guide says to 
> comment out lines containing `/etc/cassandra/jmxremote`; it should not need 
> them. I expect JAAS to take the credentials passed in the HTTP connection 
> and use them to authenticate against Cassandra.
> I have the following set of options:
> {code:java}
> -javaagent:/usr/local/share/jolokia-agent.jar=host=0.0.0.0,executor=fixed,authMode=jaas
>  -Dcom.sun.management.jmxremote.authenticate=true, 
> -Dcassandra.jmx.remote.login.config=CassandraLogin, 
> -Djava.security.auth.login.config=/etc/cassandra/cassandra-jaas.config, 
> -Dcassandra.jmx.authorizer=org.apache.cassandra.auth.jmx.AuthorizationProxy, 
> -Dcom.sun.management.jmxremote, -Dcom.sun.management.jmxremote.ssl=false, 
> -Dcom.sun.management.jmxremote.local.only=false, 
> -Dcassandra.jmx.remote.port=7199, 
> -Dcom.sun.management.jmxremote.rmi.port=7199, -Djava.rmi.server.hostname= 
> 2a1d064ce844{code}
> I get an HTTP 401 error when I try to query Jolokia with no credentials, and 
> an empty response otherwise:
> {code:java}
> $ echo '{"mbean": "org.apache.cassandra.db:type=StorageService", "attribute": 
> "OperationMode", "type": "read"}' | http POST http://localhost:8778/jolokia/
> HTTP/1.1 401 Unauthorized
> Content-length: 0
> Date: Mon, 21 Jan 2019 18:31:35 GMT
> Www-authenticate: Basic realm="jolokia"{code}
> If I then create the jmxremote files on disk, I only get empty responses:
> {code:java}
> $ curl -v -u monitorRoleUser:cassie http://localhost:8778/jolokia/list/
> * Trying 127.0.0.1...
> * TCP_NODELAY set
> * Connected to localhost (127.0.0.1) port 8778 (#0)
> * Server auth using Basic with user 'monitorRoleUser'
> > GET /jolokia/list/ HTTP/1.1
> > Host: localhost:8778
> > Authorization: Basic bW9uaXRvclJvbGVVc2VyOmNhc3NpZQ==
> > User-Agent: curl/7.63.0-88
> > Accept: */*
> >
> * Empty reply from server
> * Connection #0 to host localhost left intact
> curl: (52) Empty reply from server{code}
>  
> What is missing? Is it really functional?
>  
> I tried to ping the author of the Jolokia project but did not get any 
> response, either on the GitHub project or on the support forum...
>  
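
For context, the CassandraLogin entry referenced by 
-Dcassandra.jmx.remote.login.config has to be defined in the JAAS file passed 
via -Djava.security.auth.login.config. A minimal sketch of that file, matching 
the cassandra-jaas.config shipped in Cassandra's conf/ directory as far as I 
know; it delegates authentication to Cassandra's configured IAuthenticator:
{noformat}
CassandraLogin {
  org.apache.cassandra.auth.CassandraLoginModule REQUIRED;
};
{noformat}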






[jira] [Updated] (CASSANDRA-14992) Authenticating Jolokia using Cassandra

2019-01-21 Thread Cyril Scetbon (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14992?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cyril Scetbon updated CASSANDRA-14992:
--
Reproduced In: 3.11.3  (was: 3.11.4)

> Authenticating Jolokia using Cassandra
> --
>
> Key: CASSANDRA-14992
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14992
> Project: Cassandra
>  Issue Type: Bug
>  Components: Legacy/Core
> Environment: Cassandra 3.11.3
> Ubuntu Xenial
> Jolokia 1.3.7
>Reporter: Cyril Scetbon
>Assignee: Cyril Scetbon
>Priority: Major
>
> Following the 
> [guide|https://docs.datastax.com/en/cassandra/3.0/cassandra/configuration/secureJmxAuthentication.html]
>  (AUTHENTICATION AND AUTHORIZATION WITH CASSANDRA INTERNALS - CASSANDRA 3.6 
> AND LATER) does not work. I also don't understand why the guide says to 
> comment out lines containing `/etc/cassandra/jmxremote`; it should not need 
> them. I expect JAAS to take the credentials passed in the HTTP connection 
> and use them to authenticate against Cassandra.
> I have the following set of options:
> {code:java}
> -javaagent:/usr/local/share/jolokia-agent.jar=host=0.0.0.0,executor=fixed,authMode=jaas
>  -Dcom.sun.management.jmxremote.authenticate=true, 
> -Dcassandra.jmx.remote.login.config=CassandraLogin, 
> -Djava.security.auth.login.config=/etc/cassandra/cassandra-jaas.config, 
> -Dcassandra.jmx.authorizer=org.apache.cassandra.auth.jmx.AuthorizationProxy, 
> -Dcom.sun.management.jmxremote, -Dcom.sun.management.jmxremote.ssl=false, 
> -Dcom.sun.management.jmxremote.local.only=false, 
> -Dcassandra.jmx.remote.port=7199, 
> -Dcom.sun.management.jmxremote.rmi.port=7199, -Djava.rmi.server.hostname= 
> 2a1d064ce844{code}
> I get an HTTP 401 error when I try to query Jolokia with no credentials, and 
> an empty response otherwise:
> {code:java}
> $ echo '{"mbean": "org.apache.cassandra.db:type=StorageService", "attribute": 
> "OperationMode", "type": "read"}' | http POST http://localhost:8778/jolokia/
> HTTP/1.1 401 Unauthorized
> Content-length: 0
> Date: Mon, 21 Jan 2019 18:31:35 GMT
> Www-authenticate: Basic realm="jolokia"{code}
> If I then create the jmxremote files on disk, I only get empty responses:
> {code:java}
> $ curl -v -u monitorRoleUser:cassie http://localhost:8778/jolokia/list/
> * Trying 127.0.0.1...
> * TCP_NODELAY set
> * Connected to localhost (127.0.0.1) port 8778 (#0)
> * Server auth using Basic with user 'monitorRoleUser'
> > GET /jolokia/list/ HTTP/1.1
> > Host: localhost:8778
> > Authorization: Basic bW9uaXRvclJvbGVVc2VyOmNhc3NpZQ==
> > User-Agent: curl/7.63.0-88
> > Accept: */*
> >
> * Empty reply from server
> * Connection #0 to host localhost left intact
> curl: (52) Empty reply from server{code}
>  
> What is missing? Is it really functional?
>  
> I tried to ping the author of the Jolokia project but did not get any 
> response, either on the GitHub project or on the support forum...
>  






[jira] [Updated] (CASSANDRA-14992) Authenticating Jolokia using Cassandra

2019-01-21 Thread Cyril Scetbon (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14992?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cyril Scetbon updated CASSANDRA-14992:
--
Description: 
Following the 
[guide|https://docs.datastax.com/en/cassandra/3.0/cassandra/configuration/secureJmxAuthentication.html]
 (AUTHENTICATION AND AUTHORIZATION WITH CASSANDRA INTERNALS - CASSANDRA 3.6 AND 
LATER) does not work. I also don't understand why the guide says to comment 
out lines containing `/etc/cassandra/jmxremote`; it should not need them. I 
expect JAAS to take the credentials passed in the HTTP connection and use 
them to authenticate against Cassandra.

I have the following set of options:
{code:java}
-javaagent:/usr/local/share/jolokia-agent.jar=host=0.0.0.0,executor=fixed,authMode=jaas
 -Dcom.sun.management.jmxremote.authenticate=true, 
-Dcassandra.jmx.remote.login.config=CassandraLogin, 
-Djava.security.auth.login.config=/etc/cassandra/cassandra-jaas.config, 
-Dcassandra.jmx.authorizer=org.apache.cassandra.auth.jmx.AuthorizationProxy, 
-Dcom.sun.management.jmxremote, -Dcom.sun.management.jmxremote.ssl=false, 
-Dcom.sun.management.jmxremote.local.only=false, 
-Dcassandra.jmx.remote.port=7199, -Dcom.sun.management.jmxremote.rmi.port=7199, 
-Djava.rmi.server.hostname= 2a1d064ce844{code}
I get an HTTP 401 error when I try to query Jolokia with no credentials, and 
an empty response otherwise:
{code:java}
$ echo '{"mbean": "org.apache.cassandra.db:type=StorageService", "attribute": 
"OperationMode", "type": "read"}' | http POST http://localhost:8778/jolokia/
HTTP/1.1 401 Unauthorized
Content-length: 0
Date: Mon, 21 Jan 2019 18:31:35 GMT
Www-authenticate: Basic realm="jolokia"{code}
If I then create the jmxremote files on disk, I only get empty responses:
{code:java}
$ curl -v -u monitorRoleUser:cassie http://localhost:8778/jolokia/list/
* Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 8778 (#0)
* Server auth using Basic with user 'monitorRoleUser'
> GET /jolokia/list/ HTTP/1.1
> Host: localhost:8778
> Authorization: Basic bW9uaXRvclJvbGVVc2VyOmNhc3NpZQ==
> User-Agent: curl/7.63.0-88
> Accept: */*
>
* Empty reply from server
* Connection #0 to host localhost left intact
curl: (52) Empty reply from server{code}
 

What is missing? Is it really functional?

 

I tried to ping the author of the Jolokia project but did not get any 
response, either on the GitHub project or on the support forum...

 

  was:
I've noticed that when I run a long operation like a rebuild using Jolokia, 
I can no longer query Jolokia: I get a timeout error even when trying to read 
a simple attribute like the Java version in use:
{code:java}
jmx4perl http://cassandra-3.11.2:8778/jolokia read java.lang:type=Runtime SpecVersion
ERROR: Error while fetching http://cassandra-3.11.2:8778/jolokia/read/java.lang%3Atype%3DRuntime/SpecVersion :
408 Got timeout in 180s
{code}
I also removed the default flag 
[-XX:+PerfDisableSharedMem|https://github.com/apache/cassandra/blob/cassandra-3.11/NEWS.txt#L769-L771]
 but did not have any more luck.


> Authenticating Jolokia using Cassandra
> --
>
> Key: CASSANDRA-14992
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14992
> Project: Cassandra
>  Issue Type: Bug
>  Components: Legacy/Core
> Environment: Cassandra 3.11.3
> Ubuntu Xenial
> Jolokia 1.3.7
>Reporter: Cyril Scetbon
>Assignee: Cyril Scetbon
>Priority: Major
>
> Following the 
> [guide|https://docs.datastax.com/en/cassandra/3.0/cassandra/configuration/secureJmxAuthentication.html]
>  (AUTHENTICATION AND AUTHORIZATION WITH CASSANDRA INTERNALS - CASSANDRA 3.6 
> AND LATER) does not work. I also don't understand why the guide says to 
> comment out lines containing `/etc/cassandra/jmxremote`; it should not need 
> them. I expect JAAS to take the credentials passed in the HTTP connection 
> and use them to authenticate against Cassandra.
> I have the following set of options:
> {code:java}
> -javaagent:/usr/local/share/jolokia-agent.jar=host=0.0.0.0,executor=fixed,authMode=jaas
>  -Dcom.sun.management.jmxremote.authenticate=true, 
> -Dcassandra.jmx.remote.login.config=CassandraLogin, 
> -Djava.security.auth.login.config=/etc/cassandra/cassandra-jaas.config, 
> -Dcassandra.jmx.authorizer=org.apache.cassandra.auth.jmx.AuthorizationProxy, 
> -Dcom.sun.management.jmxremote, -Dcom.sun.management.jmxremote.ssl=false, 
> -Dcom.sun.management.jmxremote.local.only=false, 
> -Dcassandra.jmx.remote.port=7199, 
> -Dcom.sun.management.jmxremote.rmi.port=7199, -Djava.rmi.server.hostname= 
> 2a1d064ce844{code}
> I get an HTTP 401 error when I try to query Jolokia with no credentials, and 
> an empty response otherwise:
> {code:java}
> $ echo '{"mbean": "org.apache.cassandra.db:type=StorageService", "attribute": 
> "OperationMode", "type": 

[jira] [Updated] (CASSANDRA-14992) Authenticating Jolokia using Cassandra

2019-01-21 Thread Cyril Scetbon (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14992?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cyril Scetbon updated CASSANDRA-14992:
--
Environment: 
Cassandra 3.11.3

Ubuntu Xenial

Jolokia 1.3.7

  was:
Cassandra 3.11.2

Ubuntu Xenial

Jolokia 1.3.7


> Authenticating Jolokia using Cassandra
> --
>
> Key: CASSANDRA-14992
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14992
> Project: Cassandra
>  Issue Type: Bug
>  Components: Legacy/Core
> Environment: Cassandra 3.11.3
> Ubuntu Xenial
> Jolokia 1.3.7
>Reporter: Cyril Scetbon
>Assignee: Cyril Scetbon
>Priority: Major
>
> I've noticed that when I run a long operation like a rebuild using Jolokia, 
> I can no longer query Jolokia: I get a timeout error even when trying to 
> read a simple attribute like the Java version in use:
> {code:java}
> jmx4perl http://cassandra-3.11.2:8778/jolokia read java.lang:type=Runtime SpecVersion
> ERROR: Error while fetching http://cassandra-3.11.2:8778/jolokia/read/java.lang%3Atype%3DRuntime/SpecVersion :
> 408 Got timeout in 180s
> {code}
> I also removed the default flag 
> [-XX:+PerfDisableSharedMem|https://github.com/apache/cassandra/blob/cassandra-3.11/NEWS.txt#L769-L771]
>  but did not have any more luck.






[jira] [Created] (CASSANDRA-14992) Authenticating Jolokia using Cassandra

2019-01-21 Thread Cyril Scetbon (JIRA)
Cyril Scetbon created CASSANDRA-14992:
-

 Summary: Authenticating Jolokia using Cassandra
 Key: CASSANDRA-14992
 URL: https://issues.apache.org/jira/browse/CASSANDRA-14992
 Project: Cassandra
  Issue Type: Bug
  Components: Legacy/Core
 Environment: Cassandra 3.11.2

Ubuntu Xenial

Jolokia 1.3.7
Reporter: Cyril Scetbon
Assignee: Cyril Scetbon


I've noticed that when I run a long operation like a rebuild using Jolokia, 
I can no longer query Jolokia: I get a timeout error even when trying to read 
a simple attribute like the Java version in use:
{code:java}
jmx4perl http://cassandra-3.11.2:8778/jolokia read java.lang:type=Runtime SpecVersion
ERROR: Error while fetching http://cassandra-3.11.2:8778/jolokia/read/java.lang%3Atype%3DRuntime/SpecVersion :
408 Got timeout in 180s
{code}
I also removed the default flag 
[-XX:+PerfDisableSharedMem|https://github.com/apache/cassandra/blob/cassandra-3.11/NEWS.txt#L769-L771]
 but did not have any more luck.






[jira] [Commented] (CASSANDRA-14916) Add missing commands to nodetool_completion

2018-12-06 Thread Cyril Scetbon (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16712042#comment-16712042
 ] 

Cyril Scetbon commented on CASSANDRA-14916:
---

Hey [~carlo_4002], happy you're updating that stuff ;) 

From what I've checked, the new command *viewbuildstatus* should complete 
with the existing views in the chosen keyspace. So I would add a function 
that gets the views for a specific keyspace and use it to complete the 
command, as in the sketch below.
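
A minimal sketch of such a helper, assuming cqlsh can reach the local node; 
the function name is hypothetical and the output scraping is approximate:
{code:bash}
# Hypothetical completion helper: print the materialized views of the
# keyspace passed as $1 by reading system_schema.views through cqlsh,
# stripping cqlsh's header, separator, and row-count decoration.
_get_views_for_keyspace()
{
    cqlsh -e "SELECT view_name FROM system_schema.views WHERE keyspace_name = '$1';" 2>/dev/null \
        | sed -n 's/^ *\([A-Za-z0-9_][A-Za-z0-9_]*\) *$/\1/p' \
        | grep -v '^view_name$'
}
{code}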

> Add missing commands to nodetool_completion
> ---
>
> Key: CASSANDRA-14916
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14916
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: jean carlo rivera ura
>Assignee: jean carlo rivera ura
>Priority: Trivial
> Fix For: 4.0
>
> Attachments: 
> 0001-adding-missing-nodetool-s-commands-to-the-file-nodet.patch
>
>
> Since [CASSANDRA-6421|https://issues.apache.org/jira/browse/CASSANDRA-6421], 
> the file nodetool_completion hasn't been modified to add the new nodetool 
> commands.
> I propose this patch to add those missing features.
> I tried to follow the logic of the code; I hope I did not miss anything. 
> [~cscetbon], I would be happy if you could have a look at the patch.






[jira] [Resolved] (CASSANDRA-14686) Jolokia agent not accepting requests during an operation

2018-09-04 Thread Cyril Scetbon (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cyril Scetbon resolved CASSANDRA-14686.
---
Resolution: Not A Problem
  Assignee: Cyril Scetbon

Looking at the Cassandra core code, I didn't find any bridge code between the 
Jolokia agent and JMX. I think the issue comes from the agent itself. I'm 
going to look at it and try to find out what's going on.

> Jolokia agent not accepting requests during an operation
> 
>
> Key: CASSANDRA-14686
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14686
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: Cassandra 3.11.2
> Ubuntu Xenial
> Jolokia 1.3.7
>Reporter: Cyril Scetbon
>Assignee: Cyril Scetbon
>Priority: Major
>
> I've noticed that when I run a long operation like a rebuild using Jolokia, 
> I can no longer query Jolokia: I get a timeout error even when trying to 
> read a simple attribute like the Java version in use:
> {code:java}
> jmx4perl http://cassandra-3.11.2:8778/jolokia read java.lang:type=Runtime SpecVersion
> ERROR: Error while fetching http://cassandra-3.11.2:8778/jolokia/read/java.lang%3Atype%3DRuntime/SpecVersion :
> 408 Got timeout in 180s
> {code}
> I also removed the default flag 
> [-XX:+PerfDisableSharedMem|https://github.com/apache/cassandra/blob/cassandra-3.11/NEWS.txt#L769-L771]
>  but did not have any more luck.






[jira] [Created] (CASSANDRA-14686) Jolokia agent not accepting requests during an operation

2018-09-03 Thread Cyril Scetbon (JIRA)
Cyril Scetbon created CASSANDRA-14686:
-

 Summary: Jolokia agent not accepting requests during an operation
 Key: CASSANDRA-14686
 URL: https://issues.apache.org/jira/browse/CASSANDRA-14686
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Cassandra 3.11.2

Ubuntu Xenial

Jolokia 1.3.7
Reporter: Cyril Scetbon


I've noticed that when I run a long operation like a rebuild using Jolokia, 
I can no longer query Jolokia: I get a timeout error even when trying to read 
a simple attribute like the Java version in use:
{code:java}
jmx4perl http://cassandra-3.11.2:8778/jolokia read java.lang:type=Runtime SpecVersion
ERROR: Error while fetching http://cassandra-3.11.2:8778/jolokia/read/java.lang%3Atype%3DRuntime/SpecVersion :
408 Got timeout in 180s
{code}
I also removed the default flag 
[-XX:+PerfDisableSharedMem|https://github.com/apache/cassandra/blob/cassandra-3.11/NEWS.txt#L769-L771]
 but did not have any more luck.
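
A minimal reproduction sketch under the setup above, reusing the HTTPie-style 
call from CASSANDRA-14992; the exec payload and the rebuild operation 
signature are assumptions and may need adjusting to the exact StorageService 
MBean overloads of your version:
{code:bash}
# Hypothetical reproduction: start a long-running rebuild through Jolokia in
# the background, then immediately try a cheap read; with the agent blocked,
# the second call times out as described above.
echo '{"type": "exec", "mbean": "org.apache.cassandra.db:type=StorageService",
      "operation": "rebuild(java.lang.String)", "arguments": [null]}' \
  | http POST http://localhost:8778/jolokia/ &
jmx4perl http://localhost:8778/jolokia read java.lang:type=Runtime SpecVersion
{code}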






[jira] [Commented] (CASSANDRA-11335) Make nodetool scrub/cleanup/verify/upgradesstable to use JMX notification

2018-08-23 Thread Cyril Scetbon (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-11335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16590278#comment-16590278
 ] 

Cyril Scetbon commented on CASSANDRA-11335:
---

What about rebuild?

> Make nodetool scrub/cleanup/verify/upgradesstable to use JMX notification
> -
>
> Key: CASSANDRA-11335
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11335
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Tools
>Reporter: Yuki Morishita
>Assignee: Eduard Tudenhoefner
>Priority: Minor
> Fix For: 4.x
>
>
> Use the asynchronous operations we changed in CASSANDRA-11334 to make 
> nodetool scrub/verify/cleanup/upgradesstable use JMX notifications and print 
> out progress.






[jira] [Commented] (CASSANDRA-10751) "Pool is shutdown" error when running Hadoop jobs on Yarn

2018-06-14 Thread Cyril Scetbon (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-10751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16513111#comment-16513111
 ] 

Cyril Scetbon commented on CASSANDRA-10751:
---

[~michaelsembwever], It's a problem on 2.1.14. I haven't check on another 
version

> "Pool is shutdown" error when running Hadoop jobs on Yarn
> -
>
> Key: CASSANDRA-10751
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10751
> Project: Cassandra
>  Issue Type: Bug
> Environment: Hadoop 2.7.1 (HDP 2.3.2)
> Cassandra 2.1.11
>Reporter: Cyril Scetbon
>Assignee: Cyril Scetbon
>Priority: Major
> Fix For: 4.0, 2.2.13, 3.0.17, 3.11.3
>
> Attachments: CASSANDRA-10751-2.2.patch, CASSANDRA-10751-3.0.patch, 
> output.log
>
>
> Trying to execute a Hadoop job on Yarn, I get errors from Cassandra's 
> internal code. It seems that connections are shut down, but we can't 
> understand why...
> Here is an extract of the errors. I also attached a file with the complete 
> debug logs.
> {code}
> 15/11/22 20:05:54 [main]: DEBUG core.RequestHandler: Error querying 
> node006.internal.net/192.168.12.22:9042, trying next host (error is: 
> com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown)
> Failed with exception java.io.IOException:java.io.IOException: 
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
> tried for query failed (tried: node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
> 15/11/22 20:05:54 [main]: ERROR CliDriver: Failed with exception 
> java.io.IOException:java.io.IOException: 
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
> tried for query failed (tried: node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
> java.io.IOException: java.io.IOException: 
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
> tried for query failed (tried: node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:508)
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.pushRow(FetchOperator.java:415)
>   at org.apache.hadoop.hive.ql.exec.FetchTask.fetch(FetchTask.java:140)
>   at org.apache.hadoop.hive.ql.Driver.getResults(Driver.java:1672)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:233)
>   at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:165)
>   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:376)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:736)
>   at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:681)
>   at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:621)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
>   at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> Caused by: java.io.IOException: 
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
> tried for query failed (tried: node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
>   at 
> org.apache.hadoop.hive.cassandra.input.cql.HiveCqlInputFormat.getRecordReader(HiveCqlInputFormat.java:132)
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator$FetchInputFormatSplit.getRecordReader(FetchOperator.java:674)
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.getRecordReader(FetchOperator.java:324)
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:446)
>   ... 15 more
> Caused by: com.datastax.driver.core.exceptions.NoHostAvailableException: All 
> host(s) tried for query failed (tried: 
> node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
>   at 
> com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:84)
>   at 
> com.datastax.driver.core.DriverThrowables.propagateCause(DriverThrowables.java:

[jira] [Comment Edited] (CASSANDRA-10751) "Pool is shutdown" error when running Hadoop jobs on Yarn

2018-06-14 Thread Cyril Scetbon (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-10751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16513111#comment-16513111
 ] 

Cyril Scetbon edited comment on CASSANDRA-10751 at 6/14/18 11:07 PM:
-

[~michaelsembwever], It's a problem on 2.1.14. I haven't checked on another 
version


was (Author: cscetbon):
[~michaelsembwever], It's a problem on 2.1.14. I haven't check on another 
version

> "Pool is shutdown" error when running Hadoop jobs on Yarn
> -
>
> Key: CASSANDRA-10751
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10751
> Project: Cassandra
>  Issue Type: Bug
> Environment: Hadoop 2.7.1 (HDP 2.3.2)
> Cassandra 2.1.11
>Reporter: Cyril Scetbon
>Assignee: Cyril Scetbon
>Priority: Major
> Fix For: 4.0, 2.2.13, 3.0.17, 3.11.3
>
> Attachments: CASSANDRA-10751-2.2.patch, CASSANDRA-10751-3.0.patch, 
> output.log
>
>
> Trying to execute a Hadoop job on Yarn, I get errors from Cassandra's 
> internal code. It seems that connections are shut down, but we can't 
> understand why...
> Here is an extract of the errors. I also attached a file with the complete 
> debug logs.
> {code}
> 15/11/22 20:05:54 [main]: DEBUG core.RequestHandler: Error querying 
> node006.internal.net/192.168.12.22:9042, trying next host (error is: 
> com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown)
> Failed with exception java.io.IOException:java.io.IOException: 
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
> tried for query failed (tried: node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
> 15/11/22 20:05:54 [main]: ERROR CliDriver: Failed with exception 
> java.io.IOException:java.io.IOException: 
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
> tried for query failed (tried: node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
> java.io.IOException: java.io.IOException: 
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
> tried for query failed (tried: node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:508)
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.pushRow(FetchOperator.java:415)
>   at org.apache.hadoop.hive.ql.exec.FetchTask.fetch(FetchTask.java:140)
>   at org.apache.hadoop.hive.ql.Driver.getResults(Driver.java:1672)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:233)
>   at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:165)
>   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:376)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:736)
>   at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:681)
>   at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:621)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
>   at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> Caused by: java.io.IOException: 
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
> tried for query failed (tried: node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
>   at 
> org.apache.hadoop.hive.cassandra.input.cql.HiveCqlInputFormat.getRecordReader(HiveCqlInputFormat.java:132)
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator$FetchInputFormatSplit.getRecordReader(FetchOperator.java:674)
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.getRecordReader(FetchOperator.java:324)
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:446)
>   ... 15 more
> Caused by: com.datastax.driver.core.exceptions.NoHostAvailableException: All 
> host(s) tried for query failed (tried: 
> node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
>   at 
> com.datastax.driver.core.except

[jira] [Commented] (CASSANDRA-10751) "Pool is shutdown" error when running Hadoop jobs on Yarn

2018-06-14 Thread Cyril Scetbon (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-10751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16512948#comment-16512948
 ] 

Cyril Scetbon commented on CASSANDRA-10751:
---

Hey [~jjordan] [~michaelsembwever], are you saying that it wasn't needed even 
before? I can guarantee that it was. It's been running for almost 2 years now 
in production. If it's not needed anymore, then great!

> "Pool is shutdown" error when running Hadoop jobs on Yarn
> -
>
> Key: CASSANDRA-10751
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10751
> Project: Cassandra
>  Issue Type: Bug
> Environment: Hadoop 2.7.1 (HDP 2.3.2)
> Cassandra 2.1.11
>Reporter: Cyril Scetbon
>Assignee: Cyril Scetbon
>Priority: Major
> Fix For: 4.0, 2.2.13, 3.0.17, 3.11.3
>
> Attachments: CASSANDRA-10751-2.2.patch, CASSANDRA-10751-3.0.patch, 
> output.log
>
>
> Trying to execute a Hadoop job on Yarn, I get errors from Cassandra's 
> internal code. It seems that connections are shut down, but we can't 
> understand why...
> Here is an extract of the errors. I also attached a file with the complete 
> debug logs.
> {code}
> 15/11/22 20:05:54 [main]: DEBUG core.RequestHandler: Error querying 
> node006.internal.net/192.168.12.22:9042, trying next host (error is: 
> com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown)
> Failed with exception java.io.IOException:java.io.IOException: 
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
> tried for query failed (tried: node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
> 15/11/22 20:05:54 [main]: ERROR CliDriver: Failed with exception 
> java.io.IOException:java.io.IOException: 
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
> tried for query failed (tried: node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
> java.io.IOException: java.io.IOException: 
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
> tried for query failed (tried: node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:508)
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.pushRow(FetchOperator.java:415)
>   at org.apache.hadoop.hive.ql.exec.FetchTask.fetch(FetchTask.java:140)
>   at org.apache.hadoop.hive.ql.Driver.getResults(Driver.java:1672)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:233)
>   at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:165)
>   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:376)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:736)
>   at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:681)
>   at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:621)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
>   at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> Caused by: java.io.IOException: 
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
> tried for query failed (tried: node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
>   at 
> org.apache.hadoop.hive.cassandra.input.cql.HiveCqlInputFormat.getRecordReader(HiveCqlInputFormat.java:132)
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator$FetchInputFormatSplit.getRecordReader(FetchOperator.java:674)
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.getRecordReader(FetchOperator.java:324)
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:446)
>   ... 15 more
> Caused by: com.datastax.driver.core.exceptions.NoHostAvailableException: All 
> host(s) tried for query failed (tried: 
> node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
>   at 
> com.datastax.driver.core.exceptions.NoHostAvailableException.

[jira] [Commented] (CASSANDRA-13929) BTree$Builder / io.netty.util.Recycler$Stack leaking memory

2018-06-05 Thread Cyril Scetbon (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-13929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16502546#comment-16502546
 ] 

Cyril Scetbon commented on CASSANDRA-13929:
---

Hey guys, any news on this issue?

> BTree$Builder / io.netty.util.Recycler$Stack leaking memory
> ---
>
> Key: CASSANDRA-13929
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13929
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Thomas Steinmaurer
>Assignee: Jay Zhuang
>Priority: Major
> Fix For: 3.11.x
>
> Attachments: cassandra_3.11.0_min_memory_utilization.jpg, 
> cassandra_3.11.1_NORECYCLE_memory_utilization.jpg, 
> cassandra_3.11.1_mat_dominator_classes.png, 
> cassandra_3.11.1_mat_dominator_classes_FIXED.png, 
> cassandra_3.11.1_snapshot_heaputilization.png, 
> cassandra_3.11.1_vs_3.11.2recyclernullingpatch.png, 
> cassandra_heapcpu_memleak_patching_test_30d.png, 
> dtest_example_80_request.png, dtest_example_80_request_fix.png, 
> dtest_example_heap.png, memleak_heapdump_recyclerstack.png
>
>
> Different from CASSANDRA-13754, there seems to be another memory leak in 
> 3.11.0+ in BTree$Builder / io.netty.util.Recycler$Stack.
> * heap utilization increase after upgrading to 3.11.0 => 
> cassandra_3.11.0_min_memory_utilization.jpg
> * No difference after upgrading to 3.11.1 (snapshot build) => 
> cassandra_3.11.1_snapshot_heaputilization.png; thus most likely after fixing 
> CASSANDRA-13754, more visible now
> * MAT shows io.netty.util.Recycler$Stack as top contributing class => 
> cassandra_3.11.1_mat_dominator_classes.png
> * With -Xmx8G (CMS) and our load pattern, we have to do a rolling restart 
> after ~ 72 hours
> I verified the following fix, namely explicitly unreferencing the 
> _recycleHandle_ member (making it non-final) in 
> _org.apache.cassandra.utils.btree.BTree.Builder.recycle()_:
> {code}
> public void recycle()
> {
>     if (recycleHandle != null)
>     {
>         this.cleanup();
>         builderRecycler.recycle(this, recycleHandle);
>         recycleHandle = null; // ADDED
>     }
> }
> {code}
> Patched a single node in our loadtest cluster with this change and after ~ 10 
> hours uptime, no sign of the previously offending class in MAT anymore => 
> cassandra_3.11.1_mat_dominator_classes_FIXED.png
> Can't say if this has any other side effects etc., but I doubt it.






[jira] [Commented] (CASSANDRA-10751) "Pool is shutdown" error when running Hadoop jobs on Yarn

2018-05-01 Thread Cyril Scetbon (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16460355#comment-16460355
 ] 

Cyril Scetbon commented on CASSANDRA-10751:
---

[~michaelsembwever] I got it in my Cassandra package, and I'm happy that it's 
now upstream!

> "Pool is shutdown" error when running Hadoop jobs on Yarn
> -
>
> Key: CASSANDRA-10751
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10751
> Project: Cassandra
>  Issue Type: Bug
> Environment: Hadoop 2.7.1 (HDP 2.3.2)
> Cassandra 2.1.11
>Reporter: Cyril Scetbon
>Assignee: Cyril Scetbon
>Priority: Major
> Attachments: CASSANDRA-10751-2.2.patch, CASSANDRA-10751-3.0.patch, 
> output.log
>
>
> Trying to execute a Hadoop job on Yarn, I get errors from Cassandra's 
> internal code. It seems that connections are shut down, but we can't 
> understand why...
> Here is an extract of the errors. I also attached a file with the complete 
> debug logs.
> {code}
> 15/11/22 20:05:54 [main]: DEBUG core.RequestHandler: Error querying 
> node006.internal.net/192.168.12.22:9042, trying next host (error is: 
> com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown)
> Failed with exception java.io.IOException:java.io.IOException: 
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
> tried for query failed (tried: node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
> 15/11/22 20:05:54 [main]: ERROR CliDriver: Failed with exception 
> java.io.IOException:java.io.IOException: 
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
> tried for query failed (tried: node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
> java.io.IOException: java.io.IOException: 
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
> tried for query failed (tried: node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:508)
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.pushRow(FetchOperator.java:415)
>   at org.apache.hadoop.hive.ql.exec.FetchTask.fetch(FetchTask.java:140)
>   at org.apache.hadoop.hive.ql.Driver.getResults(Driver.java:1672)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:233)
>   at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:165)
>   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:376)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:736)
>   at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:681)
>   at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:621)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
>   at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> Caused by: java.io.IOException: 
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
> tried for query failed (tried: node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
>   at 
> org.apache.hadoop.hive.cassandra.input.cql.HiveCqlInputFormat.getRecordReader(HiveCqlInputFormat.java:132)
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator$FetchInputFormatSplit.getRecordReader(FetchOperator.java:674)
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.getRecordReader(FetchOperator.java:324)
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:446)
>   ... 15 more
> Caused by: com.datastax.driver.core.exceptions.NoHostAvailableException: All 
> host(s) tried for query failed (tried: 
> node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
>   at 
> com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:84)
>   at 
> com.datastax.driver.core.DriverThrowables.propagateCause(DriverThrowables.java:37)
>   at 
> com.datastax.driver.c
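
For anyone debugging this class of failure from the client side, the driver 
keeps the per-host cause behind a NoHostAvailableException. Below is a minimal 
sketch against the 2.1-era DataStax Java driver (the class name and the probe 
query are illustrative; the contact point is taken from the log above):

{code:java}
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.exceptions.NoHostAvailableException;

import java.net.InetSocketAddress;
import java.util.Map;

public class DiagnoseNoHost
{
    public static void main(String[] args)
    {
        Cluster cluster = Cluster.builder()
                                 .addContactPoint("node006.internal.net")
                                 .build();
        try (Session session = cluster.connect())
        {
            session.execute("SELECT release_version FROM system.local");
        }
        catch (NoHostAvailableException e)
        {
            // getErrors() maps each host that was tried to the error it
            // produced, e.g. the "Pool is shutdown" ConnectionException above.
            for (Map.Entry<InetSocketAddress, Throwable> err : e.getErrors().entrySet())
                System.err.println(err.getKey() + " -> " + err.getValue());
        }
        finally
        {
            cluster.close();
        }
    }
}
{code}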

[jira] [Commented] (CASSANDRA-14381) nodetool listsnapshots is missing local system keyspace snapshots

2018-04-24 Thread Cyril Scetbon (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451501#comment-16451501
 ] 

Cyril Scetbon commented on CASSANDRA-14381:
---

I'd say the patch is simple enough that we should push it to 2.1 and 3.0 too 
(I still have 2.1 nodes running).

> nodetool listsnapshots is missing local system keyspace snapshots
> -
>
> Key: CASSANDRA-14381
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14381
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: MacOs 10.12.5
> Java 1.8.0_144
> Cassandra 3.11.2 (brew install)
>Reporter: Cyril Scetbon
>Assignee: Ariel Weisberg
>Priority: Major
> Fix For: 4.0
>
>
> The output of *nodetool listsnapshots* is inconsistent with the snapshots 
> created :
> {code:java}
> $ nodetool listsnapshots
> Snapshot Details:
> There are no snapshots
> $ nodetool snapshot -t tag1 --table local system
> Requested creating snapshot(s) for [system] with snapshot name [tag1] and 
> options {skipFlush=false}
> Snapshot directory: tag1
> $ nodetool snapshot -t tag2 --table local system
> Requested creating snapshot(s) for [system] with snapshot name [tag2] and 
> options {skipFlush=false}
> Snapshot directory: tag2
> $ nodetool listsnapshots
> Snapshot Details:
> There are no snapshots
> $ ls 
> /usr/local/var/lib/cassandra/data/system/local-7ad54392bcdd35a684174e047860b377/snapshots/
> tag1 tag2{code}
>  
>  
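
A quick way to cross-check this behaviour is to list the snapshot directories 
straight from disk and compare them with the nodetool output. A small 
self-contained sketch (the data_dir path is the brew layout from the 
environment above):

{code:java}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.stream.Stream;

public class ListSnapshotDirs
{
    public static void main(String[] args) throws IOException
    {
        // Layout: <data_dir>/<keyspace>/<table-id>/snapshots/<tag>
        Path dataDir = Paths.get("/usr/local/var/lib/cassandra/data");
        try (Stream<Path> paths = Files.walk(dataDir, 4))
        {
            paths.filter(p -> p.getParent() != null
                              && p.getParent().getFileName().toString().equals("snapshots"))
                 // prints e.g. system/local-7ad5.../snapshots/tag1
                 .forEach(p -> System.out.println(dataDir.relativize(p)));
        }
    }
}
{code}

Every tag printed here but absent from nodetool listsnapshots is a snapshot 
the command is hiding.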



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-14381) nodetool listsnapshots is missing snapshots

2018-04-19 Thread Cyril Scetbon (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16444914#comment-16444914
 ] 

Cyril Scetbon commented on CASSANDRA-14381:
---

+1 on [https://github.com/apache/cassandra/commit/a0ceb3]

> nodetool listsnapshots is missing snapshots
> ---
>
> Key: CASSANDRA-14381
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14381
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: MacOs 10.12.5
> Java 1.8.0_144
> Cassandra 3.11.2 (brew install)
>Reporter: Cyril Scetbon
>Assignee: Ariel Weisberg
>Priority: Major
>
> The output of *nodetool listsnapshots* is inconsistent with the snapshots 
> created :
> {code:java}
> $ nodetool listsnapshots
> Snapshot Details:
> There are no snapshots
> $ nodetool snapshot -t tag1 --table local system
> Requested creating snapshot(s) for [system] with snapshot name [tag1] and 
> options {skipFlush=false}
> Snapshot directory: tag1
> $ nodetool snapshot -t tag2 --table local system
> Requested creating snapshot(s) for [system] with snapshot name [tag2] and 
> options {skipFlush=false}
> Snapshot directory: tag2
> $ nodetool listsnapshots
> Snapshot Details:
> There are no snapshots
> $ ls 
> /usr/local/var/lib/cassandra/data/system/local-7ad54392bcdd35a684174e047860b377/snapshots/
> tag1 tag2{code}
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-14381) nodetool listsnapshots is missing snapshots

2018-04-17 Thread Cyril Scetbon (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16441199#comment-16441199
 ] 

Cyril Scetbon commented on CASSANDRA-14381:
---

Thanks [~aweisberg]

> nodetool listsnapshots is missing snapshots
> ---
>
> Key: CASSANDRA-14381
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14381
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: MacOs 10.12.5
> Java 1.8.0_144
> Cassandra 3.11.2 (brew install)
>Reporter: Cyril Scetbon
>Assignee: Ariel Weisberg
>Priority: Major
>
> The output of *nodetool listsnapshots* is inconsistent with the snapshots 
> created :
> {code:java}
> $ nodetool listsnapshots
> Snapshot Details:
> There are no snapshots
> $ nodetool snapshot -t tag1 --table local system
> Requested creating snapshot(s) for [system] with snapshot name [tag1] and 
> options {skipFlush=false}
> Snapshot directory: tag1
> $ nodetool snapshot -t tag2 --table local system
> Requested creating snapshot(s) for [system] with snapshot name [tag2] and 
> options {skipFlush=false}
> Snapshot directory: tag2
> $ nodetool listsnapshots
> Snapshot Details:
> There are no snapshots
> $ ls 
> /usr/local/var/lib/cassandra/data/system/local-7ad54392bcdd35a684174e047860b377/snapshots/
> tag1 tag2{code}
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-14381) nodetool listsnapshots is missing snapshots

2018-04-17 Thread Cyril Scetbon (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16440995#comment-16440995
 ] 

Cyril Scetbon commented on CASSANDRA-14381:
---

Okay, that's what I thought. Whenever I find some time, I should be able to 
remove the piece of code that skips the system keyspace.
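
The skip in question is a one-line filter in the snapshot listing (linked from 
another comment in this thread). A minimal standalone sketch of its effect, 
with hypothetical names rather than the actual StorageService source:

{code:java}
import java.util.Arrays;
import java.util.List;

public class SkipSystemSketch
{
    public static void main(String[] args)
    {
        List<String> keyspaces = Arrays.asList("system", "system_auth", "my_ks");
        for (String ks : keyspaces)
        {
            // This is the skip being discussed: drop it and snapshots of the
            // local system keyspace show up in the listing like any other.
            if (ks.equals("system"))
                continue;
            System.out.println("listing snapshots for " + ks);
        }
    }
}
{code}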

> nodetool listsnapshots is missing snapshots
> ---
>
> Key: CASSANDRA-14381
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14381
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: MacOs 10.12.5
> Java 1.8.0_144
> Cassandra 3.11.2 (brew install)
>Reporter: Cyril Scetbon
>Priority: Major
>
> The output of *nodetool listsnapshots* is inconsistent with the snapshots 
> created :
> {code:java}
> $ nodetool listsnapshots
> Snapshot Details:
> There are no snapshots
> $ nodetool snapshot -t tag1 --table local system
> Requested creating snapshot(s) for [system] with snapshot name [tag1] and 
> options {skipFlush=false}
> Snapshot directory: tag1
> $ nodetool snapshot -t tag2 --table local system
> Requested creating snapshot(s) for [system] with snapshot name [tag2] and 
> options {skipFlush=false}
> Snapshot directory: tag2
> $ nodetool listsnapshots
> Snapshot Details:
> There are no snapshots
> $ ls 
> /usr/local/var/lib/cassandra/data/system/local-7ad54392bcdd35a684174e047860b377/snapshots/
> tag1 tag2{code}
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-14381) nodetool listsnapshots is missing snapshots

2018-04-13 Thread Cyril Scetbon (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16437580#comment-16437580
 ] 

Cyril Scetbon edited comment on CASSANDRA-14381 at 4/13/18 5:00 PM:


What if the table was corrupted locally? Why then does a global snapshot 
include it, but it's not listed with that command?


was (Author: cscetbon):
what if the table was corrupted locally ? why then a global snapshot includes 
it, but it's not listed but that command ?

> nodetool listsnapshots is missing snapshots
> ---
>
> Key: CASSANDRA-14381
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14381
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: MacOs 10.12.5
> Java 1.8.0_144
> Cassandra 3.11.2 (brew install)
>Reporter: Cyril Scetbon
>Priority: Major
>
> The output of *nodetool listsnapshots* is inconsistent with the snapshots 
> created :
> {code:java}
> $ nodetool listsnapshots
> Snapshot Details:
> There are no snapshots
> $ nodetool snapshot -t tag1 --table local system
> Requested creating snapshot(s) for [system] with snapshot name [tag1] and 
> options {skipFlush=false}
> Snapshot directory: tag1
> $ nodetool snapshot -t tag2 --table local system
> Requested creating snapshot(s) for [system] with snapshot name [tag2] and 
> options {skipFlush=false}
> Snapshot directory: tag2
> $ nodetool listsnapshots
> Snapshot Details:
> There are no snapshots
> $ ls 
> /usr/local/var/lib/cassandra/data/system/local-7ad54392bcdd35a684174e047860b377/snapshots/
> tag1 tag2{code}
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-14381) nodetool listsnapshots is missing snapshots

2018-04-13 Thread Cyril Scetbon (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16437580#comment-16437580
 ] 

Cyril Scetbon commented on CASSANDRA-14381:
---

What if the table was corrupted locally? Why then does a global snapshot 
include it, but it's not listed with that command?

> nodetool listsnapshots is missing snapshots
> ---
>
> Key: CASSANDRA-14381
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14381
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: MacOs 10.12.5
> Java 1.8.0_144
> Cassandra 3.11.2 (brew install)
>Reporter: Cyril Scetbon
>Priority: Major
>
> The output of *nodetool listsnapshots* is inconsistent with the snapshots 
> created :
> {code:java}
> $ nodetool listsnapshots
> Snapshot Details:
> There are no snapshots
> $ nodetool snapshot -t tag1 --table local system
> Requested creating snapshot(s) for [system] with snapshot name [tag1] and 
> options {skipFlush=false}
> Snapshot directory: tag1
> $ nodetool snapshot -t tag2 --table local system
> Requested creating snapshot(s) for [system] with snapshot name [tag2] and 
> options {skipFlush=false}
> Snapshot directory: tag2
> $ nodetool listsnapshots
> Snapshot Details:
> There are no snapshots
> $ ls 
> /usr/local/var/lib/cassandra/data/system/local-7ad54392bcdd35a684174e047860b377/snapshots/
> tag1 tag2{code}
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-14381) nodetool listsnapshots is missing snapshots

2018-04-12 Thread Cyril Scetbon (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16436640#comment-16436640
 ] 

Cyril Scetbon commented on CASSANDRA-14381:
---

Hmm, it's there in 2.1 too 
[https://github.com/apache/cassandra/blob/cassandra-2.1/src/java/org/apache/cassandra/service/StorageService.java#L2629-L2630]
 and it's been there for 4 years 
[https://github.com/apache/cassandra/commit/719103b649c1c5459683a8ffd1c013664f1ffbb6].

I really don't know why it's there. What if we need to restore the whole 
node/cluster for some reason?

 

> nodetool listsnapshots is missing snapshots
> ---
>
> Key: CASSANDRA-14381
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14381
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: MacOs 10.12.5
> Java 1.8.0_144
> Cassandra 3.11.2 (brew install)
>Reporter: Cyril Scetbon
>Priority: Major
>
> The output of *nodetool listsnapshots* is inconsistent with the snapshots 
> created :
> {code:java}
> $ nodetool listsnapshots
> Snapshot Details:
> There are no snapshots
> $ nodetool snapshot -t tag1 --table local system
> Requested creating snapshot(s) for [system] with snapshot name [tag1] and 
> options {skipFlush=false}
> Snapshot directory: tag1
> $ nodetool snapshot -t tag2 --table local system
> Requested creating snapshot(s) for [system] with snapshot name [tag2] and 
> options {skipFlush=false}
> Snapshot directory: tag2
> $ nodetool listsnapshots
> Snapshot Details:
> There are no snapshots
> $ ls 
> /usr/local/var/lib/cassandra/data/system/local-7ad54392bcdd35a684174e047860b377/snapshots/
> tag1 tag2{code}
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-14381) nodetool listsnapshots is missing snapshots

2018-04-12 Thread Cyril Scetbon (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cyril Scetbon updated CASSANDRA-14381:
--
Description: 
The output of *nodetool listsnapshots* is inconsistent with the snapshots 
created :
{code:java}
$ nodetool listsnapshots
Snapshot Details:
There are no snapshots

$ nodetool snapshot -t tag1 --table local system
Requested creating snapshot(s) for [system] with snapshot name [tag1] and 
options {skipFlush=false}
Snapshot directory: tag1

$ nodetool snapshot -t tag2 --table local system
Requested creating snapshot(s) for [system] with snapshot name [tag2] and 
options {skipFlush=false}
Snapshot directory: tag2

$ nodetool listsnapshots
Snapshot Details:
There are no snapshots

$ ls 
/usr/local/var/lib/cassandra/data/system/local-7ad54392bcdd35a684174e047860b377/snapshots/
tag1 tag2{code}
 

 

  was:
the output of `nodetool listsnapshots` is inconsistent with the snapshots 
created :

```

$ nodetool listsnapshots
Snapshot Details:
There are no snapshots

$ nodetool snapshot -t tag1 --table local system
Requested creating snapshot(s) for [system] with snapshot name [tag1] and 
options \{skipFlush=false}
Snapshot directory: tag1

$ nodetool snapshot -t tag2 --table local system
Requested creating snapshot(s) for [system] with snapshot name [tag2] and 
options \{skipFlush=false}
Snapshot directory: tag2

$ nodetool listsnapshots
Snapshot Details:
There are no snapshots

$ ls 
/usr/local/var/lib/cassandra/data/system/local-7ad54392bcdd35a684174e047860b377/snapshots/
tag1 tag2

```


> nodetool listsnapshots is missing snapshots
> ---
>
> Key: CASSANDRA-14381
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14381
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: MacOs 10.12.5
> Java 1.8.0_144
> Cassandra 3.11.2 (brew install)
>Reporter: Cyril Scetbon
>Priority: Major
>
> The output of *nodetool listsnapshots* is inconsistent with the snapshots 
> created :
> {code:java}
> $ nodetool listsnapshots
> Snapshot Details:
> There are no snapshots
> $ nodetool snapshot -t tag1 --table local system
> Requested creating snapshot(s) for [system] with snapshot name [tag1] and 
> options {skipFlush=false}
> Snapshot directory: tag1
> $ nodetool snapshot -t tag2 --table local system
> Requested creating snapshot(s) for [system] with snapshot name [tag2] and 
> options {skipFlush=false}
> Snapshot directory: tag2
> $ nodetool listsnapshots
> Snapshot Details:
> There are no snapshots
> $ ls 
> /usr/local/var/lib/cassandra/data/system/local-7ad54392bcdd35a684174e047860b377/snapshots/
> tag1 tag2{code}
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Created] (CASSANDRA-14381) nodetool listsnapshots is missing snapshots

2018-04-12 Thread Cyril Scetbon (JIRA)
Cyril Scetbon created CASSANDRA-14381:
-

 Summary: nodetool listsnapshots is missing snapshots
 Key: CASSANDRA-14381
 URL: https://issues.apache.org/jira/browse/CASSANDRA-14381
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: MacOs 10.12.5

Java 1.8.0_144

Cassandra 3.11.2 (brew install)
Reporter: Cyril Scetbon


The output of `nodetool listsnapshots` is inconsistent with the snapshots 
created:

```

$ nodetool listsnapshots
Snapshot Details:
There are no snapshots

$ nodetool snapshot -t tag1 --table local system
Requested creating snapshot(s) for [system] with snapshot name [tag1] and 
options \{skipFlush=false}
Snapshot directory: tag1

$ nodetool snapshot -t tag2 --table local system
Requested creating snapshot(s) for [system] with snapshot name [tag2] and 
options \{skipFlush=false}
Snapshot directory: tag2

$ nodetool listsnapshots
Snapshot Details:
There are no snapshots

$ ls 
/usr/local/var/lib/cassandra/data/system/local-7ad54392bcdd35a684174e047860b377/snapshots/
tag1 tag2

```



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-9200) Sequences

2017-04-07 Thread Cyril Scetbon (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15960986#comment-15960986
 ] 

Cyril Scetbon edited comment on CASSANDRA-9200 at 4/7/17 3:45 PM:
--

Hey [~iamaleksey], [~slebresne], if I look at the related issues, they are all 
resolved. Does that mean you're going to reopen it? I'm confused by Sylvain's 
previous comment.


was (Author: cscetbon):
Hey [~iamaleksey], [~slebresne], if I look at the related issues they are all 
resolved. Does it mean you're going to reopen it ? I'm confused because of 
Sylvain's previous comment.

> Sequences
> -
>
> Key: CASSANDRA-9200
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9200
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Jonathan Ellis
>
> UUIDs are usually the right choice for surrogate keys, but sometimes 
> application constraints dictate an increasing numeric value.
> We could do this by using LWT to reserve "blocks" of the sequence for each 
> member of the cluster, which would eliminate paxos contention at the cost of 
> not being strictly increasing.
> PostgreSQL syntax: 
> http://www.postgresql.org/docs/9.4/static/sql-createsequence.html
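
A rough sketch of the block-reservation idea with an LWT, using the 2.x-era 
DataStax Java driver (the sequences table, column names and block size are 
illustrative, not part of the ticket). Each node pays one Paxos round per 
block of 1000 ids instead of one per id:

{code:java}
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;

// Assumes: CREATE TABLE sequences (name text PRIMARY KEY, next_hi bigint);
public class SequenceBlocks
{
    static final long BLOCK_SIZE = 1000;

    // Reserves [start, start + BLOCK_SIZE); ids inside the block are then
    // handed out locally with no further coordination.
    static long reserveBlock(Session session, String name)
    {
        while (true)
        {
            Row row = session.execute(
                "SELECT next_hi FROM sequences WHERE name = ?", name).one();
            long current = row.getLong("next_hi");
            Row applied = session.execute(
                "UPDATE sequences SET next_hi = ? WHERE name = ? IF next_hi = ?",
                current + BLOCK_SIZE, name, current).one();
            if (applied.getBool("[applied]"))
                return current;
            // Another node won the Paxos round; re-read and retry.
        }
    }
}
{code}

As the description says, ids stay monotonic within a node's block but are not 
strictly increasing across the cluster.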



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CASSANDRA-9200) Sequences

2017-04-07 Thread Cyril Scetbon (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15960986#comment-15960986
 ] 

Cyril Scetbon commented on CASSANDRA-9200:
--

Hey [~iamaleksey], [~slebresne], if I look at the related issues, they are all 
resolved. Does that mean you're going to reopen it? I'm confused because of 
Sylvain's previous comment.

> Sequences
> -
>
> Key: CASSANDRA-9200
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9200
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Jonathan Ellis
>
> UUIDs are usually the right choice for surrogate keys, but sometimes 
> application constraints dictate an increasing numeric value.
> We could do this by using LWT to reserve "blocks" of the sequence for each 
> member of the cluster, which would eliminate paxos contention at the cost of 
> not being strictly increasing.
> PostgreSQL syntax: 
> http://www.postgresql.org/docs/9.4/static/sql-createsequence.html



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (CASSANDRA-13087) Not enough bytes exception during compaction

2017-01-27 Thread Cyril Scetbon (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cyril Scetbon updated CASSANDRA-13087:
--
Attachment: CASSANDRA-13087.patch

I attach the proposed patch

> Not enough bytes exception during compaction
> 
>
> Key: CASSANDRA-13087
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13087
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
> Environment: Ubuntu 14.04.3 LTS, Cassandra 2.1.14
>Reporter: FACORAT
> Attachments: CASSANDRA-13087.patch
>
>
> After a repair we have compaction exceptions on some nodes and it's spreading
> {noformat}
> ERROR [CompactionExecutor:14065] 2016-12-30 14:45:07,245 
> CassandraDaemon.java:229 - Exception in thread 
> Thread[CompactionExecutor:14065,1,main]
> java.lang.IllegalArgumentException: Not enough bytes. Offset: 5. Length: 
> 20275. Buffer size: 12594
> at 
> org.apache.cassandra.db.composites.AbstractCType.checkRemaining(AbstractCType.java:378)
>  ~[apache-cassandra-2.1.14.jar:2.1.14]
> at 
> org.apache.cassandra.db.composites.AbstractCompoundCellNameType.fromByteBuffer(AbstractCompoundCellNameType.java:100)
>  ~[apache-cassandra-2.1.14.ja
> r:2.1.14]
> at 
> org.apache.cassandra.db.composites.AbstractCType$Serializer.deserialize(AbstractCType.java:398)
>  ~[apache-cassandra-2.1.14.jar:2.1.14]
> at 
> org.apache.cassandra.db.composites.AbstractCType$Serializer.deserialize(AbstractCType.java:382)
>  ~[apache-cassandra-2.1.14.jar:2.1.14]
> at 
> org.apache.cassandra.db.OnDiskAtom$Serializer.deserializeFromSSTable(OnDiskAtom.java:75)
>  ~[apache-cassandra-2.1.14.jar:2.1.14]
> at 
> org.apache.cassandra.db.AbstractCell$1.computeNext(AbstractCell.java:52) 
> ~[apache-cassandra-2.1.14.jar:2.1.14]
> at 
> org.apache.cassandra.db.AbstractCell$1.computeNext(AbstractCell.java:46) 
> ~[apache-cassandra-2.1.14.jar:2.1.14]
> at 
> com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
>  ~[guava-16.0.jar:na]
> at 
> com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138) 
> ~[guava-16.0.jar:na]
> at 
> org.apache.cassandra.io.sstable.SSTableIdentityIterator.hasNext(SSTableIdentityIterator.java:171)
>  ~[apache-cassandra-2.1.14.jar:2.1.14]
> at 
> org.apache.cassandra.utils.MergeIterator$OneToOne.computeNext(MergeIterator.java:202)
>  ~[apache-cassandra-2.1.14.jar:2.1.14]
> at 
> com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
>  ~[guava-16.0.jar:na]
> at 
> com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138) 
> ~[guava-16.0.jar:na]
> at 
> com.google.common.collect.Iterators$7.computeNext(Iterators.java:645) 
> ~[guava-16.0.jar:na]
> at 
> com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
>  ~[guava-16.0.jar:na]
> at 
> com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138) 
> ~[guava-16.0.jar:na]
> at 
> org.apache.cassandra.db.ColumnIndex$Builder.buildForCompaction(ColumnIndex.java:166)
>  ~[apache-cassandra-2.1.14.jar:2.1.14]
> at 
> org.apache.cassandra.db.compaction.LazilyCompactedRow.write(LazilyCompactedRow.java:121)
>  ~[apache-cassandra-2.1.14.jar:2.1.14]
> at 
> org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:193) 
> ~[apache-cassandra-2.1.14.jar:2.1.14]
> at 
> org.apache.cassandra.io.sstable.SSTableRewriter.append(SSTableRewriter.java:127)
>  ~[apache-cassandra-2.1.14.jar:2.1.14]
> at 
> org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:197)
>  ~[apache-cassandra-2.1.14.jar:2.1.14]
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> ~[apache-cassandra-2.1.14.jar:2.1.14]
> at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:73)
>  ~[apache-cassandra-2.1.14.jar:2.1.14]
> at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
>  ~[apache-cassandra-2.1.14.jar:2.1.14]
> at 
> org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionCandidate.run(CompactionManager.java:264)
>  ~[apache-cassandra-2.1.14.jar:2
> .1.14]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_60]
> at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> ~[na:1.8.0_60]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  ~[na:1.8.0_60]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_60]
> at java
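
For readers following the trace: the failure comes from a bounds check of 
roughly this shape, reconstructed here from the error message (not the actual 
AbstractCType source). Deserialization reads a cell-name length from the 
sstable and then verifies the buffer really holds that many bytes, so a 
corrupt length (20275 against a 12594-byte buffer here) trips it:

{code:java}
import java.nio.ByteBuffer;

public class CheckRemainingSketch
{
    static void checkRemaining(ByteBuffer bb, int offset, int length)
    {
        if (offset + length > bb.remaining())
            throw new IllegalArgumentException(String.format(
                "Not enough bytes. Offset: %d. Length: %d. Buffer size: %d",
                offset, length, bb.remaining()));
    }

    public static void main(String[] args)
    {
        // Reproduces the numbers from the log above.
        checkRemaining(ByteBuffer.allocate(12594), 5, 20275);
    }
}
{code}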

[jira] [Comment Edited] (CASSANDRA-13087) Not enough bytes exception during compaction

2017-01-27 Thread Cyril Scetbon (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15842512#comment-15842512
 ] 

Cyril Scetbon edited comment on CASSANDRA-13087 at 1/27/17 10:26 AM:
-

I attached the proposed patch


was (Author: cscetbon):
I attach the proposed patch

> Not enough bytes exception during compaction
> 
>
> Key: CASSANDRA-13087
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13087
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
> Environment: Ubuntu 14.04.3 LTS, Cassandra 2.1.14
>Reporter: FACORAT
> Attachments: CASSANDRA-13087.patch
>
>
> After a repair we have compaction exceptions on some nodes and it's spreading
> {noformat}
> ERROR [CompactionExecutor:14065] 2016-12-30 14:45:07,245 
> CassandraDaemon.java:229 - Exception in thread 
> Thread[CompactionExecutor:14065,1,main]
> java.lang.IllegalArgumentException: Not enough bytes. Offset: 5. Length: 
> 20275. Buffer size: 12594
> at 
> org.apache.cassandra.db.composites.AbstractCType.checkRemaining(AbstractCType.java:378)
>  ~[apache-cassandra-2.1.14.jar:2.1.14]
> at 
> org.apache.cassandra.db.composites.AbstractCompoundCellNameType.fromByteBuffer(AbstractCompoundCellNameType.java:100)
>  ~[apache-cassandra-2.1.14.ja
> r:2.1.14]
> at 
> org.apache.cassandra.db.composites.AbstractCType$Serializer.deserialize(AbstractCType.java:398)
>  ~[apache-cassandra-2.1.14.jar:2.1.14]
> at 
> org.apache.cassandra.db.composites.AbstractCType$Serializer.deserialize(AbstractCType.java:382)
>  ~[apache-cassandra-2.1.14.jar:2.1.14]
> at 
> org.apache.cassandra.db.OnDiskAtom$Serializer.deserializeFromSSTable(OnDiskAtom.java:75)
>  ~[apache-cassandra-2.1.14.jar:2.1.14]
> at 
> org.apache.cassandra.db.AbstractCell$1.computeNext(AbstractCell.java:52) 
> ~[apache-cassandra-2.1.14.jar:2.1.14]
> at 
> org.apache.cassandra.db.AbstractCell$1.computeNext(AbstractCell.java:46) 
> ~[apache-cassandra-2.1.14.jar:2.1.14]
> at 
> com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
>  ~[guava-16.0.jar:na]
> at 
> com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138) 
> ~[guava-16.0.jar:na]
> at 
> org.apache.cassandra.io.sstable.SSTableIdentityIterator.hasNext(SSTableIdentityIterator.java:171)
>  ~[apache-cassandra-2.1.14.jar:2.1.14]
> at 
> org.apache.cassandra.utils.MergeIterator$OneToOne.computeNext(MergeIterator.java:202)
>  ~[apache-cassandra-2.1.14.jar:2.1.14]
> at 
> com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
>  ~[guava-16.0.jar:na]
> at 
> com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138) 
> ~[guava-16.0.jar:na]
> at 
> com.google.common.collect.Iterators$7.computeNext(Iterators.java:645) 
> ~[guava-16.0.jar:na]
> at 
> com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
>  ~[guava-16.0.jar:na]
> at 
> com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138) 
> ~[guava-16.0.jar:na]
> at 
> org.apache.cassandra.db.ColumnIndex$Builder.buildForCompaction(ColumnIndex.java:166)
>  ~[apache-cassandra-2.1.14.jar:2.1.14]
> at 
> org.apache.cassandra.db.compaction.LazilyCompactedRow.write(LazilyCompactedRow.java:121)
>  ~[apache-cassandra-2.1.14.jar:2.1.14]
> at 
> org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:193) 
> ~[apache-cassandra-2.1.14.jar:2.1.14]
> at 
> org.apache.cassandra.io.sstable.SSTableRewriter.append(SSTableRewriter.java:127)
>  ~[apache-cassandra-2.1.14.jar:2.1.14]
> at 
> org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:197)
>  ~[apache-cassandra-2.1.14.jar:2.1.14]
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> ~[apache-cassandra-2.1.14.jar:2.1.14]
> at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:73)
>  ~[apache-cassandra-2.1.14.jar:2.1.14]
> at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
>  ~[apache-cassandra-2.1.14.jar:2.1.14]
> at 
> org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionCandidate.run(CompactionManager.java:264)
>  ~[apache-cassandra-2.1.14.jar:2
> .1.14]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_60]
> at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> ~[na:1.8.0_60]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  ~[na:1.8.0_60]
>   

[jira] [Updated] (CASSANDRA-13087) Not enough bytes exception during compaction

2017-01-10 Thread Cyril Scetbon (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cyril Scetbon updated CASSANDRA-13087:
--
Reproduced In: 2.1.14

> Not enough bytes exception during compaction
> 
>
> Key: CASSANDRA-13087
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13087
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
> Environment: Ubuntu 14.04.3 LTS, Cassandra 2.1.14
>Reporter: FACORAT
>
> After a repair we have compaction exceptions on some nodes and it's spreading
> {noformat}
> ERROR [CompactionExecutor:14065] 2016-12-30 14:45:07,245 
> CassandraDaemon.java:229 - Exception in thread 
> Thread[CompactionExecutor:14065,1,main]
> java.lang.IllegalArgumentException: Not enough bytes. Offset: 5. Length: 
> 20275. Buffer size: 12594
> at 
> org.apache.cassandra.db.composites.AbstractCType.checkRemaining(AbstractCType.java:378)
>  ~[apache-cassandra-2.1.14.jar:2.1.14]
> at 
> org.apache.cassandra.db.composites.AbstractCompoundCellNameType.fromByteBuffer(AbstractCompoundCellNameType.java:100)
>  ~[apache-cassandra-2.1.14.ja
> r:2.1.14]
> at 
> org.apache.cassandra.db.composites.AbstractCType$Serializer.deserialize(AbstractCType.java:398)
>  ~[apache-cassandra-2.1.14.jar:2.1.14]
> at 
> org.apache.cassandra.db.composites.AbstractCType$Serializer.deserialize(AbstractCType.java:382)
>  ~[apache-cassandra-2.1.14.jar:2.1.14]
> at 
> org.apache.cassandra.db.OnDiskAtom$Serializer.deserializeFromSSTable(OnDiskAtom.java:75)
>  ~[apache-cassandra-2.1.14.jar:2.1.14]
> at 
> org.apache.cassandra.db.AbstractCell$1.computeNext(AbstractCell.java:52) 
> ~[apache-cassandra-2.1.14.jar:2.1.14]
> at 
> org.apache.cassandra.db.AbstractCell$1.computeNext(AbstractCell.java:46) 
> ~[apache-cassandra-2.1.14.jar:2.1.14]
> at 
> com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
>  ~[guava-16.0.jar:na]
> at 
> com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138) 
> ~[guava-16.0.jar:na]
> at 
> org.apache.cassandra.io.sstable.SSTableIdentityIterator.hasNext(SSTableIdentityIterator.java:171)
>  ~[apache-cassandra-2.1.14.jar:2.1.14]
> at 
> org.apache.cassandra.utils.MergeIterator$OneToOne.computeNext(MergeIterator.java:202)
>  ~[apache-cassandra-2.1.14.jar:2.1.14]
> at 
> com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
>  ~[guava-16.0.jar:na]
> at 
> com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138) 
> ~[guava-16.0.jar:na]
> at 
> com.google.common.collect.Iterators$7.computeNext(Iterators.java:645) 
> ~[guava-16.0.jar:na]
> at 
> com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
>  ~[guava-16.0.jar:na]
> at 
> com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138) 
> ~[guava-16.0.jar:na]
> at 
> org.apache.cassandra.db.ColumnIndex$Builder.buildForCompaction(ColumnIndex.java:166)
>  ~[apache-cassandra-2.1.14.jar:2.1.14]
> at 
> org.apache.cassandra.db.compaction.LazilyCompactedRow.write(LazilyCompactedRow.java:121)
>  ~[apache-cassandra-2.1.14.jar:2.1.14]
> at 
> org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:193) 
> ~[apache-cassandra-2.1.14.jar:2.1.14]
> at 
> org.apache.cassandra.io.sstable.SSTableRewriter.append(SSTableRewriter.java:127)
>  ~[apache-cassandra-2.1.14.jar:2.1.14]
> at 
> org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:197)
>  ~[apache-cassandra-2.1.14.jar:2.1.14]
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> ~[apache-cassandra-2.1.14.jar:2.1.14]
> at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:73)
>  ~[apache-cassandra-2.1.14.jar:2.1.14]
> at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
>  ~[apache-cassandra-2.1.14.jar:2.1.14]
> at 
> org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionCandidate.run(CompactionManager.java:264)
>  ~[apache-cassandra-2.1.14.jar:2
> .1.14]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_60]
> at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> ~[na:1.8.0_60]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  ~[na:1.8.0_60]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_60]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_60]
> {noformat}
> nodetool scrub will disca

[jira] [Commented] (CASSANDRA-13087) Not enough bytes exception during compaction

2017-01-10 Thread Cyril Scetbon (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15816206#comment-15816206
 ] 

Cyril Scetbon commented on CASSANDRA-13087:
---

Hey [~pauloricardomg], did you see that one? It looks pretty similar to the 
one you fixed.

> Not enough bytes exception during compaction
> 
>
> Key: CASSANDRA-13087
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13087
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
> Environment: Ubuntu 14.04.3 LTS, Cassandra 2.1.14
>Reporter: FACORAT
>
> After a repair we have compaction exceptions on some nodes and it's spreading
> {noformat}
> ERROR [CompactionExecutor:14065] 2016-12-30 14:45:07,245 
> CassandraDaemon.java:229 - Exception in thread 
> Thread[CompactionExecutor:14065,1,main]
> java.lang.IllegalArgumentException: Not enough bytes. Offset: 5. Length: 
> 20275. Buffer size: 12594
> at 
> org.apache.cassandra.db.composites.AbstractCType.checkRemaining(AbstractCType.java:378)
>  ~[apache-cassandra-2.1.14.jar:2.1.14]
> at 
> org.apache.cassandra.db.composites.AbstractCompoundCellNameType.fromByteBuffer(AbstractCompoundCellNameType.java:100)
>  ~[apache-cassandra-2.1.14.ja
> r:2.1.14]
> at 
> org.apache.cassandra.db.composites.AbstractCType$Serializer.deserialize(AbstractCType.java:398)
>  ~[apache-cassandra-2.1.14.jar:2.1.14]
> at 
> org.apache.cassandra.db.composites.AbstractCType$Serializer.deserialize(AbstractCType.java:382)
>  ~[apache-cassandra-2.1.14.jar:2.1.14]
> at 
> org.apache.cassandra.db.OnDiskAtom$Serializer.deserializeFromSSTable(OnDiskAtom.java:75)
>  ~[apache-cassandra-2.1.14.jar:2.1.14]
> at 
> org.apache.cassandra.db.AbstractCell$1.computeNext(AbstractCell.java:52) 
> ~[apache-cassandra-2.1.14.jar:2.1.14]
> at 
> org.apache.cassandra.db.AbstractCell$1.computeNext(AbstractCell.java:46) 
> ~[apache-cassandra-2.1.14.jar:2.1.14]
> at 
> com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
>  ~[guava-16.0.jar:na]
> at 
> com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138) 
> ~[guava-16.0.jar:na]
> at 
> org.apache.cassandra.io.sstable.SSTableIdentityIterator.hasNext(SSTableIdentityIterator.java:171)
>  ~[apache-cassandra-2.1.14.jar:2.1.14]
> at 
> org.apache.cassandra.utils.MergeIterator$OneToOne.computeNext(MergeIterator.java:202)
>  ~[apache-cassandra-2.1.14.jar:2.1.14]
> at 
> com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
>  ~[guava-16.0.jar:na]
> at 
> com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138) 
> ~[guava-16.0.jar:na]
> at 
> com.google.common.collect.Iterators$7.computeNext(Iterators.java:645) 
> ~[guava-16.0.jar:na]
> at 
> com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
>  ~[guava-16.0.jar:na]
> at 
> com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138) 
> ~[guava-16.0.jar:na]
> at 
> org.apache.cassandra.db.ColumnIndex$Builder.buildForCompaction(ColumnIndex.java:166)
>  ~[apache-cassandra-2.1.14.jar:2.1.14]
> at 
> org.apache.cassandra.db.compaction.LazilyCompactedRow.write(LazilyCompactedRow.java:121)
>  ~[apache-cassandra-2.1.14.jar:2.1.14]
> at 
> org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:193) 
> ~[apache-cassandra-2.1.14.jar:2.1.14]
> at 
> org.apache.cassandra.io.sstable.SSTableRewriter.append(SSTableRewriter.java:127)
>  ~[apache-cassandra-2.1.14.jar:2.1.14]
> at 
> org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:197)
>  ~[apache-cassandra-2.1.14.jar:2.1.14]
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> ~[apache-cassandra-2.1.14.jar:2.1.14]
> at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:73)
>  ~[apache-cassandra-2.1.14.jar:2.1.14]
> at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
>  ~[apache-cassandra-2.1.14.jar:2.1.14]
> at 
> org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionCandidate.run(CompactionManager.java:264)
>  ~[apache-cassandra-2.1.14.jar:2
> .1.14]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_60]
> at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> ~[na:1.8.0_60]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  ~[na:1.8.0_60]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:61

[jira] [Commented] (CASSANDRA-11933) Cache local ranges when calculating repair neighbors

2016-06-14 Thread Cyril Scetbon (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15330502#comment-15330502
 ] 

Cyril Scetbon commented on CASSANDRA-11933:
---

Good job [~mahdix], [~pauloricardomg]. 1.5 hours spent on *computing ranges* 
reduced to 8 minutes!!! 

> Cache local ranges when calculating repair neighbors
> 
>
> Key: CASSANDRA-11933
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11933
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Cyril Scetbon
>Assignee: Mahdi Mohammadi
>
> During a full repair on a ~60-node cluster, I've been able to see that 
> this stage can be significant (up to 60 percent of the whole time):
> https://github.com/apache/cassandra/blob/cassandra-2.1/src/java/org/apache/cassandra/service/StorageService.java#L2983-L2997
> It's mainly caused by the fact that 
> https://github.com/apache/cassandra/blob/cassandra-2.1/src/java/org/apache/cassandra/service/ActiveRepairService.java#L189
>  calls {code}ss.getLocalRanges(keyspaceName){code} every time and that it 
> takes more than 99% of the time. This call takes 600ms when there is no load 
> on the cluster and more if there is. So for 10k ranges, you can imagine that 
> it takes at least 1.5 hours just to compute ranges. 
> Underneath it calls 
> [ReplicationStrategy.getAddressRanges|https://github.com/apache/cassandra/blob/3dcbe90e02440e6ee534f643c7603d50ca08482b/src/java/org/apache/cassandra/locator/AbstractReplicationStrategy.java#L170]
>  which can get pretty inefficient ([~jbellis]'s 
> [words|https://github.com/apache/cassandra/blob/3dcbe90e02440e6ee534f643c7603d50ca08482b/src/java/org/apache/cassandra/locator/AbstractReplicationStrategy.java#L165])
> *ss.getLocalRanges(keyspaceName)* should be cached to avoid having to spend 
> hours on it.
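
A hedged sketch of the caching being proposed (a hypothetical wrapper, not the 
committed fix): compute the local ranges once per keyspace for the lifetime of 
a repair and reuse them for all 10k ranges, instead of paying ~600ms per call:

{code:java}
import java.util.Collection;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

public class LocalRangeCache<R>
{
    private final Map<String, Collection<R>> cache = new ConcurrentHashMap<>();
    private final Function<String, Collection<R>> loader;

    // loader would be ss::getLocalRanges in the scenario above.
    public LocalRangeCache(Function<String, Collection<R>> loader)
    {
        this.loader = loader;
    }

    public Collection<R> get(String keyspace)
    {
        // Computed at most once per keyspace; every later lookup is a map hit.
        return cache.computeIfAbsent(keyspace, loader);
    }
}
{code}

Such a cache only has to live while ring ownership is stable, e.g. for the 
duration of one repair session.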



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-11933) Cache local ranges when calculating repair neighbors

2016-06-14 Thread Cyril Scetbon (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15330502#comment-15330502
 ] 

Cyril Scetbon edited comment on CASSANDRA-11933 at 6/14/16 8:20 PM:


Good job [~mahdix], [~pauloricardomg]. 1.5 hours spent on *computing ranges* 
reduced to 8 minutes!!! 


was (Author: cscetbon):
Good job [~mahdix],[~pauloricardomg]. 1:30 hours spent on *computing ranges* 
reduced to 8 minutes !!! 

> Cache local ranges when calculating repair neighbors
> 
>
> Key: CASSANDRA-11933
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11933
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Cyril Scetbon
>Assignee: Mahdi Mohammadi
>
> During a full repair on a ~60-node cluster, I've been able to see that 
> this stage can be significant (up to 60 percent of the whole time):
> https://github.com/apache/cassandra/blob/cassandra-2.1/src/java/org/apache/cassandra/service/StorageService.java#L2983-L2997
> It's mainly caused by the fact that 
> https://github.com/apache/cassandra/blob/cassandra-2.1/src/java/org/apache/cassandra/service/ActiveRepairService.java#L189
>  calls {code}ss.getLocalRanges(keyspaceName){code} every time and that it 
> takes more than 99% of the time. This call takes 600ms when there is no load 
> on the cluster and more if there is. So for 10k ranges, you can imagine that 
> it takes at least 1.5 hours just to compute ranges. 
> Underneath it calls 
> [ReplicationStrategy.getAddressRanges|https://github.com/apache/cassandra/blob/3dcbe90e02440e6ee534f643c7603d50ca08482b/src/java/org/apache/cassandra/locator/AbstractReplicationStrategy.java#L170]
>  which can get pretty inefficient ([~jbellis]'s 
> [words|https://github.com/apache/cassandra/blob/3dcbe90e02440e6ee534f643c7603d50ca08482b/src/java/org/apache/cassandra/locator/AbstractReplicationStrategy.java#L165])
> *ss.getLocalRanges(keyspaceName)* should be cached to avoid having to spend 
> hours on it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Issue Comment Deleted] (CASSANDRA-10404) Node to Node encryption transitional mode

2016-06-13 Thread Cyril Scetbon (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cyril Scetbon updated CASSANDRA-10404:
--
Comment: was deleted

(was: [~jasobrown] ok, would be great to know if others have the bandwidth (I 
can't check it) or not to be able to plan it. )

> Node to Node encryption transitional mode
> -
>
> Key: CASSANDRA-10404
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10404
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Tom Lewis
>Assignee: Jason Brown
>
> Create a transitional mode for encryption that allows encrypted and 
> unencrypted node-to-node traffic during the changeover from unencrypted to 
> encrypted. This alleviates downtime during the switch.
>  This is similar to https://issues.apache.org/jira/browse/CASSANDRA-8803 
> which is intended for client-to-node



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11933) Improve Repair performance

2016-05-31 Thread Cyril Scetbon (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cyril Scetbon updated CASSANDRA-11933:
--
Description: 
During a full repair on a ~60-node cluster, I've been able to see that this 
stage can be significant (up to 60 percent of the whole time):

https://github.com/apache/cassandra/blob/cassandra-2.1/src/java/org/apache/cassandra/service/StorageService.java#L2983-L2997

It's mainly caused by the fact that 
https://github.com/apache/cassandra/blob/cassandra-2.1/src/java/org/apache/cassandra/service/ActiveRepairService.java#L189
 calls {code}ss.getLocalRanges(keyspaceName){code} every time and that it takes 
more than 99% of the time. This call takes 600ms when there is no load on the 
cluster and more if there is. So for 10k ranges, you can imagine that it takes 
at least 1.5 hours just to compute ranges. 

Underneath it calls 
[ReplicationStrategy.getAddressRanges|https://github.com/apache/cassandra/blob/3dcbe90e02440e6ee534f643c7603d50ca08482b/src/java/org/apache/cassandra/locator/AbstractReplicationStrategy.java#L170]
 which can get pretty inefficient ([~jbellis]'s 
[words|https://github.com/apache/cassandra/blob/3dcbe90e02440e6ee534f643c7603d50ca08482b/src/java/org/apache/cassandra/locator/AbstractReplicationStrategy.java#L165])

*ss.getLocalRanges(keyspaceName)* should be cached to avoid having to spend 
hours on it.

  was:
During  a full repair on a ~ 60 nodes cluster, I've been able to see that this 
stage can be significant (up to 60 percent of the whole time) :

https://github.com/apache/cassandra/blob/cassandra-2.1/src/java/org/apache/cassandra/service/StorageService.java#L2983-L2997

It's merely caused by the fact that 
https://github.com/apache/cassandra/blob/cassandra-2.1/src/java/org/apache/cassandra/service/ActiveRepairService.java#L189
 calls {code}ss.getLocalRanges(keyspaceName){code} everytime and that it takes 
more than 99% of the time. This call takes 600ms when there is no load on the 
cluster and more if there is. So for 10k ranges, you can imagine that it takes 
at least 1.5 hours just to compute ranges. 

Underneath it calls 
[ReplicationStrategy.getAddressRanges|https://github.com/apache/cassandra/blob/3dcbe90e02440e6ee534f643c7603d50ca08482b/src/java/org/apache/cassandra/locator/AbstractReplicationStrategy.java#L170]
 which can get pretty inefficient.

*ss.getLocalRanges(keyspaceName)* should be cached to avoid having to spend 
hours on it.


> Improve Repair performance
> --
>
> Key: CASSANDRA-11933
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11933
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Cyril Scetbon
>
> During a full repair on a ~60-node cluster, I've been able to see that 
> this stage can be significant (up to 60 percent of the whole time):
> https://github.com/apache/cassandra/blob/cassandra-2.1/src/java/org/apache/cassandra/service/StorageService.java#L2983-L2997
> It's mainly caused by the fact that 
> https://github.com/apache/cassandra/blob/cassandra-2.1/src/java/org/apache/cassandra/service/ActiveRepairService.java#L189
>  calls {code}ss.getLocalRanges(keyspaceName){code} every time and that it 
> takes more than 99% of the time. This call takes 600ms when there is no load 
> on the cluster and more if there is. So for 10k ranges, you can imagine that 
> it takes at least 1.5 hours just to compute ranges. 
> Underneath it calls 
> [ReplicationStrategy.getAddressRanges|https://github.com/apache/cassandra/blob/3dcbe90e02440e6ee534f643c7603d50ca08482b/src/java/org/apache/cassandra/locator/AbstractReplicationStrategy.java#L170]
>  which can get pretty inefficient ([~jbellis]'s 
> [words|https://github.com/apache/cassandra/blob/3dcbe90e02440e6ee534f643c7603d50ca08482b/src/java/org/apache/cassandra/locator/AbstractReplicationStrategy.java#L165])
> *ss.getLocalRanges(keyspaceName)* should be cached to avoid having to spend 
> hours on it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11933) Improve Repair performance

2016-05-31 Thread Cyril Scetbon (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cyril Scetbon updated CASSANDRA-11933:
--
Description: 
During a full repair on a ~60-node cluster, I've been able to see that this 
stage can be significant (up to 60 percent of the whole time):

https://github.com/apache/cassandra/blob/cassandra-2.1/src/java/org/apache/cassandra/service/StorageService.java#L2983-L2997

It's mainly caused by the fact that 
https://github.com/apache/cassandra/blob/cassandra-2.1/src/java/org/apache/cassandra/service/ActiveRepairService.java#L189
 calls {code}ss.getLocalRanges(keyspaceName){code} every time and that it takes 
more than 99% of the time. This call takes 600ms when there is no load on the 
cluster and more if there is. So for 10k ranges, you can imagine that it takes 
at least 1.5 hours just to compute ranges. 

Underneath it calls 
[ReplicationStrategy.getAddressRanges|https://github.com/apache/cassandra/blob/3dcbe90e02440e6ee534f643c7603d50ca08482b/src/java/org/apache/cassandra/locator/AbstractReplicationStrategy.java#L170]
 which can get pretty inefficient.

*ss.getLocalRanges(keyspaceName)* should be cached to avoid having to spend 
hours on it.

  was:
During  a full repair on a ~ 60 nodes cluster, I've been able to see that this 
stage can be significant (up to 60 percent of) :

https://github.com/apache/cassandra/blob/cassandra-2.1/src/java/org/apache/cassandra/service/StorageService.java#L2983-L2997

It's merely caused by the fact that 
https://github.com/apache/cassandra/blob/cassandra-2.1/src/java/org/apache/cassandra/service/ActiveRepairService.java#L189
 calls {code}ss.getLocalRanges(keyspaceName){code} everytime and that it takes 
more than 99% of the time. This call takes 600ms when there is no load on the 
cluster and more if there is. So for 10k ranges, you can imagine that it takes 
at least 1.5 hours just to compute ranges. 

Underneath it calls 
[ReplicationStrategy.getAddressRanges|https://github.com/apache/cassandra/blob/3dcbe90e02440e6ee534f643c7603d50ca08482b/src/java/org/apache/cassandra/locator/AbstractReplicationStrategy.java#L170]
 which can get pretty inefficient.

*ss.getLocalRanges(keyspaceName)* should be cached to avoid having to spend 
hours on it.


> Improve Repair performance
> --
>
> Key: CASSANDRA-11933
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11933
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Cyril Scetbon
>
> During a full repair on a ~60-node cluster, I've been able to see that 
> this stage can be significant (up to 60 percent of the whole time):
> https://github.com/apache/cassandra/blob/cassandra-2.1/src/java/org/apache/cassandra/service/StorageService.java#L2983-L2997
> It's mainly caused by the fact that 
> https://github.com/apache/cassandra/blob/cassandra-2.1/src/java/org/apache/cassandra/service/ActiveRepairService.java#L189
>  calls {code}ss.getLocalRanges(keyspaceName){code} every time and that it 
> takes more than 99% of the time. This call takes 600ms when there is no load 
> on the cluster and more if there is. So for 10k ranges, you can imagine that 
> it takes at least 1.5 hours just to compute ranges. 
> Underneath it calls 
> [ReplicationStrategy.getAddressRanges|https://github.com/apache/cassandra/blob/3dcbe90e02440e6ee534f643c7603d50ca08482b/src/java/org/apache/cassandra/locator/AbstractReplicationStrategy.java#L170]
>  which can get pretty inefficient.
> *ss.getLocalRanges(keyspaceName)* should be cached to avoid having to spend 
> hours on it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-11933) Improve Repair performance

2016-05-31 Thread Cyril Scetbon (JIRA)
Cyril Scetbon created CASSANDRA-11933:
-

 Summary: Improve Repair performance
 Key: CASSANDRA-11933
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11933
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Cyril Scetbon


During a full repair on a ~60-node cluster, I've been able to see that this 
stage can be significant (up to 60 percent of the whole time):

https://github.com/apache/cassandra/blob/cassandra-2.1/src/java/org/apache/cassandra/service/StorageService.java#L2983-L2997

It's mainly caused by the fact that 
https://github.com/apache/cassandra/blob/cassandra-2.1/src/java/org/apache/cassandra/service/ActiveRepairService.java#L189
 calls {code}ss.getLocalRanges(keyspaceName){code} every time and that it takes 
more than 99% of the time. This call takes 600ms when there is no load on the 
cluster and more if there is. So for 10k ranges, you can imagine that it takes 
at least 1.5 hours just to compute ranges. 

Underneath it calls 
[ReplicationStrategy.getAddressRanges|https://github.com/apache/cassandra/blob/3dcbe90e02440e6ee534f643c7603d50ca08482b/src/java/org/apache/cassandra/locator/AbstractReplicationStrategy.java#L170]
 which can get pretty inefficient.

*ss.getLocalRanges(keyspaceName)* should be cached to avoid having to spend 
hours on it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9126) java.lang.RuntimeException: Last written key DecoratedKey >= current key DecoratedKey

2016-05-27 Thread Cyril Scetbon (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cyril Scetbon updated CASSANDRA-9126:
-
Reproduced In: 2.1.12, 2.0.14  (was: 2.0.14, 2.1.11)

> java.lang.RuntimeException: Last written key DecoratedKey >= current key 
> DecoratedKey
> -
>
> Key: CASSANDRA-9126
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9126
> Project: Cassandra
>  Issue Type: Bug
>Reporter: srinivasu gottipati
>Priority: Critical
> Attachments: cassandra-system.log
>
>
> Cassandra V: 2.0.14,
> Getting the following exceptions while trying to compact (I see this issue 
> was raised in earlier versions and marked as closed; however, it still appears 
> in 2.0.14). In our case, compaction is not succeeding and keeps failing 
> with this error:
> {code}java.lang.RuntimeException: Last written key 
> DecoratedKey(3462767860784856708, 
> 354038323137333038305f3330325f31355f474d4543454f) >= current key 
> DecoratedKey(3462334604624154281, 
> 354036333036353334315f3336315f31355f474d4543454f) writing into {code}
> ...
> Stacktrace:{code}
>   at 
> org.apache.cassandra.io.sstable.SSTableWriter.beforeAppend(SSTableWriter.java:143)
>   at 
> org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:166)
>   at 
> org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:167)
>   at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
>   at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:60)
>   at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
>   at 
> org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:198)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745){code}
> Any help is greatly appreciated
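
The invariant that fails here, reconstructed from the message and stack trace 
(not the actual SSTableWriter source): sstables are written in token order, so 
each appended partition key must sort strictly after the previous one. A 
minimal sketch using raw tokens in place of DecoratedKey:

{code:java}
public class OrderedWriterSketch
{
    private Long lastWrittenToken;

    void beforeAppend(long token)
    {
        if (lastWrittenToken != null && lastWrittenToken >= token)
            throw new RuntimeException(String.format(
                "Last written key DecoratedKey(%d, ...) >= current key DecoratedKey(%d, ...)",
                lastWrittenToken, token));
        lastWrittenToken = token;
    }

    public static void main(String[] args)
    {
        OrderedWriterSketch writer = new OrderedWriterSketch();
        writer.beforeAppend(3462334604624154281L); // ok
        writer.beforeAppend(3462767860784856708L); // ok, increasing
        writer.beforeAppend(3462334604624154281L); // throws, as in the report
    }
}
{code}

Since compaction merges its inputs in key order, tripping this repeatedly 
usually points at an sstable whose contents are out of order on disk.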



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9126) java.lang.RuntimeException: Last written key DecoratedKey >= current key DecoratedKey

2016-05-27 Thread Cyril Scetbon (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cyril Scetbon updated CASSANDRA-9126:
-
Reproduced In: 2.1.11, 2.0.14  (was: 2.0.14, 2.1.11)

> java.lang.RuntimeException: Last written key DecoratedKey >= current key 
> DecoratedKey
> -
>
> Key: CASSANDRA-9126
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9126
> Project: Cassandra
>  Issue Type: Bug
>Reporter: srinivasu gottipati
>Priority: Critical
> Attachments: cassandra-system.log
>
>
> Cassandra V: 2.0.14
> Getting the following exceptions while trying to compact (I see this issue 
> was raised in earlier versions and marked as closed; however, it still appears 
> in 2.0.14). In our case, compaction does not succeed and keeps failing with 
> this error:
> {code}java.lang.RuntimeException: Last written key 
> DecoratedKey(3462767860784856708, 
> 354038323137333038305f3330325f31355f474d4543454f) >= current key 
> DecoratedKey(3462334604624154281, 
> 354036333036353334315f3336315f31355f474d4543454f) writing into {code}
> ...
> Stacktrace:{code}
>   at 
> org.apache.cassandra.io.sstable.SSTableWriter.beforeAppend(SSTableWriter.java:143)
>   at 
> org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:166)
>   at 
> org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:167)
>   at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
>   at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:60)
>   at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
>   at 
> org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:198)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745){code}
> Any help is greatly appreciated



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (CASSANDRA-9126) java.lang.RuntimeException: Last written key DecoratedKey >= current key DecoratedKey

2016-05-27 Thread Cyril Scetbon (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cyril Scetbon reopened CASSANDRA-9126:
--
Reproduced In: 2.1.11, 2.0.14  (was: 2.0.14)

> java.lang.RuntimeException: Last written key DecoratedKey >= current key 
> DecoratedKey
> -
>
> Key: CASSANDRA-9126
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9126
> Project: Cassandra
>  Issue Type: Bug
>Reporter: srinivasu gottipati
>Priority: Critical
> Attachments: cassandra-system.log
>
>
> Cassandra V: 2.0.14
> Getting the following exceptions while trying to compact (I see this issue 
> was raised in earlier versions and marked as closed; however, it still appears 
> in 2.0.14). In our case, compaction does not succeed and keeps failing with 
> this error:
> {code}java.lang.RuntimeException: Last written key 
> DecoratedKey(3462767860784856708, 
> 354038323137333038305f3330325f31355f474d4543454f) >= current key 
> DecoratedKey(3462334604624154281, 
> 354036333036353334315f3336315f31355f474d4543454f) writing into {code}
> ...
> Stacktrace:{code}
>   at 
> org.apache.cassandra.io.sstable.SSTableWriter.beforeAppend(SSTableWriter.java:143)
>   at 
> org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:166)
>   at 
> org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:167)
>   at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
>   at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:60)
>   at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
>   at 
> org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:198)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745){code}
> Any help is greatly appreciated



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7056) Add RAMP transactions

2016-05-04 Thread Cyril Scetbon (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15270846#comment-15270846
 ] 

Cyril Scetbon commented on CASSANDRA-7056:
--

[~iamaleksey] Thanks

> Add RAMP transactions
> -
>
> Key: CASSANDRA-7056
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7056
> Project: Cassandra
>  Issue Type: Wish
>Reporter: Tupshin Harper
>Priority: Minor
> Fix For: 3.x
>
>
> We should take a look at 
> [RAMP|http://www.bailis.org/blog/scalable-atomic-visibility-with-ramp-transactions/]
>  transactions, and figure out if they can be used to provide more efficient 
> LWT (or LWT-like) operations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7056) Add RAMP transactions

2016-05-04 Thread Cyril Scetbon (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15270409#comment-15270409
 ] 

Cyril Scetbon commented on CASSANDRA-7056:
--

What's the current status of this ticket? Is it a "won't implement"?

> Add RAMP transactions
> -
>
> Key: CASSANDRA-7056
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7056
> Project: Cassandra
>  Issue Type: Wish
>Reporter: Tupshin Harper
>Priority: Minor
> Fix For: 3.x
>
>
> We should take a look at 
> [RAMP|http://www.bailis.org/blog/scalable-atomic-visibility-with-ramp-transactions/]
>  transactions, and figure out if they can be used to provide more efficient 
> LWT (or LWT-like) operations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10404) Node to Node encryption transitional mode

2016-03-21 Thread Cyril Scetbon (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15204187#comment-15204187
 ] 

Cyril Scetbon commented on CASSANDRA-10404:
---

[~jasobrown] ok, it would be great to know whether others have the bandwidth 
(I can't check it) or not, so we can plan for it. 

> Node to Node encryption transitional mode
> -
>
> Key: CASSANDRA-10404
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10404
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Tom Lewis
>Assignee: Jason Brown
>
> Create a transitional mode for encryption that allows encrypted and 
> unencrypted traffic node-to-node during a changeover from unencrypted to 
> encrypted. This avoids downtime during the switch.
>  This is similar to https://issues.apache.org/jira/browse/CASSANDRA-8803 
> which is intended for client-to-node
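
For illustration only, here is a minimal sketch of one way a transitional 
listener could work, assuming the common trick of peeking at the first byte of 
a new connection (a TLS handshake record starts with 0x16; anything else is 
treated as plaintext). This is not Cassandra's networking code, just a sketch 
of the idea.
{code}
import java.io.PushbackInputStream;
import java.net.ServerSocket;
import java.net.Socket;

// Hypothetical sketch: accept both TLS and plaintext peers on one port
// during the changeover by sniffing the first byte of each connection.
public final class TransitionalListener
{
    public static void main(String[] args) throws Exception
    {
        try (ServerSocket server = new ServerSocket(7000))
        {
            while (true)
            {
                Socket peer = server.accept();
                PushbackInputStream in = new PushbackInputStream(peer.getInputStream(), 1);
                int first = in.read();
                if (first == -1)
                {
                    peer.close();
                    continue;
                }
                in.unread(first); // put the byte back for the real handler
                if (first == 0x16)
                {
                    // TLS: hand off to an SSL-wrapping handler here
                    System.out.println("TLS handshake from " + peer.getRemoteSocketAddress());
                }
                else
                {
                    // plaintext: hand off to the legacy unencrypted handler here
                    System.out.println("plaintext peer " + peer.getRemoteSocketAddress());
                }
                peer.close();
            }
        }
    }
}
{code}
Once every node speaks TLS, the plaintext branch can be disabled and the 
transitional mode turned off.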



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10404) Node to Node encryption transitional mode

2016-03-21 Thread Cyril Scetbon (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15204120#comment-15204120
 ] 

Cyril Scetbon commented on CASSANDRA-10404:
---

[~jasobrown] :( How much work is needed to implement it? Why not before 4.0? 
Because of the tick-tock development cycle? I could understand that: as 
CASSANDRA-8457 changes the network code, it would be easier to do this after it

> Node to Node encryption transitional mode
> -
>
> Key: CASSANDRA-10404
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10404
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Tom Lewis
>Assignee: Jason Brown
>
> Create a transitional mode for encryption that allows encrypted and 
> unencrypted traffic node-to-node during a changeover from unencrypted to 
> encrypted. This avoids downtime during the switch.
>  This is similar to https://issues.apache.org/jira/browse/CASSANDRA-8803 
> which is intended for client-to-node



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10751) "Pool is shutdown" error when running Hadoop jobs on Yarn

2016-03-20 Thread Cyril Scetbon (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15200936#comment-15200936
 ] 

Cyril Scetbon commented on CASSANDRA-10751:
---

[~alexliu68]?

> "Pool is shutdown" error when running Hadoop jobs on Yarn
> -
>
> Key: CASSANDRA-10751
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10751
> Project: Cassandra
>  Issue Type: Bug
> Environment: Hadoop 2.7.1 (HDP 2.3.2)
> Cassandra 2.1.11
>Reporter: Cyril Scetbon
>Assignee: Cyril Scetbon
> Attachments: CASSANDRA-10751-2.2.patch, CASSANDRA-10751-3.0.patch, 
> output.log
>
>
> Trying to execute a Hadoop job on Yarn, I get errors from Cassandra's 
> internal code. It seems that connections are shut down, but we can't understand 
> why ...
> Here is an extract of the errors. I have also attached a file with the complete 
> debug logs.
> {code}
> 15/11/22 20:05:54 [main]: DEBUG core.RequestHandler: Error querying 
> node006.internal.net/192.168.12.22:9042, trying next host (error is: 
> com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown)
> Failed with exception java.io.IOException:java.io.IOException: 
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
> tried for query failed (tried: node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
> 15/11/22 20:05:54 [main]: ERROR CliDriver: Failed with exception 
> java.io.IOException:java.io.IOException: 
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
> tried for query failed (tried: node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
> java.io.IOException: java.io.IOException: 
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
> tried for query failed (tried: node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:508)
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.pushRow(FetchOperator.java:415)
>   at org.apache.hadoop.hive.ql.exec.FetchTask.fetch(FetchTask.java:140)
>   at org.apache.hadoop.hive.ql.Driver.getResults(Driver.java:1672)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:233)
>   at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:165)
>   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:376)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:736)
>   at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:681)
>   at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:621)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
>   at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> Caused by: java.io.IOException: 
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
> tried for query failed (tried: node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
>   at 
> org.apache.hadoop.hive.cassandra.input.cql.HiveCqlInputFormat.getRecordReader(HiveCqlInputFormat.java:132)
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator$FetchInputFormatSplit.getRecordReader(FetchOperator.java:674)
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.getRecordReader(FetchOperator.java:324)
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:446)
>   ... 15 more
> Caused by: com.datastax.driver.core.exceptions.NoHostAvailableException: All 
> host(s) tried for query failed (tried: 
> node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
>   at 
> com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:84)
>   at 
> com.datastax.driver.core.DriverThrowables.propagateCause(DriverThrowables.java:37)
>   at 
> com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:214)
>   at 
> com.datastax.driver

[jira] [Commented] (CASSANDRA-10404) Node to Node encryption transitional mode

2016-03-19 Thread Cyril Scetbon (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15200927#comment-15200927
 ] 

Cyril Scetbon commented on CASSANDRA-10404:
---

Any update on this? This ticket seems really important to me.

> Node to Node encryption transitional mode
> -
>
> Key: CASSANDRA-10404
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10404
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Tom Lewis
>
> Create a transitional mode for encryption that allows encrypted and 
> unencrypted traffic node-to-node during a changeover from unencrypted to 
> encrypted. This avoids downtime during the switch.
>  This is similar to https://issues.apache.org/jira/browse/CASSANDRA-8803 
> which is intended for client-to-node



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9633) Add ability to encrypt sstables

2016-03-05 Thread Cyril Scetbon (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15181951#comment-15181951
 ] 

Cyril Scetbon commented on CASSANDRA-9633:
--

Thank you [~jasobrown] for the heads-up. Hoping to see it soon. 

> Add ability to encrypt sstables
> ---
>
> Key: CASSANDRA-9633
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9633
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Jason Brown
>Assignee: Jason Brown
>  Labels: encryption, security, sstable
> Fix For: 3.x
>
>
> Add an option to allow encryption of sstables.
> I have a version of this functionality built on Cassandra 2.0 that 
> piggy-backs on the existing sstable compression functionality and ICompressor 
> interface (similar in nature to what DataStax Enterprise does). However, if 
> we're adding the feature to the main OSS product, I'm not sure if we want to 
> use the pluggable compression framework or if it's worth investigating a 
> different path. I think there's a lot of upside in reusing the sstable 
> compression scheme, but perhaps add a new component in cqlsh for table 
> encryption and a corresponding field in CFMD.
> Encryption configuration in the yaml can use the same mechanism as 
> CASSANDRA-6018 (which is currently pending internal review).
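
As a rough illustration of the piggy-back idea above, here is a 
compress-then-encrypt sketch. {{BlockTransformer}} is a hypothetical, 
simplified stand-in for the real ICompressor contract, and the AES usage is 
deliberately naive (no IV or key management):
{code}
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

// Hypothetical, simplified stand-in for a compressor-style block transform.
interface BlockTransformer
{
    byte[] apply(byte[] input) throws Exception;
}

// Sketch of "encryption as a compressor": wrap an existing transformer
// (e.g. real compression) and encrypt its output block.
final class EncryptingTransformer implements BlockTransformer
{
    private final BlockTransformer inner;
    private final SecretKey key;

    EncryptingTransformer(BlockTransformer inner, SecretKey key)
    {
        this.inner = inner;
        this.key = key;
    }

    public byte[] apply(byte[] input) throws Exception
    {
        byte[] compressed = inner.apply(input);    // compress first
        Cipher cipher = Cipher.getInstance("AES"); // then encrypt the block
        cipher.init(Cipher.ENCRYPT_MODE, key);
        return cipher.doFinal(compressed);
    }

    public static void main(String[] args) throws Exception
    {
        SecretKey key = KeyGenerator.getInstance("AES").generateKey();
        BlockTransformer noop = in -> in; // identity "compressor" for the demo
        byte[] out = new EncryptingTransformer(noop, key).apply("hello".getBytes());
        System.out.println(out.length + " encrypted bytes");
    }
}
{code}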



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9633) Add ability to encrypt sstables

2016-03-05 Thread Cyril Scetbon (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15181933#comment-15181933
 ] 

Cyril Scetbon commented on CASSANDRA-9633:
--

Hey [~bdeggleston], any news on this?

> Add ability to encrypt sstables
> ---
>
> Key: CASSANDRA-9633
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9633
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Jason Brown
>Assignee: Jason Brown
>  Labels: encryption, security, sstable
> Fix For: 3.x
>
>
> Add an option to allow encryption of sstables.
> I have a version of this functionality built on Cassandra 2.0 that 
> piggy-backs on the existing sstable compression functionality and ICompressor 
> interface (similar in nature to what DataStax Enterprise does). However, if 
> we're adding the feature to the main OSS product, I'm not sure if we want to 
> use the pluggable compression framework or if it's worth investigating a 
> different path. I think there's a lot of upside in reusing the sstable 
> compression scheme, but perhaps add a new component in cqlsh for table 
> encryption and a corresponding field in CFMD.
> Encryption configuration in the yaml can use the same mechanism as 
> CASSANDRA-6018 (which is currently pending internal review).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10751) "Pool is shutdown" error when running Hadoop jobs on Yarn

2015-11-26 Thread Cyril Scetbon (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15029330#comment-15029330
 ] 

Cyril Scetbon commented on CASSANDRA-10751:
---

[~alexliu68] did you get a chance to have a look at it?

> "Pool is shutdown" error when running Hadoop jobs on Yarn
> -
>
> Key: CASSANDRA-10751
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10751
> Project: Cassandra
>  Issue Type: Bug
> Environment: Hadoop 2.7.1 (HDP 2.3.2)
> Cassandra 2.1.11
>Reporter: Cyril Scetbon
>Assignee: Cyril Scetbon
> Attachments: CASSANDRA-10751-2.2.patch, CASSANDRA-10751-3.0.patch, 
> output.log
>
>
> Trying to execute a Hadoop job on Yarn, I get errors from Cassandra's 
> internal code. It seems that connections are shut down, but we can't understand 
> why ...
> Here is an extract of the errors. I have also attached a file with the complete 
> debug logs.
> {code}
> 15/11/22 20:05:54 [main]: DEBUG core.RequestHandler: Error querying 
> node006.internal.net/192.168.12.22:9042, trying next host (error is: 
> com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown)
> Failed with exception java.io.IOException:java.io.IOException: 
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
> tried for query failed (tried: node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
> 15/11/22 20:05:54 [main]: ERROR CliDriver: Failed with exception 
> java.io.IOException:java.io.IOException: 
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
> tried for query failed (tried: node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
> java.io.IOException: java.io.IOException: 
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
> tried for query failed (tried: node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:508)
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.pushRow(FetchOperator.java:415)
>   at org.apache.hadoop.hive.ql.exec.FetchTask.fetch(FetchTask.java:140)
>   at org.apache.hadoop.hive.ql.Driver.getResults(Driver.java:1672)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:233)
>   at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:165)
>   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:376)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:736)
>   at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:681)
>   at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:621)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
>   at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> Caused by: java.io.IOException: 
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
> tried for query failed (tried: node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
>   at 
> org.apache.hadoop.hive.cassandra.input.cql.HiveCqlInputFormat.getRecordReader(HiveCqlInputFormat.java:132)
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator$FetchInputFormatSplit.getRecordReader(FetchOperator.java:674)
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.getRecordReader(FetchOperator.java:324)
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:446)
>   ... 15 more
> Caused by: com.datastax.driver.core.exceptions.NoHostAvailableException: All 
> host(s) tried for query failed (tried: 
> node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
>   at 
> com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:84)
>   at 
> com.datastax.driver.core.DriverThrowables.propagateCause(DriverThrowables.java:37)
>   at 
> com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture

[jira] [Updated] (CASSANDRA-10751) "Pool is shutdown" error when running Hadoop jobs on Yarn

2015-11-24 Thread Cyril Scetbon (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cyril Scetbon updated CASSANDRA-10751:
--
Attachment: CASSANDRA-10751-2.2.patch
CASSANDRA-10751-3.0.patch

[~alexliu68] Here are the patches for each version

> "Pool is shutdown" error when running Hadoop jobs on Yarn
> -
>
> Key: CASSANDRA-10751
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10751
> Project: Cassandra
>  Issue Type: Bug
> Environment: Hadoop 2.7.1 (HDP 2.3.2)
> Cassandra 2.1.11
>Reporter: Cyril Scetbon
>Assignee: Alex Liu
> Attachments: CASSANDRA-10751-2.2.patch, CASSANDRA-10751-3.0.patch, 
> output.log
>
>
> Trying to execute a Hadoop job on Yarn, I get errors from Cassandra's 
> internal code. It seems that connections are shut down, but we can't understand 
> why ...
> Here is an extract of the errors. I have also attached a file with the complete 
> debug logs.
> {code}
> 15/11/22 20:05:54 [main]: DEBUG core.RequestHandler: Error querying 
> node006.internal.net/192.168.12.22:9042, trying next host (error is: 
> com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown)
> Failed with exception java.io.IOException:java.io.IOException: 
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
> tried for query failed (tried: node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
> 15/11/22 20:05:54 [main]: ERROR CliDriver: Failed with exception 
> java.io.IOException:java.io.IOException: 
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
> tried for query failed (tried: node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
> java.io.IOException: java.io.IOException: 
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
> tried for query failed (tried: node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:508)
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.pushRow(FetchOperator.java:415)
>   at org.apache.hadoop.hive.ql.exec.FetchTask.fetch(FetchTask.java:140)
>   at org.apache.hadoop.hive.ql.Driver.getResults(Driver.java:1672)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:233)
>   at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:165)
>   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:376)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:736)
>   at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:681)
>   at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:621)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
>   at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> Caused by: java.io.IOException: 
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
> tried for query failed (tried: node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
>   at 
> org.apache.hadoop.hive.cassandra.input.cql.HiveCqlInputFormat.getRecordReader(HiveCqlInputFormat.java:132)
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator$FetchInputFormatSplit.getRecordReader(FetchOperator.java:674)
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.getRecordReader(FetchOperator.java:324)
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:446)
>   ... 15 more
> Caused by: com.datastax.driver.core.exceptions.NoHostAvailableException: All 
> host(s) tried for query failed (tried: 
> node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
>   at 
> com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:84)
>   at 
> com.datastax.driver.core.DriverThrowables.propagateCause(DriverThrowables.java:37)
>   at 
> com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultR

[jira] [Comment Edited] (CASSANDRA-10751) "Pool is shutdown" error when running Hadoop jobs on Yarn

2015-11-24 Thread Cyril Scetbon (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15026002#comment-15026002
 ] 

Cyril Scetbon edited comment on CASSANDRA-10751 at 11/25/15 2:10 AM:
-

[~alexliu68] Here are the patches for each version


was (Author: cscetbon):
[~alexliu68] Here are the patches foreach versions

> "Pool is shutdown" error when running Hadoop jobs on Yarn
> -
>
> Key: CASSANDRA-10751
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10751
> Project: Cassandra
>  Issue Type: Bug
> Environment: Hadoop 2.7.1 (HDP 2.3.2)
> Cassandra 2.1.11
>Reporter: Cyril Scetbon
>Assignee: Alex Liu
> Attachments: CASSANDRA-10751-2.2.patch, CASSANDRA-10751-3.0.patch, 
> output.log
>
>
> Trying to execute a Hadoop job on Yarn, I get errors from Cassandra's 
> internal code. It seems that connections are shut down, but we can't understand 
> why ...
> Here is an extract of the errors. I have also attached a file with the complete 
> debug logs.
> {code}
> 15/11/22 20:05:54 [main]: DEBUG core.RequestHandler: Error querying 
> node006.internal.net/192.168.12.22:9042, trying next host (error is: 
> com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown)
> Failed with exception java.io.IOException:java.io.IOException: 
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
> tried for query failed (tried: node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
> 15/11/22 20:05:54 [main]: ERROR CliDriver: Failed with exception 
> java.io.IOException:java.io.IOException: 
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
> tried for query failed (tried: node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
> java.io.IOException: java.io.IOException: 
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
> tried for query failed (tried: node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:508)
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.pushRow(FetchOperator.java:415)
>   at org.apache.hadoop.hive.ql.exec.FetchTask.fetch(FetchTask.java:140)
>   at org.apache.hadoop.hive.ql.Driver.getResults(Driver.java:1672)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:233)
>   at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:165)
>   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:376)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:736)
>   at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:681)
>   at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:621)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
>   at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> Caused by: java.io.IOException: 
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
> tried for query failed (tried: node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
>   at 
> org.apache.hadoop.hive.cassandra.input.cql.HiveCqlInputFormat.getRecordReader(HiveCqlInputFormat.java:132)
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator$FetchInputFormatSplit.getRecordReader(FetchOperator.java:674)
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.getRecordReader(FetchOperator.java:324)
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:446)
>   ... 15 more
> Caused by: com.datastax.driver.core.exceptions.NoHostAvailableException: All 
> host(s) tried for query failed (tried: 
> node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
>   at 
> com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:84)
>   at 
> com.datastax.driver.core.DriverThrowables.propagateCause(DriverThrowabl

[jira] [Comment Edited] (CASSANDRA-10751) "Pool is shutdown" error when running Hadoop jobs on Yarn

2015-11-24 Thread Cyril Scetbon (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15025776#comment-15025776
 ] 

Cyril Scetbon edited comment on CASSANDRA-10751 at 11/25/15 12:13 AM:
--

[~alexliu68], so the issue is that CqlRecordReader calls cluster.connect using a 
[quoted keyspace 
string|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/hadoop/cql3/CqlRecordReader.java#L138]
 and that the Java driver quotes it again when it sets the keyspace using 
[CQL|https://github.com/datastax/java-driver/blob/2.1.8/driver-core/src/main/java/com/datastax/driver/core/Connection.java#L477].
 If I remove the first quoting, my job runs.
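
To make the effect concrete, a tiny self-contained illustration (hypothetical 
strings only, not the actual reader or driver code):
{code}
// Hypothetical demo: quoting an identifier that is already quoted
// produces an invalid CQL USE statement.
public final class QuotingDemo
{
    private static String quote(String id)
    {
        return '"' + id + '"';
    }

    public static void main(String[] args)
    {
        String keyspace = "MyKeyspace";
        String passedToConnect = quote(keyspace);              // quoted once by the record reader
        String useStatement = "USE " + quote(passedToConnect); // quoted again by the driver
        System.out.println(useStatement);                      // USE ""MyKeyspace"" -> rejected
    }
}
{code}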


was (Author: cscetbon):
[~alexliu68], so the issue is that CqlRecordReader calls cluster.connect using a 
[quoted keyspace 
string|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/hadoop/cql3/CqlRecordReader.java#L138]
 and that the Java driver quotes it again 
https://github.com/datastax/java-driver/blob/2.1.8/driver-core/src/main/java/com/datastax/driver/core/Connection.java#L477.
 If I remove the first quoting, my job runs.

> "Pool is shutdown" error when running Hadoop jobs on Yarn
> -
>
> Key: CASSANDRA-10751
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10751
> Project: Cassandra
>  Issue Type: Bug
> Environment: Hadoop 2.7.1 (HDP 2.3.2)
> Cassandra 2.1.11
>Reporter: Cyril Scetbon
>Assignee: Alex Liu
> Attachments: output.log
>
>
> Trying to execute a Hadoop job on Yarn, I get errors from Cassandra's 
> internal code. It seems that connections are shut down, but we can't understand 
> why ...
> Here is an extract of the errors. I have also attached a file with the complete 
> debug logs.
> {code}
> 15/11/22 20:05:54 [main]: DEBUG core.RequestHandler: Error querying 
> node006.internal.net/192.168.12.22:9042, trying next host (error is: 
> com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown)
> Failed with exception java.io.IOException:java.io.IOException: 
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
> tried for query failed (tried: node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
> 15/11/22 20:05:54 [main]: ERROR CliDriver: Failed with exception 
> java.io.IOException:java.io.IOException: 
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
> tried for query failed (tried: node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
> java.io.IOException: java.io.IOException: 
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
> tried for query failed (tried: node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:508)
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.pushRow(FetchOperator.java:415)
>   at org.apache.hadoop.hive.ql.exec.FetchTask.fetch(FetchTask.java:140)
>   at org.apache.hadoop.hive.ql.Driver.getResults(Driver.java:1672)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:233)
>   at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:165)
>   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:376)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:736)
>   at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:681)
>   at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:621)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
>   at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> Caused by: java.io.IOException: 
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
> tried for query failed (tried: node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
>   at 
> org.apache.hadoop.hive.cassandra.input.cql.HiveCqlInputFormat.getRecordReader(HiveCqlInputFormat.java:132)
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator$FetchInputFormatSpl

[jira] [Commented] (CASSANDRA-10751) "Pool is shutdown" error when running Hadoop jobs on Yarn

2015-11-24 Thread Cyril Scetbon (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15025776#comment-15025776
 ] 

Cyril Scetbon commented on CASSANDRA-10751:
---

[~alexliu68], so the issue is that CqlRecordReader calls cluster.connect using a 
[quoted keyspace 
string|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/hadoop/cql3/CqlRecordReader.java#L138]
 and that the Java driver quotes it again 
https://github.com/datastax/java-driver/blob/2.1.8/driver-core/src/main/java/com/datastax/driver/core/Connection.java#L477.
 If I remove the first quoting, my job runs.

> "Pool is shutdown" error when running Hadoop jobs on Yarn
> -
>
> Key: CASSANDRA-10751
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10751
> Project: Cassandra
>  Issue Type: Bug
> Environment: Hadoop 2.7.1 (HDP 2.3.2)
> Cassandra 2.1.11
>Reporter: Cyril Scetbon
>Assignee: Alex Liu
> Attachments: output.log
>
>
> Trying to execute a Hadoop job on Yarn, I get errors from Cassandra's 
> internal code. It seems that connections are shut down, but we can't understand 
> why ...
> Here is an extract of the errors. I have also attached a file with the complete 
> debug logs.
> {code}
> 15/11/22 20:05:54 [main]: DEBUG core.RequestHandler: Error querying 
> node006.internal.net/192.168.12.22:9042, trying next host (error is: 
> com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown)
> Failed with exception java.io.IOException:java.io.IOException: 
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
> tried for query failed (tried: node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
> 15/11/22 20:05:54 [main]: ERROR CliDriver: Failed with exception 
> java.io.IOException:java.io.IOException: 
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
> tried for query failed (tried: node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
> java.io.IOException: java.io.IOException: 
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
> tried for query failed (tried: node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:508)
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.pushRow(FetchOperator.java:415)
>   at org.apache.hadoop.hive.ql.exec.FetchTask.fetch(FetchTask.java:140)
>   at org.apache.hadoop.hive.ql.Driver.getResults(Driver.java:1672)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:233)
>   at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:165)
>   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:376)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:736)
>   at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:681)
>   at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:621)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
>   at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> Caused by: java.io.IOException: 
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
> tried for query failed (tried: node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
>   at 
> org.apache.hadoop.hive.cassandra.input.cql.HiveCqlInputFormat.getRecordReader(HiveCqlInputFormat.java:132)
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator$FetchInputFormatSplit.getRecordReader(FetchOperator.java:674)
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.getRecordReader(FetchOperator.java:324)
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:446)
>   ... 15 more
> Caused by: com.datastax.driver.core.exceptions.NoHostAvailableException: All 
> host(s) tried for query failed (tried: 
> node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
>   at 

[jira] [Comment Edited] (CASSANDRA-10751) "Pool is shutdown" error when running Hadoop jobs on Yarn

2015-11-24 Thread Cyril Scetbon (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15024784#comment-15024784
 ] 

Cyril Scetbon edited comment on CASSANDRA-10751 at 11/24/15 4:34 PM:
-

[~alexliu68] The number of splits has not changed and is the same as with 
2.0.12. The strange errors are:
{code}
15/11/24 15:26:32 [cluster2-blocking-task-worker-1]: DEBUG 
core.HostConnectionPool: Creating new connection on busy pool to 
/10.234.62.20:9042
15/11/24 15:26:32 [cluster2-blocking-task-worker-0]: DEBUG 
core.HostConnectionPool: Creating new connection on busy pool to 
/10.234.62.20:9042
15/11/24 15:26:32 [cluster2-nio-worker-34]: DEBUG core.Connection: 
Connection[/10.234.62.20:9042-5, inFlight=0, closed=false] Connection opened 
successfully
15/11/24 15:26:32 [cluster2-nio-worker-35]: DEBUG core.Connection: 
Connection[/10.234.62.20:9042-6, inFlight=0, closed=false] Connection opened 
successfully
15/11/24 15:26:32 [cluster2-nio-worker-34]: DEBUG Host.STATES: 
[/10.234.62.20:9042] new connection created, total = 1
15/11/24 15:26:32 [cluster2-nio-worker-35]: DEBUG Host.STATES: 
[/10.234.62.20:9042] new connection created, total = 2
15/11/24 15:26:32 [cluster2-nio-worker-35]: DEBUG core.Connection: 
Connection[/10.234.62.20:9042-6, inFlight=0, closed=true] closing connection
15/11/24 15:26:32 [cluster2-nio-worker-34]: DEBUG core.Connection: 
Connection[/10.234.62.20:9042-5, inFlight=0, closed=true] closing connection
15/11/24 15:26:32 [cluster2-nio-worker-35]: DEBUG Host.STATES: 
[/10.234.62.20:9042] connection closed, remaining = 1
15/11/24 15:26:32 [cluster2-nio-worker-34]: DEBUG Host.STATES: 
[/10.234.62.20:9042] connection closed, remaining = 0
15/11/24 15:26:32 [cluster2-nio-worker-35]: DEBUG core.Connection: 
Connection[/10.234.62.20:9042-6, inFlight=0, closed=true] has already terminated
15/11/24 15:26:32 [cluster2-nio-worker-34]: DEBUG core.Connection: 
Connection[/10.234.62.20:9042-5, inFlight=0, closed=true] has already terminated
15/11/24 15:26:32 [cluster2-blocking-task-worker-0]: DEBUG Host.STATES: 
Defuncting Connection[/10.234.62.20:9042-6, inFlight=0, closed=true] because: 
[/10.234.62.20:9042] Error while setting keyspace
{code} 
I can see that the error "Error while setting keyspace" is displayed when there 
is an ExecutionException. It's just weird that it happens so quickly ...


was (Author: cscetbon):
[~alexliu68] The number of splits has not changed and is the same as with 
2.0.12. The strange errors are:
{code}
15/11/24 15:26:32 [cluster2-blocking-task-worker-1]: DEBUG 
core.HostConnectionPool: Creating new connection on busy pool to 
/10.234.62.20:9042
15/11/24 15:26:32 [cluster2-blocking-task-worker-0]: DEBUG 
core.HostConnectionPool: Creating new connection on busy pool to 
/10.234.62.20:9042
15/11/24 15:26:32 [cluster2-nio-worker-34]: DEBUG core.Connection: 
Connection[/10.234.62.20:9042-5, inFlight=0, closed=false] Connection opened 
successfully
15/11/24 15:26:32 [cluster2-nio-worker-35]: DEBUG core.Connection: 
Connection[/10.234.62.20:9042-6, inFlight=0, closed=false] Connection opened 
successfully
15/11/24 15:26:32 [cluster2-nio-worker-34]: DEBUG Host.STATES: 
[/10.234.62.20:9042] new connection created, total = 1
15/11/24 15:26:32 [cluster2-nio-worker-35]: DEBUG Host.STATES: 
[/10.234.62.20:9042] new connection created, total = 2
15/11/24 15:26:32 [cluster2-nio-worker-35]: DEBUG core.Connection: 
Connection[/10.234.62.20:9042-6, inFlight=0, closed=true] closing connection
15/11/24 15:26:32 [cluster2-nio-worker-34]: DEBUG core.Connection: 
Connection[/10.234.62.20:9042-5, inFlight=0, closed=true] closing connection
15/11/24 15:26:32 [cluster2-nio-worker-35]: DEBUG Host.STATES: 
[/10.234.62.20:9042] connection closed, remaining = 1
15/11/24 15:26:32 [cluster2-nio-worker-34]: DEBUG Host.STATES: 
[/10.234.62.20:9042] connection closed, remaining = 0
15/11/24 15:26:32 [cluster2-nio-worker-35]: DEBUG core.Connection: 
Connection[/10.234.62.20:9042-6, inFlight=0, closed=true] has already terminated
15/11/24 15:26:32 [cluster2-nio-worker-34]: DEBUG core.Connection: 
Connection[/10.234.62.20:9042-5, inFlight=0, closed=true] has already terminated
15/11/24 15:26:32 [cluster2-blocking-task-worker-0]: DEBUG Host.STATES: 
Defuncting Connection[/10.234.62.20:9042-6, inFlight=0, closed=true] because: 
[/10.234.62.20:9042] Error while setting keyspace
{code} 
I can see that the error "Error while setting keyspace" is displayed when there 
is a BusyConnectionException. It's just weird that it happens so quickly ...

> "Pool is shutdown" error when running Hadoop jobs on Yarn
> -
>
> Key: CASSANDRA-10751
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10751
> Project: Cassandra
>  Issue Type: Bug
> Environment: Hadoop 2.7.1 (HDP 2.3.2)
> Cassand

[jira] [Comment Edited] (CASSANDRA-10751) "Pool is shutdown" error when running Hadoop jobs on Yarn

2015-11-24 Thread Cyril Scetbon (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15024784#comment-15024784
 ] 

Cyril Scetbon edited comment on CASSANDRA-10751 at 11/24/15 4:34 PM:
-

[~alexliu68] The number of splits has not changed and is the same as with 
2.0.12. The strange errors are:
{code}
15/11/24 15:26:32 [cluster2-blocking-task-worker-1]: DEBUG 
core.HostConnectionPool: Creating new connection on busy pool to 
/10.234.62.20:9042
15/11/24 15:26:32 [cluster2-blocking-task-worker-0]: DEBUG 
core.HostConnectionPool: Creating new connection on busy pool to 
/10.234.62.20:9042
15/11/24 15:26:32 [cluster2-nio-worker-34]: DEBUG core.Connection: 
Connection[/10.234.62.20:9042-5, inFlight=0, closed=false] Connection opened 
successfully
15/11/24 15:26:32 [cluster2-nio-worker-35]: DEBUG core.Connection: 
Connection[/10.234.62.20:9042-6, inFlight=0, closed=false] Connection opened 
successfully
15/11/24 15:26:32 [cluster2-nio-worker-34]: DEBUG Host.STATES: 
[/10.234.62.20:9042] new connection created, total = 1
15/11/24 15:26:32 [cluster2-nio-worker-35]: DEBUG Host.STATES: 
[/10.234.62.20:9042] new connection created, total = 2
15/11/24 15:26:32 [cluster2-nio-worker-35]: DEBUG core.Connection: 
Connection[/10.234.62.20:9042-6, inFlight=0, closed=true] closing connection
15/11/24 15:26:32 [cluster2-nio-worker-34]: DEBUG core.Connection: 
Connection[/10.234.62.20:9042-5, inFlight=0, closed=true] closing connection
15/11/24 15:26:32 [cluster2-nio-worker-35]: DEBUG Host.STATES: 
[/10.234.62.20:9042] connection closed, remaining = 1
15/11/24 15:26:32 [cluster2-nio-worker-34]: DEBUG Host.STATES: 
[/10.234.62.20:9042] connection closed, remaining = 0
15/11/24 15:26:32 [cluster2-nio-worker-35]: DEBUG core.Connection: 
Connection[/10.234.62.20:9042-6, inFlight=0, closed=true] has already terminated
15/11/24 15:26:32 [cluster2-nio-worker-34]: DEBUG core.Connection: 
Connection[/10.234.62.20:9042-5, inFlight=0, closed=true] has already terminated
15/11/24 15:26:32 [cluster2-blocking-task-worker-0]: DEBUG Host.STATES: 
Defuncting Connection[/10.234.62.20:9042-6, inFlight=0, closed=true] because: 
[/10.234.62.20:9042] Error while setting keyspace
{code} 
I can see that the error "Error while setting keyspace" is displayed when there 
is an *ExecutionException*. It's just weird that it happens so quickly ...


was (Author: cscetbon):
[~alexliu68] The number of splits has not changed and is the same as with 
2.0.12. The strange errors are:
{code}
15/11/24 15:26:32 [cluster2-blocking-task-worker-1]: DEBUG 
core.HostConnectionPool: Creating new connection on busy pool to 
/10.234.62.20:9042
15/11/24 15:26:32 [cluster2-blocking-task-worker-0]: DEBUG 
core.HostConnectionPool: Creating new connection on busy pool to 
/10.234.62.20:9042
15/11/24 15:26:32 [cluster2-nio-worker-34]: DEBUG core.Connection: 
Connection[/10.234.62.20:9042-5, inFlight=0, closed=false] Connection opened 
successfully
15/11/24 15:26:32 [cluster2-nio-worker-35]: DEBUG core.Connection: 
Connection[/10.234.62.20:9042-6, inFlight=0, closed=false] Connection opened 
successfully
15/11/24 15:26:32 [cluster2-nio-worker-34]: DEBUG Host.STATES: 
[/10.234.62.20:9042] new connection created, total = 1
15/11/24 15:26:32 [cluster2-nio-worker-35]: DEBUG Host.STATES: 
[/10.234.62.20:9042] new connection created, total = 2
15/11/24 15:26:32 [cluster2-nio-worker-35]: DEBUG core.Connection: 
Connection[/10.234.62.20:9042-6, inFlight=0, closed=true] closing connection
15/11/24 15:26:32 [cluster2-nio-worker-34]: DEBUG core.Connection: 
Connection[/10.234.62.20:9042-5, inFlight=0, closed=true] closing connection
15/11/24 15:26:32 [cluster2-nio-worker-35]: DEBUG Host.STATES: 
[/10.234.62.20:9042] connection closed, remaining = 1
15/11/24 15:26:32 [cluster2-nio-worker-34]: DEBUG Host.STATES: 
[/10.234.62.20:9042] connection closed, remaining = 0
15/11/24 15:26:32 [cluster2-nio-worker-35]: DEBUG core.Connection: 
Connection[/10.234.62.20:9042-6, inFlight=0, closed=true] has already terminated
15/11/24 15:26:32 [cluster2-nio-worker-34]: DEBUG core.Connection: 
Connection[/10.234.62.20:9042-5, inFlight=0, closed=true] has already terminated
15/11/24 15:26:32 [cluster2-blocking-task-worker-0]: DEBUG Host.STATES: 
Defuncting Connection[/10.234.62.20:9042-6, inFlight=0, closed=true] because: 
[/10.234.62.20:9042] Error while setting keyspace
{code} 
I can see that the error "Error while setting keyspace" is displayed when there 
is an ExecutionException. It's just weird that it happens so quickly ...

> "Pool is shutdown" error when running Hadoop jobs on Yarn
> -
>
> Key: CASSANDRA-10751
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10751
> Project: Cassandra
>  Issue Type: Bug
> Environment: Hadoop 2.7.1 (HDP 2.3.2)
> Cassandra 

[jira] [Commented] (CASSANDRA-10751) "Pool is shutdown" error when running Hadoop jobs on Yarn

2015-11-24 Thread Cyril Scetbon (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15024784#comment-15024784
 ] 

Cyril Scetbon commented on CASSANDRA-10751:
---

[~alexliu68] The number of splits has not changed and is the same as with 
2.0.12. The strange errors are:
{code}
15/11/24 15:26:32 [cluster2-blocking-task-worker-1]: DEBUG 
core.HostConnectionPool: Creating new connection on busy pool to 
/10.234.62.20:9042
15/11/24 15:26:32 [cluster2-blocking-task-worker-0]: DEBUG 
core.HostConnectionPool: Creating new connection on busy pool to 
/10.234.62.20:9042
15/11/24 15:26:32 [cluster2-nio-worker-34]: DEBUG core.Connection: 
Connection[/10.234.62.20:9042-5, inFlight=0, closed=false] Connection opened 
successfully
15/11/24 15:26:32 [cluster2-nio-worker-35]: DEBUG core.Connection: 
Connection[/10.234.62.20:9042-6, inFlight=0, closed=false] Connection opened 
successfully
15/11/24 15:26:32 [cluster2-nio-worker-34]: DEBUG Host.STATES: 
[/10.234.62.20:9042] new connection created, total = 1
15/11/24 15:26:32 [cluster2-nio-worker-35]: DEBUG Host.STATES: 
[/10.234.62.20:9042] new connection created, total = 2
15/11/24 15:26:32 [cluster2-nio-worker-35]: DEBUG core.Connection: 
Connection[/10.234.62.20:9042-6, inFlight=0, closed=true] closing connection
15/11/24 15:26:32 [cluster2-nio-worker-34]: DEBUG core.Connection: 
Connection[/10.234.62.20:9042-5, inFlight=0, closed=true] closing connection
15/11/24 15:26:32 [cluster2-nio-worker-35]: DEBUG Host.STATES: 
[/10.234.62.20:9042] connection closed, remaining = 1
15/11/24 15:26:32 [cluster2-nio-worker-34]: DEBUG Host.STATES: 
[/10.234.62.20:9042] connection closed, remaining = 0
15/11/24 15:26:32 [cluster2-nio-worker-35]: DEBUG core.Connection: 
Connection[/10.234.62.20:9042-6, inFlight=0, closed=true] has already terminated
15/11/24 15:26:32 [cluster2-nio-worker-34]: DEBUG core.Connection: 
Connection[/10.234.62.20:9042-5, inFlight=0, closed=true] has already terminated
15/11/24 15:26:32 [cluster2-blocking-task-worker-0]: DEBUG Host.STATES: 
Defuncting Connection[/10.234.62.20:9042-6, inFlight=0, closed=true] because: 
[/10.234.62.20:9042] Error while setting keyspace
{code} 
I can see that the error "Error while setting keyspace" is displayed when there 
is a BusyConnectionException. It's just weird that it happens so quickly ...

> "Pool is shutdown" error when running Hadoop jobs on Yarn
> -
>
> Key: CASSANDRA-10751
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10751
> Project: Cassandra
>  Issue Type: Bug
> Environment: Hadoop 2.7.1 (HDP 2.3.2)
> Cassandra 2.1.11
>Reporter: Cyril Scetbon
>Assignee: Alex Liu
> Attachments: output.log
>
>
> Trying to execute a Hadoop job on Yarn, I get errors from Cassandra's 
> internal code. It seems that connections are shut down, but we can't understand 
> why ...
> Here is an extract of the errors. I have also attached a file with the complete 
> debug logs.
> {code}
> 15/11/22 20:05:54 [main]: DEBUG core.RequestHandler: Error querying 
> node006.internal.net/192.168.12.22:9042, trying next host (error is: 
> com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown)
> Failed with exception java.io.IOException:java.io.IOException: 
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
> tried for query failed (tried: node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
> 15/11/22 20:05:54 [main]: ERROR CliDriver: Failed with exception 
> java.io.IOException:java.io.IOException: 
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
> tried for query failed (tried: node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
> java.io.IOException: java.io.IOException: 
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
> tried for query failed (tried: node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:508)
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.pushRow(FetchOperator.java:415)
>   at org.apache.hadoop.hive.ql.exec.FetchTask.fetch(FetchTask.java:140)
>   at org.apache.hadoop.hive.ql.Driver.getResults(Driver.java:1672)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:233)
>   at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:165)
>   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java

[jira] [Commented] (CASSANDRA-10751) "Pool is shutdown" error when running Hadoop jobs on Yarn

2015-11-23 Thread Cyril Scetbon (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15023612#comment-15023612
 ] 

Cyril Scetbon commented on CASSANDRA-10751:
---

Cassandra 2.1.11. It was working fine with 2.0.12.

> "Pool is shutdown" error when running Hadoop jobs on Yarn
> -
>
> Key: CASSANDRA-10751
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10751
> Project: Cassandra
>  Issue Type: Bug
> Environment: Hadoop 2.7.1 (HDP 2.3.2)
> Cassandra 2.1.11
>Reporter: Cyril Scetbon
>Assignee: Alex Liu
> Attachments: output.log
>
>
> Trying to execute a Hadoop job on Yarn, I get errors from Cassandra's 
> internal code. It seems that connections are shut down, but we can't understand 
> why ...
> Here is an extract of the errors. I have also attached a file with the complete 
> debug logs.
> {code}
> 15/11/22 20:05:54 [main]: DEBUG core.RequestHandler: Error querying 
> node006.internal.net/192.168.12.22:9042, trying next host (error is: 
> com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown)
> Failed with exception java.io.IOException:java.io.IOException: 
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
> tried for query failed (tried: node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
> 15/11/22 20:05:54 [main]: ERROR CliDriver: Failed with exception 
> java.io.IOException:java.io.IOException: 
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
> tried for query failed (tried: node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
> java.io.IOException: java.io.IOException: 
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
> tried for query failed (tried: node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:508)
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.pushRow(FetchOperator.java:415)
>   at org.apache.hadoop.hive.ql.exec.FetchTask.fetch(FetchTask.java:140)
>   at org.apache.hadoop.hive.ql.Driver.getResults(Driver.java:1672)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:233)
>   at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:165)
>   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:376)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:736)
>   at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:681)
>   at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:621)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
>   at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> Caused by: java.io.IOException: 
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
> tried for query failed (tried: node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
>   at 
> org.apache.hadoop.hive.cassandra.input.cql.HiveCqlInputFormat.getRecordReader(HiveCqlInputFormat.java:132)
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator$FetchInputFormatSplit.getRecordReader(FetchOperator.java:674)
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.getRecordReader(FetchOperator.java:324)
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:446)
>   ... 15 more
> Caused by: com.datastax.driver.core.exceptions.NoHostAvailableException: All 
> host(s) tried for query failed (tried: 
> node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
>   at 
> com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:84)
>   at 
> com.datastax.driver.core.DriverThrowables.propagateCause(DriverThrowables.java:37)
>   at 
> com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:214)
>   at 
> com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:52)

[jira] [Commented] (CASSANDRA-10751) "Pool is shutdown" error when running Hadoop jobs on Yarn

2015-11-23 Thread Cyril Scetbon (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15023401#comment-15023401
 ] 

Cyril Scetbon commented on CASSANDRA-10751:
---

Hey [~alexliu68], I've assigned it to you as I know you're the one who can 
most easily figure out what's happening. Don't hesitate to assign it to 
someone else if you think I'm wrong. Thanks

> "Pool is shutdown" error when running Hadoop jobs on Yarn
> -
>
> Key: CASSANDRA-10751
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10751
> Project: Cassandra
>  Issue Type: Bug
> Environment: Hadoop 2.7.1 (HDP 2.3.2)
> Cassandra 2.1.11
>Reporter: Cyril Scetbon
>Assignee: Alex Liu
> Attachments: output.log
>
>
> Trying to execute a Hadoop job on Yarn, I get errors from Cassandra's 
> internal code. It seems that connections are shut down, but we can't 
> understand why ...
> Here is an extract of the errors. I also attach a file with the complete 
> debug logs.
> {code}
> 15/11/22 20:05:54 [main]: DEBUG core.RequestHandler: Error querying 
> node006.internal.net/192.168.12.22:9042, trying next host (error is: 
> com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown)
> Failed with exception java.io.IOException:java.io.IOException: 
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
> tried for query failed (tried: node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
> 15/11/22 20:05:54 [main]: ERROR CliDriver: Failed with exception 
> java.io.IOException:java.io.IOException: 
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
> tried for query failed (tried: node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
> java.io.IOException: java.io.IOException: 
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
> tried for query failed (tried: node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:508)
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.pushRow(FetchOperator.java:415)
>   at org.apache.hadoop.hive.ql.exec.FetchTask.fetch(FetchTask.java:140)
>   at org.apache.hadoop.hive.ql.Driver.getResults(Driver.java:1672)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:233)
>   at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:165)
>   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:376)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:736)
>   at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:681)
>   at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:621)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
>   at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> Caused by: java.io.IOException: 
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
> tried for query failed (tried: node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
>   at 
> org.apache.hadoop.hive.cassandra.input.cql.HiveCqlInputFormat.getRecordReader(HiveCqlInputFormat.java:132)
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator$FetchInputFormatSplit.getRecordReader(FetchOperator.java:674)
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.getRecordReader(FetchOperator.java:324)
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:446)
>   ... 15 more
> Caused by: com.datastax.driver.core.exceptions.NoHostAvailableException: All 
> host(s) tried for query failed (tried: 
> node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
>   at 
> com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:84)
>   at 
> com.datastax.driver.core.DriverThrowables.propagateCause(DriverThrowables.java:37)
>   at 
> com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:214)

[jira] [Updated] (CASSANDRA-10751) "Pool is shutdown" error when running Hadoop jobs on Yarn

2015-11-22 Thread Cyril Scetbon (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cyril Scetbon updated CASSANDRA-10751:
--
Description: 
Trying to execute a Hadoop job on Yarn, I get errors from Cassandra's internal 
code. It seems that connections are shut down, but we can't understand why ...
Here is an extract of the errors. I also attach a file with the complete debug 
logs.
{code}
15/11/22 20:05:54 [main]: DEBUG core.RequestHandler: Error querying 
node006.internal.net/192.168.12.22:9042, trying next host (error is: 
com.datastax.driver.core.ConnectionException: 
[node006.internal.net/192.168.12.22:9042] Pool is shutdown)
Failed with exception java.io.IOException:java.io.IOException: 
com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried 
for query failed (tried: node006.internal.net/192.168.12.22:9042 
(com.datastax.driver.core.ConnectionException: 
[node006.internal.net/192.168.12.22:9042] Pool is shutdown))
15/11/22 20:05:54 [main]: ERROR CliDriver: Failed with exception 
java.io.IOException:java.io.IOException: 
com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried 
for query failed (tried: node006.internal.net/192.168.12.22:9042 
(com.datastax.driver.core.ConnectionException: 
[node006.internal.net/192.168.12.22:9042] Pool is shutdown))
java.io.IOException: java.io.IOException: 
com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried 
for query failed (tried: node006.internal.net/192.168.12.22:9042 
(com.datastax.driver.core.ConnectionException: 
[node006.internal.net/192.168.12.22:9042] Pool is shutdown))
at 
org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:508)
at 
org.apache.hadoop.hive.ql.exec.FetchOperator.pushRow(FetchOperator.java:415)
at org.apache.hadoop.hive.ql.exec.FetchTask.fetch(FetchTask.java:140)
at org.apache.hadoop.hive.ql.Driver.getResults(Driver.java:1672)
at 
org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:233)
at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:165)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:376)
at 
org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:736)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:681)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:621)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: java.io.IOException: 
com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried 
for query failed (tried: node006.internal.net/192.168.12.22:9042 
(com.datastax.driver.core.ConnectionException: 
[node006.internal.net/192.168.12.22:9042] Pool is shutdown))
at 
org.apache.hadoop.hive.cassandra.input.cql.HiveCqlInputFormat.getRecordReader(HiveCqlInputFormat.java:132)
at 
org.apache.hadoop.hive.ql.exec.FetchOperator$FetchInputFormatSplit.getRecordReader(FetchOperator.java:674)
at 
org.apache.hadoop.hive.ql.exec.FetchOperator.getRecordReader(FetchOperator.java:324)
at 
org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:446)
... 15 more
Caused by: com.datastax.driver.core.exceptions.NoHostAvailableException: All 
host(s) tried for query failed (tried: node006.internal.net/192.168.12.22:9042 
(com.datastax.driver.core.ConnectionException: 
[node006.internal.net/192.168.12.22:9042] Pool is shutdown))
at 
com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:84)
at 
com.datastax.driver.core.DriverThrowables.propagateCause(DriverThrowables.java:37)
at 
com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:214)
at 
com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:52)
at 
com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:36)
at 
org.apache.cassandra.hadoop.cql3.CqlRecordReader.fetchKeys(CqlRecordReader.java:578)
at 
org.apache.cassandra.hadoop.cql3.CqlRecordReader.buildQuery(CqlRecordReader.java:526)
at 
org.apache.cassandra.hadoop.cql3.CqlRecordReader.initialize(CqlRecordReader.java:148)
{code}

  was:
Trying to execute a Hadoop job on Yarn, I get errors from Cassandra's internal 
code. It seems that connections are shut down, but we can't understand why ...
Here is an extract of the errors. I also attach a file with the complete debug 
logs.
{

[jira] [Updated] (CASSANDRA-10751) "Pool is shutdown" error when running Hadoop jobs on Yarn

2015-11-22 Thread Cyril Scetbon (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cyril Scetbon updated CASSANDRA-10751:
--
Description: 
Trying to execute a Hadoop job on Yarn, I get errors from Cassandra's internal 
code. It seems that connections are shut down, but we can't understand why ...
Here is an extract of the errors. I also attach a file with the complete debug 
logs.
{code}
15/11/22 20:05:54 [main]: DEBUG core.RequestHandler: Error querying 
node006.internal.net/192.168.12.22:9042, trying next host (error is: 
com.datastax.driver.core.ConnectionException: 
[node006.internal.net/192.168.12.22:9042] Pool is shutdown)
Failed with exception java.io.IOException:java.io.IOException: 
com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried 
for query failed (tried: node006.internal.net/192.168.12.22:9042 
(com.datastax.driver.core.ConnectionException: 
[node006.internal.net/192.168.12.22:9042] Pool is shutdown))
15/11/22 20:05:54 [main]: ERROR CliDriver: Failed with exception 
java.io.IOException:java.io.IOException: 
com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried 
for query failed (tried: node006.internal.net/192.168.12.22:9042 
(com.datastax.driver.core.ConnectionException: 
[node006.internal.net/192.168.12.22:9042] Pool is shutdown))
java.io.IOException: java.io.IOException: 
com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried 
for query failed (tried: node006.internal.net/192.168.12.22:9042 
(com.datastax.driver.core.ConnectionException: 
[node006.internal.net/192.168.12.22:9042] Pool is shutdown))
at 
org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:508)
at 
org.apache.hadoop.hive.ql.exec.FetchOperator.pushRow(FetchOperator.java:415)
at org.apache.hadoop.hive.ql.exec.FetchTask.fetch(FetchTask.java:140)
at org.apache.hadoop.hive.ql.Driver.getResults(Driver.java:1672)
at 
org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:233)
at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:165)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:376)
at 
org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:736)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:681)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:621)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: java.io.IOException: 
com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried 
for query failed (tried: node006.internal.net/192.168.12.22:9042 
(com.datastax.driver.core.ConnectionException: 
[node006.internal.net/192.168.12.22:9042] Pool is shutdown))
at 
org.apache.hadoop.hive.cassandra.input.cql.HiveCqlInputFormat.getRecordReader(HiveCqlInputFormat.java:132)
at 
org.apache.hadoop.hive.ql.exec.FetchOperator$FetchInputFormatSplit.getRecordReader(FetchOperator.java:674)
at 
org.apache.hadoop.hive.ql.exec.FetchOperator.getRecordReader(FetchOperator.java:324)
at 
org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:446)
... 15 more
Caused by: com.datastax.driver.core.exceptions.NoHostAvailableException: All 
host(s) tried for query failed (tried: node006.internal.net/192.168.12.22:9042 
(com.datastax.driver.core.ConnectionException: 
[node006.internal.net/192.168.12.22:9042] Pool is shutdown))
at 
com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:84)
at 
com.datastax.driver.core.DriverThrowables.propagateCause(DriverThrowables.java:37)
at 
com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:214)
at 
com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:52)
at 
com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:36)
at 
org.apache.cassandra.hadoop.cql3.CqlRecordReader.fetchKeys(CqlRecordReader.java:578)
at 
org.apache.cassandra.hadoop.cql3.CqlRecordReader.buildQuery(CqlRecordReader.java:526)
at 
org.apache.cassandra.hadoop.cql3.CqlRecordReader.initialize(CqlRecordReader.java:148)
{code}

  was:
Trying to execute a simple CQL3 query in a Hadoop job that runs on Yarn, I get 
errors from Cassandra's internal code. It seems that connections are shut 
down, but we can't understand why ...
Here is an extract of the errors. I also attach a file with the complete debug 
logs.

[jira] [Created] (CASSANDRA-10751) "Pool is shutdown" error when running Hadoop jobs on Yarn

2015-11-22 Thread Cyril Scetbon (JIRA)
Cyril Scetbon created CASSANDRA-10751:
-

 Summary: "Pool is shutdown" error when running Hadoop jobs on Yarn
 Key: CASSANDRA-10751
 URL: https://issues.apache.org/jira/browse/CASSANDRA-10751
 Project: Cassandra
  Issue Type: Bug
 Environment: Hadoop 2.7.1 (HDP 2.3.2)
Cassandra 2.1.11
Reporter: Cyril Scetbon
Assignee: Alex Liu
 Attachments: output.log

Trying to execute a simple CQL3 query in a Hadoop job that runs on Yarn, I get 
errors from Cassandra's internal code. It seems that connections are shut 
down, but we can't understand why ...
Here is an extract of the errors. I also attach a file with the complete debug 
logs.
{code}
15/11/22 20:05:54 [main]: DEBUG core.RequestHandler: Error querying 
node006.internal.net/192.168.12.22:9042, trying next host (error is: 
com.datastax.driver.core.ConnectionException: 
[node006.internal.net/192.168.12.22:9042] Pool is shutdown)
Failed with exception java.io.IOException:java.io.IOException: 
com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried 
for query failed (tried: node006.internal.net/192.168.12.22:9042 
(com.datastax.driver.core.ConnectionException: 
[node006.internal.net/192.168.12.22:9042] Pool is shutdown))
15/11/22 20:05:54 [main]: ERROR CliDriver: Failed with exception 
java.io.IOException:java.io.IOException: 
com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried 
for query failed (tried: node006.internal.net/192.168.12.22:9042 
(com.datastax.driver.core.ConnectionException: 
[node006.internal.net/192.168.12.22:9042] Pool is shutdown))
java.io.IOException: java.io.IOException: 
com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried 
for query failed (tried: node006.internal.net/192.168.12.22:9042 
(com.datastax.driver.core.ConnectionException: 
[node006.internal.net/192.168.12.22:9042] Pool is shutdown))
at 
org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:508)
at 
org.apache.hadoop.hive.ql.exec.FetchOperator.pushRow(FetchOperator.java:415)
at org.apache.hadoop.hive.ql.exec.FetchTask.fetch(FetchTask.java:140)
at org.apache.hadoop.hive.ql.Driver.getResults(Driver.java:1672)
at 
org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:233)
at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:165)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:376)
at 
org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:736)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:681)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:621)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: java.io.IOException: 
com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried 
for query failed (tried: node006.internal.net/192.168.12.22:9042 
(com.datastax.driver.core.ConnectionException: 
[node006.internal.net/192.168.12.22:9042] Pool is shutdown))
at 
org.apache.hadoop.hive.cassandra.input.cql.HiveCqlInputFormat.getRecordReader(HiveCqlInputFormat.java:132)
at 
org.apache.hadoop.hive.ql.exec.FetchOperator$FetchInputFormatSplit.getRecordReader(FetchOperator.java:674)
at 
org.apache.hadoop.hive.ql.exec.FetchOperator.getRecordReader(FetchOperator.java:324)
at 
org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:446)
... 15 more
Caused by: com.datastax.driver.core.exceptions.NoHostAvailableException: All 
host(s) tried for query failed (tried: node006.internal.net/192.168.12.22:9042 
(com.datastax.driver.core.ConnectionException: 
[node006.internal.net/192.168.12.22:9042] Pool is shutdown))
at 
com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:84)
at 
com.datastax.driver.core.DriverThrowables.propagateCause(DriverThrowables.java:37)
at 
com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:214)
at 
com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:52)
at 
com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:36)
at 
org.apache.cassandra.hadoop.cql3.CqlRecordReader.fetchKeys(CqlRecordReader.java:578)
at 
org.apache.cassandra.hadoop.cql3.CqlRecordReader.buildQuery(CqlRecordReader.java:526)
at 
org.apache.cassandra.hadoop.cql3.CqlRecordReader.initialize(CqlRecordReader.java:148)
{code}

[jira] [Commented] (CASSANDRA-7731) Get max values for live/tombstone cells per slice

2015-08-09 Thread Cyril Scetbon (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14679315#comment-14679315
 ] 

Cyril Scetbon commented on CASSANDRA-7731:
--

[~snazy] Thank you for your work

> Get max values for live/tombstone cells per slice
> -
>
> Key: CASSANDRA-7731
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7731
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Cyril Scetbon
>Assignee: Robert Stupp
>Priority: Minor
> Fix For: 2.2.0
>
> Attachments: 7731-2.0.txt, 7731-2.1.txt
>
>
> I think you should not say that slice statistics are valid for the [last 
> five minutes 
> |https://github.com/apache/cassandra/blob/cassandra-2.0/src/java/org/apache/cassandra/tools/NodeCmd.java#L955-L956]
>  in the CFSTATS command of nodetool. I've read the documentation from yammer 
> for Histograms, and there is no way to force values to expire after x 
> minutes except by 
> [clearing|http://grepcode.com/file/repo1.maven.org/maven2/com.yammer.metrics/metrics-core/2.1.2/com/yammer/metrics/core/Histogram.java#96]
>  it. The only thing I can see is that the last snapshot used to provide the 
> median (or whatever is used instead) is based on 1028 values.
> I think we should also be able to detect that some requests are accessing a 
> lot of live/tombstone cells per query, and that's not possible for now 
> without activating DEBUG for SliceQueryFilter, for example, and tweaking the 
> threshold. Currently, as nodetool cfstats returns the median, if a small 
> fraction of the queries scan a lot of live/tombstone cells we miss it!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8594) pidfile is never filled by cassandra

2015-01-15 Thread Cyril Scetbon (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14278742#comment-14278742
 ] 

Cyril Scetbon commented on CASSANDRA-8594:
--

[~JoshuaMcKenzie] you're right, the line you referenced creates the pid file. 
However, I just discovered that my debian script is using the foreground 
option:
{code}
( nohup start-stop-daemon -S -c cassandra -a /usr/sbin/cassandra -p "$PIDFILE" 
--umask "30" -- \
-f -p "$PIDFILE" -H "$heap_dump_f" -E "$error_log_f" > 
$CASSANDRA_LOG_DIR/output.log 2>&1 || return 2 ) &
{code}
and that's why the pid file is not created. I need to track down why mine is 
using this option and where it comes from. Thank you guys

> pidfile is never filled by cassandra
> 
>
> Key: CASSANDRA-8594
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8594
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Cyril Scetbon
>Assignee: Cyril Scetbon
> Fix For: 2.0.13
>
>
> The pid file is never filled by cassandra. There is only a File object that 
> is created with [those 
> lines|https://github.com/cscetbon/cassandra/blob/cassandra-2.0.10/src/java/org/apache/cassandra/service/CassandraDaemon.java#L498-L501]
> Here is a 
> [fix|https://github.com/cscetbon/cassandra/commit/d0c5e0c9be00e48e6d0cd0de208c53274f1919c0.patch]
>  that writes the current PID into the pidfile



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8594) pidfile is never filled by cassandra

2015-01-12 Thread Cyril Scetbon (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14274570#comment-14274570
 ] 

Cyril Scetbon commented on CASSANDRA-8594:
--

bin/cassandra only sets the dedicated cassandra parameter 
[here|https://github.com/cscetbon/cassandra/blob/cassandra-2.0.10/bin/cassandra#L139]
 which doesn't write the pid at all. In 1.2, jsvc was used and it was this tool 
that [was setting the 
pid|https://github.com/apache/cassandra/blob/cassandra-1.2.13/debian/init#L136-L139].
 But now that we don't use it (2.0+) we use 
[start-stop-daemon|https://github.com/cscetbon/cassandra/blob/cassandra-2.0.10/debian/init#L96]
 without asking it to create the pidfile. And as bin/cassandra says that it 
[stores the 
pid|https://github.com/cscetbon/cassandra/blob/cassandra-2.0.10/bin/cassandra#L21]
 if we use -p pidfile, I think it's time to implement it in cassandra.
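
For what it's worth, a minimal sketch of what writing the pidfile from the 
daemon itself could look like; this is only an illustration, under the 
assumption that the RuntimeMXBean name has the HotSpot "pid@host" form, and 
not the actual patch:
{code}
import java.io.File;
import java.io.FileWriter;
import java.io.IOException;
import java.lang.management.ManagementFactory;

public class PidfileWriter {
    public static void writePid(String path) throws IOException {
        // On HotSpot JVMs the runtime name is "<pid>@<hostname>"; this is a
        // convention, not something the JMX spec guarantees.
        String pid = ManagementFactory.getRuntimeMXBean().getName().split("@")[0];
        File pidFile = new File(path);
        pidFile.deleteOnExit(); // remove the pidfile when the JVM exits
        FileWriter out = new FileWriter(pidFile);
        try {
            out.write(pid);
        } finally {
            out.close();
        }
    }
}
{code}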

> pidfile is never filled by cassandra
> 
>
> Key: CASSANDRA-8594
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8594
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Cyril Scetbon
>Assignee: Cyril Scetbon
> Fix For: 2.0.13
>
>
> The pid file is never filled by cassandra. There is only a File object that 
> is created with [those 
> lines|https://github.com/cscetbon/cassandra/blob/cassandra-2.0.10/src/java/org/apache/cassandra/service/CassandraDaemon.java#L498-L501]
> Here is a 
> [fix|https://github.com/cscetbon/cassandra/commit/d0c5e0c9be00e48e6d0cd0de208c53274f1919c0.patch]
>  that writes the current PID into the pidfile



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-8594) pidfile is never filled by cassandra

2015-01-10 Thread Cyril Scetbon (JIRA)
Cyril Scetbon created CASSANDRA-8594:


 Summary: pidfile is never filled by cassandra
 Key: CASSANDRA-8594
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8594
 Project: Cassandra
  Issue Type: Bug
Reporter: Cyril Scetbon


The pid file is never filled by cassandra. There is only a File object that is 
created with [those 
lines|https://github.com/cscetbon/cassandra/blob/cassandra-2.0.10/src/java/org/apache/cassandra/service/CassandraDaemon.java#L498-L501]
Here is a 
[fix|https://github.com/cscetbon/cassandra/commit/d0c5e0c9be00e48e6d0cd0de208c53274f1919c0.patch]
 that writes the current PID into the pidfile



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-8486) Can't authenticate using CqlRecordReader

2014-12-15 Thread Cyril Scetbon (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cyril Scetbon resolved CASSANDRA-8486.
--
Resolution: Not a Problem

I was wrong. We can fix it by using CqlConfigHelper
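
For anyone landing here, a minimal sketch of wiring the credentials into the 
Hadoop Configuration; the setter names are from memory of the 2.0 hadoop 
helpers and should be double-checked against your Cassandra version, and the 
address, keyspace, table and credentials are placeholders:
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.cassandra.hadoop.ConfigHelper;

public class HadoopAuthSetup {
    public static Configuration configure() {
        Configuration conf = new Configuration();
        ConfigHelper.setInputInitialAddress(conf, "127.0.0.1"); // placeholder
        ConfigHelper.setInputColumnFamily(conf, "my_ks", "my_cf"); // placeholders
        // Credentials are read back by the record reader when it opens the
        // connection to the cluster.
        ConfigHelper.setInputKeyspaceUserName(conf, "cassandra");
        ConfigHelper.setInputKeyspacePassword(conf, "cassandra");
        return conf;
    }
}
{code}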

> Can't authenticate using CqlRecordReader
> 
>
> Key: CASSANDRA-8486
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8486
> Project: Cassandra
>  Issue Type: Bug
>  Components: Hadoop
>Reporter: Cyril Scetbon
>Assignee: Alex Liu
>
> Using CqlPagingRecordReader, it was possible to use authentication to 
> connect to the cassandra cluster, but now that we only have CqlRecordReader 
> we can't anymore.
> We should put [this 
> code|https://github.com/apache/cassandra/blob/cassandra-2.0.9/src/java/org/apache/cassandra/hadoop/cql3/CqlPagingRecordReader.java#L140-L153]
>   back in CqlRecordReader



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-8486) Can't authenticate using CqlRecordReader

2014-12-15 Thread Cyril Scetbon (JIRA)
Cyril Scetbon created CASSANDRA-8486:


 Summary: Can't authenticate using CqlRecordReader
 Key: CASSANDRA-8486
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8486
 Project: Cassandra
  Issue Type: Bug
  Components: Hadoop
Reporter: Cyril Scetbon


Using CqlPagingRecordReader, it was possible to use authentication to connect 
to the cassandra cluster, but now that we only have CqlRecordReader we can't 
anymore.

We should put [this 
code|https://github.com/apache/cassandra/blob/cassandra-2.0.9/src/java/org/apache/cassandra/hadoop/cql3/CqlPagingRecordReader.java#L140-L153]
  back in CqlRecordReader



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8486) Can't authenticate using CqlRecordReader

2014-12-15 Thread Cyril Scetbon (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cyril Scetbon updated CASSANDRA-8486:
-
Assignee: Alex Liu

> Can't authenticate using CqlRecordReader
> 
>
> Key: CASSANDRA-8486
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8486
> Project: Cassandra
>  Issue Type: Bug
>  Components: Hadoop
>Reporter: Cyril Scetbon
>Assignee: Alex Liu
>
> Using CqlPagingRecordReader, it was possible to use authentication to 
> connect to the cassandra cluster, but now that we only have CqlRecordReader 
> we can't anymore.
> We should put [this 
> code|https://github.com/apache/cassandra/blob/cassandra-2.0.9/src/java/org/apache/cassandra/hadoop/cql3/CqlPagingRecordReader.java#L140-L153]
>   back in CqlRecordReader



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7491) Incorrect thrift-server dependency in 2.0 poms

2014-10-29 Thread Cyril Scetbon (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14188155#comment-14188155
 ] 

Cyril Scetbon commented on CASSANDRA-7491:
--

[~brandon.williams] Here is a [patch|http://pastebin.com/akTkS0dx] that shows 
how I fix the thrift-server jar version. I think we shouldn't have to embed 
packages that are already available, and should use this approach to 
get/install them when building Cassandra's sources.

> Incorrect thrift-server dependency in 2.0 poms
> --
>
> Key: CASSANDRA-7491
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7491
> Project: Cassandra
>  Issue Type: Bug
>  Components: Packaging
>Reporter: Sam Tunnicliffe
> Fix For: 2.0.12
>
>
> On the 2.0 branch we recently replaced thrift-server-0.3.3.jar with 
> thrift-server-internal-only-0.3.3.jar (commit says CASSANDRA-6545, but I 
> don't think that's right), but didn't update the generated pom that gets 
> deployed to mvn central. The upshot is that the poms on maven central for 
> 2.0.8 & 2.0.9 specify their dependencies incorrectly. So any project pulling 
> in those versions of cassandra-all as a dependency will incorrectly include 
> the old jar.
> However, on 2.1 & trunk the internal-only jar was subsequently replaced by 
> thrift-server-0.3.5.jar (CASSANDRA-6285), which *is* available in mvn 
> central. build.xml has also been updated correctly on these branches.
> [~xedin], is there any reason for not switching 2.0 to 
> thrift-server-0.3.5.jar ?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7491) Incorrect thrift-server dependency in 2.0 poms

2014-10-27 Thread Cyril Scetbon (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14185310#comment-14185310
 ] 

Cyril Scetbon commented on CASSANDRA-7491:
--

You mean we can't fix it in build.xml and get it from an external repository? 
Using something like a copyFile command?

> Incorrect thrift-server dependency in 2.0 poms
> --
>
> Key: CASSANDRA-7491
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7491
> Project: Cassandra
>  Issue Type: Bug
>  Components: Packaging
>Reporter: Sam Tunnicliffe
> Fix For: 2.0.12
>
>
> On the 2.0 branch we recently replaced thrift-server-0.3.3.jar with 
> thrift-server-internal-only-0.3.3.jar (commit says CASSANDRA-6545, but I 
> don't think that's right), but didn't update the generated pom that gets 
> deployed to mvn central. The upshot is that the poms on maven central for 
> 2.0.8 & 2.0.9 specify their dependencies incorrectly. So any project pulling 
> in those versions of cassandra-all as a dependency will incorrectly include 
> the old jar.
> However, on 2.1 & trunk the internal-only jar was subsequently replaced by 
> thrift-server-0.3.5.jar (CASSANDRA-6285), which *is* available in mvn 
> central. build.xml has also been updated correctly on these branches.
> [~xedin], is there any reason for not switching 2.0 to 
> thrift-server-0.3.5.jar ?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7491) Incorrect thrift-server dependency in 2.0 poms

2014-10-27 Thread Cyril Scetbon (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14185293#comment-14185293
 ] 

Cyril Scetbon commented on CASSANDRA-7491:
--

Any news about it? I can't understand why we still embed a thrift (3.7) jar in 
the source tree when we can get it from maven repositories.


> Incorrect thrift-server dependency in 2.0 poms
> --
>
> Key: CASSANDRA-7491
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7491
> Project: Cassandra
>  Issue Type: Bug
>  Components: Packaging
>Reporter: Sam Tunnicliffe
> Fix For: 2.0.12
>
>
> On the 2.0 branch we recently replaced thrift-server-0.3.3.jar with 
> thrift-server-internal-only-0.3.3.jar (commit says CASSANDRA-6545, but I 
> don't think that's right), but didn't update the generated pom that gets 
> deployed to mvn central. The upshot is that the poms on maven central for 
> 2.0.8 & 2.0.9 specify their dependencies incorrectly. So any project pulling 
> in those versions of cassandra-all as a dependency will incorrectly include 
> the old jar.
> However, on 2.1 & trunk the internal-only jar was subsequently replaced by 
> thrift-server-0.3.5.jar (CASSANDRA-6285), which *is* available in mvn 
> central. build.xml has also been updated correctly on these branches.
> [~xedin], is there any reason for not switching 2.0 to 
> thrift-server-0.3.5.jar ?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7731) Get max values for live/tombstone cells per slice

2014-09-28 Thread Cyril Scetbon (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14151011#comment-14151011
 ] 

Cyril Scetbon commented on CASSANDRA-7731:
--

Yeah, but what is displayed depends on the way it's calculated, if it's 
actually wrong ... I glanced at the other tickets, and it has been open and 
not updated since December 2013, so I'm not confident it will be fixed soon. I 
really think it's an important subject.

> Get max values for live/tombstone cells per slice
> -
>
> Key: CASSANDRA-7731
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7731
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Cyril Scetbon
>Assignee: Robert Stupp
>Priority: Minor
> Fix For: 2.1.1
>
> Attachments: 7731-2.0.txt, 7731-2.1.txt
>
>
> I think you should not say that slice statistics are valid for the [last 
> five minutes 
> |https://github.com/apache/cassandra/blob/cassandra-2.0/src/java/org/apache/cassandra/tools/NodeCmd.java#L955-L956]
>  in the CFSTATS command of nodetool. I've read the documentation from yammer 
> for Histograms, and there is no way to force values to expire after x 
> minutes except by 
> [clearing|http://grepcode.com/file/repo1.maven.org/maven2/com.yammer.metrics/metrics-core/2.1.2/com/yammer/metrics/core/Histogram.java#96]
>  it. The only thing I can see is that the last snapshot used to provide the 
> median (or whatever is used instead) is based on 1028 values.
> I think we should also be able to detect that some requests are accessing a 
> lot of live/tombstone cells per query, and that's not possible for now 
> without activating DEBUG for SliceQueryFilter, for example, and tweaking the 
> threshold. Currently, as nodetool cfstats returns the median, if a small 
> fraction of the queries scan a lot of live/tombstone cells we miss it!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7731) Get max values for live/tombstone cells per slice

2014-09-27 Thread Cyril Scetbon (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14150793#comment-14150793
 ] 

Cyril Scetbon commented on CASSANDRA-7731:
--

I can't agree. Monitoring the max value in order to get alerted is a 
widespread practice. And knowing ASAP that it has been fixed (i.e. the max is 
reduced) is also needed. Currently, AFAIK, the only way to know it in real 
time is by enabling debug mode and tweaking a parameter in the configuration 
file.
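
To make the monitoring point concrete, a sketch of polling such a histogram 
over JMX; the MBean name follows the per-table metrics naming pattern as I 
recall it from 2.x, and the host, port, keyspace and table are placeholders, 
so treat all of it as an assumption to verify:
{code}
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class PollSliceMax {
    public static void main(String[] args) throws Exception {
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://127.0.0.1:7199/jmxrmi"); // placeholder
        JMXConnector jmxc = JMXConnectorFactory.connect(url);
        try {
            MBeanServerConnection mbs = jmxc.getMBeanServerConnection();
            // Assumed MBean name for per-table metrics; adjust keyspace and
            // scope to your schema.
            ObjectName name = new ObjectName(
                    "org.apache.cassandra.metrics:type=ColumnFamily,"
                    + "keyspace=my_ks,scope=my_cf,name=TombstoneScannedHistogram");
            System.out.println("Max = " + mbs.getAttribute(name, "Max"));
        } finally {
            jmxc.close();
        }
    }
}
{code}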

> Get max values for live/tombstone cells per slice
> -
>
> Key: CASSANDRA-7731
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7731
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Cyril Scetbon
>Assignee: Robert Stupp
>Priority: Minor
> Fix For: 2.1.1
>
> Attachments: 7731-2.0.txt, 7731-2.1.txt
>
>
> I think you should not say that slice statistics are valid for the [last 
> five minutes 
> |https://github.com/apache/cassandra/blob/cassandra-2.0/src/java/org/apache/cassandra/tools/NodeCmd.java#L955-L956]
>  in the CFSTATS command of nodetool. I've read the documentation from yammer 
> for Histograms, and there is no way to force values to expire after x 
> minutes except by 
> [clearing|http://grepcode.com/file/repo1.maven.org/maven2/com.yammer.metrics/metrics-core/2.1.2/com/yammer/metrics/core/Histogram.java#96]
>  it. The only thing I can see is that the last snapshot used to provide the 
> median (or whatever is used instead) is based on 1028 values.
> I think we should also be able to detect that some requests are accessing a 
> lot of live/tombstone cells per query, and that's not possible for now 
> without activating DEBUG for SliceQueryFilter, for example, and tweaking the 
> threshold. Currently, as nodetool cfstats returns the median, if a small 
> fraction of the queries scan a lot of live/tombstone cells we miss it!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7731) Get max values for live/tombstone cells per slice

2014-09-15 Thread Cyril Scetbon (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14135053#comment-14135053
 ] 

Cyril Scetbon commented on CASSANDRA-7731:
--

Thank you Robert. I'm eager to read your conclusions :)

> Get max values for live/tombstone cells per slice
> -
>
> Key: CASSANDRA-7731
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7731
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Cyril Scetbon
>Assignee: Robert Stupp
>Priority: Minor
> Fix For: 2.1.1
>
> Attachments: 7731-2.0.txt, 7731-2.1.txt
>
>
> I think you should not say that slice statistics are valid for the [last 
> five minutes 
> |https://github.com/apache/cassandra/blob/cassandra-2.0/src/java/org/apache/cassandra/tools/NodeCmd.java#L955-L956]
>  in the CFSTATS command of nodetool. I've read the documentation from yammer 
> for Histograms, and there is no way to force values to expire after x 
> minutes except by 
> [clearing|http://grepcode.com/file/repo1.maven.org/maven2/com.yammer.metrics/metrics-core/2.1.2/com/yammer/metrics/core/Histogram.java#96]
>  it. The only thing I can see is that the last snapshot used to provide the 
> median (or whatever is used instead) is based on 1028 values.
> I think we should also be able to detect that some requests are accessing a 
> lot of live/tombstone cells per query, and that's not possible for now 
> without activating DEBUG for SliceQueryFilter, for example, and tweaking the 
> threshold. Currently, as nodetool cfstats returns the median, if a small 
> fraction of the queries scan a lot of live/tombstone cells we miss it!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7731) Get max values for live/tombstone cells per slice

2014-09-15 Thread Cyril Scetbon (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14134150#comment-14134150
 ] 

Cyril Scetbon commented on CASSANDRA-7731:
--

You're right about the merge, I read too fast. [~snazy] what do you think 
about this issue?

> Get max values for live/tombstone cells per slice
> -
>
> Key: CASSANDRA-7731
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7731
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Cyril Scetbon
>Assignee: Robert Stupp
>Priority: Minor
> Fix For: 2.1.1
>
> Attachments: 7731-2.0.txt, 7731-2.1.txt
>
>
> I think you should not say that slice statistics are valid for the [last 
> five minutes 
> |https://github.com/apache/cassandra/blob/cassandra-2.0/src/java/org/apache/cassandra/tools/NodeCmd.java#L955-L956]
>  in the CFSTATS command of nodetool. I've read the documentation from yammer 
> for Histograms, and there is no way to force values to expire after x 
> minutes except by 
> [clearing|http://grepcode.com/file/repo1.maven.org/maven2/com.yammer.metrics/metrics-core/2.1.2/com/yammer/metrics/core/Histogram.java#96]
>  it. The only thing I can see is that the last snapshot used to provide the 
> median (or whatever is used instead) is based on 1028 values.
> I think we should also be able to detect that some requests are accessing a 
> lot of live/tombstone cells per query, and that's not possible for now 
> without activating DEBUG for SliceQueryFilter, for example, and tweaking the 
> threshold. Currently, as nodetool cfstats returns the median, if a small 
> fraction of the queries scan a lot of live/tombstone cells we miss it!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7731) Get max values for live/tombstone cells per slice

2014-09-15 Thread Cyril Scetbon (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14134124#comment-14134124
 ] 

Cyril Scetbon commented on CASSANDRA-7731:
--

I hope it's not that one, because the bug is still open :(

> Get max values for live/tombstone cells per slice
> -
>
> Key: CASSANDRA-7731
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7731
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Cyril Scetbon
>Assignee: Robert Stupp
>Priority: Minor
> Fix For: 2.1.1
>
> Attachments: 7731-2.0.txt, 7731-2.1.txt
>
>
> I think you should not say that slice statistics are valid for the [last 
> five minutes 
> |https://github.com/apache/cassandra/blob/cassandra-2.0/src/java/org/apache/cassandra/tools/NodeCmd.java#L955-L956]
>  in the CFSTATS command of nodetool. I've read the documentation from yammer 
> for Histograms, and there is no way to force values to expire after x 
> minutes except by 
> [clearing|http://grepcode.com/file/repo1.maven.org/maven2/com.yammer.metrics/metrics-core/2.1.2/com/yammer/metrics/core/Histogram.java#96]
>  it. The only thing I can see is that the last snapshot used to provide the 
> median (or whatever is used instead) is based on 1028 values.
> I think we should also be able to detect that some requests are accessing a 
> lot of live/tombstone cells per query, and that's not possible for now 
> without activating DEBUG for SliceQueryFilter, for example, and tweaking the 
> threshold. Currently, as nodetool cfstats returns the median, if a small 
> fraction of the queries scan a lot of live/tombstone cells we miss it!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (CASSANDRA-7731) Get max values for live/tombstone cells per slice

2014-09-15 Thread Cyril Scetbon (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7731?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cyril Scetbon reopened CASSANDRA-7731:
--
Reproduced In: 2.0.9, 1.2.18  (was: 1.2.18, 2.0.9)

> Get max values for live/tombstone cells per slice
> -
>
> Key: CASSANDRA-7731
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7731
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Cyril Scetbon
>Assignee: Robert Stupp
>Priority: Minor
> Fix For: 2.1.1
>
> Attachments: 7731-2.0.txt, 7731-2.1.txt
>
>
> I think you should not say that slice statistics are valid for the [last 
> five minutes 
> |https://github.com/apache/cassandra/blob/cassandra-2.0/src/java/org/apache/cassandra/tools/NodeCmd.java#L955-L956]
>  in the CFSTATS command of nodetool. I've read the documentation from yammer 
> for Histograms, and there is no way to force values to expire after x 
> minutes except by 
> [clearing|http://grepcode.com/file/repo1.maven.org/maven2/com.yammer.metrics/metrics-core/2.1.2/com/yammer/metrics/core/Histogram.java#96]
>  it. The only thing I can see is that the last snapshot used to provide the 
> median (or whatever is used instead) is based on 1028 values.
> I think we should also be able to detect that some requests are accessing a 
> lot of live/tombstone cells per query, and that's not possible for now 
> without activating DEBUG for SliceQueryFilter, for example, and tweaking the 
> threshold. Currently, as nodetool cfstats returns the median, if a small 
> fraction of the queries scan a lot of live/tombstone cells we miss it!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-7731) Get max values for live/tombstone cells per slice

2014-09-15 Thread Cyril Scetbon (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14134088#comment-14134088
 ] 

Cyril Scetbon edited comment on CASSANDRA-7731 at 9/15/14 4:34 PM:
---

For the 2.1 patch, I understand that it could not work as expected, as it's 
not using a percentile when it calls 
[HistogramMBean.getMax|https://github.com/dropwizard/metrics/blob/v2.2.0/metrics-core/src/main/java/com/yammer/metrics/reporting/JmxReporter.java#L210-L212]
 and you said that non-percentile functions return values accumulated since 
the application started. However, I'm using the [2.0 
patch|https://issues.apache.org/jira/secure/attachment/12661546/7731-2.0.txt] 
which internally uses metric.liveScannedHistogram.cf.getSnapshot().getValue(1d) 
and so gets the maximum from a percentile. Yet, as you saw in my logs, it 
doesn't work any better and returns an old maximum.



was (Author: cscetbon):
For the 2.1 patch, I understand that it could not work as expected, as it's 
not using a percentile when it calls 
[HistogramMBean.getMax|https://github.com/dropwizard/metrics/blob/v2.2.0/metrics-core/src/main/java/com/yammer/metrics/reporting/JmxReporter.java#L210-L212].
 However, I'm using the [2.0 
patch|https://issues.apache.org/jira/secure/attachment/12661546/7731-2.0.txt] 
which internally uses metric.liveScannedHistogram.cf.getSnapshot().getValue(1d) 
and so gets the maximum from a percentile. Yet, as you saw in my logs, it 
doesn't work any better and returns an old maximum.


> Get max values for live/tombstone cells per slice
> -
>
> Key: CASSANDRA-7731
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7731
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Cyril Scetbon
>Assignee: Robert Stupp
>Priority: Minor
> Fix For: 2.1.1
>
> Attachments: 7731-2.0.txt, 7731-2.1.txt
>
>
> I think you should not say that slice statistics are valid for the [last 
> five minutes 
> |https://github.com/apache/cassandra/blob/cassandra-2.0/src/java/org/apache/cassandra/tools/NodeCmd.java#L955-L956]
>  in the CFSTATS command of nodetool. I've read the documentation from yammer 
> for Histograms, and there is no way to force values to expire after x 
> minutes except by 
> [clearing|http://grepcode.com/file/repo1.maven.org/maven2/com.yammer.metrics/metrics-core/2.1.2/com/yammer/metrics/core/Histogram.java#96]
>  it. The only thing I can see is that the last snapshot used to provide the 
> median (or whatever is used instead) is based on 1028 values.
> I think we should also be able to detect that some requests are accessing a 
> lot of live/tombstone cells per query, and that's not possible for now 
> without activating DEBUG for SliceQueryFilter, for example, and tweaking the 
> threshold. Currently, as nodetool cfstats returns the median, if a small 
> fraction of the queries scan a lot of live/tombstone cells we miss it!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7731) Get max values for live/tombstone cells per slice

2014-09-15 Thread Cyril Scetbon (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14134088#comment-14134088
 ] 

Cyril Scetbon commented on CASSANDRA-7731:
--

For the 2.1 patch, I understand that it could not work as expected, as it's 
not using a percentile when it calls 
[HistogramMBean.getMax|https://github.com/dropwizard/metrics/blob/v2.2.0/metrics-core/src/main/java/com/yammer/metrics/reporting/JmxReporter.java#L210-L212].
 However, I'm using the [2.0 
patch|https://issues.apache.org/jira/secure/attachment/12661546/7731-2.0.txt] 
which internally uses metric.liveScannedHistogram.cf.getSnapshot().getValue(1d) 
and so gets the maximum from a percentile. Yet, as you saw in my logs, it 
doesn't work any better and returns an old maximum.
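
To illustrate the difference being discussed, a small standalone sketch 
against the yammer metrics 2.x API, comparing the all-time max with the 
biased reservoir's 100th percentile; the metric name and values are made up 
for the example:
{code}
import com.yammer.metrics.Metrics;
import com.yammer.metrics.core.Histogram;
import com.yammer.metrics.stats.Snapshot;

public class MaxVsSnapshot {
    public static void main(String[] args) {
        // biased = true -> exponentially decaying (forward-decaying) reservoir
        Histogram h = Metrics.newHistogram(MaxVsSnapshot.class, "cellsPerSlice", true);
        h.update(50000); // one old spike
        for (int i = 0; i < 100000; i++) {
            h.update(100); // steady recent traffic
        }
        Snapshot s = h.getSnapshot();
        // max() never decays: it reports 50000 for the life of the process.
        System.out.println("all-time max   = " + h.max());
        // getValue(1d) is computed from the 1028-entry reservoir, so the old
        // spike only disappears once it is evicted from the sample.
        System.out.println("reservoir p100 = " + s.getValue(1d));
    }
}
{code}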


> Get max values for live/tombstone cells per slice
> -
>
> Key: CASSANDRA-7731
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7731
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Cyril Scetbon
>Assignee: Robert Stupp
>Priority: Minor
> Fix For: 2.1.1
>
> Attachments: 7731-2.0.txt, 7731-2.1.txt
>
>
> I think you should not say that slice statistics are valid for the [last 
> five minutes 
> |https://github.com/apache/cassandra/blob/cassandra-2.0/src/java/org/apache/cassandra/tools/NodeCmd.java#L955-L956]
>  in the CFSTATS command of nodetool. I've read the documentation from yammer 
> for Histograms, and there is no way to force values to expire after x 
> minutes except by 
> [clearing|http://grepcode.com/file/repo1.maven.org/maven2/com.yammer.metrics/metrics-core/2.1.2/com/yammer/metrics/core/Histogram.java#96]
>  it. The only thing I can see is that the last snapshot used to provide the 
> median (or whatever is used instead) is based on 1028 values.
> I think we should also be able to detect that some requests are accessing a 
> lot of live/tombstone cells per query, and that's not possible for now 
> without activating DEBUG for SliceQueryFilter, for example, and tweaking the 
> threshold. Currently, as nodetool cfstats returns the median, if a small 
> fraction of the queries scan a lot of live/tombstone cells we miss it!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-7731) Get max values for live/tombstone cells per slice

2014-09-15 Thread Cyril Scetbon (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14133997#comment-14133997
 ] 

Cyril Scetbon edited comment on CASSANDRA-7731 at 9/15/14 4:12 PM:
---

bq. the weighted one is the default but you can specify it to length of 
application
What do you mean? We don't want to change it to be the maximum value since the 
application started. The only concern is that in some cases it's not the 
maximum for the last 5 minutes, but can be for the last 20 minutes, as in my 
case. Do you think the latest version of metrics could enforce it to 
correspond to the last 5 minutes? AFAIU the documentation, since 
[exponentially decaying reservoirs| 
https://dropwizard.github.io/metrics/2.2.0/manual/core/#biased-histograms] use 
a forward-decaying priority reservoir, it should represent the recent data


was (Author: cscetbon):
bq. the weighted one is the default but you can specify it to length of 
application
What do you mean? We don't want to change it to be the maximum value since the 
application started. The only concern is that in some cases it's not the 
maximum for the last 5 minutes, but can be for the last 20 minutes, as in my 
case. Do you think the latest version of metrics could enforce it to 
correspond to the last 5 minutes? AFAIU the documentation, since 
[exponentially decaying reservoirs| 
https://dropwizard.github.io/metrics/3.1.0/manual/core/#exponentially-decaying-reservoirs]
 use a forward-decaying priority reservoir, it should represent the recent data

> Get max values for live/tombstone cells per slice
> -
>
> Key: CASSANDRA-7731
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7731
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Cyril Scetbon
>Assignee: Robert Stupp
>Priority: Minor
> Fix For: 2.1.1
>
> Attachments: 7731-2.0.txt, 7731-2.1.txt
>
>
> I think you should not say that slice statistics are valid for the [last 
> five minutes 
> |https://github.com/apache/cassandra/blob/cassandra-2.0/src/java/org/apache/cassandra/tools/NodeCmd.java#L955-L956]
>  in the CFSTATS command of nodetool. I've read the documentation from yammer 
> for Histograms, and there is no way to force values to expire after x 
> minutes except by 
> [clearing|http://grepcode.com/file/repo1.maven.org/maven2/com.yammer.metrics/metrics-core/2.1.2/com/yammer/metrics/core/Histogram.java#96]
>  it. The only thing I can see is that the last snapshot used to provide the 
> median (or whatever is used instead) is based on 1028 values.
> I think we should also be able to detect that some requests are accessing a 
> lot of live/tombstone cells per query, and that's not possible for now 
> without activating DEBUG for SliceQueryFilter, for example, and tweaking the 
> threshold. Currently, as nodetool cfstats returns the median, if a small 
> fraction of the queries scan a lot of live/tombstone cells we miss it!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-7731) Get max values for live/tombstone cells per slice

2014-09-15 Thread Cyril Scetbon (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14133997#comment-14133997
 ] 

Cyril Scetbon edited comment on CASSANDRA-7731 at 9/15/14 4:06 PM:
---

bq. the weighted one is the default but you can specify it to length of 
application
What do you mean? We don't want to change it to be the maximum value since the 
application started. The only concern is that in some cases it's not the 
maximum for the last 5 minutes but can be for the last 20 minutes, as in my 
case. Do you think the latest version of metrics could enforce that it 
corresponds to the last 5 minutes? AFAIU the documentation says that since 
[exponentially decaying reservoirs|https://dropwizard.github.io/metrics/3.1.0/manual/core/#exponentially-decaying-reservoirs] 
use a forward-decaying priority reservoir, they should represent recent data.


was (Author: cscetbon):
bq. the weighted one is the default but you can specify it to length of 
application
What do you mean? We don't want to change it to be the maximum value since the 
application started. The only concern is that in some cases it's not the 
maximum for the last 5 minutes but can be for the last 20 minutes, as in my 
case. Do you think the latest version of metrics could enforce that it 
corresponds to the last 5 minutes?

> Get max values for live/tombstone cells per slice
> -
>
> Key: CASSANDRA-7731
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7731
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Cyril Scetbon
>Assignee: Robert Stupp
>Priority: Minor
> Fix For: 2.1.1
>
> Attachments: 7731-2.0.txt, 7731-2.1.txt
>
>
> I think you should not say that slice statistics are valid for the [last five 
> minutes|https://github.com/apache/cassandra/blob/cassandra-2.0/src/java/org/apache/cassandra/tools/NodeCmd.java#L955-L956]
> in the CFSTATS command of nodetool. I've read the yammer documentation for 
> Histograms, and there is no way to force values to expire after x minutes 
> except by 
> [clearing|http://grepcode.com/file/repo1.maven.org/maven2/com.yammer.metrics/metrics-core/2.1.2/com/yammer/metrics/core/Histogram.java#96]
> it. The only thing I can see is that the last snapshot used to provide the 
> median (or whatever you'd use instead) is based on 1028 values.
> I think we should also be able to detect that some requests are accessing a 
> lot of live/tombstone cells per query, and that's not possible for now 
> without activating DEBUG for SliceQueryFilter, for example, and tweaking the 
> threshold. Currently, as nodetool cfstats returns the median, if only a small 
> fraction of the queries scan a lot of live/tombstone cells, we miss it!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7731) Get max values for live/tombstone cells per slice

2014-09-15 Thread Cyril Scetbon (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14133997#comment-14133997
 ] 

Cyril Scetbon commented on CASSANDRA-7731:
--

bq. the weighted one is the default but you can specify it to length of 
application
What do you mean? We don't want to change it to be the maximum value since the 
application started. The only concern is that in some cases it's not the 
maximum for the last 5 minutes but can be for the last 20 minutes, as in my 
case. Do you think the latest version of metrics could enforce that it 
corresponds to the last 5 minutes?

> Get max values for live/tombstone cells per slice
> -
>
> Key: CASSANDRA-7731
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7731
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Cyril Scetbon
>Assignee: Robert Stupp
>Priority: Minor
> Fix For: 2.1.1
>
> Attachments: 7731-2.0.txt, 7731-2.1.txt
>
>
> I think you should not say that slice statistics are valid for the [last five 
> minutes|https://github.com/apache/cassandra/blob/cassandra-2.0/src/java/org/apache/cassandra/tools/NodeCmd.java#L955-L956]
> in the CFSTATS command of nodetool. I've read the yammer documentation for 
> Histograms, and there is no way to force values to expire after x minutes 
> except by 
> [clearing|http://grepcode.com/file/repo1.maven.org/maven2/com.yammer.metrics/metrics-core/2.1.2/com/yammer/metrics/core/Histogram.java#96]
> it. The only thing I can see is that the last snapshot used to provide the 
> median (or whatever you'd use instead) is based on 1028 values.
> I think we should also be able to detect that some requests are accessing a 
> lot of live/tombstone cells per query, and that's not possible for now 
> without activating DEBUG for SliceQueryFilter, for example, and tweaking the 
> threshold. Currently, as nodetool cfstats returns the median, if only a small 
> fraction of the queries scan a lot of live/tombstone cells, we miss it!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7731) Get max values for live/tombstone cells per slice

2014-09-15 Thread Cyril Scetbon (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14133715#comment-14133715
 ] 

Cyril Scetbon commented on CASSANDRA-7731:
--

[~snazy] Your link "exponentially-decaying-reservoirs" is outdated. It seems 
that the project has been removed from github, or maybe renamed.
My tests show that the maximum value collected can persist for far more than 5 
minutes. In the following example, I execute one CQL query that scans 2 
tombstones, then 1 CQL query per second, each scanning 0 tombstones. After more 
than 1300 queries, I still have the same max value. When I check the list of 
values, it doesn't seem to change, even though the mean changes.
{code}
val 545 = 0.0
val 546 = 2.0
count = 1330
max = 2.0
pmax = 2.0
mean = 0.0015037593984962407
min = 0.0
Median = 0.0
99p = 0.0
{code}
So even though the mean is computed correctly, I can't understand why the max 
value is still the same after 20 minutes of queries scanning 0 tombstones.
I have to confess that after 30 minutes, I get the expected behavior:
{code}
val 142 = 0.0
val 143 = 0.0
val 144 = 0.0
count = 1473
max = 2.0
pmax = 0.0
mean = 0.0013577732518669382
min = 0.0
Median = 0.0
99p = 0.0
{code}
However, I need to be sure that the problem is solved and that it lasts only 5 
minutes, not 30 minutes...
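
A hypothetical standalone reproduction of this test (metrics-core 2.x assumed; 
the {{pmax}} here is recomputed from the snapshot values, which I assume is 
what the output above prints):
{code}
import com.yammer.metrics.Metrics;
import com.yammer.metrics.core.Histogram;

public class DecayDemo {
    // max over the decaying reservoir only -- the "pmax" of the test output
    static double snapshotMax(Histogram h) {
        double max = 0;
        for (double v : h.getSnapshot().getValues()) {
            max = Math.max(max, v);
        }
        return max;
    }

    public static void main(String[] args) throws InterruptedException {
        Histogram h = Metrics.newHistogram(DecayDemo.class, "tombstones-per-slice", true);
        h.update(2);                               // the 2-tombstone query
        for (int i = 1; i <= 1800; i++) {          // ~30 minutes
            h.update(0);                           // one cheap query per second
            Thread.sleep(1000);
            if (i % 60 == 0) {
                // h.max() stays at 2.0 forever; snapshotMax() only drops once
                // the decaying reservoir (1028 slots, alpha=0.015 by default)
                // happens to evict the old 2.0 sample, which can take a while
                System.out.println(i + "s: max=" + h.max() + " pmax=" + snapshotMax(h));
            }
        }
    }
}
{code}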

> Get max values for live/tombstone cells per slice
> -
>
> Key: CASSANDRA-7731
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7731
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Cyril Scetbon
>Assignee: Robert Stupp
>Priority: Minor
> Fix For: 2.1.1
>
> Attachments: 7731-2.0.txt, 7731-2.1.txt
>
>
> I think you should not say that slice statistics are valid for the [last five 
> minutes|https://github.com/apache/cassandra/blob/cassandra-2.0/src/java/org/apache/cassandra/tools/NodeCmd.java#L955-L956]
> in the CFSTATS command of nodetool. I've read the yammer documentation for 
> Histograms, and there is no way to force values to expire after x minutes 
> except by 
> [clearing|http://grepcode.com/file/repo1.maven.org/maven2/com.yammer.metrics/metrics-core/2.1.2/com/yammer/metrics/core/Histogram.java#96]
> it. The only thing I can see is that the last snapshot used to provide the 
> median (or whatever you'd use instead) is based on 1028 values.
> I think we should also be able to detect that some requests are accessing a 
> lot of live/tombstone cells per query, and that's not possible for now 
> without activating DEBUG for SliceQueryFilter, for example, and tweaking the 
> threshold. Currently, as nodetool cfstats returns the median, if only a small 
> fraction of the queries scan a lot of live/tombstone cells, we miss it!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-6367) Enable purge of local hints

2014-09-13 Thread Cyril Scetbon (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14132952#comment-14132952
 ] 

Cyril Scetbon commented on CASSANDRA-6367:
--

The issue was raised on 1.2, and I see that this new command (which I didn't 
know about) arrived in branch 2.0. You could just say it's fixed in 2.x :)

> Enable purge of local hints
> ---
>
> Key: CASSANDRA-6367
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6367
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Tools
>Reporter: Cyril Scetbon
>Priority: Minor
>  Labels: lhf
>
> We should have a new nodetool command (purgelocalhints as a suggestion) to 
> truncate the system.hints cf locally, rather than on all nodes, as currently 
> happens if we use the TRUNCATE DDL command. We could expose this new 
> functionality through JMX too.
> see thread 
> http://mail-archives.apache.org/mod_mbox/cassandra-dev/201311.mbox/%3c8e5f2112-8d98-4f6b-aa49-08ba3ff00...@free.fr%3e



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-7856) Xss in test

2014-09-01 Thread Cyril Scetbon (JIRA)
Cyril Scetbon created CASSANDRA-7856:


 Summary: Xss in test
 Key: CASSANDRA-7856
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7856
 Project: Cassandra
  Issue Type: Bug
  Components: Tests
Reporter: Cyril Scetbon
Priority: Minor


The Xss parameter needs to be changed in test/cassandra.in.sh (180k -> 256k), 
as it already is in conf/cassandra-env.sh.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7731) Get max values for live/tombstone cells per slice

2014-08-13 Thread Cyril Scetbon (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14096674#comment-14096674
 ] 

Cyril Scetbon commented on CASSANDRA-7731:
--

OK, I usually use Deprecated only when the current code already provides 
another way to get the same thing done :)

> Get max values for live/tombstone cells per slice
> -
>
> Key: CASSANDRA-7731
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7731
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Cyril Scetbon
>Assignee: Robert Stupp
>Priority: Minor
> Fix For: 2.0.11, 2.1.1
>
> Attachments: 7731-2.0.txt, 7731-2.1.txt
>
>
> I think you should not say that slice statistics are valid for the [last five 
> minutes|https://github.com/apache/cassandra/blob/cassandra-2.0/src/java/org/apache/cassandra/tools/NodeCmd.java#L955-L956]
> in the CFSTATS command of nodetool. I've read the yammer documentation for 
> Histograms, and there is no way to force values to expire after x minutes 
> except by 
> [clearing|http://grepcode.com/file/repo1.maven.org/maven2/com.yammer.metrics/metrics-core/2.1.2/com/yammer/metrics/core/Histogram.java#96]
> it. The only thing I can see is that the last snapshot used to provide the 
> median (or whatever you'd use instead) is based on 1028 values.
> I think we should also be able to detect that some requests are accessing a 
> lot of live/tombstone cells per query, and that's not possible for now 
> without activating DEBUG for SliceQueryFilter, for example, and tweaking the 
> threshold. Currently, as nodetool cfstats returns the median, if only a small 
> fraction of the queries scan a lot of live/tombstone cells, we miss it!



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Comment Edited] (CASSANDRA-7731) Get max values for live/tombstone cells per slice

2014-08-13 Thread Cyril Scetbon (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14096640#comment-14096640
 ] 

Cyril Scetbon edited comment on CASSANDRA-7731 at 8/14/14 6:35 AM:
---

{quote}
In fact these do not reset the backing timers and counters but use the previous 
timer+counter values as an offset. (Hope it's clear that way)
{quote}
If I understand you correctly, whether we launch the command every day or once 
a week, at the end of the week we would get the same values, right?

Cool to see that the patch is really short! Can you tell me why you added the 
Deprecated annotation to all of those new methods?
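
As far as I understand the quoted explanation, the offset technique amounts to 
something like this sketch (illustrative names, not the actual patch):
{code}
import java.util.concurrent.atomic.AtomicLong;

public class RecentCounter {
    private final AtomicLong total = new AtomicLong();   // never reset
    private long lastSeen = 0;                           // offset of previous call

    public void record(long n) { total.addAndGet(n); }

    // Returns the amount recorded since the previous call, by subtracting the
    // stored offset instead of clearing the backing counter. Summing daily
    // deltas over a week therefore gives the same total as one weekly call.
    public synchronized long getRecent() {
        long now = total.get();
        long delta = now - lastSeen;
        lastSeen = now;
        return delta;
    }
}
{code}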


was (Author: cscetbon):
Cool! Can you tell me why you added the Deprecated annotation to all of those 
new methods?

> Get max values for live/tombstone cells per slice
> -
>
> Key: CASSANDRA-7731
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7731
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Cyril Scetbon
>Assignee: Robert Stupp
>Priority: Minor
> Fix For: 2.0.11, 2.1.1
>
> Attachments: 7731-2.0.txt, 7731-2.1.txt
>
>
> I think you should not say that slice statistics are valid for the [last five 
> minutes|https://github.com/apache/cassandra/blob/cassandra-2.0/src/java/org/apache/cassandra/tools/NodeCmd.java#L955-L956]
> in the CFSTATS command of nodetool. I've read the yammer documentation for 
> Histograms, and there is no way to force values to expire after x minutes 
> except by 
> [clearing|http://grepcode.com/file/repo1.maven.org/maven2/com.yammer.metrics/metrics-core/2.1.2/com/yammer/metrics/core/Histogram.java#96]
> it. The only thing I can see is that the last snapshot used to provide the 
> median (or whatever you'd use instead) is based on 1028 values.
> I think we should also be able to detect that some requests are accessing a 
> lot of live/tombstone cells per query, and that's not possible for now 
> without activating DEBUG for SliceQueryFilter, for example, and tweaking the 
> threshold. Currently, as nodetool cfstats returns the median, if only a small 
> fraction of the queries scan a lot of live/tombstone cells, we miss it!



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-7731) Get max values for live/tombstone cells per slice

2014-08-13 Thread Cyril Scetbon (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14096640#comment-14096640
 ] 

Cyril Scetbon commented on CASSANDRA-7731:
--

Cool! Can you tell me why you added the Deprecated annotation to all of those 
new methods?

> Get max values for live/tombstone cells per slice
> -
>
> Key: CASSANDRA-7731
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7731
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Cyril Scetbon
>Assignee: Robert Stupp
>Priority: Minor
> Fix For: 2.0.11, 2.1.1
>
> Attachments: 7731-2.0.txt, 7731-2.1.txt
>
>
> I think you should not say that slice statistics are valid for the [last five 
> minutes|https://github.com/apache/cassandra/blob/cassandra-2.0/src/java/org/apache/cassandra/tools/NodeCmd.java#L955-L956]
> in the CFSTATS command of nodetool. I've read the yammer documentation for 
> Histograms, and there is no way to force values to expire after x minutes 
> except by 
> [clearing|http://grepcode.com/file/repo1.maven.org/maven2/com.yammer.metrics/metrics-core/2.1.2/com/yammer/metrics/core/Histogram.java#96]
> it. The only thing I can see is that the last snapshot used to provide the 
> median (or whatever you'd use instead) is based on 1028 values.
> I think we should also be able to detect that some requests are accessing a 
> lot of live/tombstone cells per query, and that's not possible for now 
> without activating DEBUG for SliceQueryFilter, for example, and tweaking the 
> threshold. Currently, as nodetool cfstats returns the median, if only a small 
> fraction of the queries scan a lot of live/tombstone cells, we miss it!



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-7731) Get max values for live/tombstone cells per slice

2014-08-13 Thread Cyril Scetbon (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14095263#comment-14095263
 ] 

Cyril Scetbon commented on CASSANDRA-7731:
--

bq. I think that calling cfstat does not reset statistics - I'll investigate 
that when adding the new max value calls. 
Cool, let me know.
bq. I tend not add a separate histogram but use the existing instead and 
extract the right value - will see what the yammer stuff offers.
You're right, using the same histograms sounds good.
bq. I'm not a fan of changing API method names when they are used "in the wild"
I totally agree :)

> Get max values for live/tombstone cells per slice
> -
>
> Key: CASSANDRA-7731
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7731
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Cyril Scetbon
>Assignee: Robert Stupp
>Priority: Minor
>
> I think you should not say that slice statistics are valid for the [last five 
> minutes|https://github.com/apache/cassandra/blob/cassandra-2.0/src/java/org/apache/cassandra/tools/NodeCmd.java#L955-L956]
> in the CFSTATS command of nodetool. I've read the yammer documentation for 
> Histograms, and there is no way to force values to expire after x minutes 
> except by 
> [clearing|http://grepcode.com/file/repo1.maven.org/maven2/com.yammer.metrics/metrics-core/2.1.2/com/yammer/metrics/core/Histogram.java#96]
> it. The only thing I can see is that the last snapshot used to provide the 
> median (or whatever you'd use instead) is based on 1028 values.
> I think we should also be able to detect that some requests are accessing a 
> lot of live/tombstone cells per query, and that's not possible for now 
> without activating DEBUG for SliceQueryFilter, for example, and tweaking the 
> threshold. Currently, as nodetool cfstats returns the median, if only a small 
> fraction of the queries scan a lot of live/tombstone cells, we miss it!



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Comment Edited] (CASSANDRA-7731) Average live/tombstone cells per slice

2014-08-12 Thread Cyril Scetbon (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14095171#comment-14095171
 ] 

Cyril Scetbon edited comment on CASSANDRA-7731 at 8/13/14 6:49 AM:
---

[~snazy] For the first part, you're totally right about the use of 
exponentially decaying reservoirs. I didn't see that at first. Cool. Yeah, you 
could use better names for the variables, but as long as it does what it 
should, that's fine for me :) The message in CFSTAT is clear about this. 
Renaming them to be clearer could help developers too.
For the second part, that's a yes. I think we really need to know the last max 
number of live and tombstone cells read. We hit 2 development bugs related to 
this, and monitoring it could really help! So using 2 more histograms (with 
biased=true) for those max values should help and is a must-have.

Can you also confirm that calling CFSTAT does not reset internal counters at 
the end of the call? I understand that for the histograms above it doesn't, 
but what about the others?
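
A minimal sketch of that suggestion (metrics-core 2.x assumed; {{SliceMetrics}} 
and the metric names are invented for illustration):
{code}
import com.yammer.metrics.Metrics;
import com.yammer.metrics.core.Histogram;

public class SliceMetrics {
    // biased=true -> exponentially decaying reservoirs, so the snapshots
    // (and a max taken from them) favour recent reads
    private final Histogram liveCellsPerSlice =
            Metrics.newHistogram(SliceMetrics.class, "live-cells-per-slice", true);
    private final Histogram tombstonesPerSlice =
            Metrics.newHistogram(SliceMetrics.class, "tombstones-per-slice", true);

    // called once per read with the counts collected by the slice filter
    public void recordRead(int liveCells, int tombstones) {
        liveCellsPerSlice.update(liveCells);
        tombstonesPerSlice.update(tombstones);
    }
}
{code}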


was (Author: cscetbon):
[~snazy] For the first part, you're totally right about the use of 
exponentially decaying reservoirs. I didn't see that at first. Cool. Yeah, you 
could use better names for the variables, but as long as it does what it 
should, that's fine for me :) The message in CFSTAT is clear about this. 
Renaming them to be clearer could help developers too.
For the second part, that's a yes. I think we really need to know the last max 
number of live and tombstone cells read. We hit 2 development bugs related to 
this, and monitoring it could really help! So using 2 more histograms (with 
biased=true) for those max values should help and is a must-have.

Can you just confirm that calling CFSTAT does not reset internal counters at 
the end of the call? I understand that for the histograms above it doesn't, 
but what about the others?

> Average live/tombstone cells per slice
> --
>
> Key: CASSANDRA-7731
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7731
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Cyril Scetbon
>Assignee: Robert Stupp
>Priority: Minor
>
> I think you should not say that slice statistics are valid for the [last five 
> minutes|https://github.com/apache/cassandra/blob/cassandra-2.0/src/java/org/apache/cassandra/tools/NodeCmd.java#L955-L956]
> in the CFSTATS command of nodetool. I've read the yammer documentation for 
> Histograms, and there is no way to force values to expire after x minutes 
> except by 
> [clearing|http://grepcode.com/file/repo1.maven.org/maven2/com.yammer.metrics/metrics-core/2.1.2/com/yammer/metrics/core/Histogram.java#96]
> it. The only thing I can see is that the last snapshot used to provide the 
> median (or whatever you'd use instead) is based on 1028 values.
> I think we should also be able to detect that some requests are accessing a 
> lot of live/tombstone cells per query, and that's not possible for now 
> without activating DEBUG for SliceQueryFilter, for example, and tweaking the 
> threshold. Currently, as nodetool cfstats returns the median, if only a small 
> fraction of the queries scan a lot of live/tombstone cells, we miss it!



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Comment Edited] (CASSANDRA-7731) Average live/tombstone cells per slice

2014-08-12 Thread Cyril Scetbon (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14095171#comment-14095171
 ] 

Cyril Scetbon edited comment on CASSANDRA-7731 at 8/13/14 6:49 AM:
---

[~snazy] For the first part, you're totally right about the use of 
exponentially decaying reservoirs. I didn't see that at first. Cool. Yeah, you 
could use better names for the variables, but as long as it does what it 
should, that's fine for me :) The message in CFSTAT is clear about this. 
Renaming them to be clearer could help developers too.
For the second part, that's a yes. I think we really need to know the last max 
number of live and tombstone cells read. We hit 2 development bugs related to 
this, and monitoring it could really help! So using 2 more histograms (with 
biased=true) for those max values should help and is a must-have.

Can you just confirm that calling CFSTAT does not reset internal counters at 
the end of the call? I understand that for the histograms above it doesn't, 
but what about the others?


was (Author: cscetbon):
[~snazy] For the first part, you're totally right about the use of 
exponentially decaying reservoirs. I didn't see that at first. Cool. Yeah, you 
could use better names for the variables, but as long as it does what it 
should, that's fine for me :) The message in CFSTAT is clear about this. 
Renaming them to be clearer could help developers too.
For the second part, that's a yes. I think we really need to know the last max 
number of live and tombstone cells read. We hit 2 development bugs related to 
this, and monitoring it could really help! So using 2 more histograms (with 
biased=true) for those counters should help and is a must-have.

Can you just confirm that calling CFSTAT does not reset internal counters at 
the end of the call? I understand that for the histograms above it doesn't, 
but what about the others?

> Average live/tombstone cells per slice
> --
>
> Key: CASSANDRA-7731
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7731
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Cyril Scetbon
>Assignee: Robert Stupp
>Priority: Minor
>
> I think you should not say that slice statistics are valid for the [last five 
> minutes|https://github.com/apache/cassandra/blob/cassandra-2.0/src/java/org/apache/cassandra/tools/NodeCmd.java#L955-L956]
> in the CFSTATS command of nodetool. I've read the yammer documentation for 
> Histograms, and there is no way to force values to expire after x minutes 
> except by 
> [clearing|http://grepcode.com/file/repo1.maven.org/maven2/com.yammer.metrics/metrics-core/2.1.2/com/yammer/metrics/core/Histogram.java#96]
> it. The only thing I can see is that the last snapshot used to provide the 
> median (or whatever you'd use instead) is based on 1028 values.
> I think we should also be able to detect that some requests are accessing a 
> lot of live/tombstone cells per query, and that's not possible for now 
> without activating DEBUG for SliceQueryFilter, for example, and tweaking the 
> threshold. Currently, as nodetool cfstats returns the median, if only a small 
> fraction of the queries scan a lot of live/tombstone cells, we miss it!



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-7731) Average live/tombstone cells per slice

2014-08-12 Thread Cyril Scetbon (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14095171#comment-14095171
 ] 

Cyril Scetbon commented on CASSANDRA-7731:
--

[~snazy] For the first part, you're totally right about the use of 
exponentially decaying reservoirs. I didn't see that at first. Cool. Yeah, you 
could use better names for the variables, but as long as it does what it 
should, that's fine for me :) The message in CFSTAT is clear about this. 
Renaming them to be clearer could help developers too.
For the second part, that's a yes. I think we really need to know the last max 
number of live and tombstone cells read. We hit 2 development bugs related to 
this, and monitoring it could really help! So using 2 more histograms (with 
biased=true) for those counters should help and is a must-have.

Can you just confirm that calling CFSTAT does not reset internal counters at 
the end of the call? I understand that for the histograms above it doesn't, 
but what about the others?

> Average live/tombstone cells per slice
> --
>
> Key: CASSANDRA-7731
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7731
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Cyril Scetbon
>Assignee: Robert Stupp
>Priority: Minor
>
> I think you should not say that slice statistics are valid for the [last five 
> minutes|https://github.com/apache/cassandra/blob/cassandra-2.0/src/java/org/apache/cassandra/tools/NodeCmd.java#L955-L956]
> in the CFSTATS command of nodetool. I've read the yammer documentation for 
> Histograms, and there is no way to force values to expire after x minutes 
> except by 
> [clearing|http://grepcode.com/file/repo1.maven.org/maven2/com.yammer.metrics/metrics-core/2.1.2/com/yammer/metrics/core/Histogram.java#96]
> it. The only thing I can see is that the last snapshot used to provide the 
> median (or whatever you'd use instead) is based on 1028 values.
> I think we should also be able to detect that some requests are accessing a 
> lot of live/tombstone cells per query, and that's not possible for now 
> without activating DEBUG for SliceQueryFilter, for example, and tweaking the 
> threshold. Currently, as nodetool cfstats returns the median, if only a small 
> fraction of the queries scan a lot of live/tombstone cells, we miss it!



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (CASSANDRA-7731) Average live/tombstone cells per slice

2014-08-09 Thread Cyril Scetbon (JIRA)
Cyril Scetbon created CASSANDRA-7731:


 Summary: Average live/tombstone cells per slice
 Key: CASSANDRA-7731
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7731
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Cyril Scetbon
Priority: Minor


I think you should not say that slice statistics are valid for the [last five 
minutes|https://github.com/apache/cassandra/blob/cassandra-2.0/src/java/org/apache/cassandra/tools/NodeCmd.java#L955-L956]
in the CFSTATS command of nodetool. I've read the yammer documentation for 
Histograms, and there is no way to force values to expire after x minutes 
except by 
[clearing|http://grepcode.com/file/repo1.maven.org/maven2/com.yammer.metrics/metrics-core/2.1.2/com/yammer/metrics/core/Histogram.java#96]
it. The only thing I can see is that the last snapshot used to provide the 
median (or whatever you'd use instead) is based on 1028 values.

I think we should also be able to detect that some requests are accessing a 
lot of live/tombstone cells per query, and that's not possible for now without 
activating DEBUG for SliceQueryFilter, for example, and tweaking the 
threshold. Currently, as nodetool cfstats returns the median, if only a small 
fraction of the queries scan a lot of live/tombstone cells, we miss it!
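
For context, the 1028 figure matches the default reservoir size of the 
exponentially decaying sample in metrics-core 2.x. A small sketch (invented 
class and metric names) of why the median hides rare expensive reads:
{code}
import com.yammer.metrics.Metrics;
import com.yammer.metrics.core.Histogram;
import com.yammer.metrics.stats.Snapshot;

public class MedianDemo {
    public static void main(String[] args) {
        Histogram h = Metrics.newHistogram(MedianDemo.class, "cells-per-slice", true);
        for (int i = 0; i < 10000; i++) {
            h.update(i % 100 == 0 ? 1000 : 1);  // 1% of reads are expensive
        }
        Snapshot s = h.getSnapshot();
        System.out.println(s.size());        // at most 1028 sampled values
        System.out.println(s.getMedian());   // 1.0 -- the rare expensive reads
                                             // barely move the median, which is
                                             // exactly the complaint above
    }
}
{code}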




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6614) 2 hours loop flushing+compacting system/{schema_keyspaces,schema_columnfamilies,schema_columns} when upgrading

2014-07-30 Thread Cyril Scetbon (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14079070#comment-14079070
 ] 

Cyril Scetbon commented on CASSANDRA-6614:
--

As said before, I only hit it when upgrading my cluster from 1.2.2 to 1.2.13. 
Now that it's done, I no longer work on it. I'll be able to gather information 
during the next upgrade, from 1.2.13 to 2.0.9+.

> 2 hours loop flushing+compacting 
> system/{schema_keyspaces,schema_columnfamilies,schema_columns} when upgrading
> --
>
> Key: CASSANDRA-6614
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6614
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: ubuntu 12.04
>Reporter: Cyril Scetbon
>
> It happens when we upgrade one node to 1.2.13 on a 1.2.2 cluster 
> see http://pastebin.com/YZKUQLXz
> If I grep for only InternalResponseStage logs I get 
> http://pastebin.com/htnXZCiT which always displays the same amount of ops and 
> serialized/live bytes per column family.
> When I upgrade one node from 1.2.2 to 1.2.13, for 2h I get the previous 
> messages with a rise in CPU (as it flushes and compacts continually) on all 
> nodes 
> http://picpaste.com/pics/Screen_Shot_2014-01-24_at_09.18.50-ggcCDVqd.1390587562.png
> After that, everything is fine and I can upgrade other nodes without any rise 
> in CPU load. When I start the upgrade, the more nodes I upgrade at the same 
> time (at the beginning), the higher the CPU load is 
> http://picpaste.com/pics/Screen_Shot_2014-01-23_at_17.45.56-I3fdEQ2T.1390587597.png



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-5220) Repair improvements when using vnodes

2014-06-04 Thread Cyril Scetbon (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14018504#comment-14018504
 ] 

Cyril Scetbon commented on CASSANDRA-5220:
--

[~SchnickDaddy] It's not fixed yet. We just hope it'll be fixed in version 
2.1.1, and people are currently digging to find where the overhead that slows 
down repair is located.

> Repair improvements when using vnodes
> -
>
> Key: CASSANDRA-5220
> URL: https://issues.apache.org/jira/browse/CASSANDRA-5220
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Affects Versions: 1.2.0 beta 1
>Reporter: Brandon Williams
>Assignee: Yuki Morishita
>  Labels: performance, repair
> Fix For: 2.1.1
>
> Attachments: 5220-yourkit.png, 5220-yourkit.tar.bz2
>
>
> Currently when using vnodes, repair takes much longer to complete than 
> without them.  This appears at least in part because it's using a session per 
> range and processing them sequentially.  This generates a lot of log spam 
> with vnodes, and while being gentler and lighter on hard disk deployments, 
> ssd-based deployments would often prefer that repair be as fast as possible.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6421) Add bash completion to nodetool

2014-05-22 Thread Cyril Scetbon (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14005931#comment-14005931
 ] 

Cyril Scetbon commented on CASSANDRA-6421:
--

OK to open a new jira if needed, and I'm happy that others use it too!

> Add bash completion to nodetool
> ---
>
> Key: CASSANDRA-6421
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6421
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Tools
>Reporter: Cyril Scetbon
>Assignee: Cyril Scetbon
>Priority: Trivial
> Fix For: 2.1 rc1
>
> Attachments: 6421-2.1.txt, 6421.txt
>
>
> You can find the bash-completion file at 
> https://raw.github.com/cscetbon/cassandra/nodetool-completion/etc/bash_completion.d/nodetool
> It uses cqlsh to get keyspaces and column families, and could use an 
> environment variable (not implemented) to specify which cqlsh to invoke if 
> authentication is needed. But I think that's really a good start :)



--
This message was sent by Atlassian JIRA
(v6.2#6252)

