[jira] [Commented] (CASSANDRA-5431) cassandra-shuffle with JMX usernames and passwords
[ https://issues.apache.org/jira/browse/CASSANDRA-5431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13625607#comment-13625607 ]

Eric Dong commented on CASSANDRA-5431:
--------------------------------------

Hi Michał,

Thanks for the information. I have not started work on this, so please feel free to submit your patch!

> cassandra-shuffle with JMX usernames and passwords
> --------------------------------------------------
>
>                 Key: CASSANDRA-5431
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-5431
>             Project: Cassandra
>          Issue Type: Bug
>    Affects Versions: 1.2.3
>            Reporter: Eric Dong
>         Attachments: 5431-v2.txt, CASSANDRA-5431-whitespace.patch
>
> Unlike nodetool, cassandra-shuffle doesn't allow passing in a JMX username
> and password. This stops those who want to switch to vnodes from doing so if
> JMX access requires a username and a password.
> Patch to follow.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (CASSANDRA-5431) cassandra-shuffle with JMX usernames and passwords
[ https://issues.apache.org/jira/browse/CASSANDRA-5431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Eric Dong updated CASSANDRA-5431:
---------------------------------
    Attachment: CASSANDRA-5431-whitespace.patch

My intended changes touch files that have formatting issues relative to the Cassandra formatter settings for Eclipse, so I'm posting a whitespace patch first.
[jira] [Created] (CASSANDRA-5431) cassandra-shuffle with JMX usernames and passwords
Eric Dong created CASSANDRA-5431:
---------------------------------

             Summary: cassandra-shuffle with JMX usernames and passwords
                 Key: CASSANDRA-5431
                 URL: https://issues.apache.org/jira/browse/CASSANDRA-5431
             Project: Cassandra
          Issue Type: Bug
    Affects Versions: 1.2.3
            Reporter: Eric Dong

Unlike nodetool, cassandra-shuffle doesn't allow passing in a JMX username and password. This stops those who want to switch to vnodes from doing so if JMX access requires a username and a password.

Patch to follow.
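For background, a JMX client authenticates by passing a username/password pair in the connection environment map, so a shuffle-style tool could wire {{-u}}/{{-pw}} options into something like the sketch below. This is only an illustration of the standard {{javax.management.remote}} API, not the actual patch; the class and method names are made up for the example.

```java
import java.util.HashMap;
import java.util.Map;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class JmxAuthSketch
{
    // Builds the connection environment that carries JMX credentials.
    // The JMXConnector.CREDENTIALS key expects a String[]{username, password}.
    public static Map<String, Object> credentialsEnv(String username, String password)
    {
        Map<String, Object> env = new HashMap<String, Object>();
        env.put(JMXConnector.CREDENTIALS, new String[]{ username, password });
        return env;
    }

    public static void main(String[] args) throws Exception
    {
        Map<String, Object> env = credentialsEnv("cassandra", "secret");
        String[] creds = (String[]) env.get(JMXConnector.CREDENTIALS);
        System.out.println(creds[0] + ":" + creds.length);
        // Connecting would then look like (requires a reachable, secured JMX endpoint):
        // JMXServiceURL url = new JMXServiceURL(
        //     "service:jmx:rmi:///jndi/rmi://127.0.0.1:7199/jmxrmi");
        // JMXConnector connector = JMXConnectorFactory.connect(url, env);
    }
}
```

Passing the same environment map to {{JMXConnectorFactory.connect}} is exactly what nodetool does when given {{-u}}/{{-pw}}.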
[jira] [Commented] (CASSANDRA-5339) YAML network topology snitch supporting preferred addresses
[ https://issues.apache.org/jira/browse/CASSANDRA-5339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13600539#comment-13600539 ]

Eric Dong commented on CASSANDRA-5339:
--------------------------------------

The formatting in the pull request / GitHub branch didn't follow the project's Java code style, so I've fixed that in CASSANDRA-5339-1.patch.

> YAML network topology snitch supporting preferred addresses
> -----------------------------------------------------------
>
>                 Key: CASSANDRA-5339
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-5339
>             Project: Cassandra
>          Issue Type: New Feature
>            Reporter: Eric Dong
>            Priority: Minor
>         Attachments: CASSANDRA-5339-1.patch
>
> In order to support having a Cassandra cluster spanning multiple data
> centers, some in Amazon EC2 and some not, I'm submitting a YAML network
> topology snitch that allows one to configure 'preferred addresses' such as a
> data-center-local address. The new snitch reconnects to the node via the
> preferred address using the same reconnection trick present in
> Ec2MultiRegionSnitch.
> I chose a new YAML format instead of trying to extend
> cassandra-topology.properties because it is easier to read and allows for
> future extensibility.
> Pull request to follow.
[jira] [Updated] (CASSANDRA-5339) YAML network topology snitch supporting preferred addresses
[ https://issues.apache.org/jira/browse/CASSANDRA-5339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Eric Dong updated CASSANDRA-5339:
---------------------------------
    Attachment: CASSANDRA-5339-1.patch
[jira] [Commented] (CASSANDRA-5339) YAML network topology snitch supporting preferred addresses
[ https://issues.apache.org/jira/browse/CASSANDRA-5339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13600526#comment-13600526 ]

Eric Dong commented on CASSANDRA-5339:
--------------------------------------

Thanks for the information! Will submit a patch instead. (Would be great if this were noted in [GitTransition|http://wiki.apache.org/cassandra/GitTransition].)
[jira] [Commented] (CASSANDRA-5339) YAML network topology snitch supporting preferred addresses
[ https://issues.apache.org/jira/browse/CASSANDRA-5339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13600505#comment-13600505 ]

Eric Dong commented on CASSANDRA-5339:
--------------------------------------

Pull request in GitHub: https://github.com/apache/cassandra/pull/14
[jira] [Created] (CASSANDRA-5339) YAML network topology snitch supporting preferred addresses
Eric Dong created CASSANDRA-5339:
---------------------------------

             Summary: YAML network topology snitch supporting preferred addresses
                 Key: CASSANDRA-5339
                 URL: https://issues.apache.org/jira/browse/CASSANDRA-5339
             Project: Cassandra
          Issue Type: New Feature
            Reporter: Eric Dong
            Priority: Minor

In order to support having a Cassandra cluster spanning multiple data centers, some in Amazon EC2 and some not, I'm submitting a YAML network topology snitch that allows one to configure 'preferred addresses' such as a data-center-local address. The new snitch reconnects to the node via the preferred address using the same reconnection trick present in Ec2MultiRegionSnitch.

I chose a new YAML format instead of trying to extend cassandra-topology.properties because it is easier to read and allows for future extensibility.

Pull request to follow.
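To make the idea concrete, a topology file for such a snitch might look like the fragment below. This is only an illustrative sketch of the proposal; the key names ({{dc_name}}, {{broadcast_address}}, {{dc_local_address}}, etc.) are assumptions here, and the actual schema is whatever the attached patch defines.

```yaml
# Hypothetical cassandra-topology.yaml for the proposed snitch.
# Each node lists its public (broadcast) address plus a preferred
# data-center-local address used for intra-DC reconnection.
topology:
  - dc_name: DC1
    racks:
      - rack_name: RAC1
        nodes:
          - broadcast_address: 1.2.3.4      # address seen across DCs
            dc_local_address: 10.0.0.4      # preferred address within DC1
```

The reconnection behavior would mirror Ec2MultiRegionSnitch: on gossip from a node in the same DC, switch the connection over to its {{dc_local_address}}.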
[jira] [Commented] (CASSANDRA-5121) system.peers.tokens is empty after node restart
[ https://issues.apache.org/jira/browse/CASSANDRA-5121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13558075#comment-13558075 ]

Eric Dong commented on CASSANDRA-5121:
--------------------------------------

Updated; test now passes, thanks!

> system.peers.tokens is empty after node restart
> -----------------------------------------------
>
>                 Key: CASSANDRA-5121
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-5121
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Core
>    Affects Versions: 1.2.0
>         Environment: Windows 8 / Java 1.6.0_37-b06
>            Reporter: Pierre Chalamet
>            Assignee: Sylvain Lebresne
>            Priority: Minor
>             Fix For: 1.2.1
>
>         Attachments: 5121.txt
>
> Using a 2-node fresh cluster (127.0.0.1 & 127.0.0.2) running latest 1.2, I'm
> querying system.peers to get the nodes of the cluster and their respective
> tokens. But it seems there is a problem after either node restarts.
> When both nodes start up, querying system.peers seems ok:
> {code}
> 127.0.0.1> select * from system.peers;
> data_center     | datacenter1
> host_id         | 4819cbb0-9741-4fe0-8d7d-95941b0247bf
> peer            | 127.0.0.2
> rack            | rack1
> release_version | 1.2.0
> rpc_address     | 127.0.0.2
> schema_version  | 59adb24e-f3cd-3e02-97f0-5b395827453f
> tokens          | 56713727820156410577229101238628035242
> {code}
> But as soon as one node is restarted (let's say 127.0.0.2), the tokens column
> is then empty:
> {code}
> 127.0.0.1> select * from system.peers;
> data_center     | datacenter1
> host_id         | 4819cbb0-9741-4fe0-8d7d-95941b0247bf
> peer            | 127.0.0.2
> rack            | rack1
> release_version | 1.2.0
> rpc_address     | 127.0.0.2
> schema_version  | 59adb24e-f3cd-3e02-97f0-5b395827453f
> tokens          | (empty)
> {code}
> Log server side:
> {code}
> DEBUG 22:08:01,608 Responding: ROWS [peer(system, peers), org.apache.cassandra.db.marshal.InetAddressType]
> [data_center(system, peers), org.apache.cassandra.db.marshal.UTF8Type]
> [host_id(system, peers), org.apache.cassandra.db.marshal.UUIDType]
> [rack(system, peers), org.apache.cassandra.db.marshal.UTF8Type]
> [release_version(system, peers), org.apache.cassandra.db.marshal.UTF8Type]
> [rpc_address(system, peers), org.apache.cassandra.db.marshal.InetAddressType]
> [schema_version(system, peers), org.apache.cassandra.db.marshal.UUIDType]
> [tokens(system, peers), org.apache.cassandra.db.marshal.SetType(org.apache.cassandra.db.marshal.UTF8Type)]
> | 127.0.0.2 | datacenter1 | 4819cbb0-9741-4fe0-8d7d-95941b0247bf | rack1 | 1.2.0 | 127.0.0.2 | 59adb24e-f3cd-3e02-97f0-5b395827453f | null
> {code}
> Restarting the other node (127.0.0.1) restores the tokens column.
[jira] [Commented] (CASSANDRA-5121) system.peers.tokens is empty after node restart
[ https://issues.apache.org/jira/browse/CASSANDRA-5121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13557933#comment-13557933 ]

Eric Dong commented on CASSANDRA-5121:
--------------------------------------

Hi, commit ec35427fdfbc46a8adeafc042651f552b9bcc1a0 breaks RelocateTest:

{noformat}
$ ant clean build test -Dtest.name=RelocateTest
...
    [junit] Testsuite: org.apache.cassandra.service.RelocateTest
    [junit] Tests run: 2, Failures: 2, Errors: 0, Time elapsed: 6.215 sec
    [junit]
    [junit] Testcase: testWriteEndpointsDuringRelocate(org.apache.cassandra.service.RelocateTest): FAILED
    [junit] removeTokens should be used instead
    [junit] junit.framework.AssertionFailedError: removeTokens should be used instead
    [junit]     at org.apache.cassandra.db.SystemTable.updateTokens(SystemTable.java:324)
    [junit]     at org.apache.cassandra.db.SystemTable.updateLocalTokens(SystemTable.java:342)
    [junit]     at org.apache.cassandra.service.StorageService.handleStateNormal(StorageService.java:1393)
    [junit]     at org.apache.cassandra.service.StorageService.onChange(StorageService.java:1166)
    [junit]     at org.apache.cassandra.service.RelocateTest.createInitialRing(RelocateTest.java:106)
    [junit]     at org.apache.cassandra.service.RelocateTest.testWriteEndpointsDuringRelocate(RelocateTest.java:128)
    [junit]
    [junit] Testcase: testRelocationSuccess(org.apache.cassandra.service.RelocateTest): FAILED
    [junit] removeTokens should be used instead
    [junit] junit.framework.AssertionFailedError: removeTokens should be used instead
    [junit]     at org.apache.cassandra.db.SystemTable.updateTokens(SystemTable.java:324)
    [junit]     at org.apache.cassandra.db.SystemTable.updateLocalTokens(SystemTable.java:342)
    [junit]     at org.apache.cassandra.service.StorageService.handleStateNormal(StorageService.java:1393)
    [junit]     at org.apache.cassandra.service.StorageService.onChange(StorageService.java:1166)
    [junit]     at org.apache.cassandra.service.RelocateTest.createInitialRing(RelocateTest.java:106)
    [junit]     at org.apache.cassandra.service.RelocateTest.testRelocationSuccess(RelocateTest.java:177)
    [junit]
    [junit] Test org.apache.cassandra.service.RelocateTest FAILED
BUILD FAILED
...
{noformat}

After commit e6b6eaa583e8fc15f03c3e27664bf7fc06b3af0a, testWriteEndpointsDuringRelocate passes but testRelocationSuccess still fails:

{noformat}
$ ant clean build test -Dtest.name=RelocateTest
...
    [junit] Testcase: testRelocationSuccess(org.apache.cassandra.service.RelocateTest): FAILED
    [junit] removeEndpoint should be used instead
    [junit] junit.framework.AssertionFailedError: removeEndpoint should be used instead
    [junit]     at org.apache.cassandra.db.SystemTable.updateTokens(SystemTable.java:316)
    [junit]     at org.apache.cassandra.db.SystemTable.updateLocalTokens(SystemTable.java:334)
    [junit]     at org.apache.cassandra.service.StorageService.handleStateNormal(StorageService.java:1394)
    [junit]     at org.apache.cassandra.service.StorageService.onChange(StorageService.java:1166)
    [junit]     at org.apache.cassandra.service.RelocateTest.testRelocationSuccess(RelocateTest.java:193)
    [junit]
    [junit] Test org.apache.cassandra.service.RelocateTest FAILED
...
{noformat}
[jira] [Commented] (CASSANDRA-4479) JMX attribute setters not consistent with cassandra.yaml
[ https://issues.apache.org/jira/browse/CASSANDRA-4479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13427012#comment-13427012 ]

Eric Dong commented on CASSANDRA-4479:
--------------------------------------

A quick-and-dirty laundry list of MBean setters:

{noformat}
$ find . -name '*MBean.java' -exec grep 'void set' {} +
./src/java/org/apache/cassandra/concurrent/JMXConfigurableThreadPoolExecutorMBean.java:void setCorePoolSize(int n);
./src/java/org/apache/cassandra/db/ColumnFamilyStoreMBean.java:public void setMinimumCompactionThreshold(int threshold);
./src/java/org/apache/cassandra/db/ColumnFamilyStoreMBean.java:public void setMaximumCompactionThreshold(int threshold);
./src/java/org/apache/cassandra/db/ColumnFamilyStoreMBean.java:public void setCompactionStrategyClass(String className) throws ConfigurationException;
./src/java/org/apache/cassandra/db/ColumnFamilyStoreMBean.java:public void setCompressionParameters(Map opts) throws ConfigurationException;
./src/java/org/apache/cassandra/gms/FailureDetectorMBean.java:public void setPhiConvictThreshold(int phi);
./src/java/org/apache/cassandra/service/CacheServiceMBean.java:public void setRowCacheSavePeriodInSeconds(int rcspis);
./src/java/org/apache/cassandra/service/CacheServiceMBean.java:public void setKeyCacheSavePeriodInSeconds(int kcspis);
./src/java/org/apache/cassandra/service/CacheServiceMBean.java:public void setRowCacheCapacityInMB(long capacity);
./src/java/org/apache/cassandra/service/CacheServiceMBean.java:public void setKeyCacheCapacityInMB(long capacity);
./src/java/org/apache/cassandra/service/StorageProxyMBean.java:public void setHintedHandoffEnabled(boolean b);
./src/java/org/apache/cassandra/service/StorageProxyMBean.java:public void setMaxHintWindow(int ms);
./src/java/org/apache/cassandra/service/StorageProxyMBean.java:public void setMaxHintsInProgress(int qs);
./src/java/org/apache/cassandra/service/StorageProxyMBean.java:public void setRpcTimeout(Long timeoutInMillis);
./src/java/org/apache/cassandra/service/StorageServiceMBean.java:public void setLog4jLevel(String classQualifier, String level);
./src/java/org/apache/cassandra/service/StorageServiceMBean.java:public void setStreamThroughputMbPerSec(int value);
./src/java/org/apache/cassandra/service/StorageServiceMBean.java:public void setCompactionThroughputMbPerSec(int value);
./src/java/org/apache/cassandra/service/StorageServiceMBean.java:public void setIncrementalBackupsEnabled(boolean value);
{noformat}

And the DatabaseDescriptor setters; according to [ArchitectureInternals|http://wiki.apache.org/cassandra/ArchitectureInternals], all node configuration parameters should be in here:

{noformat}
$ grep 'void set' src/java/org/apache/cassandra/config/DatabaseDescriptor.java
public static void setPartitioner(IPartitioner newPartitioner)
public static void setEndpointSnitch(IEndpointSnitch eps)
public static void setRpcTimeout(Long timeOutInMillis)
public static void setInMemoryCompactionLimit(int sizeInMB)
public static void setCompactionThroughputMbPerSec(int value)
public static void setStreamThroughputOutboundMegabitsPerSec(int value)
public static void setBroadcastAddress(InetAddress broadcastAdd)
public static void setDynamicUpdateInterval(Integer dynamicUpdateInterval)
public static void setDynamicResetInterval(Integer dynamicResetInterval)
public static void setDynamicBadnessThreshold(Double dynamicBadnessThreshold)
public static void setIncrementalBackupsEnabled(boolean value)
{noformat}
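For anyone reproducing this: every setter in the list above is ultimately driven by a plain {{MBeanServer.setAttribute}} call. A self-contained illustration of that mechanism, using a toy MBean rather than a real Cassandra one (the names {{ConfigMBean}}, {{org.example:type=Config}} are made up):

```java
import java.lang.management.ManagementFactory;
import javax.management.Attribute;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class MBeanSetterSketch
{
    // Toy standard MBean, standing in for e.g. FailureDetectorMBean.
    public interface ConfigMBean
    {
        int getPhiConvictThreshold();
        void setPhiConvictThreshold(int phi);
    }

    public static class Config implements ConfigMBean
    {
        private volatile int phiConvictThreshold = 8;
        public int getPhiConvictThreshold() { return phiConvictThreshold; }
        public void setPhiConvictThreshold(int phi) { phiConvictThreshold = phi; }
    }

    public static void main(String[] args) throws Exception
    {
        MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
        ObjectName name = new ObjectName("org.example:type=Config"); // hypothetical name
        mbs.registerMBean(new Config(), name);

        // This is what a JMX client (jconsole, nodetool, ...) does under the hood.
        mbs.setAttribute(name, new Attribute("PhiConvictThreshold", 10));
        System.out.println(mbs.getAttribute(name, "PhiConvictThreshold"));
    }
}
```

The consistency bug in this ticket is that such a {{setAttribute}} only updates whichever field the MBean implementation happens to write, not every copy of the setting.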
[jira] [Updated] (CASSANDRA-4479) JMX attribute setters not consistent with cassandra.yaml
[ https://issues.apache.org/jira/browse/CASSANDRA-4479?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Eric Dong updated CASSANDRA-4479:
---------------------------------
    Description:
If a setting is configurable both via cassandra.yaml and JMX, the two should be consistent. If that doesn't hold, then the JMX setter can't be trusted. Here I present the example of phi_convict_threshold.

I'm trying to set phi_convict_threshold via JMX, which sets FailureDetector.phiConvictThreshold_, but this doesn't update Config.phi_convict_threshold, which gets its value from cassandra.yaml at startup.

Some places, such as FailureDetector.interpret(InetAddress), use FailureDetector.phiConvictThreshold_; others, such as AntiEntropyService line 813 in cassandra-1.1.2, use Config.phi_convict_threshold:

{code}
            // We want a higher confidence in the failure detection than usual
            // because failing a repair wrongly has a high cost.
            if (phi < 2 * DatabaseDescriptor.getPhiConvictThreshold())
                return;
{code}

where DatabaseDescriptor.getPhiConvictThreshold() returns Conf.phi_convict_threshold.

So, it looks like there are cases where a value is stored in multiple places, and setting the value via JMX doesn't set all of them. I'd say there should only be a single place where a configuration parameter is stored, and that single field:
* should read in the value from cassandra.yaml, optionally falling back to a sane default;
* should be the field that the JMX attribute reads and sets; and
* should be where any place that needs the current global setting gets it from. (However, there could be cases where you read in a global value at the start of a task and keep that value locally until the end of the task.)

Also, anything settable via JMX should be volatile or set via a synchronized setter, or else, according to the Java memory model, other threads may be stuck with the old setting.

So, I'm requesting the following:
* setting up guidelines for how to expose a configuration parameter both via cassandra.yaml and JMX, based on what I've mentioned above;
* going through the list of configuration parameters and fixing any that don't match those guidelines.

I'd also recommend logging any changes to configuration parameters.

    was:
If a setting is configurable both via cassandra.yaml and JMX, the two should be consistent, but that is not the case for phi_convict_threshold.

I'm trying to set phi_convict_threshold via JMX, which sets FailureDetector.phiConvictThreshold_, but this doesn't update Config.phi_convict_threshold, which gets its value from cassandra.yaml when starting up.

Some places, such as FailureDetector.interpret(InetAddress), use FailureDetector.phiConvictThreshold_; others, such as AntiEntropyService line 813 in cassandra-1.1.2, use Config.phi_convict_threshold:

{code}
            // We want a higher confidence in the failure detection than usual
            // because failing a repair wrongly has a high cost.
            if (phi < 2 * DatabaseDescriptor.getPhiConvictThreshold())
                return;
{code}

where DatabaseDescriptor.getPhiConvictThreshold() returns Conf.phi_convict_threshold.

So, it looks like there are cases where a value is stored in multiple places, and setting the value via JMX doesn't set all of them. I'd say there should only be a single place where a configuration parameter is stored, and that single field:
* should read in the value from cassandra.yaml, optionally falling back to a sane default;
* should be the field that the JMX attribute reads and sets; and
* should be where any place that needs the current global setting gets it from. (However, there could be cases where you read in a global value at the start of a task and keep that value locally until the end of the task.)

Also, anything settable via JMX should be volatile or set via a synchronized setter, or else, according to the Java memory model, other threads may be stuck with the old setting.

    Summary: JMX attribute setters not consistent with cassandra.yaml (was: Multiple phi_convict_threshold fields not all settable via JMX)
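The single-field pattern the description asks for can be sketched as follows. This is a toy example rather than Cassandra's actual code; the class and field names are illustrative only:

```java
public class SingleSourceConfigSketch
{
    // One authoritative copy of the setting, initialized from cassandra.yaml
    // (simulated here by a hard-coded default). All readers and the JMX
    // setter go through these accessors; volatile guarantees that a write
    // from the JMX thread becomes visible to every other thread.
    private static volatile int phiConvictThreshold = 8;

    public static int getPhiConvictThreshold()
    {
        return phiConvictThreshold;
    }

    // The JMX attribute setter writes the same field everyone reads,
    // so there is no second copy to fall out of sync.
    public static void setPhiConvictThreshold(int phi)
    {
        phiConvictThreshold = phi;
    }

    public static void main(String[] args)
    {
        // Simulate a JMX client updating the value at runtime...
        setPhiConvictThreshold(12);
        // ...and code like FailureDetector.interpret() reading it afterwards.
        System.out.println(getPhiConvictThreshold());
    }
}
```

With two separate fields (as with FailureDetector.phiConvictThreshold_ vs Config.phi_convict_threshold), the second read site would still return the stale yaml value after the JMX update.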
[jira] [Commented] (CASSANDRA-4479) Multiple phi_convict_threshold fields not all settable via JMX
[ https://issues.apache.org/jira/browse/CASSANDRA-4479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13426824#comment-13426824 ]

Eric Dong commented on CASSANDRA-4479:
--------------------------------------

For comparison, rpc_timeout_in_ms is settable through JMX via StorageProxy[MBean], but StorageProxy doesn't keep its own rpc_timeout_in_ms field; it calls DatabaseDescriptor.setRpcTimeout(Long), which sets Conf.rpc_timeout_in_ms. However, Conf.rpc_timeout_in_ms is neither volatile nor set via a synchronized method, which is still bad.

> Multiple phi_convict_threshold fields not all settable via JMX
> --------------------------------------------------------------
>
>                 Key: CASSANDRA-4479
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-4479
>             Project: Cassandra
>          Issue Type: Bug
>    Affects Versions: 1.1.2
>            Reporter: Eric Dong
>
> If a setting is configurable both via cassandra.yaml and JMX, the two should
> be consistent, but that is not the case for phi_convict_threshold.
> I'm trying to set phi_convict_threshold via JMX, which sets
> FailureDetector.phiConvictThreshold_, but this doesn't update
> Config.phi_convict_threshold, which gets its value from cassandra.yaml when
> starting up.
> Some places, such as FailureDetector.interpret(InetAddress), use
> FailureDetector.phiConvictThreshold_; others, such as AntiEntropyService line
> 813 in cassandra-1.1.2, use Config.phi_convict_threshold:
> {code}
> // We want a higher confidence in the failure detection than
> // usual because failing a repair wrongly has a high cost.
> if (phi < 2 * DatabaseDescriptor.getPhiConvictThreshold())
>     return;
> {code}
> where DatabaseDescriptor.getPhiConvictThreshold() returns
> Conf.phi_convict_threshold.
> So, it looks like there are cases where a value is stored in multiple places,
> and setting the value via JMX doesn't set all of them. I'd say there should
> only be a single place where a configuration parameter is stored, and that
> single field:
> * should read in the value from cassandra.yaml, optionally falling back to a
> sane default
> * should be the field that the JMX attribute reads and sets, and
> * any place that needs the current global setting should get it from that
> field. However, there could be cases where you read in a global value at the
> start of a task and keep that value locally until the end of the task.
> Also, anything settable via JMX should be volatile or set via a synchronized
> setter, or else according to the Java memory model other threads may be stuck
> with the old setting.
[jira] [Created] (CASSANDRA-4479) Multiple phi_convict_threshold fields not all settable via JMX
Eric Dong created CASSANDRA-4479:
---------------------------------

             Summary: Multiple phi_convict_threshold fields not all settable via JMX
                 Key: CASSANDRA-4479
                 URL: https://issues.apache.org/jira/browse/CASSANDRA-4479
             Project: Cassandra
          Issue Type: Bug
    Affects Versions: 1.1.2
            Reporter: Eric Dong

If a setting is configurable both via cassandra.yaml and JMX, the two should be consistent, but that is not the case for phi_convict_threshold.

I'm trying to set phi_convict_threshold via JMX, which sets FailureDetector.phiConvictThreshold_, but this doesn't update Config.phi_convict_threshold, which gets its value from cassandra.yaml when starting up.

Some places, such as FailureDetector.interpret(InetAddress), use FailureDetector.phiConvictThreshold_; others, such as AntiEntropyService line 813 in cassandra-1.1.2, use Config.phi_convict_threshold:

{code}
            // We want a higher confidence in the failure detection than usual
            // because failing a repair wrongly has a high cost.
            if (phi < 2 * DatabaseDescriptor.getPhiConvictThreshold())
                return;
{code}

where DatabaseDescriptor.getPhiConvictThreshold() returns Conf.phi_convict_threshold.

So, it looks like there are cases where a value is stored in multiple places, and setting the value via JMX doesn't set all of them. I'd say there should only be a single place where a configuration parameter is stored, and that single field:
* should read in the value from cassandra.yaml, optionally falling back to a sane default;
* should be the field that the JMX attribute reads and sets; and
* should be where any place that needs the current global setting gets it from. (However, there could be cases where you read in a global value at the start of a task and keep that value locally until the end of the task.)

Also, anything settable via JMX should be volatile or set via a synchronized setter, or else, according to the Java memory model, other threads may be stuck with the old setting.