Github user ilganeli commented on a diff in the pull request:

    https://github.com/apache/spark/pull/5236#discussion_r28250502

    --- Diff: core/src/main/scala/org/apache/spark/HeartbeatReceiver.scala ---
    @@ -62,14 +62,17 @@ private[spark] class HeartbeatReceiver(sc: SparkContext)
       // "spark.network.timeout" uses "seconds", while `spark.storage.blockManagerSlaveTimeoutMs` uses
       // "milliseconds"
    -  private val executorTimeoutMs = sc.conf.getOption("spark.network.timeout").map(_.toLong * 1000).
    -    getOrElse(sc.conf.getLong("spark.storage.blockManagerSlaveTimeoutMs", 120000))
    -
    +  private val slaveTimeoutMs =
    +    sc.conf.getTimeAsMs("spark.storage.blockManagerSlaveTimeoutMs", "120s")
    +  private val executorTimeoutMs =
    +    sc.conf.getTimeAsSec("spark.network.timeout", s"${slaveTimeoutMs}ms") * 1000
    +
       // "spark.network.timeoutInterval" uses "seconds", while
       // "spark.storage.blockManagerTimeoutIntervalMs" uses "milliseconds"
    -  private val checkTimeoutIntervalMs =
    -    sc.conf.getOption("spark.network.timeoutInterval").map(_.toLong * 1000).
    -      getOrElse(sc.conf.getLong("spark.storage.blockManagerTimeoutIntervalMs", 60000))
    +  private val timeoutIntervalMs =
    +    sc.conf.getTimeAsMs("spark.storage.blockManagerTimeoutIntervalMs", "60s")
    +  private val checkTimeoutIntervalMs =
    +    sc.conf.getTimeAsSec("spark.network.timeoutInterval", s"${timeoutIntervalMs}ms") * 1000
    --- End diff --

    Sean, it can't go straight to ms: if `spark.network.timeout` were read with `getTimeAsMs`, a value set without a unit suffix would be assumed to be milliseconds, when this property's unit is really seconds. Parsing it in seconds and multiplying by 1000 preserves the existing meaning.

    -----Original Message-----
    From: Sean Owen [notificati...@github.com]
    Sent: Saturday, April 11, 2015 12:51 PM Eastern Standard Time
    To: apache/spark
    Cc: Ganelin, Ilya
    Subject: Re: [spark] [SPARK-5931][CORE] Use consistent naming for time properties (#5236)

    In core/src/main/scala/org/apache/spark/HeartbeatReceiver.scala
    <https://github.com/apache/spark/pull/5236#discussion_r28196499>:

    > // "spark.network.timeoutInterval" uses "seconds", while
    > // "spark.storage.blockManagerTimeoutIntervalMs" uses "milliseconds"
    > -  private val checkTimeoutIntervalMs =
    > -    sc.conf.getOption("spark.network.timeoutInterval").map(_.toLong * 1000).
    > -      getOrElse(sc.conf.getLong("spark.storage.blockManagerTimeoutIntervalMs", 60000))
    > +  private val timeoutIntervalMs =
    > +    sc.conf.getTimeAsMs("spark.storage.blockManagerTimeoutIntervalMs", "60s")
    > +  private val checkTimeoutIntervalMs =
    > +    sc.conf.getTimeAsSec("spark.network.timeoutInterval", s"${timeoutIntervalMs}ms") * 1000

    Same, can go straight to ms?
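For readers following the unit question, here is a minimal, hypothetical Scala sketch of the default-unit behavior at issue. It assumes the time-parsing semantics this PR introduces (a value with no unit suffix is parsed in the reading method's own unit) and uses the SparkConf method names `getTimeAsSeconds`/`getTimeAsMs` as they exist in released Spark; the diff's `getTimeAsSec` spelling corresponds to `getTimeAsSeconds` here.

    import org.apache.spark.SparkConf

    object TimeoutUnitsSketch {
      def main(args: Array[String]): Unit = {
        // A user who set the property the pre-existing way: a bare number of seconds.
        val conf = new SparkConf(loadDefaults = false)
          .set("spark.network.timeout", "120")

        // Reading in seconds keeps the historical meaning: 120 seconds.
        val asSec = conf.getTimeAsSeconds("spark.network.timeout", "120s")

        // Reading straight as ms would silently reinterpret the same value as 120 ms.
        val asMs = conf.getTimeAsMs("spark.network.timeout", "120s")

        println(s"parsed as seconds: $asSec s -> ${asSec * 1000} ms") // 120 s -> 120000 ms
        println(s"parsed as ms:      $asMs ms")                       // 120 ms
      }
    }

This is why the patch parses `spark.network.timeout` in seconds and then multiplies by 1000, rather than calling `getTimeAsMs` on it directly.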