Github user srowen commented on a diff in the pull request:

    https://github.com/apache/spark/pull/13422#discussion_r65268817

--- Diff: core/src/main/scala/org/apache/spark/SparkContext.scala ---

```diff
@@ -357,13 +359,22 @@ class SparkContext(config: SparkConf) extends Logging with ExecutorAllocationCli
    */
   def setLogLevel(logLevel: String) {
     val validLevels = Seq("ALL", "DEBUG", "ERROR", "FATAL", "INFO", "OFF", "TRACE", "WARN")
-    if (!validLevels.contains(logLevel)) {
+    // let's allow lowcase or mixed case too
+    if (!validLevels.contains(logLevel.toUpperCase)) {
```

--- End diff --

Can `validLevels` just become a `Set` in a `SparkContext` `object`? You could call `toUpperCase(Locale.ENGLISH)` once, save the result in a val, and use that same value both when checking validity and when passing it to log4j, just for tidiness. I don't think the `resetLogLevel` method is worth adding.
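To make the suggestion concrete, a minimal sketch of what the reviewer describes might look like the following. This is not the actual Spark patch: `SparkContextSketch`, `ValidLogLevels`, and the elided log4j call are placeholders; the real code would hold the `Set` in the `SparkContext` companion object and delegate to Spark's logging utilities.

```scala
import java.util.Locale

object SparkContextSketch {
  // Built once, as a Set, rather than a Seq re-created on every call.
  private val ValidLogLevels =
    Set("ALL", "DEBUG", "ERROR", "FATAL", "INFO", "OFF", "TRACE", "WARN")

  def setLogLevel(logLevel: String): Unit = {
    // Upper-case once with an explicit Locale, and reuse the same value
    // for both the validity check and the (elided) call into log4j.
    val upperCased = logLevel.toUpperCase(Locale.ENGLISH)
    require(ValidLogLevels.contains(upperCased),
      s"Supplied level $logLevel did not match one of: ${ValidLogLevels.mkString(", ")}")
    // e.g. Utils.setLogLevel(org.apache.log4j.Level.toLevel(upperCased))
  }
}
```

With this shape, lowercase or mixed-case input such as `"info"` or `"Warn"` passes the check, while an unknown level fails fast with a descriptive error, and log4j always receives the canonical upper-cased string.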