Github user tdas commented on a diff in the pull request:

    https://github.com/apache/spark/pull/15285#discussion_r83748039
  
    --- Diff: 
core/src/main/scala/org/apache/spark/util/logging/RollingFileAppender.scala ---
    @@ -142,6 +172,12 @@ private[spark] object RollingFileAppender {
       val SIZE_DEFAULT = (1024 * 1024).toString
       val RETAINED_FILES_PROPERTY = 
"spark.executor.logs.rolling.maxRetainedFiles"
       val DEFAULT_BUFFER_SIZE = 8192
    +  val ENABLE_COMPRESSION = "spark.executor.logs.rolling.enableCompression"
    +  val FILE_UNCOMPRESSED_LENGTH_CACHE_SIZE =
    +    "spark.executor.logs.rolling.fileUncompressedLengthCacheSize"
    --- End diff ---
    
    This is not a configuration inside the executor; it's inside the worker. So why
is it named "spark.executor"?
    
    It has nothing to do with the executor. The worker process (which manages
executors) runs this code, and it is independent of the application-specific
configuration in the executor.
    
    Spark worker configurations are named "spark.worker.*". See
http://spark.apache.org/docs/latest/spark-standalone.html
    
    So how about renaming it to "spark.worker.ui.fileUncompressedLengthCacheSize"?
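
    As a sketch (the object name below is illustrative, not part of the patch), the rename would move the constant into the "spark.worker.*" namespace while leaving its role unchanged:

    ```scala
    // Hypothetical sketch of the proposed rename: the property key follows the
    // "spark.worker.*" convention for worker-side configurations, instead of
    // the "spark.executor.*" namespace used in the original diff.
    object WorkerConfSketch {
      val FILE_UNCOMPRESSED_LENGTH_CACHE_SIZE =
        "spark.worker.ui.fileUncompressedLengthCacheSize"

      def main(args: Array[String]): Unit = {
        // The key now starts with the worker namespace, not the executor one.
        assert(FILE_UNCOMPRESSED_LENGTH_CACHE_SIZE.startsWith("spark.worker."))
        println(FILE_UNCOMPRESSED_LENGTH_CACHE_SIZE)
      }
    }
    ```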



