This is an automated email from the ASF dual-hosted git repository.

wenchen pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/master by this push:
     new d4a16f4  [SPARK-27419][FOLLOWUP][DOCS] Add note about spark.executor.heartbeatInterval change to migration guide
d4a16f4 is described below

commit d4a16f46f71021178bfc7dca511e47390986197d
Author: Sean Owen <sean.o...@databricks.com>
AuthorDate: Mon Apr 22 12:02:16 2019 +0800

    [SPARK-27419][FOLLOWUP][DOCS] Add note about spark.executor.heartbeatInterval change to migration guide
    
    ## What changes were proposed in this pull request?
    
    Add note about spark.executor.heartbeatInterval change to migration guide
    See also https://github.com/apache/spark/pull/24329
    
    ## How was this patch tested?
    
    N/A
    
    Closes #24432 from srowen/SPARK-27419.2.
    
    Authored-by: Sean Owen <sean.o...@databricks.com>
    Signed-off-by: Wenchen Fan <wenc...@databricks.com>
---
 docs/sql-migration-guide-upgrade.md | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/docs/sql-migration-guide-upgrade.md b/docs/sql-migration-guide-upgrade.md
index b193522..90a7d8d 100644
--- a/docs/sql-migration-guide-upgrade.md
+++ b/docs/sql-migration-guide-upgrade.md
@@ -124,6 +124,14 @@ license: |
 
   - In Spark version 2.4, when a Spark session is created via `cloneSession()`, the newly created Spark session inherits its configuration from its parent `SparkContext`, even though the same configuration may exist with a different value in its parent Spark session. Since Spark 3.0, the configurations of a parent `SparkSession` have higher precedence than those of the parent `SparkContext`.
 
+## Upgrading from Spark SQL 2.4 to 2.4.1
+
+  - The value of `spark.executor.heartbeatInterval`, when specified without units like "30" rather than "30s", was
+    inconsistently interpreted as both seconds and milliseconds in Spark 2.4.0 in different parts of the code.
+    Unitless values are now consistently interpreted as milliseconds. Applications that set values like "30"
+    need to specify a value with units like "30s" now, to avoid being interpreted as milliseconds; otherwise,
+    the extremely short interval that results will likely cause applications to fail.
+
 ## Upgrading From Spark SQL 2.3 to 2.4
 
   - In Spark version 2.3 and earlier, the second parameter to the `array_contains` function is implicitly promoted to the element type of the first, array-type parameter. This type promotion can be lossy and may cause `array_contains` to return a wrong result. This has been addressed in 2.4 by employing a safer type promotion mechanism. The resulting changes in behavior are illustrated in the table below.
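
As a small sketch of the `array_contains` promotion change described in the context paragraph above, assuming a running SparkSession named `spark`; the expected results are those given in the 2.4 migration guide's table:

    // Spark 2.3 and earlier: 1.34D was cast (lossily) to the array's int
    // element type, becoming 1, so the query returned true.
    // Spark 2.4 and later: both sides are promoted to double -> false.
    spark.sql("SELECT array_contains(array(1), 1.34D)").show()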

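To illustrate the heartbeat note added by this commit: unitless values are now read as milliseconds, so applications should always pass an explicit unit. Below is a minimal Scala sketch of setting the interval via SparkConf; the app name and the "30s" value are illustrative, not taken from the commit:

    import org.apache.spark.SparkConf
    import org.apache.spark.sql.SparkSession

    // Give the heartbeat interval an explicit unit: a bare "30" is now
    // read as 30 milliseconds, an interval so short that executors will
    // quickly be considered lost and the application will likely fail.
    val conf = new SparkConf()
      .set("spark.executor.heartbeatInterval", "30s")  // not "30"

    val spark = SparkSession.builder()
      .config(conf)
      .appName("heartbeat-interval-example")  // hypothetical app name
      .getOrCreate()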

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org
