Repository: spark
Updated Branches:
  refs/heads/master 1e6c1d8bf -> c8f7691c6


[MINOR][DOC] Spacing items in migration guide for readability and consistency

## What changes were proposed in this pull request?

Currently, the migration guide has no space between items, which looks too 
compact and is hard to read. Some items already have spaces between them in 
the migration guide. This PR formats them consistently for readability.

Before:

![screen shot 2018-10-18 at 10 00 04 
am](https://user-images.githubusercontent.com/6477701/47126768-9e84fb80-d2bc-11e8-9211-84703486c553.png)

After:

![screen shot 2018-10-18 at 9 53 55 
am](https://user-images.githubusercontent.com/6477701/47126708-4fd76180-d2bc-11e8-9aa5-546f0622ca20.png)

## How was this patch tested?

Manually tested (see the screenshots above).

Closes #22761 from HyukjinKwon/minor-migration-doc.

Authored-by: hyukjinkwon <gurwls...@apache.org>
Signed-off-by: hyukjinkwon <gurwls...@apache.org>


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/c8f7691c
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/c8f7691c
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/c8f7691c

Branch: refs/heads/master
Commit: c8f7691c64a28174a54e8faa159b50a3836a7225
Parents: 1e6c1d8
Author: hyukjinkwon <gurwls...@apache.org>
Authored: Fri Oct 19 13:55:27 2018 +0800
Committer: hyukjinkwon <gurwls...@apache.org>
Committed: Fri Oct 19 13:55:27 2018 +0800

----------------------------------------------------------------------
 docs/sql-migration-guide-upgrade.md | 54 ++++++++++++++++++++++++++++++++
 1 file changed, 54 insertions(+)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/spark/blob/c8f7691c/docs/sql-migration-guide-upgrade.md
----------------------------------------------------------------------
diff --git a/docs/sql-migration-guide-upgrade.md 
b/docs/sql-migration-guide-upgrade.md
index 7faf8bd..7871a49 100644
--- a/docs/sql-migration-guide-upgrade.md
+++ b/docs/sql-migration-guide-upgrade.md
@@ -74,26 +74,47 @@ displayTitle: Spark SQL Upgrading Guide
   </table>
 
  - Since Spark 2.4, when there is a struct field in front of the IN operator 
before a subquery, the inner query must contain a struct field as well. In 
previous versions, instead, the fields of the struct were compared to the 
output of the inner query. E.g., if `a` is a `struct(a string, b int)`, in Spark 
2.4 `a in (select (1 as a, 'a' as b) from range(1))` is a valid query, while `a 
in (select 1, 'a' from range(1))` is not. In previous versions it was the 
opposite.
+
   - In versions 2.2.1+ and 2.3, if `spark.sql.caseSensitive` is set to true, 
then the `CURRENT_DATE` and `CURRENT_TIMESTAMP` functions incorrectly became 
case-sensitive and would resolve to columns (unless typed in lower case). In 
Spark 2.4 this has been fixed and the functions are no longer case-sensitive.
+
  - Since Spark 2.4, Spark will evaluate the set operations referenced in a 
query by following a precedence rule as per the SQL standard. If the order is 
not specified by parentheses, set operations are performed from left to right 
with the exception that all INTERSECT operations are performed before any 
UNION, EXCEPT or MINUS operations. The old behaviour of giving equal precedence 
to all the set operations is preserved under a newly added configuration 
`spark.sql.legacy.setopsPrecedence.enabled` with a default value of `false`. 
When this property is set to `true`, Spark evaluates the set operators from 
left to right as they appear in the query, given that no explicit ordering is 
enforced by parentheses.
+
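An illustrative Scala sketch of the precedence change above (assuming a running `SparkSession` bound to `spark`; the query itself is made up):

    // Since 2.4 this is parsed as: SELECT 1 UNION (SELECT 2 INTERSECT SELECT 2)
    spark.sql("SELECT 1 UNION SELECT 2 INTERSECT SELECT 2").show()

    // Restore the old left-to-right evaluation if needed
    spark.conf.set("spark.sql.legacy.setopsPrecedence.enabled", "true")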
   - Since Spark 2.4, Spark will display table description column Last Access 
value as UNKNOWN when the value was Jan 01 1970.
+
   - Since Spark 2.4, Spark maximizes the usage of a vectorized ORC reader for 
ORC files by default. To do that, `spark.sql.orc.impl` and 
`spark.sql.orc.filterPushdown` change their default values to `native` and 
`true` respectively.
+
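A minimal sketch of pinning (or reverting) the ORC defaults mentioned above, assuming a `SparkSession` named `spark`:

    // 2.4 defaults, set explicitly for clarity
    spark.conf.set("spark.sql.orc.impl", "native")
    spark.conf.set("spark.sql.orc.filterPushdown", "true")

    // Fall back to the Hive-based ORC reader if needed
    spark.conf.set("spark.sql.orc.impl", "hive")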
  - In PySpark, when Arrow optimization is enabled, `toPandas` previously just 
failed when Arrow optimization could not be used, whereas `createDataFrame` 
from a Pandas DataFrame allowed falling back to the non-optimized path. Now, both 
`toPandas` and `createDataFrame` from a Pandas DataFrame allow the fallback by 
default, which can be switched off by 
`spark.sql.execution.arrow.fallback.enabled`.
+
  - Since Spark 2.4, writing an empty dataframe to a directory launches at 
least one write task, even if the dataframe physically has no partition. This 
introduces a small behavior change: for self-describing file formats like 
Parquet and ORC, Spark creates a metadata-only file in the target directory 
when writing a 0-partition dataframe, so that schema inference can still work 
if users read that directory later. The new behavior is more reasonable and 
more consistent when writing empty dataframes.
+
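A Scala sketch of the 0-partition case described above (the output path is hypothetical; assumes a `SparkSession` named `spark`):

    import org.apache.spark.sql.Row
    import org.apache.spark.sql.types._

    val schema = StructType(Seq(StructField("id", IntegerType)))
    // A dataframe backed by an empty RDD has zero partitions
    val df = spark.createDataFrame(spark.sparkContext.emptyRDD[Row], schema)

    df.write.mode("overwrite").parquet("/tmp/empty_out")
    // Since 2.4 the directory contains a metadata-only file, so schema inference works:
    spark.read.parquet("/tmp/empty_out").printSchema()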
   - Since Spark 2.4, expression IDs in UDF arguments do not appear in column 
names. For example, a column name in Spark 2.4 is not `UDF:f(col0 AS colA#28)` 
but ``UDF:f(col0 AS `colA`)``.
+
  - Since Spark 2.4, writing a dataframe with an empty or nested empty schema 
using any file format (parquet, orc, json, text, csv, etc.) is not allowed. An 
exception is thrown when attempting to write dataframes with an empty schema.
+
  - Since Spark 2.4, Spark compares a DATE type with a TIMESTAMP type after 
promoting both sides to TIMESTAMP. Setting 
`spark.sql.legacy.compareDateTimestampInTimestamp` to `false` restores the previous 
behavior. This option will be removed in Spark 3.0.
+
  - Since Spark 2.4, creating a managed table with a non-empty location is not 
allowed. An exception is thrown when attempting to create a managed table with a 
non-empty location. Setting 
`spark.sql.legacy.allowCreatingManagedTableUsingNonemptyLocation` to `true` restores the 
previous behavior. This option will be removed in Spark 3.0.
+
  - Since Spark 2.4, renaming a managed table to an existing location is not 
allowed. An exception is thrown when attempting to rename a managed table to an 
existing location.
+
  - Since Spark 2.4, the type coercion rules can automatically promote the 
argument types of the variadic SQL functions (e.g., IN/COALESCE) to the widest 
common type, regardless of the order of the input arguments. In prior Spark versions, 
the promotion could fail for some specific orders (e.g., TimestampType, 
IntegerType and StringType) and throw an exception.
+
   - Since Spark 2.4, Spark has enabled non-cascading SQL cache invalidation in 
addition to the traditional cache invalidation mechanism. The non-cascading 
cache invalidation mechanism allows users to remove a cache without impacting 
its dependent caches. This new cache invalidation mechanism is used in 
scenarios where the data of the cache to be removed is still valid, e.g., 
calling unpersist() on a Dataset, or dropping a temporary view. This allows 
users to free up memory and keep the desired caches valid at the same time.
+
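A sketch of the non-cascading invalidation described above (assuming a `SparkSession` named `spark`):

    import org.apache.spark.sql.functions.col

    val base = spark.range(100).toDF("id").cache()
    val derived = base.filter(col("id") > 50).cache()
    derived.count()     // materialize both caches

    base.unpersist()    // since 2.4, this no longer drops the cache of `derived`
    derived.count()     // still answered from the cached data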
   - In version 2.3 and earlier, Spark converts Parquet Hive tables by default 
but ignores table properties like `TBLPROPERTIES (parquet.compression 'NONE')`. 
This happens for ORC Hive table properties like `TBLPROPERTIES (orc.compress 
'NONE')` in case of `spark.sql.hive.convertMetastoreOrc=true`, too. Since Spark 
2.4, Spark respects Parquet/ORC specific table properties while converting 
Parquet/ORC Hive tables. As an example, `CREATE TABLE t(id int) STORED AS 
PARQUET TBLPROPERTIES (parquet.compression 'NONE')` would generate Snappy 
parquet files during insertion in Spark 2.3, and in Spark 2.4, the result would 
be uncompressed parquet files.
+
  - Since Spark 2.0, Spark converts Parquet Hive tables by default for better 
performance. Since Spark 2.4, Spark converts ORC Hive tables by default, too. 
It means Spark uses its own ORC support by default instead of Hive SerDe. As an 
example, `CREATE TABLE t(id int) STORED AS ORC` would be handled with Hive 
SerDe in Spark 2.3, and in Spark 2.4, it would be converted into Spark's ORC 
data source table and ORC vectorization would be applied. Setting 
`spark.sql.hive.convertMetastoreOrc` to `false` restores the previous behavior.
+
  - In version 2.3 and earlier, a CSV row is considered malformed if at 
least one column value in the row is malformed; the CSV parser drops such rows in 
DROPMALFORMED mode or throws an error in FAILFAST mode. Since Spark 
2.4, a CSV row is considered malformed only when it contains malformed values for 
the columns requested from the CSV datasource; other values can be ignored. As an 
example, a CSV file contains the "id,name" header and one row "1234". In Spark 
2.4, selecting the id column yields a row with the single value 1234, 
but in Spark 2.3 and earlier that row is empty in DROPMALFORMED mode. To restore 
the previous behavior, set `spark.sql.csv.parser.columnPruning.enabled` to 
`false`.
+
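The CSV example above, sketched in Scala (the file path and contents are hypothetical):

    // /tmp/people.csv contains a header "id,name" and a single row "1234"
    val people = spark.read
      .option("header", "true")
      .option("mode", "DROPMALFORMED")
      .schema("id INT, name STRING")
      .csv("/tmp/people.csv")

    people.select("id").show()   // 2.4: one row with 1234; 2.3 and earlier: the row is dropped

    // Restore the old behavior
    spark.conf.set("spark.sql.csv.parser.columnPruning.enabled", "false")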
  - Since Spark 2.4, file listing for computing statistics is done in parallel 
by default. This can be disabled by setting 
`spark.sql.statistics.parallelFileListingInStatsComputation.enabled` to `false`.
+
   - Since Spark 2.4, Metadata files (e.g. Parquet summary files) and temporary 
files are not counted as data files when calculating table size during 
Statistics computation.
+
  - Since Spark 2.4, empty strings are saved as quoted empty strings `""`. In 
version 2.3 and earlier, empty strings are equal to `null` values and are not 
written as any characters in saved CSV files. For example, the row of `"a", 
null, "", 1` was written as `a,,,1`. Since Spark 2.4, the same row is saved as 
`a,,"",1`. To restore the previous behavior, set the CSV option `emptyValue` to 
an empty (not quoted) string.
+
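A sketch of restoring the pre-2.4 CSV output for empty strings (the output paths are hypothetical; assumes `spark.implicits._` is available):

    import spark.implicits._

    val rows = Seq(("a", null.asInstanceOf[String], "", 1)).toDF("c1", "c2", "c3", "c4")

    rows.write.csv("/tmp/rows_24")                            // 2.4 default: a,,"",1
    rows.write.option("emptyValue", "").csv("/tmp/rows_23")   // pre-2.4 style: a,,,1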
  - Since Spark 2.4, the LOAD DATA command supports the wildcards `?` and `*`, 
which match any one character and zero or more characters, respectively. 
Example: `LOAD DATA INPATH '/tmp/folder*/'` or `LOAD DATA INPATH 
'/tmp/part-?'`. Special characters like spaces also now work in paths. 
Example: `LOAD DATA INPATH '/tmp/folder name/'`.
+
  - In Spark version 2.3 and earlier, HAVING without GROUP BY is treated as 
WHERE. This means `SELECT 1 FROM range(10) HAVING true` is executed as `SELECT 
1 FROM range(10) WHERE true` and returns 10 rows. This violates the SQL standard 
and has been fixed in Spark 2.4. Since Spark 2.4, HAVING without GROUP BY is 
treated as a global aggregate, which means `SELECT 1 FROM range(10) HAVING 
true` will return only one row. To restore the previous behavior, set 
`spark.sql.legacy.parser.havingWithoutGroupByAsWhere` to `true`.
 
 ## Upgrading From Spark SQL 2.3.0 to 2.3.1 and above
@@ -103,8 +124,11 @@ displayTitle: Spark SQL Upgrading Guide
 ## Upgrading From Spark SQL 2.2 to 2.3
 
   - Since Spark 2.3, the queries from raw JSON/CSV files are disallowed when 
the referenced columns only include the internal corrupt record column (named 
`_corrupt_record` by default). For example, 
`spark.read.schema(schema).json(file).filter($"_corrupt_record".isNotNull).count()`
 and `spark.read.schema(schema).json(file).select("_corrupt_record").show()`. 
Instead, you can cache or save the parsed results and then send the same query. 
For example, `val df = spark.read.schema(schema).json(file).cache()` and then 
`df.filter($"_corrupt_record".isNotNull).count()`.
+
   - The `percentile_approx` function previously accepted numeric type input 
and output double type results. Now it supports date type, timestamp type and 
numeric types as input types. The result type is also changed to be the same as 
the input type, which is more reasonable for percentiles.
+
  - Since Spark 2.3, the deterministic predicates of a Join/Filter that come after 
the first non-deterministic predicate are also pushed down/through the child 
operators, if possible. In prior Spark versions, these filters were not eligible 
for predicate pushdown.
+
   - Partition column inference previously found incorrect common type for 
different inferred types, for example, previously it ended up with double type 
as the common type for double type and date type. Now it finds the correct 
common type for such conflicts. The conflict resolution follows the table below:
     <table class="table">
       <tr>
@@ -243,18 +267,29 @@ displayTitle: Spark SQL Upgrading Guide
     </table>
 
     Note that, for <b>DecimalType(38,0)*</b>, the table above intentionally 
does not cover all other combinations of scales and precisions because 
currently we only infer decimal type like `BigInteger`/`BigInt`. For example, 
1.1 is inferred as double type.
+
  - In PySpark, Pandas 0.19.2 or newer is now required if you want to use Pandas 
related functionalities, such as `toPandas`, `createDataFrame` from a Pandas 
DataFrame, etc.
+
   - In PySpark, the behavior of timestamp values for Pandas related 
functionalities was changed to respect session timezone. If you want to use the 
old behavior, you need to set a configuration 
`spark.sql.execution.pandas.respectSessionTimeZone` to `False`. See 
[SPARK-22395](https://issues.apache.org/jira/browse/SPARK-22395) for details.
+
  - In PySpark, `na.fill()` or `fillna` also accepts booleans and replaces 
nulls with booleans. In prior Spark versions, PySpark just ignored them and 
returned the original Dataset/DataFrame.
+
  - Since Spark 2.3, when either broadcast hash join or broadcast nested loop 
join is applicable, we prefer to broadcast the table that is explicitly 
specified in a broadcast hint. For details, see the section [Broadcast 
Hint](sql-performance-turing.html#broadcast-hint-for-sql-queries) and 
[SPARK-22489](https://issues.apache.org/jira/browse/SPARK-22489).
+
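A sketch of the broadcast hint mentioned above (the DataFrames `ordersDF` and `dimDF` are hypothetical):

    import org.apache.spark.sql.functions.broadcast

    // Since 2.3, the explicitly hinted side is preferred for broadcasting
    val joined = ordersDF.join(broadcast(dimDF), Seq("id"))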
  - Since Spark 2.3, when all inputs are binary, `functions.concat()` returns 
an output as binary. Otherwise, it returns a string. Before Spark 2.3, it 
always returned a string regardless of input types. To keep the old behavior, 
set `spark.sql.function.concatBinaryAsString` to `true`.
+
  - Since Spark 2.3, when all inputs are binary, SQL `elt()` returns an output 
as binary. Otherwise, it returns a string. Before Spark 2.3, it always 
returned a string regardless of input types. To keep the old behavior, set 
`spark.sql.function.eltOutputAsString` to `true`.
 
 - Since Spark 2.3, by default arithmetic operations between decimals return a 
rounded value if an exact representation is not possible (instead of returning 
NULL). This is compliant with the SQL ANSI 2011 specification and Hive's new 
behavior introduced in Hive 2.2 (HIVE-15331). This involves the following 
changes:
+
    - The rules to determine the result type of an arithmetic operation have 
been updated. In particular, if the precision / scale needed are out of the 
range of available values, the scale is reduced up to 6, in order to prevent 
the truncation of the integer part of the decimals. All the arithmetic 
operations are affected by the change, i.e. addition (`+`), subtraction (`-`), 
multiplication (`*`), division (`/`), remainder (`%`) and positive modulus 
(`pmod`).
+
     - Literal values used in SQL operations are converted to DECIMAL with the 
exact precision and scale needed by them.
+
    - The configuration `spark.sql.decimalOperations.allowPrecisionLoss` has 
been introduced. It defaults to `true`, which means the new behavior described 
here; if set to `false`, Spark uses the previous rules, i.e. it does not adjust 
the needed scale to represent the values and it returns NULL if an exact 
representation of the value is not possible.
+
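The legacy switch for the decimal changes above, as a one-line sketch (assuming a `SparkSession` named `spark`):

    // Revert to the old rules: no scale adjustment, NULL when the exact result does not fit
    spark.conf.set("spark.sql.decimalOperations.allowPrecisionLoss", "false")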
  - In PySpark, `df.replace` does not allow omitting `value` when `to_replace` 
is not a dictionary. Previously, `value` could be omitted in the other cases 
and defaulted to `None`, which is counterintuitive and error-prone.
+
  - The semantics of un-aliased subqueries have not been well defined, with confusing 
behaviors. Since Spark 2.3, we invalidate such confusing cases, for example 
`SELECT v.i from (SELECT i FROM v)`: Spark will throw an analysis exception in 
this case because users should not be able to use the qualifier inside a 
subquery. See [SPARK-20690](https://issues.apache.org/jira/browse/SPARK-20690) 
and [SPARK-21335](https://issues.apache.org/jira/browse/SPARK-21335) for more 
details.
 
  - When creating a `SparkSession` with `SparkSession.builder.getOrCreate()`, 
if there is an existing `SparkContext`, the builder used to try to update the 
`SparkConf` of the existing `SparkContext` with configurations specified in the 
builder; but the `SparkContext` is shared by all `SparkSession`s, so we should 
not update them. Since 2.3, the builder no longer updates the configurations. 
If you want to update them, you need to do so prior to creating a 
`SparkSession`.
@@ -268,15 +303,20 @@ displayTitle: Spark SQL Upgrading Guide
 ## Upgrading From Spark SQL 2.0 to 2.1
 
  - Datasource tables now store partition metadata in the Hive metastore. This 
means that Hive DDLs such as `ALTER TABLE PARTITION ... SET LOCATION` are now 
available for tables created with the Datasource API.
+
     - Legacy datasource tables can be migrated to this format via the `MSCK 
REPAIR TABLE` command. Migrating legacy tables is recommended to take advantage 
of Hive DDL support and improved planning performance.
+
     - To determine if a table has been migrated, look for the 
`PartitionProvider: Catalog` attribute when issuing `DESCRIBE FORMATTED` on the 
table.
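A sketch of the migration steps described above (the table name `my_ds_table` is hypothetical):

    // Import partition metadata for a legacy datasource table
    spark.sql("MSCK REPAIR TABLE my_ds_table")

    // A migrated table reports "PartitionProvider: Catalog"
    spark.sql("DESCRIBE FORMATTED my_ds_table").show(100, truncate = false)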
  - Changes to `INSERT OVERWRITE TABLE ... PARTITION ...` behavior for 
Datasource tables.
+
     - In prior Spark versions `INSERT OVERWRITE` overwrote the entire 
Datasource table, even when given a partition specification. Now only 
partitions matching the specification are overwritten.
+
     - Note that this still differs from the behavior of Hive tables, which is 
to overwrite only partitions overlapping with newly inserted data.
 
 ## Upgrading From Spark SQL 1.6 to 2.0
 
  - `SparkSession` is now the new entry point of Spark that replaces the old 
`SQLContext` and
+
   `HiveContext`. Note that the old SQLContext and HiveContext are kept for 
backward compatibility. A new `catalog` interface is accessible from 
`SparkSession` - the existing API for database and table access, such as 
`listTables`, `createExternalTable`, `dropTempView`, `cacheTable`, is moved 
here.
 
  - Dataset API and DataFrame API are unified. In Scala, `DataFrame` becomes a 
type alias for
@@ -288,15 +328,19 @@ displayTitle: Spark SQL Upgrading Guide
    single-node data frame notion in these languages.
 
  - Dataset and DataFrame API `unionAll` has been deprecated and replaced by 
`union`
+
 - Dataset and DataFrame API `explode` has been deprecated; alternatively, use 
`functions.explode()` with `select` or `flatMap`
+
  - Dataset and DataFrame API `registerTempTable` has been deprecated and 
replaced by `createOrReplaceTempView`
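The three deprecations above, with replacements side by side (the DataFrames `df1`, `df2`, `df` and the array column `items` are hypothetical):

    import org.apache.spark.sql.functions.explode

    df1.union(df2)                       // instead of df1.unionAll(df2)
    df.select(explode(df("items")))      // instead of df.explode(...)
    df.createOrReplaceTempView("t")      // instead of df.registerTempTable("t")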
 
  - Changes to `CREATE TABLE ... LOCATION` behavior for Hive tables.
+
     - From Spark 2.0, `CREATE TABLE ... LOCATION` is equivalent to `CREATE 
EXTERNAL TABLE ... LOCATION`
      in order to prevent accidentally dropping the existing data in the 
user-provided locations.
       That means, a Hive table created in Spark SQL with the user-specified 
location is always a Hive external table.
       Dropping external tables will not remove the data. Users are not allowed 
to specify the location for Hive managed tables.
       Note that this is different from the Hive behavior.
+
     - As a result, `DROP TABLE` statements on those tables will not remove the 
data.
 
  - `spark.sql.parquet.cacheMetadata` is no longer used.
@@ -315,6 +359,7 @@ displayTitle: Spark SQL Upgrading Guide
      --conf spark.sql.hive.thriftServer.singleSession=true \
      ...
    {% endhighlight %}
+
 - Since 1.6.1, the `withColumn` method in SparkR supports adding a new column to, 
   or replacing existing columns of the same name in, a DataFrame.
 
@@ -328,26 +373,35 @@ displayTitle: Spark SQL Upgrading Guide
  - Optimized execution using manually managed memory (Tungsten) is now enabled 
by default, along with
    code generation for expression evaluation. These features can both be 
disabled by setting
    `spark.sql.tungsten.enabled` to `false`.
+
  - Parquet schema merging is no longer enabled by default. It can be 
re-enabled by setting
    `spark.sql.parquet.mergeSchema` to `true`.
+
  - Resolution of strings to columns in python now supports using dots (`.`) to 
qualify the column or
    access nested values. For example `df['table.column.nestedField']`. 
However, this means that if
    your column name contains any dots you must now escape them using backticks 
(e.g., ``table.`column.with.dots`.nested``).
+
  - In-memory columnar storage partition pruning is on by default. It can be 
disabled by setting
    `spark.sql.inMemoryColumnarStorage.partitionPruning` to `false`.
+
 - Unlimited precision decimal columns are no longer supported; instead, Spark 
   SQL enforces a maximum precision of 38. When inferring schema from `BigDecimal` 
   objects, a precision of (38, 18) is now used. When no precision is specified 
   in DDL, the default remains `Decimal(10, 0)`.
+
  - Timestamps are now stored at a precision of 1us, rather than 1ns
+
  - In the `sql` dialect, floating point numbers are now parsed as decimal. 
HiveQL parsing remains
    unchanged.
+
 - The canonical names of SQL/DataFrame functions are now lower case (e.g., sum 
   vs SUM).
+
 - The JSON data source will not automatically load new files that are created by 
   other applications (i.e. files that are not inserted to the dataset through Spark SQL).
   For a JSON persistent table (i.e. the metadata of the table is stored in 
   Hive Metastore), users can use the `REFRESH TABLE` SQL command or `HiveContext`'s 
   `refreshTable` method to include those new files in the table. For a DataFrame 
   representing a JSON dataset, users need to recreate the DataFrame and the new 
   DataFrame will include the new files.
+
 - The `DataFrame.withColumn` method in PySpark supports adding a new column or 
   replacing existing columns of the same name.
 
 ## Upgrading from Spark SQL 1.3 to 1.4

