Repository: spark
Updated Branches:
refs/heads/branch-2.0 d0bcec157 -> 1890f5fdf
Revert "[SPARK-15285][SQL] Generated SpecificSafeProjection.apply method grows
beyond 64 KB"
This reverts commit d0bcec157d2bd2ed4eff848f831841bef4745904.
Repository: spark
Updated Branches:
refs/heads/master fa244e5a9 -> de726b0d5
Revert "[SPARK-15285][SQL] Generated SpecificSafeProjection.apply method grows
beyond 64 KB"
This reverts commit fa244e5a90690d6a31be50f2aa203ae1a2e9a1cf.
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Repository: spark
Updated Branches:
refs/heads/branch-2.0 c189be976 -> d0bcec157
[SPARK-15285][SQL] Generated SpecificSafeProjection.apply method grows beyond
64 KB
## What changes were proposed in this pull request?
This PR splits the generated code for ```SafeProjection.apply``` by using
Repository: spark
Updated Branches:
refs/heads/master d20771645 -> fa244e5a9
[SPARK-15285][SQL] Generated SpecificSafeProjection.apply method grows beyond
64 KB
## What changes were proposed in this pull request?
This PR splits the generated code for ```SafeProjection.apply``` by using
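The fix pattern behind SPARK-15285 is to split the generated code across several helper methods so no single method exceeds the JVM's 64 KB bytecode-per-method limit. A minimal stdlib sketch of that chunking idea (hypothetical names, not Spark's actual CodegenContext API):

```python
# Sketch of the codegen-splitting idea: instead of emitting one huge apply()
# body, group the per-field statements into fixed-size helper methods and
# have apply() call each helper in turn.

def split_expressions(statements, chunk_size=100):
    """Group generated statements into helper-method bodies so that no
    single method grows beyond the JVM's 64 KB bytecode limit."""
    helpers = []
    for i in range(0, len(statements), chunk_size):
        body = "\n    ".join(statements[i:i + chunk_size])
        helpers.append(f"def apply_{i // chunk_size}(ctx):\n    {body}")
    calls = "\n    ".join(f"apply_{j}(ctx)" for j in range(len(helpers)))
    entry = f"def apply(ctx):\n    {calls}"
    return helpers + [entry]

# 250 generated statements -> 3 helpers of <=100 statements each, plus apply()
stmts = [f"ctx.values[{k}] = convert(ctx.row[{k}])" for k in range(250)]
methods = split_expressions(stmts)
```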
Repository: spark
Updated Branches:
refs/heads/branch-2.0 f3162b96d -> c189be976
[SPARK-15485][SQL][DOCS] Spark SQL Configuration
What changes were proposed in this pull request?
So far, the Configuration page in the official documentation does not have a
section for Spark SQL.
Repository: spark
Updated Branches:
refs/heads/master a15ca5533 -> d20771645
[SPARK-15485][SQL][DOCS] Spark SQL Configuration
What changes were proposed in this pull request?
So far, the Configuration page in the official documentation does not have a
section for Spark SQL.
Repository: spark
Updated Branches:
refs/heads/branch-2.0 220b9a08e -> f3162b96d
[SPARK-15464][ML][MLLIB][SQL][TESTS] Replace SQLContext and SparkContext with
SparkSession using builder pattern in python test code
## What changes were proposed in this pull request?
Replace SQLContext and
Repository: spark
Updated Branches:
refs/heads/master 5afd927a4 -> a15ca5533
[SPARK-15464][ML][MLLIB][SQL][TESTS] Replace SQLContext and SparkContext with
SparkSession using builder pattern in python test code
## What changes were proposed in this pull request?
Replace SQLContext and
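The builder pattern the tests moved to is `pyspark.sql.SparkSession.builder.master(...).appName(...).getOrCreate()`. A stdlib stand-in that only mimics the fluent API and the getOrCreate "reuse the active session" semantics (illustrative, not Spark code):

```python
# Toy builder illustrating the SparkSession.builder pattern: fluent setters
# plus a getOrCreate() that reuses an existing session instead of building
# a second one.

class SessionBuilder:
    _active = None  # module-wide singleton, like the active SparkSession

    def __init__(self):
        self._conf = {}

    def config(self, key, value):
        self._conf[key] = value
        return self  # fluent: each setter returns the builder

    def getOrCreate(self):
        # Reuse the active session if one exists; otherwise build one.
        if SessionBuilder._active is None:
            SessionBuilder._active = {"conf": dict(self._conf)}
        return SessionBuilder._active

s1 = SessionBuilder().config("spark.app.name", "test").getOrCreate()
s2 = SessionBuilder().getOrCreate()  # the same session object is reused
```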
Repository: spark
Updated Branches:
refs/heads/branch-2.0 3def56120 -> 220b9a08e
[SPARK-15311][SQL] Disallow DML on Regular Tables when Using In-Memory Catalog
What changes were proposed in this pull request?
So far, when using In-Memory Catalog, we allow DDL operations for the tables.
Repository: spark
Updated Branches:
refs/heads/master 01659bc50 -> 5afd927a4
[SPARK-15311][SQL] Disallow DML on Regular Tables when Using In-Memory Catalog
What changes were proposed in this pull request?
So far, when using In-Memory Catalog, we allow DDL operations for the tables.
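The shape of the resulting check can be sketched as a guard that rejects DML up front when the session runs on the in-memory catalog (names and command set are illustrative, not Spark's internal API):

```python
# Hypothetical guard illustrating the SPARK-15311 check: with the in-memory
# catalog, DML against a regular (non-temporary) table cannot work, so the
# command is rejected before execution.

DML_COMMANDS = {"INSERT", "LOAD"}

def check_dml(command, catalog, is_temp_view):
    if catalog == "in-memory" and command in DML_COMMANDS and not is_temp_view:
        raise ValueError(
            f"{command} is not allowed on regular tables "
            "when using the in-memory catalog")
    return True

check_dml("SELECT", "in-memory", is_temp_view=False)  # queries are fine
check_dml("INSERT", "hive", is_temp_view=False)       # Hive catalog is fine
```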
Repository: spark
Updated Branches:
refs/heads/branch-2.0 ca271c792 -> 3def56120
[SPARK-15431][SQL] Support LIST FILE(s)|JAR(s) command natively
## What changes were proposed in this pull request?
Currently the command `ADD FILE|JAR ` is supported natively in
SparkSQL. However, when this command
Repository: spark
Updated Branches:
refs/heads/master a8e97d17b -> 01659bc50
[SPARK-15431][SQL] Support LIST FILE(s)|JAR(s) command natively
## What changes were proposed in this pull request?
Currently the command `ADD FILE|JAR ` is supported natively in
SparkSQL. However, when this command is
Repository: spark
Updated Branches:
refs/heads/branch-2.0 4673b88b4 -> ca271c792
[MINOR][SPARKR][DOC] Add a description for running unit tests in Windows
## What changes were proposed in this pull request?
This PR adds the description for running unit tests in Windows.
## How was this patch
Repository: spark
Updated Branches:
refs/heads/master 03c7b7c4b -> a8e97d17b
[MINOR][SPARKR][DOC] Add a description for running unit tests in Windows
## What changes were proposed in this pull request?
This PR adds the description for running unit tests in Windows.
## How was this patch
Repository: spark
Updated Branches:
refs/heads/branch-2.0 80bf4ce30 -> 4673b88b4
[SPARK-15315][SQL] Adding error check to the CSV datasource writer for
unsupported complex data types.
## What changes were proposed in this pull request?
Adds error handling to the CSV writer for unsupported
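Since CSV is a flat format, the check amounts to rejecting any column of a complex type before writing. A stdlib sketch of that validation (field types here are plain strings, not Spark's DataType objects):

```python
# Sketch of the SPARK-15315 idea: writing array/map/struct columns to CSV
# should fail with a clear error instead of producing garbage output.

COMPLEX_TYPES = {"array", "map", "struct"}

def verify_csv_schema(schema):
    """schema: list of (column_name, type_name) pairs."""
    for name, type_name in schema:
        if type_name in COMPLEX_TYPES:
            raise ValueError(
                f"CSV data source does not support {type_name} data type "
                f"(column '{name}')")

verify_csv_schema([("id", "int"), ("name", "string")])  # flat schema: OK
```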
Repository: spark
Updated Branches:
refs/heads/branch-2.0 c55a39c97 -> 80bf4ce30
[MINOR][SQL][DOCS] Add notes of the deterministic assumption on UDF functions
## What changes were proposed in this pull request?
Spark assumes that UDF functions are deterministic. This PR adds explicit notes
Repository: spark
Updated Branches:
refs/heads/master 2585d2b32 -> 37c617e4f
[MINOR][SQL][DOCS] Add notes of the deterministic assumption on UDF functions
## What changes were proposed in this pull request?
Spark assumes that UDF functions are deterministic. This PR adds explicit notes
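Why the determinism assumption matters: the optimizer is free to evaluate a UDF more than once per input row, for example when a rewrite duplicates an expression. A stdlib sketch (not Spark code) of how a non-deterministic function then yields different values at each evaluation site:

```python
# A stateful function stands in for a UDF that calls random(), reads the
# current time, etc. If the engine evaluates it twice for the same row --
# which the determinism assumption permits -- the two results disagree.

calls = {"n": 0}

def nondeterministic_udf(x):
    calls["n"] += 1  # hidden state makes each call return something new
    return x + calls["n"]

row = 10
# A rewrite that duplicates the expression evaluates the UDF twice:
first, second = nondeterministic_udf(row), nondeterministic_udf(row)
# first != second, even though both came from the same input row.
```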
Repository: spark
Updated Branches:
refs/heads/master 07c36a2f0 -> 2585d2b32
[SPARK-15279][SQL] Catch conflicting SerDe when creating table
## What changes were proposed in this pull request?
The user may do something like:
```
CREATE TABLE my_tab ROW FORMAT SERDE 'anything' STORED AS
```
Repository: spark
Updated Branches:
refs/heads/branch-2.0 655d88293 -> c55a39c97
[SPARK-15279][SQL] Catch conflicting SerDe when creating table
## What changes were proposed in this pull request?
The user may do something like:
```
CREATE TABLE my_tab ROW FORMAT SERDE 'anything' STORED AS
```
Repository: spark
Updated Branches:
refs/heads/master 80091b8a6 -> 07c36a2f0
[SPARK-15471][SQL] ScalaReflection cleanup
## What changes were proposed in this pull request?
1. simplify the logic of deserializing option type.
2. simplify the logic of serializing array type, and remove
Repository: spark
Updated Branches:
refs/heads/branch-2.0 4462da707 -> 6eb8ec6f4
[SPARK-14031][SQL] speedup CSV writer
## What changes were proposed in this pull request?
Currently, we create a CSVWriter for every row; this is very expensive and memory
hungry, and took about 15 seconds to write
Repository: spark
Updated Branches:
refs/heads/master dafcb05c2 -> 80091b8a6
[SPARK-14031][SQL] speedup CSV writer
## What changes were proposed in this pull request?
Currently, we create a CSVWriter for every row; this is very expensive and memory
hungry, and took about 15 seconds to write out 1
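The fix reuses a single writer for all rows instead of constructing one per row. A stdlib sketch of the before/after shape using Python's `csv` module (Spark's writer is a different library; the names here are only illustrative):

```python
# Before/after shape of the SPARK-14031 change: same output either way, but
# the reused writer avoids allocating a new writer object (and its buffers)
# on every row.
import csv
import io

rows = [[i, f"name-{i}"] for i in range(1000)]

def write_per_row(rows):
    # Slow shape: a fresh writer per row.
    out = io.StringIO()
    for row in rows:
        csv.writer(out).writerow(row)  # new writer object every iteration
    return out.getvalue()

def write_reused(rows):
    # Fast shape: one writer reused for all rows.
    out = io.StringIO()
    writer = csv.writer(out)
    for row in rows:
        writer.writerow(row)
    return out.getvalue()
```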
Repository: spark
Updated Branches:
refs/heads/branch-2.0 ddac9f262 -> 4462da707
[SPARK-15425][SQL] Disallow cross joins by default
## What changes were proposed in this pull request?
In order to prevent users from inadvertently writing queries with cartesian
joins, this patch introduces a
Repository: spark
Updated Branches:
refs/heads/master fc44b694b -> dafcb05c2
[SPARK-15425][SQL] Disallow cross joins by default
## What changes were proposed in this pull request?
In order to prevent users from inadvertently writing queries with cartesian
joins, this patch introduces a new
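The behavior can be sketched as a guard: a join with no join condition is a cartesian product and is rejected unless the user opts in (in Spark the opt-in is the `spark.sql.crossJoin.enabled` conf; the check below is an illustration, not Spark's planner code):

```python
# Sketch of the SPARK-15425 guard: condition-less joins fail fast unless the
# cross-join conf is explicitly enabled.

def check_join(condition, conf):
    if condition is None and not conf.get("spark.sql.crossJoin.enabled", False):
        raise ValueError(
            "Cartesian joins could be prohibitively expensive and are "
            "disabled by default. To explicitly enable them, set "
            "spark.sql.crossJoin.enabled = true")
    return True

check_join(condition="a.id = b.id", conf={})                            # fine
check_join(condition=None, conf={"spark.sql.crossJoin.enabled": True})  # opted in
```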