spark git commit: [SPARK-21159][CORE] Don't try to connect to launcher in standalone cluster mode.

2017-06-23 Thread wenchen
Repository: spark Updated Branches: refs/heads/branch-2.2 a3088d23a -> 96c04f1ed [SPARK-21159][CORE] Don't try to connect to launcher in standalone cluster mode. Monitoring for standalone cluster mode is not implemented (see SPARK-11033), but the same scheduler implementation is used, and if

spark git commit: [SPARK-21159][CORE] Don't try to connect to launcher in standalone cluster mode.

2017-06-23 Thread wenchen
Repository: spark Updated Branches: refs/heads/master b837bf9ae -> bfd73a7c4 [SPARK-21159][CORE] Don't try to connect to launcher in standalone cluster mode. Monitoring for standalone cluster mode is not implemented (see SPARK-11033), but the same scheduler implementation is used, and if it
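
The gist of the fix, as a minimal sketch (simplified, with illustrative names rather than the actual diff): only attempt the launcher handshake when the driver runs in client mode, since a standalone cluster-mode driver has no launcher process to connect back to.

```scala
// Illustrative only: a cluster-mode driver in standalone mode has no
// spark-launcher process listening, so the connect() handshake is skipped.
if (sc.deployMode == "client") {
  launcherBackend.connect()
}
```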

spark git commit: [SPARK-20555][SQL] Fix mapping of Oracle DECIMAL types to Spark types in read path

2017-06-23 Thread lixiao
Repository: spark Updated Branches: refs/heads/branch-2.1 bcaf06c49 -> f12883e32 [SPARK-20555][SQL] Fix mapping of Oracle DECIMAL types to Spark types in read path This PR is to revert some code changes in the read path of https://github.com/apache/spark/pull/14377. The original fix is

spark git commit: [SPARK-20555][SQL] Fix mapping of Oracle DECIMAL types to Spark types in read path

2017-06-23 Thread lixiao
Repository: spark Updated Branches: refs/heads/branch-2.2 3394b0641 -> a3088d23a [SPARK-20555][SQL] Fix mapping of Oracle DECIMAL types to Spark types in read path ## What changes were proposed in this pull request? This PR is to revert some code changes in the read path of

spark git commit: [SPARK-20555][SQL] Fix mapping of Oracle DECIMAL types to Spark types in read path

2017-06-23 Thread lixiao
Repository: spark Updated Branches: refs/heads/master 7525ce98b -> b837bf9ae [SPARK-20555][SQL] Fix mapping of Oracle DECIMAL types to Spark types in read path ## What changes were proposed in this pull request? This PR is to revert some code changes in the read path of
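
For context, Oracle DECIMAL/NUMBER columns reach Spark through the JDBC dialect's type-mapping hook in the read path. A hedged sketch of that kind of mapping (illustrative only, not the exact logic this PR reverts):

```scala
import java.sql.Types
import org.apache.spark.sql.types._

// Illustrative only: map Oracle NUMERIC metadata to a Spark DecimalType.
// The precision/scale handling here is a simplifying assumption.
def oracleDecimalToCatalyst(sqlType: Int, precision: Int, scale: Int): Option[DataType] =
  if (sqlType == Types.NUMERIC && precision > 0) Some(DecimalType(precision, scale))
  else None
```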

spark git commit: [SPARK-20431][SS][FOLLOWUP] Specify a schema by using a DDL-formatted string in DataStreamReader

2017-06-23 Thread wenchen
Repository: spark Updated Branches: refs/heads/master 03eb6117a -> 7525ce98b [SPARK-20431][SS][FOLLOWUP] Specify a schema by using a DDL-formatted string in DataStreamReader ## What changes were proposed in this pull request? This pr supported a DDL-formatted string in
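
With this follow-up, `DataStreamReader.schema` also accepts a DDL-formatted string, so a streaming source's schema can be declared without building a `StructType` by hand. A short usage sketch (the path and column names are invented):

```scala
// Hypothetical example: declare the schema as a DDL string instead of a StructType.
val events = spark.readStream
  .schema("eventTime TIMESTAMP, userId STRING, amount DOUBLE")
  .json("/tmp/events")
```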

spark git commit: [SPARK-21164][SQL] Remove isTableSample from Sample and isGenerated from Alias and AttributeReference

2017-06-23 Thread lixiao
Repository: spark Updated Branches: refs/heads/master 13c2a4f2f -> 03eb6117a [SPARK-21164][SQL] Remove isTableSample from Sample and isGenerated from Alias and AttributeReference ## What changes were proposed in this pull request? `isTableSample` and `isGenerated` were introduced for SQL

spark git commit: [SPARK-20417][SQL] Move subquery error handling to checkAnalysis from Analyzer

2017-06-23 Thread lixiao
Repository: spark Updated Branches: refs/heads/master 4cc62951a -> 13c2a4f2f [SPARK-20417][SQL] Move subquery error handling to checkAnalysis from Analyzer ## What changes were proposed in this pull request? Currently we do a lot of validations for subquery in the Analyzer. We should move

spark git commit: [MINOR][DOCS] Docs in DataFrameNaFunctions.scala use wrong method

2017-06-23 Thread lixiao
Repository: spark Updated Branches: refs/heads/branch-2.1 f8fd3b48b -> bcaf06c49 [MINOR][DOCS] Docs in DataFrameNaFunctions.scala use wrong method ## What changes were proposed in this pull request? * Following the first few examples in this file, the remaining methods should also be

spark git commit: [MINOR][DOCS] Docs in DataFrameNaFunctions.scala use wrong method

2017-06-23 Thread lixiao
Repository: spark Updated Branches: refs/heads/branch-2.2 f16026738 -> 3394b0641 [MINOR][DOCS] Docs in DataFrameNaFunctions.scala use wrong method ## What changes were proposed in this pull request? * Following the first few examples in this file, the remaining methods should also be

spark git commit: [MINOR][DOCS] Docs in DataFrameNaFunctions.scala use wrong method

2017-06-23 Thread lixiao
Repository: spark Updated Branches: refs/heads/master 2ebd0838d -> 4cc62951a [MINOR][DOCS] Docs in DataFrameNaFunctions.scala use wrong method ## What changes were proposed in this pull request? * Following the first few examples in this file, the remaining methods should also be methods of
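
For reference, the methods documented in `DataFrameNaFunctions` are reached through `Dataset.na`; a small usage sketch (the DataFrame and column names are invented):

```scala
// Hypothetical DataFrame `df` with a numeric "age" column.
val cleaned = df.na.drop()                    // drop rows containing any null or NaN
val filled  = df.na.fill(0.0, Seq("age"))     // replace nulls/NaN in "age" with 0.0
```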

spark git commit: [SPARK-21192][SS] Preserve State Store provider class configuration across StreamingQuery restarts

2017-06-23 Thread zsxwing
Repository: spark Updated Branches: refs/heads/master 1ebe7ffe0 -> 2ebd0838d [SPARK-21192][SS] Preserve State Store provider class configuration across StreamingQuery restarts ## What changes were proposed in this pull request? If the SQL conf for StateStore provider class is changed
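
The conf in question selects the `StateStoreProvider` implementation used by stateful streaming operators. A hedged sketch of setting it (the provider class below is hypothetical, and the key is the SQL conf name this JIRA refers to):

```scala
// Hypothetical custom provider; with this change the chosen class is preserved
// across StreamingQuery restarts instead of being re-read from the session conf.
spark.conf.set(
  "spark.sql.streaming.stateStore.providerClass",
  "com.example.MyStateStoreProvider")
```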

spark git commit: [SPARK-21181] Release byteBuffers to suppress netty error messages

2017-06-23 Thread vanzin
Repository: spark Updated Branches: refs/heads/branch-2.1 1a98d5d0a -> f8fd3b48b [SPARK-21181] Release byteBuffers to suppress netty error messages ## What changes were proposed in this pull request? We are explicitly calling release on the byteBufs used to encode the string to Base64 to

spark git commit: [SPARK-21181] Release byteBuffers to suppress netty error messages

2017-06-23 Thread vanzin
Repository: spark Updated Branches: refs/heads/branch-2.2 9d2980832 -> f16026738 [SPARK-21181] Release byteBuffers to suppress netty error messages ## What changes were proposed in this pull request? We are explicitly calling release on the byteBufs used to encode the string to Base64 to

spark git commit: [SPARK-21181] Release byteBuffers to suppress netty error messages

2017-06-23 Thread vanzin
Repository: spark Updated Branches: refs/heads/master b803b66a8 -> 1ebe7ffe0 [SPARK-21181] Release byteBuffers to suppress netty error messages ## What changes were proposed in this pull request? We are explicitly calling release on the byteBufs used to encode the string to Base64 to
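
As an illustration of the pattern described (not the actual Spark code path): Netty's `Base64.encode` allocates a new `ByteBuf`, so both the source and the encoded buffer should be released once the result has been copied out, otherwise Netty's leak detection logs error messages.

```scala
import java.nio.charset.StandardCharsets
import io.netty.buffer.Unpooled
import io.netty.handler.codec.base64.Base64

// Illustrative only: release both buffers after extracting the encoded string.
val src = Unpooled.wrappedBuffer("some payload".getBytes(StandardCharsets.UTF_8))
val encoded = Base64.encode(src)
val asString =
  try encoded.toString(StandardCharsets.UTF_8)
  finally {
    src.release()
    encoded.release()
  }
```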

spark git commit: [SPARK-21180][SQL] Remove conf from stats functions since now we have conf in LogicalPlan

2017-06-23 Thread lixiao
Repository: spark Updated Branches: refs/heads/master 07479b3cf -> b803b66a8 [SPARK-21180][SQL] Remove conf from stats functions since now we have conf in LogicalPlan ## What changes were proposed in this pull request? After wiring `SQLConf` in logical plan ([PR

spark git commit: [SPARK-21149][R] Add job description API for R

2017-06-23 Thread felixcheung
Repository: spark Updated Branches: refs/heads/master f3dea6079 -> 07479b3cf [SPARK-21149][R] Add job description API for R ## What changes were proposed in this pull request? Extend `setJobDescription` to SparkR API. ## How was this patch tested? It looks difficult to add a test. Manually
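
The SparkR function exposes the existing JVM-side API; as a reference point, the Scala equivalent is:

```scala
// Label subsequent jobs so they are easier to identify in the web UI.
spark.sparkContext.setJobDescription("nightly aggregation job")
```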

spark git commit: [SPARK-21144][SQL] Print a warning if the data schema and partition schema have duplicate columns

2017-06-23 Thread lixiao
Repository: spark Updated Branches: refs/heads/branch-2.2 b6749ba09 -> 9d2980832 [SPARK-21144][SQL] Print a warning if the data schema and partition schema have duplicate columns ## What changes were proposed in this pull request? The current master outputs unexpected results when the

spark git commit: [SPARK-21144][SQL] Print a warning if the data schema and partition schema have duplicate columns

2017-06-23 Thread lixiao
Repository: spark Updated Branches: refs/heads/master 5dca10b8f -> f3dea6079 [SPARK-21144][SQL] Print a warning if the data schema and partition schema have duplicate columns ## What changes were proposed in this pull request? The current master outputs unexpected results when the data
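
A sketch of the situation the warning targets (paths and column names are invented): the files under a partition directory themselves contain a column with the same name as the partition column, so the data schema and the inferred partition schema overlap.

```scala
// Hypothetical layout: Parquet files under /data/year=2017/ that also contain
// a "year" column. With this change, reading the table logs a warning about the
// duplicated "year" column rather than silently producing unexpected results.
val df = spark.read.parquet("/data")
```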

spark git commit: [SPARK-21193][PYTHON] Specify Pandas version in setup.py

2017-06-23 Thread wenchen
Repository: spark Updated Branches: refs/heads/master acd208ee5 -> 5dca10b8f [SPARK-21193][PYTHON] Specify Pandas version in setup.py ## What changes were proposed in this pull request? It looks like we missed specifying the Pandas version. This PR proposes to fix it. For the current state, it

spark git commit: [SPARK-21115][CORE] If the cores left are less than the coresPerExecutor, the cores left will not be allocated, so this should not be checked in every schedule

2017-06-23 Thread wenchen
Repository: spark Updated Branches: refs/heads/master 153dd49b7 -> acd208ee5 [SPARK-21115][CORE] If the cores left are less than the coresPerExecutor, the cores left will not be allocated, so this should not be checked in every schedule ## What changes were proposed in this pull request? If we
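
Conceptually (a simplified, hypothetical sketch, not the actual `Master` scheduling code): once a worker's remaining cores fall below `coresPerExecutor`, no further executor can be placed there, so the check does not need to be repeated on every scheduling pass.

```scala
// Illustrative only: a worker can host another executor only if enough cores remain.
def canLaunchExecutor(coresLeft: Int, coresPerExecutor: Int): Boolean =
  coresLeft >= coresPerExecutor
```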

spark git commit: [SPARK-21165] [SQL] [2.2] Use executedPlan instead of analyzedPlan in INSERT AS SELECT [WIP]

2017-06-23 Thread wenchen
Repository: spark Updated Branches: refs/heads/branch-2.2 b99c0e9d1 -> b6749ba09 [SPARK-21165] [SQL] [2.2] Use executedPlan instead of analyzedPlan in INSERT AS SELECT [WIP] ### What changes were proposed in this pull request? The input query schema of INSERT AS SELECT could be changed

spark git commit: [SPARK-21047] Add test suites for complicated cases in ColumnarBatchSuite

2017-06-23 Thread wenchen
Repository: spark Updated Branches: refs/heads/master fe24634d1 -> 153dd49b7 [SPARK-21047] Add test suites for complicated cases in ColumnarBatchSuite ## What changes were proposed in this pull request? The current ColumnarBatchSuite has very simple test cases for `Array` and `Struct`. This PR

spark git commit: [SPARK-21145][SS] Added StateStoreProviderId with queryRunId to reload StateStoreProviders when query is restarted

2017-06-23 Thread tdas
Repository: spark Updated Branches: refs/heads/master b8a743b6a -> fe24634d1 [SPARK-21145][SS] Added StateStoreProviderId with queryRunId to reload StateStoreProviders when query is restarted ## What changes were proposed in this pull request? StateStoreProvider instances are loaded
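
A sketch of the idea (the field names below are assumptions, not the actual class): keying loaded providers by the query's run id as well as the store's identity means a restarted query, which gets a new run id, loads fresh `StateStoreProvider` instances instead of reusing ones cached from the previous run.

```scala
import java.util.UUID

// Illustrative composite key; the real fields in the PR may differ.
case class StateStoreProviderId(operatorId: Long, partitionId: Int, queryRunId: UUID)
```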