spark git commit: [SPARK-26030][BUILD] Bump previousSparkVersion in MimaBuild.scala to be 2.4.0

2018-11-12 Thread wenchen
Repository: spark
Updated Branches:
  refs/heads/master 8d7dbde91 -> e25bce5cc


[SPARK-26030][BUILD] Bump previousSparkVersion in MimaBuild.scala to be 2.4.0

## What changes were proposed in this pull request?

Since Spark 2.4.0 is already in the Maven repo, we can bump `previousSparkVersion` in
MimaBuild.scala to 2.4.0.

Note that it seems we forgot to do this for branch-2.4, so this PR also updates
MimaExcludes.scala

## How was this patch tested?

N/A

Closes #22977 from cloud-fan/mima.

Authored-by: Wenchen Fan 
Signed-off-by: Wenchen Fan 


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/e25bce5c
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/e25bce5c
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/e25bce5c

Branch: refs/heads/master
Commit: e25bce5cc78ec6a6c123dd87025e3d8392b0f70e
Parents: 8d7dbde
Author: Wenchen Fan 
Authored: Tue Nov 13 14:15:15 2018 +0800
Committer: Wenchen Fan 
Committed: Tue Nov 13 14:15:15 2018 +0800

--
 project/MimaBuild.scala|  2 +-
 project/MimaExcludes.scala | 96 -
 2 files changed, 95 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/spark/blob/e25bce5c/project/MimaBuild.scala
--
diff --git a/project/MimaBuild.scala b/project/MimaBuild.scala
index adde213..79e6745 100644
--- a/project/MimaBuild.scala
+++ b/project/MimaBuild.scala
@@ -88,7 +88,7 @@ object MimaBuild {

   def mimaSettings(sparkHome: File, projectRef: ProjectRef) = {
     val organization = "org.apache.spark"
-    val previousSparkVersion = "2.2.0"
+    val previousSparkVersion = "2.4.0"
     val project = projectRef.project
     val fullId = "spark-" + project + "_2.11"
     mimaDefaultSettings ++
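For context, MiMa resolves its comparison baseline from this version string. A minimal sbt sketch of the equivalent standalone setting, assuming the sbt-mima-plugin is enabled in the build (this is illustrative, not Spark's actual MimaBuild.scala):

```scala
// build.sbt fragment: compare the current artifact's binary API
// against the published 2.4.0 release of spark-core.
mimaPreviousArtifacts := Set("org.apache.spark" % "spark-core_2.11" % "2.4.0")
```

Running `sbt mimaReportBinaryIssues` would then report any binary incompatibilities against that baseline, unless they are suppressed by exclusion rules.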

http://git-wip-us.apache.org/repos/asf/spark/blob/e25bce5c/project/MimaExcludes.scala
--
diff --git a/project/MimaExcludes.scala b/project/MimaExcludes.scala
index b6bd6b8..b030b6c 100644
--- a/project/MimaExcludes.scala
+++ b/project/MimaExcludes.scala
@@ -36,6 +36,8 @@ object MimaExcludes {

   // Exclude rules for 3.0.x
   lazy val v30excludes = v24excludes ++ Seq(
+    // [SPARK-25908][CORE][SQL] Remove old deprecated items in Spark 3
+    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.BarrierTaskContext.isRunningLocally"),
     ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.TaskContext.isRunningLocally"),
     ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.executor.ShuffleWriteMetrics.shuffleBytesWritten"),
     ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.executor.ShuffleWriteMetrics.shuffleWriteTime"),
@@ -54,10 +56,13 @@ object MimaExcludes {
     ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.mllib.evaluation.MulticlassMetrics.precision"),
     ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.ml.util.MLWriter.context"),
     ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.ml.util.MLReader.context"),
+    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.ml.util.GeneralMLWriter.context"),
+
     // [SPARK-25737] Remove JavaSparkContextVarargsWorkaround
     ProblemFilters.exclude[MissingTypesProblem]("org.apache.spark.api.java.JavaSparkContext"),
     ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.api.java.JavaSparkContext.union"),
     ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.streaming.api.java.JavaStreamingContext.union"),
+
     // [SPARK-16775] Remove deprecated accumulator v1 APIs
     ProblemFilters.exclude[MissingClassProblem]("org.apache.spark.Accumulable"),
     ProblemFilters.exclude[MissingClassProblem]("org.apache.spark.AccumulatorParam"),
@@ -77,14 +82,58 @@ object MimaExcludes {
     ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.api.java.JavaSparkContext.accumulable"),
     ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.api.java.JavaSparkContext.doubleAccumulator"),
     ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.api.java.JavaSparkContext.accumulator"),
+
     // [SPARK-24109] Remove class SnappyOutputStreamWrapper
     ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.io.SnappyCompressionCodec.version"),
+
     // [SPARK-19287] JavaPairRDD flatMapValues requires function returning Iterable, not Iterator
     ProblemFilters.exclude[IncompatibleMethTypeProblem]("org.apache.spark.api.java.JavaPairRDD.flatMapValues"),
     ProblemFilters.exclude[IncompatibleMethTypeProblem]("org.apache.spark.streaming.api.java.JavaPairDStream.flatMapValues"),
+
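Conceptually, each of these exclusion rules tells MiMa to ignore a reported binary incompatibility. A simplified, self-contained model of that mechanism (this is not the real MiMa API; here a rule is just a predicate over fully-qualified member names):

```scala
// Simplified model of MiMa exclusion rules: a reported problem is
// suppressed when any registered filter matches its member name.
object MimaFilterModel {
  type ProblemFilter = String => Boolean

  // Stand-in for ProblemFilters.exclude[DirectMissingMethodProblem]("...")
  def exclude(fqName: String): ProblemFilter =
    reported => reported == fqName

  // A problem survives to the report only if no filter matches it.
  def isSuppressed(reported: String, filters: Seq[ProblemFilter]): Boolean =
    filters.exists(filter => filter(reported))
}
```

In the real plugin the filters are matched against typed `Problem` objects rather than plain strings, but the suppression logic is the same shape.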
 

svn commit: r30866 - in /dev/spark/3.0.0-SNAPSHOT-2018_11_12_21_35-c491934-docs: ./ _site/ _site/api/ _site/api/R/ _site/api/java/ _site/api/java/lib/ _site/api/java/org/ _site/api/java/org/apache/ _s

2018-11-12 Thread pwendell
Author: pwendell
Date: Tue Nov 13 05:50:24 2018
New Revision: 30866

Log:
Apache Spark 3.0.0-SNAPSHOT-2018_11_12_21_35-c491934 docs


[This commit notification would consist of 1471 parts, 
which exceeds the limit of 50 ones, so it was shortened to the summary.]

-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



spark git commit: [SPARK-26007][SQL] DataFrameReader.csv() respects to spark.sql.columnNameOfCorruptRecord

2018-11-12 Thread gurwls223
Repository: spark
Updated Branches:
  refs/heads/master 88c826272 -> c49193437


[SPARK-26007][SQL] DataFrameReader.csv() respects to 
spark.sql.columnNameOfCorruptRecord

## What changes were proposed in this pull request?

Pass the current value of the SQL config `spark.sql.columnNameOfCorruptRecord` to
`CSVOptions` inside of `DataFrameReader.csv()`.
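The change replaces a default parameter value with an explicit constructor overload (see the diff below). A minimal sketch of that pattern, using hypothetical stand-in classes rather than Spark's actual `CSVOptions` and `SQLConf`:

```scala
// Stand-in for SQLConf.get: a session-level, mutable configuration value.
object CorruptRecordConf {
  var columnNameOfCorruptRecord: String = "_corrupt_record"
}

// Stand-in for CSVOptions: the four-argument primary constructor is
// unchanged, while the old three-argument signature is kept as an
// explicit overload so existing call sites keep compiling.
class Options(
    val parameters: Map[String, String],
    val columnPruning: Boolean,
    val defaultTimeZoneId: String,
    val defaultColumnNameOfCorruptRecord: String) {

  // Instead of hard-coding "" as a default argument, the overload now
  // consults the runtime config for the corrupt-record column name.
  def this(parameters: Map[String, String], columnPruning: Boolean, defaultTimeZoneId: String) =
    this(parameters, columnPruning, defaultTimeZoneId,
      CorruptRecordConf.columnNameOfCorruptRecord)
}
```

Three-argument callers are unaffected at the source level, but now pick up whatever the session config says at construction time.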

## How was this patch tested?

Added a test where the default value of `spark.sql.columnNameOfCorruptRecord` is
changed.

Closes #23006 from MaxGekk/csv-corrupt-sql-config.

Lead-authored-by: Maxim Gekk 
Co-authored-by: Dongjoon Hyun 
Co-authored-by: Maxim Gekk 
Signed-off-by: hyukjinkwon 


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/c4919343
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/c4919343
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/c4919343

Branch: refs/heads/master
Commit: c49193437745f072767d26e6b9099f4949cabf95
Parents: 88c8262
Author: Maxim Gekk 
Authored: Tue Nov 13 12:26:19 2018 +0800
Committer: hyukjinkwon 
Committed: Tue Nov 13 12:26:19 2018 +0800

--
 .../apache/spark/sql/catalyst/csv/CSVOptions.scala| 14 +-
 .../sql/execution/datasources/csv/CSVSuite.scala  | 11 +++
 2 files changed, 24 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/spark/blob/c4919343/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/csv/CSVOptions.scala
--
diff --git 
a/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/csv/CSVOptions.scala
 
b/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/csv/CSVOptions.scala
index 6428235..6bb50b4 100644
--- 
a/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/csv/CSVOptions.scala
+++ 
b/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/csv/CSVOptions.scala
@@ -25,6 +25,7 @@ import org.apache.commons.lang3.time.FastDateFormat

 import org.apache.spark.internal.Logging
 import org.apache.spark.sql.catalyst.util._
+import org.apache.spark.sql.internal.SQLConf

 class CSVOptions(
     @transient val parameters: CaseInsensitiveMap[String],
@@ -36,8 +37,19 @@ class CSVOptions(
   def this(
       parameters: Map[String, String],
       columnPruning: Boolean,
+      defaultTimeZoneId: String) = {
+    this(
+      CaseInsensitiveMap(parameters),
+      columnPruning,
+      defaultTimeZoneId,
+      SQLConf.get.columnNameOfCorruptRecord)
+  }
+
+  def this(
+      parameters: Map[String, String],
+      columnPruning: Boolean,
       defaultTimeZoneId: String,
-      defaultColumnNameOfCorruptRecord: String = "") = {
+      defaultColumnNameOfCorruptRecord: String) = {
     this(
       CaseInsensitiveMap(parameters),
       columnPruning,

http://git-wip-us.apache.org/repos/asf/spark/blob/c4919343/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/csv/CSVSuite.scala
--
diff --git 
a/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/csv/CSVSuite.scala
 
b/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/csv/CSVSuite.scala
index d43efc8..2efe1dd 100644
--- 
a/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/csv/CSVSuite.scala
+++ 
b/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/csv/CSVSuite.scala
@@ -1848,4 +1848,15 @@ class CSVSuite extends QueryTest with SharedSQLContext with SQLTestUtils with Te
     val schema = new StructType().add("a", StringType).add("b", IntegerType)
     checkAnswer(spark.read.schema(schema).option("delimiter", delimiter).csv(input), Row("abc", 1))
   }
+
+  test("using spark.sql.columnNameOfCorruptRecord") {
+    withSQLConf(SQLConf.COLUMN_NAME_OF_CORRUPT_RECORD.key -> "_unparsed") {
+      val csv = "\""
+      val df = spark.read
+        .schema("a int, _unparsed string")
+        .csv(Seq(csv).toDS())
+
+      checkAnswer(df, Row(null, csv))
+    }
+  }
 }





svn commit: r30864 - in /dev/spark/2.4.1-SNAPSHOT-2018_11_12_19_33-65e5b26-docs: ./ _site/ _site/api/ _site/api/R/ _site/api/java/ _site/api/java/lib/ _site/api/java/org/ _site/api/java/org/apache/ _s

2018-11-12 Thread pwendell
Author: pwendell
Date: Tue Nov 13 03:47:58 2018
New Revision: 30864

Log:
Apache Spark 2.4.1-SNAPSHOT-2018_11_12_19_33-65e5b26 docs


[This commit notification would consist of 1476 parts, 
which exceeds the limit of 50 ones, so it was shortened to the summary.]




spark git commit: [SPARK-26010][R] fix vignette eval with Java 11

2018-11-12 Thread felixcheung
Repository: spark
Updated Branches:
  refs/heads/branch-2.4 3bc4c3330 -> 65e5b2659


[SPARK-26010][R] fix vignette eval with Java 11

## What changes were proposed in this pull request?

Changes in the vignette only, to disable eval.

## How was this patch tested?

Jenkins

Author: Felix Cheung 

Closes #23007 from felixcheung/rjavavervig.

(cherry picked from commit 88c82627267a9731b2438f0cc28dd656eb3dc834)
Signed-off-by: Felix Cheung 


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/65e5b265
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/65e5b265
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/65e5b265

Branch: refs/heads/branch-2.4
Commit: 65e5b26590e66ac4220b5f60e11b7966746c8b08
Parents: 3bc4c33
Author: Felix Cheung 
Authored: Mon Nov 12 19:03:30 2018 -0800
Committer: Felix Cheung 
Committed: Mon Nov 12 19:03:56 2018 -0800

--
 R/pkg/vignettes/sparkr-vignettes.Rmd | 14 ++
 1 file changed, 14 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/spark/blob/65e5b265/R/pkg/vignettes/sparkr-vignettes.Rmd
--
diff --git a/R/pkg/vignettes/sparkr-vignettes.Rmd 
b/R/pkg/vignettes/sparkr-vignettes.Rmd
index 090363c..b13f338 100644
--- a/R/pkg/vignettes/sparkr-vignettes.Rmd
+++ b/R/pkg/vignettes/sparkr-vignettes.Rmd
@@ -57,6 +57,20 @@ First, let's load and attach the package.
 library(SparkR)
 ```
 
+```{r, include=FALSE}
+# disable eval if java version not supported
+override_eval <- tryCatch(!is.numeric(SparkR:::checkJavaVersion()),
+                          error = function(e) { TRUE },
+                          warning = function(e) { TRUE })
+
+if (override_eval) {
+  opts_hooks$set(eval = function(options) {
+    options$eval = FALSE
+    options
+  })
+}
+```
+
 `SparkSession` is the entry point into SparkR which connects your R program to 
a Spark cluster. You can create a `SparkSession` using `sparkR.session` and 
pass in options such as the application name, any Spark packages depended on, 
etc.
 
 We use default settings in which it runs in local mode. It auto downloads 
Spark package in the background if no previous installation is found. For more 
details about setup, see [Spark Session](#SetupSparkSession).





spark git commit: [SPARK-26010][R] fix vignette eval with Java 11

2018-11-12 Thread felixcheung
Repository: spark
Updated Branches:
  refs/heads/master f9ff75653 -> 88c826272


[SPARK-26010][R] fix vignette eval with Java 11

## What changes were proposed in this pull request?

Changes in the vignette only, to disable eval.

## How was this patch tested?

Jenkins

Author: Felix Cheung 

Closes #23007 from felixcheung/rjavavervig.


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/88c82627
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/88c82627
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/88c82627

Branch: refs/heads/master
Commit: 88c82627267a9731b2438f0cc28dd656eb3dc834
Parents: f9ff756
Author: Felix Cheung 
Authored: Mon Nov 12 19:03:30 2018 -0800
Committer: Felix Cheung 
Committed: Mon Nov 12 19:03:30 2018 -0800

--
 R/pkg/vignettes/sparkr-vignettes.Rmd | 14 ++
 1 file changed, 14 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/spark/blob/88c82627/R/pkg/vignettes/sparkr-vignettes.Rmd
--
diff --git a/R/pkg/vignettes/sparkr-vignettes.Rmd 
b/R/pkg/vignettes/sparkr-vignettes.Rmd
index 7d924ef..f80b45b 100644
--- a/R/pkg/vignettes/sparkr-vignettes.Rmd
+++ b/R/pkg/vignettes/sparkr-vignettes.Rmd
@@ -57,6 +57,20 @@ First, let's load and attach the package.
 library(SparkR)
 ```
 
+```{r, include=FALSE}
+# disable eval if java version not supported
+override_eval <- tryCatch(!is.numeric(SparkR:::checkJavaVersion()),
+                          error = function(e) { TRUE },
+                          warning = function(e) { TRUE })
+
+if (override_eval) {
+  opts_hooks$set(eval = function(options) {
+    options$eval = FALSE
+    options
+  })
+}
+```
+
 `SparkSession` is the entry point into SparkR which connects your R program to 
a Spark cluster. You can create a `SparkSession` using `sparkR.session` and 
pass in options such as the application name, any Spark packages depended on, 
etc.
 
 We use default settings in which it runs in local mode. It auto downloads 
Spark package in the background if no previous installation is found. For more 
details about setup, see [Spark Session](#SetupSparkSession).





spark git commit: [SPARK-26029][BUILD][2.4] Bump previousSparkVersion in MimaBuild.scala to be 2.3.0

2018-11-12 Thread wenchen
Repository: spark
Updated Branches:
  refs/heads/branch-2.4 1375f3477 -> 3bc4c3330


[SPARK-26029][BUILD][2.4] Bump previousSparkVersion in MimaBuild.scala to be 
2.3.0

## What changes were proposed in this pull request?

Although it's a little late, we should still update MiMa for branch-2.4 to
avoid future breaking changes.

Note that, when merging, we should forward-port it to the master branch so that
the exclusion rules stay in `v24excludes`.

TODO: update the release process document to mention the MiMa update.

## How was this patch tested?

N/A

Closes #23015 from cloud-fan/mima-2.4.

Authored-by: Wenchen Fan 
Signed-off-by: Wenchen Fan 


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/3bc4c333
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/3bc4c333
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/3bc4c333

Branch: refs/heads/branch-2.4
Commit: 3bc4c3330f8da2979ce034c205bc3d0bed5f39f8
Parents: 1375f34
Author: Wenchen Fan 
Authored: Tue Nov 13 10:28:25 2018 +0800
Committer: Wenchen Fan 
Committed: Tue Nov 13 10:28:25 2018 +0800

--
 project/MimaBuild.scala|  2 +-
 project/MimaExcludes.scala | 45 -
 2 files changed, 45 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/spark/blob/3bc4c333/project/MimaBuild.scala
--
diff --git a/project/MimaBuild.scala b/project/MimaBuild.scala
index adde213..fbf9b8e 100644
--- a/project/MimaBuild.scala
+++ b/project/MimaBuild.scala
@@ -88,7 +88,7 @@ object MimaBuild {

   def mimaSettings(sparkHome: File, projectRef: ProjectRef) = {
     val organization = "org.apache.spark"
-    val previousSparkVersion = "2.2.0"
+    val previousSparkVersion = "2.3.0"
     val project = projectRef.project
     val fullId = "spark-" + project + "_2.11"
     mimaDefaultSettings ++

http://git-wip-us.apache.org/repos/asf/spark/blob/3bc4c333/project/MimaExcludes.scala
--
diff --git a/project/MimaExcludes.scala b/project/MimaExcludes.scala
index b7e9cbc..4246355 100644
--- a/project/MimaExcludes.scala
+++ b/project/MimaExcludes.scala
@@ -105,7 +105,50 @@ object MimaExcludes {
     ProblemFilters.exclude[InheritedNewAbstractMethodProblem]("org.apache.spark.ml.param.shared.HasValidationIndicatorCol.validationIndicatorCol"),

     // [SPARK-23042] Use OneHotEncoderModel to encode labels in MultilayerPerceptronClassifier
-    ProblemFilters.exclude[MissingClassProblem]("org.apache.spark.ml.classification.LabelConverter")
+    ProblemFilters.exclude[MissingClassProblem]("org.apache.spark.ml.classification.LabelConverter"),
+
+    // [SPARK-21842][MESOS] Support Kerberos ticket renewal and creation in Mesos
+    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.deploy.SparkHadoopUtil.getDateOfNextUpdate"),
+
+    // [SPARK-23366] Improve hot reading path in ReadAheadInputStream
+    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.io.ReadAheadInputStream.this"),
+
+    // [SPARK-22941][CORE] Do not exit JVM when submit fails with in-process launcher.
+    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.deploy.SparkSubmit.addJarToClasspath"),
+    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.deploy.SparkSubmit.mergeFileLists"),
+    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.deploy.SparkSubmit.prepareSubmitEnvironment$default$2"),
+
+    // Data Source V2 API changes
+    // TODO: they are unstable APIs and should not be tracked by mima.
+    ProblemFilters.exclude[MissingClassProblem]("org.apache.spark.sql.sources.v2.ReadSupportWithSchema"),
+    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.sql.sources.v2.reader.SupportsScanColumnarBatch.createDataReaderFactories"),
+    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.sql.sources.v2.reader.SupportsScanColumnarBatch.createBatchDataReaderFactories"),
+    ProblemFilters.exclude[ReversedMissingMethodProblem]("org.apache.spark.sql.sources.v2.reader.SupportsScanColumnarBatch.planBatchInputPartitions"),
+    ProblemFilters.exclude[MissingClassProblem]("org.apache.spark.sql.sources.v2.reader.SupportsScanUnsafeRow"),
+    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.sql.sources.v2.reader.DataSourceReader.createDataReaderFactories"),
+    ProblemFilters.exclude[ReversedMissingMethodProblem]("org.apache.spark.sql.sources.v2.reader.DataSourceReader.planInputPartitions"),
+    ProblemFilters.exclude[MissingClassProblem]("org.apache.spark.sql.sources.v2.reader.SupportsPushDownCatalystFilters"),
+

svn commit: r30856 - in /dev/spark/3.0.0-SNAPSHOT-2018_11_12_13_22-f9ff756-docs: ./ _site/ _site/api/ _site/api/R/ _site/api/java/ _site/api/java/lib/ _site/api/java/org/ _site/api/java/org/apache/ _s

2018-11-12 Thread pwendell
Author: pwendell
Date: Mon Nov 12 21:36:33 2018
New Revision: 30856

Log:
Apache Spark 3.0.0-SNAPSHOT-2018_11_12_13_22-f9ff756 docs


[This commit notification would consist of 1471 parts, 
which exceeds the limit of 50 ones, so it was shortened to the summary.]




spark git commit: [SPARK-26013][R][BUILD] Upgrade R tools version from 3.4.0 to 3.5.1 in AppVeyor build

2018-11-12 Thread gurwls223
Repository: spark
Updated Branches:
  refs/heads/master 0ba9715c7 -> f9ff75653


[SPARK-26013][R][BUILD] Upgrade R tools version from 3.4.0 to 3.5.1 in AppVeyor 
build

## What changes were proposed in this pull request?

R tools 3.5.1 was released a few months ago. Spark currently uses 3.4.0. We should
upgrade it in AppVeyor.

## How was this patch tested?

AppVeyor builds.

Closes #23011 from HyukjinKwon/SPARK-26013.

Authored-by: hyukjinkwon 
Signed-off-by: hyukjinkwon 


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/f9ff7565
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/f9ff7565
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/f9ff7565

Branch: refs/heads/master
Commit: f9ff75653fa8cd055fbcbfe94243049c38c60507
Parents: 0ba9715
Author: hyukjinkwon 
Authored: Tue Nov 13 01:21:03 2018 +0800
Committer: hyukjinkwon 
Committed: Tue Nov 13 01:21:03 2018 +0800

--
 dev/appveyor-install-dependencies.ps1 | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/spark/blob/f9ff7565/dev/appveyor-install-dependencies.ps1
--
diff --git a/dev/appveyor-install-dependencies.ps1 
b/dev/appveyor-install-dependencies.ps1
index 06d9d70..cc68ffb 100644
--- a/dev/appveyor-install-dependencies.ps1
+++ b/dev/appveyor-install-dependencies.ps1
@@ -116,7 +116,7 @@ Pop-Location

 # == R
 $rVer = "3.5.1"
-$rToolsVer = "3.4.0"
+$rToolsVer = "3.5.1"

 InstallR
 InstallRtools





svn commit: r30831 - in /dev/spark/3.0.0-SNAPSHOT-2018_11_12_00_36-0ba9715-docs: ./ _site/ _site/api/ _site/api/R/ _site/api/java/ _site/api/java/lib/ _site/api/java/org/ _site/api/java/org/apache/ _s

2018-11-12 Thread pwendell
Author: pwendell
Date: Mon Nov 12 08:51:14 2018
New Revision: 30831

Log:
Apache Spark 3.0.0-SNAPSHOT-2018_11_12_00_36-0ba9715 docs


[This commit notification would consist of 1471 parts, 
which exceeds the limit of 50 ones, so it was shortened to the summary.]
