Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15841
+1 for no problem :).
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/14124
Thanks @cloud-fan, sure, that sounds great.
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/14124
Oh wait, @cloud-fan, it seems that, at least, Parquet files could be written with non-nullable fields. So, reading them back without a user-specified schema might also cause the inconsistency.
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/14124
Actually, nvm. I think handling this in `DataFrameReader.schema` will deal with most general cases.
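The concern above — a user-specified schema with non-nullable fields disagreeing with what the source actually delivers — can be illustrated with a small stand-in sketch. These are plain Python dataclasses, not Spark's real `StructType`/`StructField`; the idea mirrors relaxing a user-specified schema to all-nullable before applying it:

```python
from dataclasses import dataclass, replace

# Hypothetical stand-in types for illustration only (not Spark's StructType).
@dataclass(frozen=True)
class Field:
    name: str
    nullable: bool

@dataclass(frozen=True)
class Schema:
    fields: tuple

    def as_nullable(self):
        # Normalize: treat every field as nullable, avoiding a mismatch with
        # data sources that may produce nulls regardless of the declared schema.
        return Schema(tuple(replace(f, nullable=True) for f in self.fields))

user_schema = Schema((Field("id", False), Field("name", True)))
normalized = user_schema.as_nullable()
print([(f.name, f.nullable) for f in normalized.fields])
```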
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15848
Just to be clear, from the discussion in the JIRA, it seems the `PageViewStream` example is intentionally left out here for now because changing from `local[2]` to `local[4]` fails for an unknown
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15863
Hi @dongjoon-hyun, I just happened to look at this PR and I am curious. Would it be possible to override `filterKeys` in `CaseInsensitiveMap` as something like below?
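For what it's worth, the idea being asked about — an override of `filterKeys` that keeps the result case-insensitive instead of degrading to a plain map — can be sketched with a plain-Python stand-in (a hypothetical `CaseInsensitiveMap`, not Spark's Scala class of the same name):

```python
class CaseInsensitiveMap(dict):
    """Hypothetical stand-in: keys compare case-insensitively."""
    def __init__(self, data=None):
        super().__init__()
        for k, v in (data or {}).items():
            self[k] = v

    def __setitem__(self, key, value):
        super().__setitem__(key.lower(), value)

    def __getitem__(self, key):
        return super().__getitem__(key.lower())

    def __contains__(self, key):
        return super().__contains__(key.lower())

    def filter_keys(self, predicate):
        # The point of the override: the filtered result stays case-insensitive.
        return CaseInsensitiveMap({k: v for k, v in self.items() if predicate(k)})

m = CaseInsensitiveMap({"Path": "/tmp", "Header": "true"})
filtered = m.filter_keys(lambda k: k != "header")
print("PATH" in filtered, "Header" in filtered)  # True False
```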
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15863
Oh, it seems not. I just removed my suggestion. I will take a look at this again to see if it is possible in a similar way. Thanks!
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/14788
Hi @rxin, do you mind if I ask what you think about this?
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15361
ping ..
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15694
Would there be any other things I should take care of?
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15815
https://cloud.githubusercontent.com/assets/6477701/20239969/3d5cb0ba-a950-11e6-8a55-e96e8cd02970.png
For the Python documentation, it seems fine.
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15865
Hi @ConeyLiu, it seems
```
[ERROR]
src/main/java/org/apache/spark/io/NioBufferedFileInputStream.java:[133]
(coding) NoFinalizer: Avoid using finalizer method.
```
is
GitHub user HyukjinKwon opened a pull request:
https://github.com/apache/spark/pull/15866
[SPARK-18422][CORE] Fix wholeTextFiles test to pass on Windows in
JavaAPISuite
## What changes were proposed in this pull request?
This PR fixes the test `wholeTextFiles` in
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15866
Build started: [CORE] `org.apache.spark.JavaAPISuite`
[![PR-15866](https://ci.appveyor.com/api/projects/status/github/spark-test/spark?branch=EAC49351-734B-4218-A2CF-13B3C3CB83E8&svg=
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15866
@srowen It seems `org.apache.spark.JavaAPISuite.writeWithNewAPIHadoopFile` is also failing, but the root cause seems different. It fails with the exception below
GitHub user HyukjinKwon opened a pull request:
https://github.com/apache/spark/pull/15867
[WIP][SPARK-18423][Streaming] ReceiverTracker should close checkpoint dir
when stopped even if it was not started
## What changes were proposed in this pull request?
Several tests are
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15867
Hi @dtas and @zsxwing, could you please take a look at whether this change makes sense?
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/13837#discussion_r87728887
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetOptions.scala
---
@@ -40,7 +40,7 @@ private[sql] class
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/13837#discussion_r87733959
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/DataSource.scala
---
@@ -322,6 +323,9 @@ case class DataSource
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/15871#discussion_r87734522
--- Diff: python/pyspark/ml/base.py ---
@@ -59,6 +59,12 @@ def fit(self, dataset, params=None):
return [self.fit(dataset, paramMap) for
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15871
I think we may need a test here, and the argument description should be updated if this change is legitimate.
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15871
cc @holdenk
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15865
@ConeyLiu Could you please make the title `[SPARK-18420][BUILD] Fix the errors caused by lint check in Java` for this PR and the JIRA you opened?
I think for the one below
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15879
Hi @sarutak and @moomindani, I happened to look through this just out of curiosity.
```
./docs/programming-guide.md:* The `textFile` method also takes an optional
second argument
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/15871#discussion_r87759435
--- Diff: python/pyspark/ml/base.py ---
@@ -59,6 +59,12 @@ def fit(self, dataset, params=None):
return [self.fit(dataset, paramMap) for
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15866
Thank you Sean. Actually, this is a bit annoying.
Here is what happens in the original test:
1. It writes a file to be read back by `wholeTextFiles`.
```scala
scala
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15866
Let me add a comment here and will try to clean up more.
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15866
Build started: [CORE] `org.apache.spark.JavaAPISuite`
[![PR-15866](https://ci.appveyor.com/api/projects/status/github/spark-test/spark?branch=198DDA52-F201-4D2B-BE2F-244E0C1725B2&svg=
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15867
Build started: [Streaming] `org.apache.spark.streaming.JavaAPISuite`
[![PR-15867](https://ci.appveyor.com/api/projects/status/github/spark-test/spark?branch=75F427CC-5356-4863-8F8E-9977F288C39B
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/15865#discussion_r87798658
--- Diff: dev/checkstyle-suppressions.xml ---
@@ -30,6 +30,8 @@
+
--- End diff --
Oh, sorry. Actually, I didn't
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/15865#discussion_r87798899
--- Diff: dev/checkstyle-suppressions.xml ---
@@ -30,6 +30,8 @@
+
--- End diff --
Ah, I thought we could disable it
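For context, suppressions of this kind use checkstyle's standard `<suppress>` element in `dev/checkstyle-suppressions.xml`. The exact lines added by the PR were elided in this thread; a sketch of what a `NoFinalizer` suppression for the file flagged by `lint-java` earlier might look like:

```xml
<!-- Sketch: suppress the NoFinalizer check only for the flagged file. -->
<suppress checks="NoFinalizer"
          files="src/main/java/org/apache/spark/io/NioBufferedFileInputStream.java"/>
```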
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/15871#discussion_r87807344
--- Diff: python/pyspark/ml/base.py ---
@@ -59,6 +59,12 @@ def fit(self, dataset, params=None):
return [self.fit(dataset, paramMap) for
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/15865#discussion_r87935645
--- Diff:
sql/catalyst/src/main/java/org/apache/spark/sql/catalyst/expressions/UnsafeArrayData.java
---
@@ -109,7 +109,8 @@ public void pointTo(Object
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/15865#discussion_r87935757
--- Diff:
examples/src/main/java/org/apache/spark/examples/ml/JavaInteractionExample.java
---
@@ -48,8 +47,7 @@ public static void main(String[] args
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15865
For sure, I ran
```bash
$ ./dev/lint-java
Using `mvn` from path: .../mvn
Checkstyle checks passed.
```
It seems fine. It looks good to me except for a few minor comments.
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15835
I guess we should ping @liancheng as he was reviewing the previous one.
GitHub user HyukjinKwon opened a pull request:
https://github.com/apache/spark/pull/15889
[WIP][SPARK-18445][DOCS] Fix the markdown for `Note:`/`NOTE:`/`Note that`
across Scala/Java API documentation
## What changes were proposed in this pull request?
It seems in Scala
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15889
I am going to leave images from the built API doc for each change, to double-check.
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15889
Hi @sandeepmreddy, thanks for approving, but it is actually a WIP. It would be
great if it could be reviewed when it's complete.
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15866
Oh, the test this PR tries to deal with,
`org.apache.spark.JavaAPISuite.wholeTextFiles`, passes. However, another test
here, `org.apache.spark.JavaAPISuite.writeWithNewAPIHadoopFile`, is
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15865
I just checked to be sure.
```
$ ./dev/lint-java
Using `mvn` from path: .../mvn
Checkstyle checks passed.
```
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15867
Thank you!!
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15898
I was just watching the JIRA out of curiosity. Actually, couldn't we add a test for `SparkOrcNewRecordReader` like the one below?
```scala
test("Empty schem
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/15898#discussion_r88175031
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/orc/OrcQuerySuite.scala ---
@@ -577,4 +579,23 @@ class OrcQuerySuite extends QueryTest with
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15889
Ah, I couldn't check all of the Java documentation yet because I have some
problems with javadoc 8 due to several errors (although they look like existing
ones). So, I am thinking of tryi
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/15889#discussion_r88414985
--- Diff: core/src/main/scala/org/apache/spark/Partitioner.scala ---
@@ -101,7 +101,7 @@ class HashPartitioner(partitions: Int) extends
Partitioner
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/15889#discussion_r88416340
--- Diff: core/src/main/scala/org/apache/spark/SparkContext.scala ---
@@ -970,7 +970,7 @@ class SparkContext(config: SparkConf) extends Logging
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/15889#discussion_r88417168
--- Diff: project/SparkBuild.scala ---
@@ -741,7 +741,8 @@ object Unidoc {
javacOptions in (JavaUnidoc, unidoc) := Seq(
"-window
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15889
(it was simply rebased, and 5 instances introduced in another PR were added by
https://github.com/apache/spark/pull/15889/commits/39873dc40fbd1e62283e2dc9cc4b647ed5388a2f)
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/15889#discussion_r88439190
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala ---
@@ -1648,13 +1650,13 @@ class Dataset[T] private[sql
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15889
Thank you for approving Sean!
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15889
retest this please
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/15889#discussion_r88597141
--- Diff: core/src/main/scala/org/apache/spark/rdd/PairRDDFunctions.scala
---
@@ -1014,7 +1015,7 @@ class PairRDDFunctions[K, V](self: RDD[(K, V
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/15889#discussion_r88598964
--- Diff: core/src/main/scala/org/apache/spark/rdd/PairRDDFunctions.scala
---
@@ -1014,7 +1015,7 @@ class PairRDDFunctions[K, V](self: RDD[(K, V
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/15889#discussion_r88599858
--- Diff:
streaming/src/main/scala/org/apache/spark/streaming/api/java/JavaStreamingContext.scala
---
@@ -396,7 +396,7 @@ class JavaStreamingContext
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/15889#discussion_r88611836
--- Diff: core/src/main/scala/org/apache/spark/api/java/JavaPairRDD.scala
---
@@ -234,6 +234,9 @@ class JavaPairRDD[K, V](val rdd: RDD[(K, V
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/15889#discussion_r88634271
--- Diff: mllib/src/main/scala/org/apache/spark/ml/feature/PCA.scala ---
@@ -142,8 +142,8 @@ class PCAModel private[ml] (
/**
* Transform
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15866
Yes please. I am pretty sure that this is the right fix, without any other
side effects.
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/15889#discussion_r88679304
--- Diff: core/src/main/scala/org/apache/spark/rdd/PairRDDFunctions.scala
---
@@ -57,15 +57,18 @@ class PairRDDFunctions[K, V](self: RDD[(K, V
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15889
Thank you!!
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15937
Hi @rxin, actually, I found another wrong example in `ConcatWs` before,
in `stringExpressions.scala`:
```diff
- > SELECT _FUNC_(' ', Spark', 'SQL
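The example being fixed exercises `concat_ws` semantics — joining its string arguments with the given separator. That behavior can be checked with a trivial plain-Python equivalent (an illustration only, not Spark SQL's implementation):

```python
def concat_ws(sep, *parts):
    # Mirrors SQL concat_ws for non-null string arguments: join with the separator.
    return sep.join(parts)

print(concat_ws(" ", "Spark", "SQL"))  # Spark SQL
```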
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/14627
(I minimised the changes here to make the review easier)
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/15889#discussion_r88781534
--- Diff:
core/src/main/scala/org/apache/spark/internal/io/SparkHadoopMapReduceWriter.scala
---
@@ -119,7 +119,7 @@ object SparkHadoopMapReduceWriter
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/15889#discussion_r88782853
--- Diff:
core/src/main/scala/org/apache/spark/internal/io/SparkHadoopMapReduceWriter.scala
---
@@ -119,7 +119,7 @@ object SparkHadoopMapReduceWriter
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/15889#discussion_r88782865
--- Diff:
core/src/main/scala/org/apache/spark/internal/io/SparkHadoopMapReduceWriter.scala
---
@@ -119,7 +119,7 @@ object SparkHadoopMapReduceWriter
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/15889#discussion_r88782911
--- Diff:
core/src/main/scala/org/apache/spark/internal/io/SparkHadoopMapReduceWriter.scala
---
@@ -119,7 +119,7 @@ object SparkHadoopMapReduceWriter
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/15889#discussion_r88783173
--- Diff:
core/src/main/scala/org/apache/spark/internal/io/SparkHadoopMapReduceWriter.scala
---
@@ -119,7 +119,7 @@ object SparkHadoopMapReduceWriter
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/15889#discussion_r88783252
--- Diff:
core/src/main/scala/org/apache/spark/internal/io/SparkHadoopMapReduceWriter.scala
---
@@ -119,7 +119,7 @@ object SparkHadoopMapReduceWriter
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/15889#discussion_r88783349
--- Diff:
core/src/main/scala/org/apache/spark/internal/io/SparkHadoopMapReduceWriter.scala
---
@@ -119,7 +119,7 @@ object SparkHadoopMapReduceWriter
GitHub user HyukjinKwon opened a pull request:
https://github.com/apache/spark/pull/15939
[SPARK-3359][BUILD][DOCS] Print examples in javadoc and disable group and
tparam tags in javadoc
## What changes were proposed in this pull request?
This PR proposes/fixes two things
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15940
Yes, you should check if they are exposed in the API documentation. I
intentionally updated only those before.
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15939
Oh, no. It seems there are still a lot of errors. I just meant to show what
I have tried to test with :).
Thanks for approving!
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15940
Ah, @aditya1702, I assume from the title that you meant to do this for the Python
documentation. Actually, the PR you pointed out deals with all of the Scala/Java
documentation, if I haven't missed some
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/15943#discussion_r88794018
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala ---
@@ -705,12 +701,10 @@ class Dataset[T] private[sql](
*
* @param
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15677
Some of them already had backticks in the description and others did not.
Matching it up with backticks was initially suggested by
https://github.com/apache/spark/pull/15513
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/15943#discussion_r88796208
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala ---
@@ -705,12 +701,10 @@ class Dataset[T] private[sql](
*
* @param
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15939
Thank you Sean!
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15944
> We don't need every possible overload here.
+1
GitHub user HyukjinKwon opened a pull request:
https://github.com/apache/spark/pull/15947
[SPARK-18447][DOCS] Fix the markdown for `Note:`/`NOTE:`/`Note that` across
Python API documentation
## What changes were proposed in this pull request?
It seems in Python, there are
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/15947#discussion_r88805067
--- Diff: python/pyspark/context.py ---
@@ -520,8 +520,8 @@ def wholeTextFiles(self, path, minPartitions=None,
use_unicode=True
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/15947#discussion_r88805082
--- Diff: python/pyspark/conf.py ---
@@ -90,8 +90,8 @@ class SparkConf(object):
All setter methods in this class support chaining. For example
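The chaining the docstring describes relies on each setter returning the object itself. A minimal stand-in (a hypothetical `MiniConf`, not pyspark's real `SparkConf`) shows the pattern:

```python
class MiniConf:
    """Hypothetical stand-in illustrating setter chaining, as in SparkConf."""
    def __init__(self):
        self._settings = {}

    def set(self, key, value):
        self._settings[key] = value
        return self  # returning self is what makes chaining possible

conf = MiniConf().set("spark.master", "local[2]").set("spark.app.name", "demo")
print(conf._settings["spark.app.name"])  # demo
```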
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/15947#discussion_r88805095
--- Diff: python/pyspark/rdd.py ---
@@ -1220,10 +1219,10 @@ def top(self, num, key=None):
"""
Get the top N e
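`top(num, key=None)` returns the `num` largest elements in descending order, optionally ordered by a key function. The same semantics, without a Spark cluster, are what `heapq.nlargest` gives (a plain-Python illustration, not the RDD API itself):

```python
import heapq

data = [10, 4, 2, 12, 3]
# Equivalent of rdd.top(3): the three largest elements, largest first.
print(heapq.nlargest(3, data))  # [12, 10, 4]
# Equivalent of rdd.top(3, key=str): ordering determined by the key function.
print(heapq.nlargest(3, data, key=str))
```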
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15947
cc @srowen and @aditya1702 who was interested in this issue.
GitHub user HyukjinKwon opened a pull request:
https://github.com/apache/spark/pull/15952
[WIP][SPARK-18514][DOCS] Fix the markdown for `Note:`/`NOTE:`/`Note that`
across R API documentation
## What changes were proposed in this pull request?
It seems in R, there are
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/15952#discussion_r88821734
--- Diff: R/pkg/R/DataFrame.R ---
@@ -2541,7 +2541,8 @@ generateAliasesForIntersectedCols <- function (x,
intersectedColNames, suf
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/15952#discussion_r88822027
--- Diff: R/pkg/R/functions.R ---
@@ -2296,7 +2296,7 @@ setMethod("n", signature(x = "Column"),
#' A pattern could be f
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/15952#discussion_r88822111
--- Diff: R/pkg/R/functions.R ---
@@ -2341,7 +2341,7 @@ setMethod("from_utc_timestamp", signature(y =
"Column", x = "charac
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/15952#discussion_r88822185
--- Diff: R/pkg/R/functions.R ---
@@ -2779,7 +2779,8 @@ setMethod("window", signature(x = "Column"),
#' locate
#
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15952
@felixcheung, I just noticed that the API documentation for `context.R`,
`RDD.R` and `pairRDD.R` does not seem to appear in the documentation. Is this true?
I ended up getting rid of them
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15361
I think the recent related changes were committed by @rxin. Do you mind taking
a look, please?
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/15499#discussion_r89012336
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/DataFrameReader.scala ---
@@ -232,10 +232,10 @@ class DataFrameReader private[sql](sparkSession
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/15499#discussion_r89013219
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/DataFrameReader.scala ---
@@ -232,10 +232,10 @@ class DataFrameReader private[sql](sparkSession
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/15966#discussion_r89013627
--- Diff: docs/sql-programming-guide.md ---
@@ -1073,6 +1073,16 @@ the following case-sensitive options:
+ numPartitions
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15952
Oh, I see. I didn't notice. Thank you both!
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15952
Thank you!!!
GitHub user HyukjinKwon opened a pull request:
https://github.com/apache/spark/pull/15999
[WIP][SPARK-3359][BUILD][DOCS] More changes to resolve javadoc 8 errors
that will help unidoc/genjavadoc compatibility
## What changes were proposed in this pull request?
This PR only
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15999
I should double-check the built javadoc and will leave some images in the
changes. BTW, this still does not fully resolve the problem (cc @srowen).
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15999
retest this please
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15999
retest this please
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/15999#discussion_r89451946
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/regression/IsotonicRegression.scala
---
@@ -238,23 +238,22 @@ object IsotonicRegressionModel