Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12187#issuecomment-206027319
Hi, @vanzin .
Could you review this?
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12187#issuecomment-206028456
Thank you, @rxin !
GitHub user dongjoon-hyun opened a pull request:
https://github.com/apache/spark/pull/12185
[SPARK-14415] Add ExpressionDescription annotation for SQL expressions
## What changes were proposed in this pull request?
For Spark SQL, this PR aims to show the following function
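The idea in SPARK-14415 is to attach human-readable usage text to SQL expression classes via an annotation so that a `DESCRIBE FUNCTION` style command can surface it. The mechanism can be sketched in plain Java; the annotation name, fields, and `describe` helper below are illustrative and may differ from Spark's actual `ExpressionDescription`:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

public class DescribeFunctionSketch {
    // Hypothetical annotation modeled on the SPARK-14415 idea; the real
    // Spark annotation and its field names may differ.
    @Retention(RetentionPolicy.RUNTIME)
    @interface ExpressionDescription {
        String usage();
        String extended() default "";
    }

    // An illustrative expression class carrying its own help text.
    @ExpressionDescription(
        usage = "abs(expr) - Returns the absolute value of expr.",
        extended = "> SELECT abs(-1);\n 1")
    static class Abs {}

    // What a DESCRIBE FUNCTION command could do: read the annotation
    // off the expression class at runtime via reflection.
    static String describe(Class<?> expr) {
        ExpressionDescription d = expr.getAnnotation(ExpressionDescription.class);
        return d == null ? "No description." : d.usage();
    }

    public static void main(String[] args) {
        System.out.println(describe(Abs.class));
    }
}
```

The key design point is `RetentionPolicy.RUNTIME`: without it the annotation is discarded by the compiler and cannot be read back when the function registry builds its documentation.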
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12192#issuecomment-206098980
Hi, @jaceklaskowski and @marmbrus .
Thank you for fixing this. I didn't notice there was such a problem.
Sorry for the late response!
I have another PR
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12242#issuecomment-207548881
By the way, Jenkins fails to find `mvn`. Is there any problem there?
```
Using `mvn` from path:
/home/jenkins/workspace/SparkPullRequestBuilder/build
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12242#issuecomment-207543531
Thank you, @srowen , @davies , @rxin . I moved those files.
- rename core/src/main/{scala =>
java}/org/apache/spark/io/LZ4BlockInputStream.java (
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/12242#discussion_r59069172
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/BufferedRowIterator.java
---
@@ -60,7 +60,7 @@ public long durationMs
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12185#issuecomment-207552180
Hi, @rxin .
Could you give me some directional advice to improve this PR?
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12242#issuecomment-207548499
@srowen . It's moved like the following.
- rename sql/core/src/main/{scala =>
java}/org/apache/spark/sql/execution/BufferedRowIterator.java (
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12267#issuecomment-208448042
Thank you, @davies , @cloud-fan , and @rxin !
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/12306#discussion_r59287005
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
---
@@ -665,7 +665,7 @@ class Analyzer(
def
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/12267#discussion_r59155908
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/optimizer/NonNullableBinaryComparisonSimplificationSuite.scala
---
@@ -0,0 +1,99
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/12267#discussion_r59156009
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/optimizer/NonNullableBinaryComparisonSimplificationSuite.scala
---
@@ -0,0 +1,99
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/12267#discussion_r59154150
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/optimizer/NonNullableBinaryComparisonSimplificationSuite.scala
---
@@ -0,0 +1,99
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/12267#discussion_r59154875
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
---
@@ -787,6 +788,28 @@ object BooleanSimplification
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/12267#discussion_r59155308
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/optimizer/NonNullableBinaryComparisonSimplificationSuite.scala
---
@@ -0,0 +1,99
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12267#issuecomment-208170041
Thank you for the *deep review*, @cloud-fan . :)
I changed the PR according to one comment first.
For the others, I asked some questions to understand more
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12267#issuecomment-208181978
@cloud-fan . Thank you so much for improving this PR.
Overall, this PR now starts to include one non-null case (`EqualNullSafe`).
May I change
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12267#issuecomment-208189439
Up to now, I have updated the following.
1. Extended the `EqualNullSafe` case to handle non-nullable operands.
1. Added a test case `Nullable Simplication
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/11797#issuecomment-198185802
Thank you, @jodersky and @rxin . I'll do like that.
Just one more question: does `JobFailed` need to be `private[spark]`?
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/11797#issuecomment-198194856
I updated this PR and JIRA according to the comments. Thank you all.
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/11770#discussion_r56428060
--- Diff: core/src/main/scala/org/apache/spark/SparkEnv.scala ---
@@ -46,9 +46,6 @@ import org.apache.spark.util.{RpcUtils, Utils}
* including
GitHub user dongjoon-hyun opened a pull request:
https://github.com/apache/spark/pull/11797
Make `@DeveloperApi`-annotated things public.
## What changes were proposed in this pull request?
Spark uses the `@DeveloperApi` annotation, but sometimes it seems to conflict
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/11842#issuecomment-198664596
Thank you, @srowen . :)
GitHub user dongjoon-hyun opened a pull request:
https://github.com/apache/spark/pull/11831
[SPARK-14011] Enable `LineLength` Java checkstyle rule
## What changes were proposed in this pull request?
[Spark Coding Style
Guide](https://cwiki.apache.org/confluence/display
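SPARK-14011 enables the Checkstyle `LineLength` rule for Java sources. A configuration sketch is below; the 100-character limit matches the Spark coding style guide, but the exact placement of the module (directly under `Checker`, or nested inside `TreeWalker` in older Checkstyle versions) and the `ignorePattern` value are illustrative assumptions, not Spark's actual `checkstyle.xml`:

```xml
<!-- Sketch of a LineLength rule; placement depends on Checkstyle version
     (top-level under Checker since 8.24, inside TreeWalker before that). -->
<module name="Checker">
  <module name="LineLength">
    <property name="max" value="100"/>
    <!-- Example: exempt Javadoc continuation lines with a single long token. -->
    <property name="ignorePattern" value="^ *\* *[^ ]+$"/>
  </module>
</module>
```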
GitHub user dongjoon-hyun opened a pull request:
https://github.com/apache/spark/pull/11773
[MINOR][SQL][BUILD] Remove duplicated lines
## What changes were proposed in this pull request?
This PR removes three minor duplicated lines. First one is making the
following
GitHub user dongjoon-hyun opened a pull request:
https://github.com/apache/spark/pull/11770
[SPARK-13942][CORE][DOCS] Remove Shark-related docs and visibility for 2.x
## What changes were proposed in this pull request?
`Shark` was merged into `Spark SQL` since [July
2014
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/11770#issuecomment-197589588
Now I cleaned up the description of this PR and JIRA consistently.
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/11773#discussion_r56431109
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/JoinSuite.scala ---
@@ -49,7 +49,6 @@ class JoinSuite extends QueryTest with SharedSQLContext
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/11734#issuecomment-197538496
Oh, thank you for the fast response. I'll do it right now.
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/11773#discussion_r56431084
--- Diff: project/MimaExcludes.scala ---
@@ -299,13 +299,11 @@ object MimaExcludes {
// [SPARK-13244][SQL] Migrates DataFrame to Dataset
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/11848#issuecomment-199166103
Thank you again!
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/11831#issuecomment-199165962
Thank you, @srowen !
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/11734#issuecomment-197541126
Oh, that's great, @JoshRosen .
I made a Jira issue, but could you adjust it appropriately to match yours?
https://issues.apache.org/jira/browse
GitHub user dongjoon-hyun opened a pull request:
https://github.com/apache/spark/pull/11848
[MINOR][DOCS] Add proper periods and spaces for CLI help messages and
`config` doc.
## What changes were proposed in this pull request?
This PR adds some proper periods and spaces
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/11637#issuecomment-198185995
Thank you, @HyukjinKwon ! :)
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/11851#issuecomment-199519237
I added more testcases for `Not` canonicalization in ExpressionSetSuite.
Thank you, @rxin and @marmbrus .
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/11848#issuecomment-198890212
Thank you, @srowen !
GitHub user dongjoon-hyun opened a pull request:
https://github.com/apache/spark/pull/11851
[SPARK-14029][SQL] Improve BooleanSimplification optimization by
implementing `Not` canonicalization.
## What changes were proposed in this pull request?
Currently
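The `Not` canonicalization proposed in SPARK-14029 rests on a simple idea: a negated comparison can be rewritten as the opposite comparison, e.g. `Not(a > b)` becomes `a <= b`, so that semantically equal predicates compare equal. A minimal sketch of that operator-flipping idea follows; it works on operator strings purely for illustration, whereas Catalyst's real rule rewrites expression trees inside `BooleanSimplification`:

```java
import java.util.Map;

public class NotCanonicalization {
    // Each comparison operator mapped to its negation, e.g.
    // Not(a > b) canonicalizes to (a <= b).
    static final Map<String, String> NEGATION = Map.of(
        ">", "<=",
        ">=", "<",
        "<", ">=",
        "<=", ">",
        "=", "!=",
        "!=", "=");

    // Rewrite a negated comparison into its canonical positive form.
    static String canonicalizeNot(String op) {
        String flipped = NEGATION.get(op);
        if (flipped == null) {
            throw new IllegalArgumentException("Unknown operator: " + op);
        }
        return flipped;
    }

    public static void main(String[] args) {
        System.out.println(canonicalizeNot(">"));   // prints "<="
        System.out.println(canonicalizeNot("="));   // prints "!="
    }
}
```

Note the caveat the PR discussion circles around: flipping `>` to `<=` is only sound when both operands are non-nullable, since with SQL `NULL` both `a > b` and `a <= b` can evaluate to `NULL` rather than being complements.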
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/11831#discussion_r56762251
--- Diff:
core/src/main/java/org/apache/spark/api/java/function/DoubleFunction.java ---
@@ -23,5 +23,5 @@
* A function that returns Doubles
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/11831#discussion_r56762305
--- Diff:
core/src/main/java/org/apache/spark/api/java/function/DoubleFunction.java ---
@@ -23,5 +23,5 @@
* A function that returns Doubles
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/11831#issuecomment-198547361
Thank you, @srowen .
The change looks a bit verbose here.
But, as you said, I'm sure that the direction is right and is needed.
The code will remain
GitHub user dongjoon-hyun opened a pull request:
https://github.com/apache/spark/pull/11868
[SPARK-14051] Implement `Double.NaN==Float.NaN` in `row.equals` for
consistency.
## What changes were proposed in this pull request?
Since [SPARK-9079](https://issues.apache.org
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/11868#issuecomment-199956737
Hi, @rxin . Thank you again!
I made a big mistake in this PR and have now fixed it thanks to your advice. Now,
the following are true.
```
scala
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/11868#issuecomment-199961154
[IBM
DB2](https://www.ibm.com/support/knowledgecenter/SSEPEK_10.0.0/com.ibm.db2z10.doc.sqlref/src/tpc/db2z_numericcomparisions.dita)
also says "From a
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/11868#issuecomment-199874683
Oh, thank you for pointing that out. I missed that part. Let me check it
again. I guess we can change `Row[NaN].hashCode` together in this PR.
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/11851#issuecomment-199921600
Oh, thank you so much for merging this PR, @marmbrus !
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/11868#issuecomment-199959344
That's because Scala follows the standard Java and IEEE floating-point semantics. I
also know that NaN comparison is always false, even with another NaN.
However, this is about `Row
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/11868#issuecomment-199960241
For example,
[Oracle](https://docs.oracle.com/cd/B12037_01/server.101/b10759/sql_elements001.htm)
orders NaN greatest with respect to all other values
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/11851#issuecomment-199903429
Hi, @marmbrus .
Could you review the newly added tests again when you have time? :)
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/11868#issuecomment-10025
I think this PR makes Spark users feel less confused by completing the
missing `NaN` handling in Spark SQL (`Row`).
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/11868#issuecomment-199982128
Oh, I see the point here now. @rxin , may I explain a little bit
more?
Mathematically, `NaN` equality is defined as `false`. The following are all
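The NaN semantics this thread keeps returning to are standard Java/IEEE 754 behavior, not Spark-specific code, and can be shown concretely:

```java
public class NaNSemantics {
    public static void main(String[] args) {
        // IEEE 754 / primitive comparison: NaN is not equal to anything,
        // including itself.
        System.out.println(Double.NaN == Double.NaN);               // false

        // Boxed equality and Double.compare treat NaN as equal to itself,
        // which is the behavior a collection-style equals (like Row.equals)
        // typically wants.
        System.out.println(Double.valueOf(Double.NaN)
            .equals(Double.valueOf(Double.NaN)));                   // true
        System.out.println(Double.compare(Double.NaN, Double.NaN)); // 0

        // The PR's concern: a Float NaN and a Double NaN boxed separately
        // are NOT equal under Object.equals, because the classes differ.
        System.out.println(Float.valueOf(Float.NaN)
            .equals(Double.valueOf(Double.NaN)));                   // false
    }
}
```

The last case is exactly the `Double.NaN == Float.NaN` inconsistency the PR title names: without special handling, a row holding a float NaN and a row holding a double NaN would never compare equal.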
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/11920#discussion_r57232029
--- Diff:
repl/scala-2.11/src/main/scala/org/apache/spark/repl/SparkILoop.scala ---
@@ -75,11 +74,9 @@ class SparkILoop(in0: Option[BufferedReader
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/11920#issuecomment-200531357
In Jira, I chose `Spark Shell` as a component, but here `CORE` is the
nearest one.
GitHub user dongjoon-hyun opened a pull request:
https://github.com/apache/spark/pull/11920
[SPARK-14102] Block `reset` command in SparkShell
## What changes were proposed in this pull request?
Spark Shell provides an easy way to use Spark in Scala environment. This PR
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/11920#discussion_r57233141
--- Diff:
repl/scala-2.11/src/main/scala/org/apache/spark/repl/SparkILoop.scala ---
@@ -75,11 +74,9 @@ class SparkILoop(in0: Option[BufferedReader
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/11920#issuecomment-201341699
Hi, @srowen .
Recently Jenkins has been a little bit flaky.
What do you think about this PR now?
The code is the same; it is just rebased to pass the test
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/11920#issuecomment-201392194
Thank you for the positive feedback. Indeed, it was just a one-line change to
block `reset`.
All the other changes in that file are just about ScalaStyle
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/11797#issuecomment-199352456
Oh, Thank you, @srowen !
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/11851#issuecomment-199385936
Hi, @rxin .
Could you review this PR when you have some time?
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/11963#issuecomment-201488695
Thank you for review, @gatorsmile and @srowen .
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/11968#issuecomment-201528379
@JoshRosen . Now, this PR only removes unused imports and fixes
java-lint errors.
GitHub user dongjoon-hyun opened a pull request:
https://github.com/apache/spark/pull/11964
[SPARK-14164] Improve input layer validation to
MultilayerPerceptronClassifier
## What changes were proposed in this pull request?
This issue improves an input layer validation
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/11968#issuecomment-201523593
Thank you for the quick review. I closed that Jira issue a minute ago.
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/11964#issuecomment-201539165
Hi, @mengxr .
Could you review this PR please?
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/11966#issuecomment-201518778
@jkbradley . I updated the code and PR information.
Thank you for your review!
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/11966#issuecomment-201514423
Oh, sure. If so, I'll update like that.
GitHub user dongjoon-hyun opened a pull request:
https://github.com/apache/spark/pull/11968
[SPARK-14167][MINOR] Remove redundant `returns` in Scala code.
## What changes were proposed in this pull request?
Spark Scala code takes advantage of `return` statement as a control
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/11968#issuecomment-201522515
Oh, I see. I will close the JIRA.
By the way, may I fix that minor Java lint error here under a changed title?
GitHub user dongjoon-hyun opened a pull request:
https://github.com/apache/spark/pull/11963
[MINOR][SQL] Fix substr/substring testcases.
## What changes were proposed in this pull request?
This PR fixes the following two testcases in order to test the correct
usages
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/11966#issuecomment-201493002
Hi, @jkbradley .
Could you review this PR?
GitHub user dongjoon-hyun opened a pull request:
https://github.com/apache/spark/pull/11966
[MINOR][MLLIB] Change parameter `categories` type from Seq to List.
## What changes were proposed in this pull request?
This PR fixes the following line and the related code
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/11968#issuecomment-201917668
Thank you, @srowen .
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/11963#issuecomment-202125611
Hi, @srowen .
Could you merge this PR please?
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/11963#issuecomment-202130064
Thank you!
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/11966#issuecomment-202130520
Oh, thank you so much, @srowen .
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/11966#issuecomment-202124851
Hi, @jkbradley .
Could you merge this PR?
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/11907#issuecomment-200121813
Thank you always. :)
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/11907#issuecomment-200201593
Thank you for merging, @rxin .
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/11907#issuecomment-200151880
Rebased.
GitHub user dongjoon-hyun opened a pull request:
https://github.com/apache/spark/pull/11907
[MINOR][SQL][DOCS] Update `sql/README.md` and remove some unused imports in
`sql` module.
## What changes were proposed in this pull request?
This PR updates `sql/README.md
GitHub user dongjoon-hyun opened a pull request:
https://github.com/apache/spark/pull/11842
[MINOR][DOCS] Use `spark-submit` instead of `sparkR` to submit R script.
## What changes were proposed in this pull request?
Since `sparkR` is not used for submitting R Scripts from
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/11734#issuecomment-197537154
Oh, @vanzin .
Sorry for writing comments at the closed PR.
I've read your concern and investigated a bit. The root cause was
`GenerateMIMAIgnore.scala
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/11567#issuecomment-193578502
The `PySpark` failure is irrelevant to this PR, but I rebased this PR onto
master because this is still a problem.
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/11567#issuecomment-193656403
Thank you! I see.
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/11567#issuecomment-193731456
Thank you for merging.
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/11530#discussion_r55345620
--- Diff:
examples/src/main/java/org/apache/spark/examples/streaming/JavaTwitterHashTagJoinSentiments.java
---
@@ -171,5 +171,6 @@ public void call
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/11530#issuecomment-193746151
Hi, @srowen . I've finished.
Let's see the result, [Jenkins
#52663](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/52663/consoleFull
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/11530#issuecomment-193645680
Thank you, @zsxwing .
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/11567#issuecomment-193660852
Thank you, @srowen . The pom.xml file is updated as you said.
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/11567#issuecomment-193655686
Hi, @srowen and @rxin .
In the dev mailing list, there is another report about the location of
`scalastyle-config.xml`.
In `pom.xml
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/11527#issuecomment-193670345
Hi, @mengxr .
Could you review this PR, please?
After #11519 , I found that the `regParam` argument is missing in some APIs.
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/11530#issuecomment-193665106
Hi, @srowen .
Could you merge this PR, please?
It includes `JavaKinesisStreamSuite` part, too.
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/11567#issuecomment-193669049
About the `?s part`, @zsxwing gave us a note and a pointer in #11438 .
> FYI because mvn checkstyle:check depends on mvn install which cost huge
t
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/11541#issuecomment-193662443
The conflict will be resolved via #11567 . After #11567 is committed, I
will update this.
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/11530#issuecomment-193890161
Hi, @srowen .
I'm back online now. Please let me know if there are more things to do here.
Thank you for reviewing and guiding me these days
GitHub user dongjoon-hyun opened a pull request:
https://github.com/apache/spark/pull/11519
[SPARK-13676] Fix mismatched default values for regParam in
LogisticRegression
## What changes were proposed in this pull request?
The default value of regularization parameter
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/11292#issuecomment-191310287
Cool! I'm looking forward to seeing it in master soon.
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/11481#issuecomment-192003831
Thank you, @srowen !
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/11527#issuecomment-192758886
retest this please