GitHub user uncleGen opened a pull request:
https://github.com/apache/spark/pull/17144
[SPARK-19803][TEST] flaky BlockManagerReplicationSuite test failure
## What changes were proposed in this pull request?
Give more time for replication to happen and for the new block to be reported.
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/17052
ping @zsxwing
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastructure@apache.org or file a JIRA ticket
with INFRA.
Github user uncleGen commented on a diff in the pull request:
https://github.com/apache/spark/pull/17141#discussion_r104073813
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/streaming/ReservoirSampleSuit.scala
---
@@ -0,0 +1,134 @@
+/*
+ * Licensed to the Apache
Github user uncleGen commented on a diff in the pull request:
https://github.com/apache/spark/pull/17141#discussion_r104073535
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/statefulOperators.scala
---
@@ -397,3 +402,110 @@ object
Github user uncleGen commented on a diff in the pull request:
https://github.com/apache/spark/pull/17141#discussion_r104073674
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/statefulOperators.scala
---
@@ -397,3 +402,110 @@ object
Github user uncleGen commented on a diff in the pull request:
https://github.com/apache/spark/pull/17141#discussion_r104073607
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/statefulOperators.scala
---
@@ -397,3 +402,110 @@ object
Github user uncleGen commented on a diff in the pull request:
https://github.com/apache/spark/pull/17141#discussion_r104073516
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/statefulOperators.scala
---
@@ -397,3 +402,110 @@ object
Github user uncleGen commented on a diff in the pull request:
https://github.com/apache/spark/pull/17141#discussion_r104073290
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/statefulOperators.scala
---
@@ -397,3 +402,110 @@ object
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/17141
cc @zsxwing and @tdas
Github user uncleGen commented on a diff in the pull request:
https://github.com/apache/spark/pull/17080#discussion_r104070946
--- Diff: core/src/main/scala/org/apache/spark/deploy/SparkHadoopUtil.scala
---
@@ -82,17 +82,25 @@ class SparkHadoopUtil extends Logging
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/17141
@srowen There are some unsupported operators in Structured Streaming. You
can view them here:
https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/16970
@zsxwing Thanks, I missed it.
GitHub user uncleGen opened a pull request:
https://github.com/apache/spark/pull/17141
[SPARK-19800][SS][WIP] Implement one kind of streaming sampling - reservoir
sampling
## What changes were proposed in this pull request?
This pr adds a special streaming sample operator
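For context, the classic one-pass reservoir algorithm (Vitter's Algorithm R) that such an operator would build on can be sketched in plain Scala. This is an illustrative sketch under that assumption, not the operator implemented in the PR:

```scala
import scala.collection.mutable.ArrayBuffer
import scala.util.Random

// Algorithm R: maintain a uniform random sample of size k over a stream
// of unknown length, in a single pass and O(k) memory.
def reservoirSample[T](stream: Iterator[T], k: Int, seed: Long = 42L): Seq[T] = {
  val rng = new Random(seed)
  val reservoir = ArrayBuffer.empty[T]
  var i = 0L // number of elements seen so far
  for (elem <- stream) {
    if (i < k) {
      reservoir += elem // fill the reservoir with the first k elements
    } else {
      // Keep the new element with probability k / (i + 1),
      // evicting a uniformly chosen slot.
      val j = (rng.nextDouble() * (i + 1)).toLong
      if (j < k) reservoir(j.toInt) = elem
    }
    i += 1
  }
  reservoir.toSeq
}
```

In a streaming setting, the pair (reservoir, element count) is exactly the kind of state a stateful operator would have to checkpoint between batches.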
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/17052
\cc @zsxwing
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/17080
\cc @srowen
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/17080
@steveloughran OK
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/16970
One question: without aggregation, how can we drop duplicates between
partitions?
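The question above can be illustrated with plain Scala collections (hypothetical data, not the Spark API): deduplicating inside each partition alone misses duplicates that sit in different partitions, which is why a global dedup needs either a shuffle by key or shared state:

```scala
// Two "partitions" with a duplicate value (3) that spans both of them.
val partitions = Seq(Seq(1, 2, 2, 3), Seq(3, 4, 4, 5))

// Partition-local dedup only: 3 survives twice, once per partition.
val localOnly = partitions.flatMap(_.distinct)

// Route equal values to the same group first (what a shuffle does),
// then a local dedup becomes globally correct.
val global = partitions.flatten.groupBy(identity).keys.toSeq.sorted
```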
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/17052
cc @zsxwing
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/17052
retest this please.
Github user uncleGen commented on a diff in the pull request:
https://github.com/apache/spark/pull/17052#discussion_r103384682
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/streaming/FileStreamSourceSuite.scala
---
@@ -989,7 +989,8 @@ class FileStreamSourceSuite extends
Github user uncleGen commented on a diff in the pull request:
https://github.com/apache/spark/pull/17052#discussion_r103385145
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/SparkStrategies.scala ---
@@ -231,8 +231,9 @@ abstract class SparkStrategies extends
Github user uncleGen commented on a diff in the pull request:
https://github.com/apache/spark/pull/17052#discussion_r103385054
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
---
@@ -1119,11 +1119,16 @@ case class DecimalAggregates
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/17052
retest this please.
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/17080
@steveloughran IMHO, there is no need to use
`org.apache.hadoop.fs.s3a.Constants` and
`com.amazonaws.SDKGlobalConfiguration`, otherwise we will import `hadoop-aws`
and `aws-java-sdk-core
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/17052
working on unit test failure
Github user uncleGen commented on a diff in the pull request:
https://github.com/apache/spark/pull/14731#discussion_r103187577
--- Diff:
streaming/src/main/scala/org/apache/spark/streaming/dstream/FileInputDStream.scala
---
@@ -140,7 +137,7 @@ class FileInputDStream[K, V, F
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/17082
@srowen I think this is the only source we forgot to name.
GitHub user uncleGen opened a pull request:
https://github.com/apache/spark/pull/17082
[SPARK-19749][SS] Name socket source with a meaningful name
## What changes were proposed in this pull request?
Name socket source with a meaningful name
## How was this patch tested?
GitHub user uncleGen opened a pull request:
https://github.com/apache/spark/pull/17080
[SPARK-19739][CORE] Propagate S3 session token to cluster
## What changes were proposed in this pull request?
Propagate the S3 session token to the cluster.
## How was this patch tested?
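For temporary AWS credentials, all three values (plus a credentials provider) typically have to reach the cluster-side Hadoop configuration. As a hedged sketch of the `spark-defaults.conf`-style settings the S3A connector (Hadoop 2.8+) documents for this case, independent of the mechanism this PR proposes:

```
spark.hadoop.fs.s3a.access.key                <access-key>
spark.hadoop.fs.s3a.secret.key                <secret-key>
spark.hadoop.fs.s3a.session.token             <session-token>
spark.hadoop.fs.s3a.aws.credentials.provider  org.apache.hadoop.fs.s3a.TemporaryAWSCredentialsProvider
```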
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/17052
retest this please.
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/17052
@zsxwing got it
Github user uncleGen commented on a diff in the pull request:
https://github.com/apache/spark/pull/17052#discussion_r102922775
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/rules.scala
---
@@ -393,6 +393,17 @@ case class PreprocessTableInsertion
Github user uncleGen commented on a diff in the pull request:
https://github.com/apache/spark/pull/17052#discussion_r102922659
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/basicLogicalOperators.scala
---
@@ -535,7 +535,8 @@ case class Range
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/17052
cc @zsxwing
GitHub user uncleGen opened a pull request:
https://github.com/apache/spark/pull/17052
[SPARK-19690][SS] Join a streaming DataFrame with a batch DataFrame which
has an aggregation may not work
## What changes were proposed in this pull request?
`StatefulAggregationStrategy
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/17033
@vanzin done
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/17033
cc @vanzin
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/17033
cc @srowen
GitHub user uncleGen opened a pull request:
https://github.com/apache/spark/pull/17033
[DOCS] Application environment REST API
## What changes were proposed in this pull request?
Application environment REST API.
## How was this patch tested?
jenkins
You
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/17025
wip
Github user uncleGen closed the pull request at:
https://github.com/apache/spark/pull/17025
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/17025
working on the test failure
GitHub user uncleGen opened a pull request:
https://github.com/apache/spark/pull/17025
[SPARK-19690][SS] Join a streaming DataFrame with a batch DataFrame which
has an aggregation may not work
## What changes were proposed in this pull request?
`StatefulAggregationStrategy
Github user uncleGen closed the pull request at:
https://github.com/apache/spark/pull/16936
Github user uncleGen closed the pull request at:
https://github.com/apache/spark/pull/16691
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/16980
Shouldn't the JIRA ID be SPARK-19645?
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/16980
retest this please.
Github user uncleGen commented on a diff in the pull request:
https://github.com/apache/spark/pull/16972#discussion_r102390214
--- Diff: core/src/main/scala/org/apache/spark/storage/DiskStore.scala ---
@@ -73,17 +81,52 @@ private[spark] class DiskStore(conf: SparkConf,
diskManager
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/16949
cc @srowen and @vanzin also.
Github user uncleGen closed the pull request at:
https://github.com/apache/spark/pull/17011
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/17011
@srowen @vanzin I think the root cause is that I tested it as the root user, so it
is always readable no matter what the access permissions are. IMHO, it is OK to add
one extra access-permission check, as the code
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/17011
@srowen @vanzin I will test in some other platforms.
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/17011
retest this please.
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/16977
Build succeeded locally. Retest this please.
GitHub user uncleGen opened a pull request:
https://github.com/apache/spark/pull/17011
[SPARK-19676][CORE] Flaky test: FsHistoryProviderSuite.SPARK-3697: ignore
directories that cannot be read.
## What changes were proposed in this pull request?
Flaky test
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/16949
cc @srowen also.
Github user uncleGen commented on a diff in the pull request:
https://github.com/apache/spark/pull/16818#discussion_r102115145
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/expressions/WindowSpec.scala ---
@@ -180,16 +180,20 @@ class WindowSpec private[sql
Github user uncleGen commented on a diff in the pull request:
https://github.com/apache/spark/pull/16972#discussion_r101969639
--- Diff: core/src/main/scala/org/apache/spark/storage/DiskStore.scala ---
@@ -73,17 +81,52 @@ private[spark] class DiskStore(conf: SparkConf,
diskManager
Github user uncleGen commented on a diff in the pull request:
https://github.com/apache/spark/pull/16977#discussion_r101954581
--- Diff:
core/src/main/scala/org/apache/spark/rdd/ParallelCollectionRDD.scala ---
@@ -105,6 +105,17 @@ private[spark] class ParallelCollectionRDD[T
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/16949
cc @vanzin Take a second review please!
Github user uncleGen commented on a diff in the pull request:
https://github.com/apache/spark/pull/16818#discussion_r101948606
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/expressions/WindowSpec.scala ---
@@ -180,16 +180,20 @@ class WindowSpec private[sql
Github user uncleGen commented on a diff in the pull request:
https://github.com/apache/spark/pull/16857#discussion_r101764130
--- Diff:
external/kafka-0-10-sql/src/test/resources/kafka-source-initial-offset-version-2.1.0.bin
---
@@ -0,0 +1 @@
+2{"kafka-initial-offset-
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/16949
@vanzin @ajbozarth sure, I will check the related code in.
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/16972
working on the unit test failure
Github user uncleGen commented on a diff in the pull request:
https://github.com/apache/spark/pull/16972#discussion_r101682505
--- Diff: core/src/main/scala/org/apache/spark/storage/DiskStore.scala ---
@@ -73,17 +81,52 @@ private[spark] class DiskStore(conf: SparkConf,
diskManager
Github user uncleGen commented on a diff in the pull request:
https://github.com/apache/spark/pull/16972#discussion_r101682451
--- Diff: core/src/test/scala/org/apache/spark/storage/DiskStoreSuite.scala
---
@@ -39,27 +40,27 @@ class DiskStoreSuite extends SparkFunSuite
Github user uncleGen commented on a diff in the pull request:
https://github.com/apache/spark/pull/16972#discussion_r101682663
--- Diff: core/src/main/scala/org/apache/spark/storage/DiskStore.scala ---
@@ -21,17 +21,25 @@ import java.io.{FileOutputStream, IOException
Github user uncleGen commented on a diff in the pull request:
https://github.com/apache/spark/pull/16972#discussion_r101682730
--- Diff:
core/src/main/scala/org/apache/spark/storage/memory/MemoryStore.scala ---
@@ -344,7 +370,7 @@ private[spark] class MemoryStore(
val
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/16972
@vanzin I will add some unit tests later. But could you please review
this first? I think I may be missing something.
GitHub user uncleGen opened a pull request:
https://github.com/apache/spark/pull/16972
[SPARK-19556][CORE][WIP] Broadcast data is not encrypted when I/O
encryption is on
## What changes were proposed in this pull request?
`TorrentBroadcast` uses a couple of "back
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/16949
@vanzin I opened a jira (https://issues.apache.org/jira/browse/SPARK-19642)
to research and address the potential security flaws. Do you mind if I continue
this pr?
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/16936
Let us ask @zsxwing for some suggestions.
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/16949
@srowen good question! IMHO, we should add this API:
- it provides a complete API, the same as what users see in the web UI
- if this is a security issue, we should address it in other ways
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/16920
@rxin cool, `Jirafy` is enough.
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/16949
terminated by signal 9. retest this please.
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/16949
cc @srowen
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/16949
Jenkins crashed. Retest this please.
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/16818
cc @cloud-fan
GitHub user uncleGen opened a pull request:
https://github.com/apache/spark/pull/16949
[SPARK-16122][CORE] Add REST API for job environment
## What changes were proposed in this pull request?
Add a REST API for the job environment.
## How was this patch tested
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/16936
@srowen
> Does it really not work if not enough receivers can schedule?
That's not what I wanted to express. What I mean is that the streaming
output cannot make progress.
Github user uncleGen closed the pull request at:
https://github.com/apache/spark/pull/16920
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/16920
@rxin just a quick link to JIRA :)
Github user uncleGen commented on a diff in the pull request:
https://github.com/apache/spark/pull/16936#discussion_r101223265
--- Diff:
streaming/src/main/scala/org/apache/spark/streaming/scheduler/ReceiverTracker.scala
---
@@ -437,6 +438,74 @@ class ReceiverTracker(ssc
Github user uncleGen commented on a diff in the pull request:
https://github.com/apache/spark/pull/16936#discussion_r101223862
--- Diff:
resource-managers/yarn/src/main/scala/org/apache/spark/deploy/yarn/config.scala
---
@@ -215,10 +215,6 @@ package object config
Github user uncleGen commented on a diff in the pull request:
https://github.com/apache/spark/pull/16936#discussion_r101223957
--- Diff:
streaming/src/test/scala/org/apache/spark/streaming/StreamingContextSuite.scala
---
@@ -938,7 +958,7 @@ object SlowTestReceiver
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/16656
@zsxwing Could you please review this?
GitHub user uncleGen opened a pull request:
https://github.com/apache/spark/pull/16936
[SPARK-19605][DStream] Fail if existing resources are not enough to run the
streaming job
## What changes were proposed in this pull request?
For more detailed discussion, please review
Github user uncleGen commented on a diff in the pull request:
https://github.com/apache/spark/pull/16818#discussion_r101187326
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/expressions/WindowSpec.scala ---
@@ -180,16 +180,20 @@ class WindowSpec private[sql
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/16920
cc @srowen
GitHub user uncleGen opened a pull request:
https://github.com/apache/spark/pull/16920
[MINOR][DOCS] Add jira url in pull request description
## What changes were proposed in this pull request?
(Please fill in changes proposed in this fix)
## How was this patch tested?
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/16818
cc @hvanhovell and @cloud-fan
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/16827
@srowen makes sense; I'll close it for now until there is a follow-up.
Github user uncleGen closed the pull request at:
https://github.com/apache/spark/pull/16827
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/16827
@srowen makes sense. What about logging a warning/error message if
`spark.master` is set to different values?
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/16827
@srowen Could you please take a second review?
cc @rxin also
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/16818
cc @gatorsmile also
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/16857
I think it would be best to add a new unit test for this issue.
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/16857
retest this please.
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/16818
cc @cloud-fan @hvanhovell
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/16863
Please review http://spark.apache.org/contributing.html before opening a
pull request.