Github user uncleGen commented on a diff in the pull request:
https://github.com/apache/spark/pull/15915#discussion_r92739178
--- Diff:
core/src/main/scala/org/apache/spark/storage/memory/MemoryStore.scala ---
@@ -331,7 +331,15 @@ private[spark] class MemoryStore(
var
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/16142
Besides, the unit test has proven that the older files will be cleaned first.
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/16142
@vanzin
> For the feature, it feels like it's trying to make the SHS more like a
"log management system" than a history server.
Sorry, I do not get it. I just added a
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/15915
OK, I will give an update as soon as possible.
---
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/15915
@srowen The truth is that `chunk size` is passed from different code paths
with different values. There is no reasonable fallback when `chunk size` is an
invalid value. Empirically, `chunk size
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/16142
cc @vanzin
---
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/16142
@AmplabJenkins retest please
---
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/15915
```
Discovery starting.
Discovery completed in 37 seconds, 984 milliseconds.
Run starting. Expected test count is: 10
MemoryStoreSuite:
- reserve/release unroll memory
Github user uncleGen commented on a diff in the pull request:
https://github.com/apache/spark/pull/15915#discussion_r92308058
--- Diff:
core/src/main/scala/org/apache/spark/storage/memory/MemoryStore.scala ---
@@ -331,7 +331,12 @@ private[spark] class MemoryStore(
var
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/16142
retest it please
---
Github user uncleGen commented on a diff in the pull request:
https://github.com/apache/spark/pull/15915#discussion_r92305310
--- Diff:
core/src/main/scala/org/apache/spark/storage/memory/MemoryStore.scala ---
@@ -331,7 +331,12 @@ private[spark] class MemoryStore(
var
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/16142
retest please
---
Github user uncleGen commented on a diff in the pull request:
https://github.com/apache/spark/pull/15915#discussion_r92104532
--- Diff:
core/src/test/scala/org/apache/spark/util/io/ChunkedByteBufferOutputStreamSuite.scala
---
@@ -119,4 +119,21 @@ class
Github user uncleGen commented on a diff in the pull request:
https://github.com/apache/spark/pull/15915#discussion_r92103662
--- Diff:
core/src/main/scala/org/apache/spark/util/io/ChunkedByteBufferOutputStream.scala
---
@@ -30,9 +31,14 @@ import
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/16142
@vanzin I have removed the related changes in `EventLoggingListener` and
provided a new clean mode, i.e. a `space`-based mode. Please take a look.
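The `space`-based clean mode mentioned above can be sketched as follows. This is a minimal, hypothetical illustration only: the class and method names (`SpaceBasedCleaner`, `cleanBySpace`) are invented for this sketch and are not Spark's actual implementation. It shows the intended policy of deleting the oldest event-log files first once a byte budget is exceeded:

```java
import java.io.File;
import java.util.Arrays;
import java.util.Comparator;

// Hypothetical sketch of a space-based event-log cleaner: when the log
// directory exceeds a byte budget, delete files oldest-first until the
// total size fits again. Names are illustrative, not Spark's actual API.
public class SpaceBasedCleaner {
    public static void cleanBySpace(File logDir, long maxUsage) {
        File[] files = logDir.listFiles();
        if (files == null) {
            return; // not a directory, or an I/O error occurred
        }
        // Oldest first, matching "the older file will be cleaned first".
        Arrays.sort(files, Comparator.comparingLong(File::lastModified));
        long total = Arrays.stream(files).mapToLong(File::length).sum();
        for (File f : files) {
            if (total <= maxUsage) {
                break; // back under budget
            }
            total -= f.length();
            f.delete();
        }
    }
}
```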
---
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/16164
@srowen Indeed, there is no need to change the current code, because this
only happens when something goes wrong. I will close this PR.
---
Github user uncleGen closed the pull request at:
https://github.com/apache/spark/pull/16164
---
Github user uncleGen closed the pull request at:
https://github.com/apache/spark/pull/15904
---
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/15904
@saturday-shi Well, we should not spend much time doing repetitive work.
So, if you are sure you will complete this work, I will re-close this PR. OK?
---
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/15915
Generally, there are three ways to fix this issue:
1. Check if `chunkSize` exceeds `Int.MaxValue` to narrow this issue.
2. Provide a new config to set `chunkSize`.
3. Reuse existing
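Option 1 can be sketched as a simple guard. This is a hedged illustration: the `safeChunkSize` helper and its fallback behaviour are assumptions for this sketch, not the actual patch; the 4 MiB fallback reuses the value mentioned elsewhere in this thread:

```java
// Hypothetical sketch of option 1: validate a requested chunk size before
// narrowing it to an int, falling back to a default instead of letting the
// value wrap around. safeChunkSize is an illustrative name, not Spark's API.
public class ChunkSizeGuard {
    static final int DEFAULT_CHUNK_SIZE = 1024 * 1024 * 4; // 4 MiB

    public static int safeChunkSize(long requested) {
        if (requested <= 0 || requested > Integer.MAX_VALUE) {
            return DEFAULT_CHUNK_SIZE; // invalid, or would overflow an int
        }
        return (int) requested;
    }

    public static void main(String[] args) {
        long big = 3L * 1024 * 1024 * 1024;      // 3 GiB
        System.out.println((int) big);           // wraps to -1073741824
        System.out.println(safeChunkSize(big));  // falls back to 4194304
    }
}
```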
Github user uncleGen commented on a diff in the pull request:
https://github.com/apache/spark/pull/16142#discussion_r91889826
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/EventLoggingListener.scala ---
@@ -90,6 +91,10 @@ private[spark] class EventLoggingListener
Github user uncleGen commented on a diff in the pull request:
https://github.com/apache/spark/pull/15915#discussion_r9169
--- Diff:
core/src/main/scala/org/apache/spark/broadcast/TorrentBroadcast.scala ---
@@ -78,6 +80,7 @@ private[spark] class TorrentBroadcast[T: ClassTag
Github user uncleGen commented on a diff in the pull request:
https://github.com/apache/spark/pull/15915#discussion_r91880781
--- Diff: core/src/main/scala/org/apache/spark/memory/MemoryManager.scala
---
@@ -223,8 +222,10 @@ private[spark] abstract class MemoryManager
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/15915
@srowen Sorry for the delay, I will update it as soon as possible based on
the comments.
---
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/15904
@vanzin Sorry for the delay, I will update it as soon as possible based on
your comments.
---
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/15904
@saturday-shi let us improve this.
---
GitHub user uncleGen reopened a pull request:
https://github.com/apache/spark/pull/15904
[SPARK-18470][STREAMING][WIP] Provide Spark Streaming Monitor Rest Api
## What changes were proposed in this pull request?
Provide Spark Streaming Monitor Rest Api
- /api/v1
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/16000
@saturday-shi Yes, I will reopen it; you are welcome to improve it and work
on it together :)
---
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/16164
Wait a moment, I will provide a better case :)
---
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/16164
cc @zsxwing
---
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/16142
@vanzin I found that you provided this base work. Could you please take a
look? Thank you.
---
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/15915
@srowen +1 to your comment.
> I think the suggestion was to base it off of some existing property, like
page size?
Yes, introducing a new config property is a practice of l
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/16000
> It's completely possible to mount the streaming API on top of the
existing spark-core API.
A similar PR https://github.com/apache/spark/pull/15904 for you. Is it what
you w
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/16142
@srowen If I understand you correctly, **"log rotation"** is different from
**"job event log clean-up"**. The "job event log" is used to r
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/16164
@srowen Indeed, it is not a normal case, and I found this problem when the
streaming job went wrong. As you said
> one can compare the graphs visually.
It may still mislead us
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/16142
@srowen The Spark History Server may do the clean-up work, but the
precondition is that we start it and keep it running. Besides, if applications
arrive constantly, the event log may still take up
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/16142
cc @srowen
---
GitHub user uncleGen opened a pull request:
https://github.com/apache/spark/pull/16164
[SPARK-18732][WEB-UI] The Y axis ranges of "schedulingDelay",
"processingTime"…
## What changes were proposed in this pull request?
Currently, the Y axis ranges
Github user uncleGen closed the pull request at:
https://github.com/apache/spark/pull/15992
---
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/16142
@SparkQA Test it again
---
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/16142
retest it please
---
GitHub user uncleGen opened a pull request:
https://github.com/apache/spark/pull/16142
[SPARK-18716][CORE] Restrict the disk usage of spark event log.
## What changes were proposed in this pull request?
We've had reports of excessive disk usage by the spark event log file
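For context, the History Server's existing cleaner is time-based and is driven by settings like the following (these are real Spark configuration keys; the values shown are only illustrative), which is why a disk-usage restriction is being proposed in this PR:

```
# spark-defaults.conf: existing time-based event-log cleaner
spark.history.fs.cleaner.enabled   true
spark.history.fs.cleaner.interval  1d
spark.history.fs.cleaner.maxAge    7d
```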
Github user uncleGen closed the pull request at:
https://github.com/apache/spark/pull/16096
---
GitHub user uncleGen opened a pull request:
https://github.com/apache/spark/pull/16096
[SPARK-18617][BACKPORT] backport to branch-2.0
## What changes were proposed in this pull request?
backport #16052 to branch-2.0 with incremental update in #16091
## How
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/15915
Is there any update? @JoshRosen
---
Github user uncleGen commented on a diff in the pull request:
https://github.com/apache/spark/pull/16052#discussion_r90191359
--- Diff:
streaming/src/test/scala/org/apache/spark/streaming/StreamingContextSuite.scala
---
@@ -869,6 +891,31 @@ object TestReceiver {
val
Github user uncleGen commented on a diff in the pull request:
https://github.com/apache/spark/pull/16052#discussion_r90191009
--- Diff:
streaming/src/test/scala/org/apache/spark/streaming/StreamingContextSuite.scala
---
@@ -869,6 +891,31 @@ object TestReceiver {
val
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/16052
@rxin OK, I will backport it to branch-2.0
---
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/16052
OK
---
Github user uncleGen commented on a diff in the pull request:
https://github.com/apache/spark/pull/16052#discussion_r89983105
--- Diff:
core/src/main/scala/org/apache/spark/serializer/SerializerManager.scala ---
@@ -155,7 +155,12 @@ private[spark] class
SerializerManager
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/16052
cc @zsxwing
---
GitHub user uncleGen opened a pull request:
https://github.com/apache/spark/pull/16052
[SPARK-18617][CORE][STREAMING] Close "kryo auto pick" feature for Spark
Streaming
## What changes were proposed in this pull request?
#15992 provided a solution to fix th
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/15992
@zsxwing Maybe; it is indeed a well-advised solution. I will first provide
another PR to add a configuration for streaming. IMHO, I will keep this
PR for further discussion and optimization. What
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/15992
No one will review this PR? @srowen Could you please ask someone to
review this? Thanks!
---
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/15992
@JoshRosen and @zsxwing waiting for your feedback.
---
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/16001
@srowen waiting for your feedback :D
---
Github user uncleGen commented on a diff in the pull request:
https://github.com/apache/spark/pull/15992#discussion_r89463950
--- Diff:
streaming/src/main/scala/org/apache/spark/streaming/receiver/Receiver.scala ---
@@ -83,7 +84,12 @@ import org.apache.spark.storage.StorageLevel
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/15992
@jerryshao Yep, the simplest way is to turn it off. Generally, this is a
follow-up patch for #11755. I tend to support the
**automatically-pick-best-serializer** feature here to get better performance
Github user uncleGen commented on a diff in the pull request:
https://github.com/apache/spark/pull/15992#discussion_r89461071
--- Diff:
streaming/src/main/scala/org/apache/spark/streaming/receiver/Receiver.scala ---
@@ -83,7 +84,12 @@ import org.apache.spark.storage.StorageLevel
Github user uncleGen commented on a diff in the pull request:
https://github.com/apache/spark/pull/16000#discussion_r89457847
--- Diff:
streaming/src/main/scala/org/apache/spark/streaming/status/api/v1/JacksonMessageWriter.scala
---
@@ -0,0 +1,94 @@
+/*
+ * Licensed
Github user uncleGen commented on a diff in the pull request:
https://github.com/apache/spark/pull/16000#discussion_r89458348
--- Diff:
streaming/src/main/scala/org/apache/spark/streaming/status/api/v1/StreamingApiRootResource.scala
---
@@ -0,0 +1,144 @@
+/*
+ * Licensed
Github user uncleGen commented on a diff in the pull request:
https://github.com/apache/spark/pull/16000#discussion_r89458704
--- Diff:
streaming/src/main/scala/org/apache/spark/streaming/status/api/v1/StreamingApiRootResource.scala
---
@@ -0,0 +1,144 @@
+/*
+ * Licensed
Github user uncleGen commented on a diff in the pull request:
https://github.com/apache/spark/pull/16000#discussion_r89456857
--- Diff:
streaming/src/main/scala/org/apache/spark/streaming/status/api/v1/AllBatchesResource.scala
---
@@ -0,0 +1,80 @@
+/*
+ * Licensed
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/16001
retest this please
---
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/15992
cc @zsxwing and @JoshRosen
---
GitHub user uncleGen opened a pull request:
https://github.com/apache/spark/pull/16001
SPARK-18575: Keep same style: adjust the position of driver log links
## What changes were proposed in this pull request?
NOT BUG, just adjust the position of driver log link to keep
Github user uncleGen closed the pull request at:
https://github.com/apache/spark/pull/15904
---
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/15974
OK
---
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/15915
retest this please
---
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/15915
@SparkQA retest again
---
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/15915
@JoshRosen The main cause of this issue is the mix-up of "chunkSize" and
other "size"
---
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/15915
@JoshRosen I found some other usages of "chunkSize" in other code paths,
and we set it to **1024 * 1024 * 4**. Is there any reason? Now, I use it as
the default value for "chunkSize
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/15992
cc @JoshRosen
---
GitHub user uncleGen opened a pull request:
https://github.com/apache/spark/pull/15992
[SPARK-18560][CORE][STREAMING] Receiver data can not be dataSerialized
properly.
## What changes were proposed in this pull request?
My spark streaming job can run correctly on Spark
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/15974
I think the base solutions are the same, except for some other information
which I am working to add.
---
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/15974
I think there is no need to open another duplicate PR. Would you mind
closing this PR so we can work on #15904?
---
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/15904
@ajbozarth Thank you for reminding me, I will take a look at it later.
---
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/15827
@zsxwing OK, close it.
---
Github user uncleGen closed the pull request at:
https://github.com/apache/spark/pull/15827
---
Github user uncleGen commented on a diff in the pull request:
https://github.com/apache/spark/pull/15915#discussion_r88602061
--- Diff:
core/src/main/scala/org/apache/spark/storage/memory/MemoryStore.scala ---
@@ -331,7 +331,12 @@ private[spark] class MemoryStore(
var
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/15924
I think you can convert "Spark-18498" to "[SPARK-18498]" in the title to
keep the same format as the others.
---
Github user uncleGen closed the pull request at:
https://github.com/apache/spark/pull/15919
---
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/15919
OK, I close this PR
---
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/15919
But there are many other configs which are set dynamically just after the
application is submitted, like "spark.app.id", "spark.driver.port" and so on.
![image](https://cloud.g
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/15919
@srowen This PR only works when the application runs in cluster
mode. (https://github.com/apache/spark/blob/master/yarn/src/main/scala/org/apache/spark/deploy/yarn/ApplicationMaster.scala#L192
Github user uncleGen commented on a diff in the pull request:
https://github.com/apache/spark/pull/15915#discussion_r88429590
--- Diff:
core/src/main/scala/org/apache/spark/storage/memory/MemoryStore.scala ---
@@ -331,7 +332,8 @@ private[spark] class MemoryStore(
var
Github user uncleGen commented on a diff in the pull request:
https://github.com/apache/spark/pull/15915#discussion_r88429484
--- Diff:
core/src/main/scala/org/apache/spark/storage/memory/MemoryStore.scala ---
@@ -331,7 +332,8 @@ private[spark] class MemoryStore(
var
GitHub user uncleGen opened a pull request:
https://github.com/apache/spark/pull/15919
SPARK-18488: Update the value of "spark.ui.port" after application st…
## What changes were proposed in this pull request?
In cluster mode, the "spark.ui.port" i
GitHub user uncleGen opened a pull request:
https://github.com/apache/spark/pull/15915
[SPARK-18485][CORE] Underlying integer overflow when create
ChunkedByteBufferOutputStream in MemoryStore
## What changes were proposed in this pull request?
There is an underlying
GitHub user uncleGen opened a pull request:
https://github.com/apache/spark/pull/15904
[SPARK-18470][STREAMING][WIP] Provide Spark Streaming Monitor Rest Api
## What changes were proposed in this pull request?
Provide Spark Streaming Monitor Rest Api
## How
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/15849
Just rebased.
---
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/15852
LGTM overall. If this is accepted, then I will close #15827
---
Github user uncleGen commented on a diff in the pull request:
https://github.com/apache/spark/pull/15852#discussion_r87941243
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/CompactibleFileStreamLog.scala
---
@@ -63,7 +63,60 @@ abstract class
Github user uncleGen commented on a diff in the pull request:
https://github.com/apache/spark/pull/15852#discussion_r87936909
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/CompactibleFileStreamLog.scala
---
@@ -63,7 +63,60 @@ abstract class
Github user uncleGen commented on a diff in the pull request:
https://github.com/apache/spark/pull/15849#discussion_r87934733
--- Diff:
examples/src/main/java/org/apache/spark/examples/sql/streaming/JavaStructuredKafkaWordCount.java
---
@@ -0,0 +1,96 @@
+/*
+ * Licensed
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/15829
cc @yhuai @srowen
---
GitHub user uncleGen opened a pull request:
https://github.com/apache/spark/pull/15849
[SPARK-18410][STREAMING] Add structured kafka example
## What changes were proposed in this pull request?
This PR provides structured kafka wordcount examples
## How
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/15827
cc @zsxwing
---
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/15828
related to #15827
---
GitHub user uncleGen opened a pull request:
https://github.com/apache/spark/pull/15829
[SPARK-18379][SQL] Make the parallelism of parallelPartitionDiscovery
configurable.
## What changes were proposed in this pull request?
The largest parallelism