[jira] [Created] (SPARK-20943) Correct BypassMergeSortShuffleWriter's comment

2017-05-31 Thread CanBin Zheng (JIRA)
CanBin Zheng created SPARK-20943:


 Summary: Correct BypassMergeSortShuffleWriter's comment
 Key: SPARK-20943
 URL: https://issues.apache.org/jira/browse/SPARK-20943
 Project: Spark
  Issue Type: Improvement
  Components: Shuffle, Spark Core
Affects Versions: 2.1.1
Reporter: CanBin Zheng


There are comments in BypassMergeSortShuffleWriter.java about when to select
this write path; the three required conditions are described as follows:
1. no Ordering is specified, and
2. no Aggregator is specified, and
3. the number of partitions is less than spark.shuffle.sort.bypassMergeThreshold

The conditions as written are partially wrong and misleading; the correct
conditions are:
1. map-side combine is false, and
2. the number of partitions is less than spark.shuffle.sort.bypassMergeThreshold
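
For reference, a minimal sketch of the selection predicate as it stands in
SortShuffleWriter around Spark 2.1.x, paraphrased from memory of the source
rather than quoted verbatim (REPL-style Scala; the default threshold of 200 is
part of that recollection):
{code}
import org.apache.spark.{ShuffleDependency, SparkConf}

// Paraphrase of SortShuffleWriter.shouldBypassMergeSort (Spark 2.1.x):
// bypass only when there is no map-side combine and the partition count
// does not exceed spark.shuffle.sort.bypassMergeThreshold.
def shouldBypassMergeSort(conf: SparkConf, dep: ShuffleDependency[_, _, _]): Boolean = {
  if (dep.mapSideCombine) {
    false // map-side aggregation requires the sort-based path
  } else {
    val bypassMergeThreshold = conf.getInt("spark.shuffle.sort.bypassMergeThreshold", 200)
    dep.partitioner.numPartitions <= bypassMergeThreshold
  }
}
{code}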






[jira] [Created] (SPARK-20944) Move SortShuffleWriter.shouldBypassMergeSort to SortShuffleManager

2017-05-31 Thread CanBin Zheng (JIRA)
CanBin Zheng created SPARK-20944:


 Summary: Move SortShuffleWriter.shouldBypassMergeSort to SortShuffleManager
 Key: SPARK-20944
 URL: https://issues.apache.org/jira/browse/SPARK-20944
 Project: Spark
  Issue Type: Improvement
  Components: Shuffle
Affects Versions: 2.1.1
Reporter: CanBin Zheng


SortShuffleWriter.shouldBypassMergeSort should be moved to SortShuffleManager
so that it sits alongside SortShuffleManager.canUseSerializedShuffle, giving
the shuffle-writer selection logic a consistent code structure. A sketch of
the proposed layout follows.
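
A minimal sketch, assuming the method keeps its Spark 2.1.x signature;
canUseSerializedShuffle is paraphrased and simplified (the real check also
requires a serializer that supports relocation of serialized objects, elided
here because that accessor is package-private):
{code}
import org.apache.spark.{ShuffleDependency, SparkConf}

object SortShuffleManagerSketch {
  // Already in SortShuffleManager: gates the serialized (Tungsten) shuffle path.
  // Simplified paraphrase; the real version also checks the serializer.
  def canUseSerializedShuffle(dep: ShuffleDependency[_, _, _]): Boolean =
    dep.aggregator.isEmpty && dep.partitioner.numPartitions <= (1 << 24)

  // Proposed move from SortShuffleWriter: the companion selection predicate.
  def shouldBypassMergeSort(conf: SparkConf, dep: ShuffleDependency[_, _, _]): Boolean =
    !dep.mapSideCombine &&
      dep.partitioner.numPartitions <= conf.getInt("spark.shuffle.sort.bypassMergeThreshold", 200)
}
{code}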






[jira] [Commented] (SPARK-20943) Correct BypassMergeSortShuffleWriter's comment

2017-06-01 Thread CanBin Zheng (JIRA)

[ https://issues.apache.org/jira/browse/SPARK-20943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16033988#comment-16033988 ]

CanBin Zheng commented on SPARK-20943:
--

Look at these two cases.

// Has Aggregator defined
@Test
def testGroupByKeyUsingBypassMergeSort(): Unit = {
  val data = List("Hello", "World", "Hello", "One", "Two")
  val rdd = sc.parallelize(data).map((_, 1)).groupByKey(2)
  rdd.collect()
}

// Has Ordering defined
@Test
def testShuffleWithKeyOrderingUsingBypassMergeSort(): Unit = {
  val data = List("Hello", "World", "Hello", "One", "Two")
  val rdd = sc.parallelize(data).map((_, 1))
  val ord = implicitly[Ordering[String]]
  val shuffledRDD = new ShuffledRDD[String, Int, Int](rdd, new HashPartitioner(2)).setKeyOrdering(ord)
  shuffledRDD.collect()
}

> Correct BypassMergeSortShuffleWriter's comment
> --
>
> Key: SPARK-20943
> URL: https://issues.apache.org/jira/browse/SPARK-20943
> Project: Spark
>  Issue Type: Improvement
>  Components: Documentation, Shuffle
>Affects Versions: 2.1.1
>Reporter: CanBin Zheng
>Priority: Trivial
>  Labels: starter
>
> There are comments in BypassMergeSortShuffleWriter.java about when to select
> this write path; the three required conditions are described as follows:
> 1. no Ordering is specified, and
> 2. no Aggregator is specified, and
> 3. the number of partitions is less than spark.shuffle.sort.bypassMergeThreshold
> The conditions as written are partially wrong and misleading; the correct
> conditions are:
> 1. map-side combine is false, and
> 2. the number of partitions is less than spark.shuffle.sort.bypassMergeThreshold






[jira] [Comment Edited] (SPARK-20943) Correct BypassMergeSortShuffleWriter's comment

2017-06-01 Thread CanBin Zheng (JIRA)

[ https://issues.apache.org/jira/browse/SPARK-20943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16033988#comment-16033988 ]

CanBin Zheng edited comment on SPARK-20943 at 6/2/17 1:09 AM:
--

Look at these two cases.

`// Has Aggregator defined
@Test
def testGroupByKeyUsingBypassMergeSort(): Unit = {
  val data = List("Hello", "World", "Hello", "One", "Two")
  val rdd = sc.parallelize(data).map((_, 1)).groupByKey(2)
  rdd.collect()
}

// Has Ordering defined
@Test
def testShuffleWithKeyOrderingUsingBypassMergeSort(): Unit = {
  val data = List("Hello", "World", "Hello", "One", "Two")
  val rdd = sc.parallelize(data).map((_, 1))
  val ord = implicitly[Ordering[String]]
  val shuffledRDD = new ShuffledRDD[String, Int, Int](rdd, new HashPartitioner(2)).setKeyOrdering(ord)
  shuffledRDD.collect()
}`


was (Author: canbinzheng):
Look at these two cases.

// Has Aggregator defined
@Test
def testGroupByKeyUsingBypassMergeSort(): Unit = {
  val data = List("Hello", "World", "Hello", "One", "Two")
  val rdd = sc.parallelize(data).map((_, 1)).groupByKey(2)
  rdd.collect()
}

// Has Ordering defined
@Test
def testShuffleWithKeyOrderingUsingBypassMergeSort(): Unit = {
  val data = List("Hello", "World", "Hello", "One", "Two")
  val rdd = sc.parallelize(data).map((_, 1))
  val ord = implicitly[Ordering[String]]
  val shuffledRDD = new ShuffledRDD[String, Int, Int](rdd, new HashPartitioner(2)).setKeyOrdering(ord)
  shuffledRDD.collect()
}

> Correct BypassMergeSortShuffleWriter's comment
> --
>
> Key: SPARK-20943
> URL: https://issues.apache.org/jira/browse/SPARK-20943
> Project: Spark
>  Issue Type: Improvement
>  Components: Documentation, Shuffle
>Affects Versions: 2.1.1
>Reporter: CanBin Zheng
>Priority: Trivial
>  Labels: starter
>
> There are comments in BypassMergeSortShuffleWriter.java about when to select
> this write path; the three required conditions are described as follows:
> 1. no Ordering is specified, and
> 2. no Aggregator is specified, and
> 3. the number of partitions is less than spark.shuffle.sort.bypassMergeThreshold
> The conditions as written are partially wrong and misleading; the correct
> conditions are:
> 1. map-side combine is false, and
> 2. the number of partitions is less than spark.shuffle.sort.bypassMergeThreshold






[jira] [Comment Edited] (SPARK-20943) Correct BypassMergeSortShuffleWriter's comment

2017-06-01 Thread CanBin Zheng (JIRA)

[ https://issues.apache.org/jira/browse/SPARK-20943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16033988#comment-16033988 ]

CanBin Zheng edited comment on SPARK-20943 at 6/2/17 1:11 AM:
--

Look at these two cases.
{code}
// Has Aggregator defined
@Test
def testGroupByKeyUsingBypassMergeSort(): Unit = {
  val data = List("Hello", "World", "Hello", "One", "Two")
  val rdd = sc.parallelize(data).map((_, 1)).groupByKey(2)
  rdd.collect()
}

// Has Ordering defined
@Test
def testShuffleWithKeyOrderingUsingBypassMergeSort(): Unit = {
  val data = List("Hello", "World", "Hello", "One", "Two")
  val rdd = sc.parallelize(data).map((_, 1))
  val ord = implicitly[Ordering[String]]
  val shuffledRDD = new ShuffledRDD[String, Int, Int](rdd, new HashPartitioner(2)).setKeyOrdering(ord)
  shuffledRDD.collect()
}
{code}


was (Author: canbinzheng):
Look at these two cases.

`// Has Aggregator defined
@Test
def testGroupByKeyUsingBypassMergeSort(): Unit = {
  val data = List("Hello", "World", "Hello", "One", "Two")
  val rdd = sc.parallelize(data).map((_, 1)).groupByKey(2)
  rdd.collect()
}

// Has Ordering defined
@Test
def testShuffleWithKeyOrderingUsingBypassMergeSort(): Unit = {
  val data = List("Hello", "World", "Hello", "One", "Two")
  val rdd = sc.parallelize(data).map((_, 1))
  val ord = implicitly[Ordering[String]]
  val shuffledRDD = new ShuffledRDD[String, Int, Int](rdd, new HashPartitioner(2)).setKeyOrdering(ord)
  shuffledRDD.collect()
}`

> Correct BypassMergeSortShuffleWriter's comment
> --
>
> Key: SPARK-20943
> URL: https://issues.apache.org/jira/browse/SPARK-20943
> Project: Spark
>  Issue Type: Improvement
>  Components: Documentation, Shuffle
>Affects Versions: 2.1.1
>Reporter: CanBin Zheng
>Priority: Trivial
>  Labels: starter
>
> There are comments in BypassMergeSortShuffleWriter.java about when to select
> this write path; the three required conditions are described as follows:
> 1. no Ordering is specified, and
> 2. no Aggregator is specified, and
> 3. the number of partitions is less than spark.shuffle.sort.bypassMergeThreshold
> The conditions as written are partially wrong and misleading; the correct
> conditions are:
> 1. map-side combine is false, and
> 2. the number of partitions is less than spark.shuffle.sort.bypassMergeThreshold






[jira] [Comment Edited] (SPARK-20943) Correct BypassMergeSortShuffleWriter's comment

2017-06-02 Thread CanBin Zheng (JIRA)

[ https://issues.apache.org/jira/browse/SPARK-20943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16033988#comment-16033988 ]

CanBin Zheng edited comment on SPARK-20943 at 6/2/17 7:00 AM:
--

Look at these two cases: in each, an Aggregator or an Ordering is defined but
mapSideCombine is false, and both run with BypassMergeSortShuffleWriter.
{code}
// Has Aggregator defined
@Test
def testGroupByKeyUsingBypassMergeSort(): Unit = {
  val data = List("Hello", "World", "Hello", "One", "Two")
  val rdd = sc.parallelize(data).map((_, 1)).groupByKey(2)
  rdd.collect()
}

// Has Ordering defined
@Test
def testShuffleWithKeyOrderingUsingBypassMergeSort(): Unit = {
  val data = List("Hello", "World", "Hello", "One", "Two")
  val rdd = sc.parallelize(data).map((_, 1))
  val ord = implicitly[Ordering[String]]
  val shuffledRDD = new ShuffledRDD[String, Int, Int](rdd, new HashPartitioner(2)).setKeyOrdering(ord)
  shuffledRDD.collect()
}
{code}
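
For anyone reproducing this, a minimal self-contained check (a hypothetical
driver program, assuming Spark 2.1.x on a local master) that the groupByKey
case really produces a shuffle dependency with an Aggregator defined but
mapSideCombine false:
{code}
import org.apache.spark.{ShuffleDependency, SparkConf, SparkContext}

// Verifies why the groupByKey case can take the bypass path: it defines an
// Aggregator, yet sets mapSideCombine = false on its ShuffleDependency.
object BypassMergeSortCheck {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setMaster("local[2]").setAppName("bypass-check"))
    val grouped = sc.parallelize(List("Hello", "World", "Hello")).map((_, 1)).groupByKey(2)
    val dep = grouped.dependencies.head.asInstanceOf[ShuffleDependency[String, Int, _]]
    println(s"aggregator defined = ${dep.aggregator.isDefined}") // expected: true
    println(s"mapSideCombine     = ${dep.mapSideCombine}")       // expected: false
    sc.stop()
  }
}
{code}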


was (Author: canbinzheng):
Look at these two cases.
{code}
// Has Aggregator defined
@Test
def testGroupByKeyUsingBypassMergeSort(): Unit = {
  val data = List("Hello", "World", "Hello", "One", "Two")
  val rdd = sc.parallelize(data).map((_, 1)).groupByKey(2)
  rdd.collect()
}

// Has Ordering defined
@Test
def testShuffleWithKeyOrderingUsingBypassMergeSort(): Unit = {
  val data = List("Hello", "World", "Hello", "One", "Two")
  val rdd = sc.parallelize(data).map((_, 1))
  val ord = implicitly[Ordering[String]]
  val shuffledRDD = new ShuffledRDD[String, Int, Int](rdd, new HashPartitioner(2)).setKeyOrdering(ord)
  shuffledRDD.collect()
}
{code}

> Correct BypassMergeSortShuffleWriter's comment
> --
>
> Key: SPARK-20943
> URL: https://issues.apache.org/jira/browse/SPARK-20943
> Project: Spark
>  Issue Type: Improvement
>  Components: Documentation, Shuffle
>Affects Versions: 2.1.1
>Reporter: CanBin Zheng
>Priority: Trivial
>  Labels: starter
>
> There are comments in BypassMergeSortShuffleWriter.java about when to select
> this write path; the three required conditions are described as follows:
> 1. no Ordering is specified, and
> 2. no Aggregator is specified, and
> 3. the number of partitions is less than spark.shuffle.sort.bypassMergeThreshold
> The conditions as written are partially wrong and misleading; the correct
> conditions are:
> 1. map-side combine is false, and
> 2. the number of partitions is less than spark.shuffle.sort.bypassMergeThreshold






[jira] [Commented] (SPARK-20943) Correct BypassMergeSortShuffleWriter's comment

2017-06-02 Thread CanBin Zheng (JIRA)

[ https://issues.apache.org/jira/browse/SPARK-20943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16034746#comment-16034746 ]

CanBin Zheng commented on SPARK-20943:
--

[~saisai_shao] I see your point. But I think it's better to change the
description; it has confused me for a long time, and someone else may run
into the same confusion.

> Correct BypassMergeSortShuffleWriter's comment
> --
>
> Key: SPARK-20943
> URL: https://issues.apache.org/jira/browse/SPARK-20943
> Project: Spark
>  Issue Type: Improvement
>  Components: Documentation, Shuffle
>Affects Versions: 2.1.1
>Reporter: CanBin Zheng
>Priority: Trivial
>  Labels: starter
>
> There are comments in BypassMergeSortShuffleWriter.java about when to select
> this write path; the three required conditions are described as follows:
> 1. no Ordering is specified, and
> 2. no Aggregator is specified, and
> 3. the number of partitions is less than spark.shuffle.sort.bypassMergeThreshold
> The conditions as written are partially wrong and misleading; the correct
> conditions are:
> 1. map-side combine is false, and
> 2. the number of partitions is less than spark.shuffle.sort.bypassMergeThreshold






[jira] [Commented] (SPARK-20943) Correct BypassMergeSortShuffleWriter's comment

2017-06-05 Thread CanBin Zheng (JIRA)

[ https://issues.apache.org/jira/browse/SPARK-20943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16037987#comment-16037987 ]

CanBin Zheng commented on SPARK-20943:
--

[~srowen][~saisai_shao] What's the final conclusion? Should I close this issue?

> Correct BypassMergeSortShuffleWriter's comment
> --
>
> Key: SPARK-20943
> URL: https://issues.apache.org/jira/browse/SPARK-20943
> Project: Spark
>  Issue Type: Improvement
>  Components: Documentation, Shuffle
>Affects Versions: 2.1.1
>Reporter: CanBin Zheng
>Priority: Trivial
>  Labels: starter
>
> There are comments in BypassMergeSortShuffleWriter.java about when to select
> this write path; the three required conditions are described as follows:
> 1. no Ordering is specified, and
> 2. no Aggregator is specified, and
> 3. the number of partitions is less than spark.shuffle.sort.bypassMergeThreshold
> The conditions as written are partially wrong and misleading; the correct
> conditions are:
> 1. map-side combine is false, and
> 2. the number of partitions is less than spark.shuffle.sort.bypassMergeThreshold






[jira] [Commented] (SPARK-31786) Exception on submitting Spark-Pi to Kubernetes 1.17.3

2020-05-21 Thread Canbin Zheng (Jira)


[ https://issues.apache.org/jira/browse/SPARK-31786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17113724#comment-17113724 ]

Canbin Zheng commented on SPARK-31786:
--

It seems to be the same issue as
https://github.com/fabric8io/kubernetes-client/issues/2212. I have tried out
v4.9.2 in Flink and it works as expected.
JIRA: https://issues.apache.org/jira/browse/FLINK-17565
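
For anyone who wants to try the same client version in a Spark build, a
hypothetical build.sbt override (the coordinates are the fabric8 client's;
pinning it this way in Spark is untested here, not a verified fix):
{code}
// Hypothetical: pin the fabric8 Kubernetes client to the version that worked
// in the Flink test above. Untested against Spark; shading may interfere.
dependencyOverrides += "io.fabric8" % "kubernetes-client" % "4.9.2"
{code}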

> Exception on submitting Spark-Pi to Kubernetes 1.17.3
> -
>
> Key: SPARK-31786
> URL: https://issues.apache.org/jira/browse/SPARK-31786
> Project: Spark
>  Issue Type: Bug
>  Components: Kubernetes
>Affects Versions: 2.4.5, 3.0.0
>Reporter: Maciej Bryński
>Priority: Blocker
>
> Hi,
> I'm getting an exception when submitting the Spark-Pi app to a Kubernetes cluster.
> Kubernetes version: 1.17.3
> JDK version: openjdk version "1.8.0_252"
> Exception:
> {code}
>  ./bin/spark-submit --master k8s://https://172.31.23.60:8443 --deploy-mode cluster --name spark-pi --conf spark.kubernetes.container.image=spark-py:2.4.5 --conf spark.kubernetes.executor.request.cores=0.1 --conf spark.kubernetes.authenticate.driver.serviceAccountName=spark --conf spark.executor.instances=1 local:///opt/spark/examples/src/main/python/pi.py
> log4j:WARN No appenders could be found for logger (io.fabric8.kubernetes.client.Config).
> log4j:WARN Please initialize the log4j system properly.
> log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
> Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
> Exception in thread "main" io.fabric8.kubernetes.client.KubernetesClientException: Operation: [create]  for kind: [Pod]  with name: [null]  in namespace: [default]  failed.
> at io.fabric8.kubernetes.client.KubernetesClientException.launderThrowable(KubernetesClientException.java:64)
> at io.fabric8.kubernetes.client.KubernetesClientException.launderThrowable(KubernetesClientException.java:72)
> at io.fabric8.kubernetes.client.dsl.base.BaseOperation.create(BaseOperation.java:337)
> at io.fabric8.kubernetes.client.dsl.base.BaseOperation.create(BaseOperation.java:330)
> at org.apache.spark.deploy.k8s.submit.Client$$anonfun$run$2.apply(KubernetesClientApplication.scala:141)
> at org.apache.spark.deploy.k8s.submit.Client$$anonfun$run$2.apply(KubernetesClientApplication.scala:140)
> at org.apache.spark.util.Utils$.tryWithResource(Utils.scala:2543)
> at org.apache.spark.deploy.k8s.submit.Client.run(KubernetesClientApplication.scala:140)
> at org.apache.spark.deploy.k8s.submit.KubernetesClientApplication$$anonfun$run$5.apply(KubernetesClientApplication.scala:250)
> at org.apache.spark.deploy.k8s.submit.KubernetesClientApplication$$anonfun$run$5.apply(KubernetesClientApplication.scala:241)
> at org.apache.spark.util.Utils$.tryWithResource(Utils.scala:2543)
> at org.apache.spark.deploy.k8s.submit.KubernetesClientApplication.run(KubernetesClientApplication.scala:241)
> at org.apache.spark.deploy.k8s.submit.KubernetesClientApplication.start(KubernetesClientApplication.scala:204)
> at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:845)
> at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:161)
> at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:184)
> at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:86)
> at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:920)
> at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:929)
> at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
> Caused by: java.net.SocketException: Broken pipe (Write failed)
> at java.net.SocketOutputStream.socketWrite0(Native Method)
> at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:111)
> at java.net.SocketOutputStream.write(SocketOutputStream.java:155)
> at sun.security.ssl.OutputRecord.writeBuffer(OutputRecord.java:431)
> at sun.security.ssl.OutputRecord.write(OutputRecord.java:417)
> at sun.security.ssl.SSLSocketImpl.writeRecordInternal(SSLSocketImpl.java:894)
> at sun.security.ssl.SSLSocketImpl.writeRecord(SSLSocketImpl.java:865)
> at sun.security.ssl.AppOutputStream.write(AppOutputStream.java:123)
> at okio.Okio$1.write(Okio.java:79)
> at okio.AsyncTimeout$1.write(AsyncTimeout.java:180)
> at okio.RealBufferedSink.flush(RealBufferedSink.java:224)
> at okhttp3.internal.http2.Http2Writer.settings(Http2Writer.java:203)
> at okhttp3.internal.http2.Http2C

[jira] [Commented] (SPARK-31696) Support spark.kubernetes.driver.service.annotation

2020-05-21 Thread Canbin Zheng (Jira)


[ https://issues.apache.org/jira/browse/SPARK-31696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17113729#comment-17113729 ]

Canbin Zheng commented on SPARK-31696:
--

Hi [~dongjoon]! Are there scenarios where users would want to set annotations
on the headless service?

> Support spark.kubernetes.driver.service.annotation
> --
>
> Key: SPARK-31696
> URL: https://issues.apache.org/jira/browse/SPARK-31696
> Project: Spark
>  Issue Type: New Feature
>  Components: Kubernetes, Spark Core
>Affects Versions: 3.0.0
>Reporter: Dongjoon Hyun
>Assignee: Dongjoon Hyun
>Priority: Major
> Fix For: 3.0.0
>
>







[jira] [Commented] (SPARK-31696) Support spark.kubernetes.driver.service.annotation

2020-05-25 Thread Canbin Zheng (Jira)


[ https://issues.apache.org/jira/browse/SPARK-31696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17115873#comment-17115873 ]

Canbin Zheng commented on SPARK-31696:
--

Thanks [~dongjoon]! Since I am not familiar with Prometheus, I am not sure how
Prometheus leverages annotations on the headless service. Is there a detailed
example of this usage?
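
For illustration, a hypothetical use, assuming annotations are passed through
a spark.kubernetes.driver.service.annotation.* prefix mirroring the issue
title (the prometheus.io keys are the common Prometheus convention, not
something this issue prescribes):
{code}
import org.apache.spark.SparkConf

// Hypothetical: expose the driver's headless service to Prometheus-style
// service discovery through conventional prometheus.io annotations.
val conf = new SparkConf()
  .set("spark.kubernetes.driver.service.annotation.prometheus.io/scrape", "true")
  .set("spark.kubernetes.driver.service.annotation.prometheus.io/port", "4040")
{code}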

> Support spark.kubernetes.driver.service.annotation
> --
>
> Key: SPARK-31696
> URL: https://issues.apache.org/jira/browse/SPARK-31696
> Project: Spark
>  Issue Type: New Feature
>  Components: Kubernetes, Spark Core
>Affects Versions: 3.0.0
>Reporter: Dongjoon Hyun
>Assignee: Dongjoon Hyun
>Priority: Major
> Fix For: 3.0.0
>
>



