[jira] [Updated] (SPARK-41006) ConfigMap has the same name when launching two pods on the same namespace

2022-11-03 Thread Eric (Jira)


 [ https://issues.apache.org/jira/browse/SPARK-41006?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Eric updated SPARK-41006:
-
Description: 
If we use the Spark Launcher to launch our Spark apps on Kubernetes:
{code:java}
val sparkLauncher = new InProcessLauncher()
 .setMaster(k8sMaster)
 .setDeployMode(deployMode)
 .setAppName(appName)
 .setVerbose(true)

sparkLauncher.startApplication(new SparkAppHandle.Listener { ...{code}
We have an issue when we launch another Spark driver in the same namespace 
where another Spark app is already running:
{code:java}
kp -n audit-exporter-eee5073aac -w
NAME                                     READY   STATUS        RESTARTS   AGE
audit-exporter-71489e843d8085c0-driver   1/1     Running       0          9m54s
audit-exporter-7e6b8b843d80b9e6-exec-1   1/1     Running       0          9m40s
data-io-120204843d899567-driver          0/1     Terminating   0          1s
data-io-120204843d899567-driver          0/1     Terminating   0          2s
data-io-120204843d899567-driver          0/1     Terminating   0          3s
data-io-120204843d899567-driver          0/1     Terminating   0          3s
{code}
The error is:
{code:java}
{"time":"2022-11-03T12:49:45.626Z","lvl":"WARN","logger":"o.a.s.l.InProcessAppHandle","thread":"spark-app-38: 'data-io'","msg":"Application failed with exception.","stack_trace":"io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: PUT at: https://kubernetes.default/api/v1/namespaces/audit-exporter-eee5073aac/configmaps/spark-drv-d19c37843d80350c-conf-map. Message: ConfigMap \"spark-drv-d19c37843d80350c-conf-map\" is invalid: data: Forbidden: field is immutable when `immutable` is set. Received status: Status(apiVersion=v1, code=422, details=StatusDetails(causes=[StatusCause(field=data, message=Forbidden: field is immutable when `immutable` is set, reason=FieldValueForbidden, additionalProperties={})], group=null, kind=ConfigMap, name=spark-drv-d19c37843d80350c-conf-map, retryAfterSeconds=null, uid=null, additionalProperties={}), kind=Status, message=ConfigMap \"spark-drv-d19c37843d80350c-conf-map\" is invalid: data: Forbidden: field is immutable when `immutable` is set, metadata=ListMeta(_continue=null, remainingItemCount=null, resourceVersion=null, selfLink=null, additionalProperties={}), reason=Invalid, status=Failure, additionalProperties={}).
	at io.fabric8.kubernetes.client.dsl.base.OperationSupport.requestFailure(OperationSupport.java:682)
	at io.fabric8.kubernetes.client.dsl.base.OperationSupport.requestFailure(OperationSupport.java:661)
	at io.fabric8.kubernetes.client.dsl.base.OperationSupport.assertResponseCode(OperationSupport.java:612)
	at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:555)
	at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:518)
	at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleUpdate(OperationSupport.java:342)
	at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleUpdate(OperationSupport.java:322)
	at io.fabric8.kubernetes.client.dsl.base.BaseOperation.handleUpdate(BaseOperation.java:649)
	at io.fabric8.kubernetes.client.dsl.base.HasMetadataOperation.lambda$replace$1(HasMetadataOperation.java:195)
	at io.fabric8.kubernetes.client.dsl.base.HasMetadataOperation$$Lambda$5360/00.apply(Unknown Source)
	at io.fabric8.kubernetes.client.dsl.base.HasMetadataOperation.replace(HasMetadataOperation.java:200)
	at io.fabric8.kubernetes.client.dsl.base.HasMetadataOperation.replace(HasMetadataOperation.java:141)
	at io.fabric8.kubernetes.client.dsl.base.BaseOperation$$Lambda$4618/00.apply(Unknown Source)
	at io.fabric8.kubernetes.client.utils.CreateOrReplaceHelper.replace(CreateOrReplaceHelper.java:69)
	at io.fabric8.kubernetes.client.utils.CreateOrReplaceHelper.createOrReplace(CreateOrReplaceHelper.java:61)
	at io.fabric8.kubernetes.client.dsl.base.BaseOperation.createOrReplace(BaseOperation.java:318)
	at io.fabric8.kubernetes.client.dsl.base.BaseOperation.createOrReplace(BaseOperation.java:83)
	at io.fabric8.kubernetes.client.dsl.internal.NamespaceVisitFromServerGetWatchDeleteRecreateWaitApplicableImpl.createOrReplace(NamespaceVisitFromServerGetWatchDeleteRecreateWaitApplicableImpl.java:105)
	at io.fabric8.kubernetes.client.dsl.internal.NamespaceVisitFromServerGetWatchDeleteRecreateWaitApplicableListImpl.lambda$createOrReplace$7(NamespaceVisitFromServerGetWatchDeleteRecreateWaitApplicableListImpl.java:174)
	at io.fabric8.kubernetes.client.dsl.internal.NamespaceVisitFromServerGetWatchDeleteRecreateWaitApplicableListImpl$$Lambda$5012/00.apply(Unknown Source)
	at java.base/java.util.stream.ReferencePipeline$3$1.accept(Unknown Source)
	at java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(Unknown Source)
{code}

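The failure above is a name collision: both submissions end up targeting the same `spark-drv-...-conf-map` ConfigMap, so the second driver's createOrReplace falls back to a PUT against the first driver's ConfigMap, and since that ConfigMap is immutable the API server rejects the update with a 422. A minimal sketch of the idea behind avoiding the collision (hypothetical helper names, not Spark's actual code):

```scala
// Hypothetical sketch (not Spark's actual implementation): if the ConfigMap
// name is derived only from a shared prefix, two concurrent submissions
// collide; a per-submission unique suffix keeps the names distinct.
import java.util.UUID

// Collision-prone: every submission with the same prefix maps to one name.
def sharedName(prefix: String): String = s"$prefix-conf-map"

// Collision-free: a unique suffix per submission.
def uniqueName(prefix: String): String =
  s"$prefix-${UUID.randomUUID().toString.take(8)}-conf-map"
```

With distinct names, the second submission results in a CREATE of a new ConfigMap rather than a PUT over an existing immutable one.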
[jira] [Created] (SPARK-41006) ConfigMap has the same name when launching two pods on the same namespace

2022-11-03 Thread Eric (Jira)
Eric created SPARK-41006:


 Summary: ConfigMap has the same name when launching two pods on 
the same namespace
 Key: SPARK-41006
 URL: https://issues.apache.org/jira/browse/SPARK-41006
 Project: Spark
  Issue Type: Bug
  Components: Kubernetes
Affects Versions: 3.3.0, 3.2.0, 3.1.0
Reporter: Eric


If we use the Spark Launcher to launch our Spark apps on Kubernetes:
{code:java}
val sparkLauncher = new InProcessLauncher()
 .setMaster(k8sMaster)
 .setDeployMode(deployMode)
 .setAppName(appName)
 .setVerbose(true)

sparkLauncher.startApplication(new SparkAppHandle.Listener { ...{code}
We have an issue when we launch another Spark driver in the same namespace 
where another Spark app is already running:
{code:java}
kp -n qa-topfive-python-spark-2-15d42ac3b9
NAME                                                READY   STATUS        RESTARTS   AGE
data-io-c590a7843d47e206-driver                     1/1     Terminating   0          2s
qa-top-five-python-1667475391655-exec-1             1/1     Running       0          94s
qa-topfive-python-spark-2-462c5d843d46e38b-driver   1/1     Running       0          119s
{code}
The error is:
{code:java}
{"time":"2022-10-24T15:08:50.239Z","lvl":"WARN","logger":"o.a.s.l.InProcessAppHandle","thread":"spark-app-44: 'data-io'","msg":"Application failed with exception.","stack_trace":"io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: PUT at: https://kubernetes.default/api/v1/namespaces/qa-topfive-python-spark-2-edf723f942/configmaps/spark-drv-34c4e3840a0466c2-conf-map. Message: ConfigMap \"spark-drv-34c4e3840a0466c2-conf-map\" is invalid: data: Forbidden: field is immutable when `immutable` is set. Received status: Status(apiVersion=v1, code=422, details=StatusDetails(causes=[StatusCause(field=data, message=Forbidden: field is immutable when `immutable` is set, reason=FieldValueForbidden, additionalProperties={})], group=null, kind=ConfigMap, name=spark-drv-34c4e3840a0466c2-conf-map, retryAfterSeconds=null, uid=null, additionalProperties={}), kind=Status, message=ConfigMap \"spark-drv-34c4e3840a0466c2-conf-map\" is invalid: data: Forbidden: field is immutable when `immutable` is set, metadata=ListMeta(_continue=null, remainingItemCount=null, resourceVersion=null, selfLink=null, additionalProperties={}), reason=Invalid, status=Failure, additionalProperties={}).
	at io.fabric8.kubernetes.client.dsl.base.OperationSupport.requestFailure(OperationSupport.java:682)
	at io.fabric8.kubernetes.client.dsl.base.OperationSupport.requestFailure(OperationSupport.java:661)
	at io.fabric8.kubernetes.client.dsl.base.OperationSupport.assertResponseCode(OperationSupport.java:612)
	at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:555)
	at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:518)
	at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleUpdate(OperationSupport.java:342)
	at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleUpdate(OperationSupport.java:322)
	at io.fabric8.kubernetes.client.dsl.base.BaseOperation.handleUpdate(BaseOperation.java:649)
	at io.fabric8.kubernetes.client.dsl.base.HasMetadataOperation.lambda$replace$1(HasMetadataOperation.java:195)
	at io.fabric8.kubernetes.client.dsl.base.HasMetadataOperation$$Lambda$5663/00.apply(Unknown Source)
	at io.fabric8.kubernetes.client.dsl.base.HasMetadataOperation.replace(HasMetadataOperation.java:200)
	at io.fabric8.kubernetes.client.dsl.base.HasMetadataOperation.replace(HasMetadataOperation.java:141)
	at io.fabric8.kubernetes.client.dsl.base.BaseOperation$$Lambda$5183/00.apply(Unknown Source)
	at io.fabric8.kubernetes.client.utils.CreateOrReplaceHelper.replace(CreateOrReplaceHelper.java:69)
	at io.fabric8.kubernetes.client.utils.CreateOrReplaceHelper.createOrReplace(CreateOrReplaceHelper.java:61)
	at io.fabric8.kubernetes.client.dsl.base.BaseOperation.createOrReplace(BaseOperation.java:318)
	at io.fabric8.kubernetes.client.dsl.base.BaseOperation.createOrReplace(BaseOperation.java:83)
	at io.fabric8.kubernetes.client.dsl.internal.NamespaceVisitFromServerGetWatchDeleteRecreateWaitApplicableImpl.createOrReplace(NamespaceVisitFromServerGetWatchDeleteRecreateWaitApplicableImpl.java:105)
	at io.fabric8.kubernetes.client.dsl.internal.NamespaceVisitFromServerGetWatchDeleteRecreateWaitApplicableListImpl.lambda$createOrReplace$7(NamespaceVisitFromServerGetWatchDeleteRecreateWaitApplicableListImpl.java:174)
	at io.fabric8.kubernetes.client.dsl.internal.NamespaceVisitFromServerGetWatchDeleteRecreateWaitApplicableListImpl$$Lambda$5578/00.apply(Unknown Source)
	at java.base/java.util.stream.ReferencePipeline$3$1.accept(Unknown Source)
{code}
Source)\n\tat 

[jira] (SPARK-30105) Add StreamingQueryListener support to PySpark

2022-10-24 Thread Eric (Jira)


[ https://issues.apache.org/jira/browse/SPARK-30105 ]


Eric deleted comment on SPARK-30105:
--

was (Author: ladoe00):
Fixed by SPARK-38759

> Add StreamingQueryListener support to PySpark
> -
>
> Key: SPARK-30105
> URL: https://issues.apache.org/jira/browse/SPARK-30105
> Project: Spark
>  Issue Type: New Feature
>  Components: PySpark, Structured Streaming
>Affects Versions: 3.1.0
>Reporter: Abhijeet Prasad
>Priority: Minor
>
> Add support for StreamingQueryListener to PySpark.
>  
> Currently the `StreamingQueryListener` in Scala is implemented as an abstract 
> class, so we cannot use Python proxies (Py4j) to access it unless we create 
> our own custom Scala/Java wrapper.
>  
> [https://github.com/apache/spark/blob/master/sql/core/src/main/scala/org/apache/spark/sql/streaming/StreamingQueryListener.scala]
>  
> This would be very useful in my personal case, I am building a library that 
> allows you to send Python errors to Sentry.io 
> [https://docs.sentry.io/platforms/python/pyspark/] and would like to hook 
> onto onQueryTerminated to send errors.
>  
> I can take this on if you point me in the right direction; I'm new to the 
> codebase, so I'm not quite sure what the process for porting Scala API 
> changes to the PySpark API usually looks like.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-30105) Add StreamingQueryListener support to PySpark

2022-10-24 Thread Eric (Jira)


[ https://issues.apache.org/jira/browse/SPARK-30105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17623227#comment-17623227 ]

Eric commented on SPARK-30105:
--

Fixed by SPARK-38759

> Add StreamingQueryListener support to PySpark
> -
>
> Key: SPARK-30105
> URL: https://issues.apache.org/jira/browse/SPARK-30105
> Project: Spark
>  Issue Type: New Feature
>  Components: PySpark, Structured Streaming
>Affects Versions: 3.1.0
>Reporter: Abhijeet Prasad
>Priority: Minor
>
> Add support for StreamingQueryListener to PySpark.
>  
> Currently the `StreamingQueryListener` in Scala is implemented as an abstract 
> class, so we cannot use Python proxies (Py4j) to access it unless we create 
> our own custom Scala/Java wrapper.
>  
> [https://github.com/apache/spark/blob/master/sql/core/src/main/scala/org/apache/spark/sql/streaming/StreamingQueryListener.scala]
>  
> This would be very useful in my personal case, I am building a library that 
> allows you to send Python errors to Sentry.io 
> [https://docs.sentry.io/platforms/python/pyspark/] and would like to hook 
> onto onQueryTerminated to send errors.
>  
> I can take this on if you point me in the right direction; I'm new to the 
> codebase, so I'm not quite sure what the process for porting Scala API 
> changes to the PySpark API usually looks like.






[jira] [Commented] (SPARK-30495) How to disable 'spark.security.credentials.${service}.enabled' in Structured streaming while connecting to a kafka cluster

2020-04-17 Thread Eric (Jira)


[ https://issues.apache.org/jira/browse/SPARK-30495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17085914#comment-17085914 ]

Eric commented on SPARK-30495:
--

Does anyone know when this fix will make it into a public build? I see that it 
is part of the RC1 tag, but I can't find any RC1 binaries in Maven. I am using 
3.0.0-preview2, and this bug is a blocker for us. Any idea when 3.0.0 will be 
officially released?

> How to disable 'spark.security.credentials.${service}.enabled' in Structured 
> streaming while connecting to a kafka cluster
> --
>
> Key: SPARK-30495
> URL: https://issues.apache.org/jira/browse/SPARK-30495
> Project: Spark
>  Issue Type: Bug
>  Components: Structured Streaming
>Affects Versions: 3.0.0
>Reporter: act_coder
>Assignee: Gabor Somogyi
>Priority: Major
> Fix For: 3.0.0
>
>
> Trying to read data from a secured Kafka cluster using spark structured
>  streaming. Also, using the below library to read the data -
>  +*"spark-sql-kafka-0-10_2.12":"3.0.0-preview"*+ since it has the feature to
>  specify our custom group id (instead of spark setting its own custom group
>  id)
> +*Dependency used in code:*+
> {code:xml}
> <dependency>
>   <groupId>org.apache.spark</groupId>
>   <artifactId>spark-sql-kafka-0-10_2.12</artifactId>
>   <version>3.0.0-preview</version>
> </dependency>
> {code}
>  
> +*Logs:*+
> Getting the below error - even after specifying the required JAAS
>  configuration in spark options.
> Caused by: java.lang.IllegalArgumentException: requirement failed: *Delegation token must exist for this connector*.
>   at scala.Predef$.require(Predef.scala:281)
>   at org.apache.spark.kafka010.KafkaTokenUtil$.isConnectorUsingCurrentToken(KafkaTokenUtil.scala:299)
>   at org.apache.spark.sql.kafka010.KafkaDataConsumer.getOrRetrieveConsumer(KafkaDataConsumer.scala:533)
>   at org.apache.spark.sql.kafka010.KafkaDataConsumer.$anonfun$get$1(KafkaDataConsumer.scala:275)
>  
> +*Spark configuration used to read from Kafka:*+
> {code:scala}
> val kafkaDF = sparkSession.readStream
>   .format("kafka")
>   .option("kafka.bootstrap.servers", bootStrapServer)
>   .option("subscribe", kafkaTopic)
>   // Setting JAAS configuration
>   .option("kafka.sasl.jaas.config", KAFKA_JAAS_SASL)
>   .option("kafka.sasl.mechanism", "PLAIN")
>   .option("kafka.security.protocol", "SASL_SSL")
>   // Setting custom consumer group id
>   .option("kafka.group.id", "test_cg")
>   .load()
> {code}
>  
> Following document specifies that we can disable the feature of obtaining
>  delegation token -
>  
> [https://spark.apache.org/docs/3.0.0-preview/structured-streaming-kafka-integration.html]
> Tried setting the property *spark.security.credentials.kafka.enabled* to 
> *false* in the Spark config, but it is still failing with the same error.
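For reference, the per-service setting mentioned in the linked docs is keyed by the service name, with `kafka` filling the `${service}` placeholder for this connector. A sketch of passing it at submit time (the class and jar names below are placeholders, not from the report):

```shell
# Sketch: disable delegation-token acquisition for the Kafka service.
# "kafka" fills the ${service} placeholder in
# spark.security.credentials.${service}.enabled
spark-submit \
  --conf spark.security.credentials.kafka.enabled=false \
  --class com.example.StreamApp \
  app.jar
```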



--
This message was sent by Atlassian Jira
(v8.3.4#803005)




[jira] [Created] (SPARK-28925) Update Kubernetes-client to 4.4.2 to be compatible with Kubernetes 1.13 and 1.14

2019-08-30 Thread Eric (Jira)
Eric created SPARK-28925:


 Summary: Update Kubernetes-client to 4.4.2 to be compatible with 
Kubernetes 1.13 and 1.14
 Key: SPARK-28925
 URL: https://issues.apache.org/jira/browse/SPARK-28925
 Project: Spark
  Issue Type: Bug
  Components: Kubernetes
Affects Versions: 2.4.3
Reporter: Eric


Hello,

If you use Spark with Kubernetes 1.13 or 1.14, you will see this error:
{code:java}
{"time": "2019-08-28T09:56:11.866Z", "lvl":"INFO", "logger": "org.apache.spark.internal.Logging", "thread":"kubernetes-executor-snapshots-subscribers-0","msg":"Going to request 1 executors from Kubernetes."}
{"time": "2019-08-28T09:56:12.028Z", "lvl":"WARN", "logger": "io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager$2", "thread":"OkHttp https://kubernetes.default.svc/...","msg":"Exec Failure: HTTP 403, Status: 403 - "}
java.net.ProtocolException: Expected HTTP 101 response but was '403 Forbidden'
{code}
Apparently the bug is fixed here: 
[https://github.com/fabric8io/kubernetes-client/pull/1669]


We have compiled the Spark source code with kubernetes-client 4.4.2 and it's 
working great on our cluster. We are using Kubernetes 1.13.10.

Could it be possible to update that dependency version?

Thanks!
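For anyone patching a build locally in the meantime, a hedged sketch of pinning the client in a Maven-style pom (coordinates and version as discussed above; the exact Spark module to patch is not shown here):

```xml
<!-- Sketch: pin kubernetes-client to 4.4.2 via dependencyManagement -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>io.fabric8</groupId>
      <artifactId>kubernetes-client</artifactId>
      <version>4.4.2</version>
    </dependency>
  </dependencies>
</dependencyManagement>
```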



--
This message was sent by Atlassian Jira
(v8.3.2#803003)
