[jira] [Updated] (SPARK-45468) More transparent proxy handling for HTTP redirects

2023-10-10 Thread Nobuaki Sukegawa (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-45468?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nobuaki Sukegawa updated SPARK-45468:
-
Description: 
Currently, proxies can be made transparent for hyperlinks in Spark web UIs with 
spark.ui.proxyRoot or the X-Forwarded-Context header alone. However, HTTP 
redirects (such as job/stage kill) currently also require an explicit 
spark.ui.proxyRedirectUri to handle the proxy. This is not ideal, as the proxy 
hostname may not be known at the time the Spark apps are configured.

This can be mitigated by 1) always prepending spark.ui.proxyRoot to the redirect 
path for proxies that intelligently rewrite Location headers, and 2) using a 
path without a hostname (/jobs/, not [https://example.com/jobs/]) for proxies 
that do not rewrite Location headers. Redirect behavior would then be 
essentially the same as for other hyperlinks.
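The two mitigations can be sketched as follows. This is an illustrative sketch, not Spark's actual code: the helper name redirect_location and its parameters are hypothetical, chosen only to mirror the configuration names above.

```python
def redirect_location(path, proxy_root="", proxy_redirect_uri=None):
    """Build the Location header value for a UI redirect.

    Hypothetical sketch of the proposed behavior, not Spark's actual code.
    """
    # 1) Always prepend spark.ui.proxyRoot, mirroring how other hyperlinks
    #    in the UI are already prefixed.
    target = proxy_root.rstrip("/") + path
    if proxy_redirect_uri:
        # An explicitly configured spark.ui.proxyRedirectUri still wins.
        return proxy_redirect_uri.rstrip("/") + target
    # 2) Otherwise emit a host-relative path; the browser resolves it
    #    against the proxy's own host, so no hostname is needed.
    return target
```

With spark.ui.proxyRoot=/sparkui and no proxyRedirectUri, a kill redirect would then go to /sparkui/jobs/ rather than an absolute URL on the internal host.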
h2. Example

Let's say the proxy URL is [https://example.org/sparkui/]... forwarding to 
[http://drv.svc/]...
and spark.ui.proxyRoot is configured to be /sparkui
h3. Existing behavior (without spark.ui.proxyRedirectUri)

job/stage kill links redirect to [http://drv.svc/jobs/] - likely a 404
(other hyperlinks point to paths with the prefix, e.g., /sparkui/executors - 
works fine)
h3. After change 2)

links redirect to /sparkui/jobs/ - works fine
also consistent with other hyperlinks

NOTE: while the hostname was originally required by RFC 2616 in 1999, since RFC 
7231 in 2014 the hostname can formally be omitted, as most browsers already 
supported this (it is rather hard to find any browser that doesn't support it).
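A host-relative Location behaves exactly as browsers already resolve relative references: against the URL the client actually requested (the proxy's URL, not the driver's). This can be checked with Python's standard library, which implements the same RFC 3986 resolution:

```python
from urllib.parse import urljoin

# The client requested the kill action through the proxy...
request_url = "https://example.org/sparkui/jobs/job/kill"
# ...and the server answered with a host-relative Location header.
location = "/sparkui/jobs/"

# Resolution as performed by browsers for Location values per RFC 7231:
print(urljoin(request_url, location))
# → https://example.org/sparkui/jobs/
```

The redirect therefore lands on the proxy host without Spark ever knowing the proxy's hostname.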

  was:
Currently, proxies can be made transparent for hyperlinks in Spark web UIs with 
spark.ui.proxyRoot or the X-Forwarded-Context header alone. However, HTTP 
redirects (such as job/stage kill) currently also require an explicit 
spark.ui.proxyRedirectUri to handle the proxy. This is not ideal, as the proxy 
hostname may not be known at the time the Spark apps are configured.

This can be mitigated by 1) always prepending spark.ui.proxyRoot to the redirect 
path for proxies that intelligently rewrite Location headers, and 2) using a 
path without a hostname (/jobs/, not https://example.com/jobs/) for proxies 
that do not rewrite Location headers. Redirect behavior would then be 
essentially the same as for other hyperlinks.

Regarding 2), while the hostname was originally required by RFC 2616 in 1999, 
since RFC 7231 in 2014 the hostname can formally be omitted, as most browsers 
already supported this (it is rather hard to find any browser that doesn't 
support it).


> More transparent proxy handling for HTTP redirects
> --
>
> Key: SPARK-45468
> URL: https://issues.apache.org/jira/browse/SPARK-45468
> Project: Spark
>  Issue Type: Improvement
>  Components: Web UI
>Affects Versions: 3.5.0
>Reporter: Nobuaki Sukegawa
>Priority: Major
>  Labels: pull-request-available
>
> Currently, proxies can be made transparent for hyperlinks in Spark web UIs 
> with spark.ui.proxyRoot or the X-Forwarded-Context header alone. However, 
> HTTP redirects (such as job/stage kill) currently also require an explicit 
> spark.ui.proxyRedirectUri to handle the proxy. This is not ideal, as the 
> proxy hostname may not be known at the time the Spark apps are configured.
> This can be mitigated by 1) always prepending spark.ui.proxyRoot to the 
> redirect path for proxies that intelligently rewrite Location headers, and 
> 2) using a path without a hostname (/jobs/, not [https://example.com/jobs/]) 
> for proxies that do not rewrite Location headers. Redirect behavior would 
> then be essentially the same as for other hyperlinks.
> h2. Example
> Let's say the proxy URL is [https://example.org/sparkui/]... forwarding to 
> [http://drv.svc/]...
> and spark.ui.proxyRoot is configured to be /sparkui
> h3. Existing behavior (without spark.ui.proxyRedirectUri)
> job/stage kill links redirect to [http://drv.svc/jobs/] - likely a 404
> (other hyperlinks point to paths with the prefix, e.g., /sparkui/executors - 
> works fine)
> h3. After change 2)
> links redirect to /sparkui/jobs/ - works fine
> also consistent with other hyperlinks
> NOTE: while the hostname was originally required by RFC 2616 in 1999, since 
> RFC 7231 in 2014 the hostname can formally be omitted, as most browsers 
> already supported this (it is rather hard to find any browser that doesn't 
> support it).



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-45201) NoClassDefFoundError: InternalFutureFailureAccess when compiling Spark 3.5.0

2023-10-09 Thread Nobuaki Sukegawa (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-45201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17773360#comment-17773360
 ] 

Nobuaki Sukegawa commented on SPARK-45201:
--

We've experienced the same issue and resolved it in the same way (by removing 
the connect common JAR and applying the patch).

For some reason the issue did not always reproduce. Using the same container 
image on the same Kubernetes cluster, it seems to happen only on certain nodes. 
I suspect this is because of the use of a wildcard in the Spark classpath: the 
JVM probably resolves it to actual file paths using a system call without any 
guaranteed ordering in the result (just a guess from the behavior).
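The ordering guess above can be illustrated in Python: like the JVM's classpath wildcard expansion, os.listdir makes no ordering guarantee, so the same set of JARs can be seen in different class-loading order on different nodes. The jar names below are made up purely for the illustration, and the link to the JVM's behavior is the same speculation as in the comment.

```python
import os
import tempfile

with tempfile.TemporaryDirectory() as jars_dir:
    # Hypothetical jar names, standing in for the contents of a jars/ dir.
    for name in ["spark-core.jar", "spark-connect-common.jar", "guava-shaded.jar"]:
        open(os.path.join(jars_dir, name), "w").close()

    # os.listdir returns entries in arbitrary, filesystem-dependent order,
    # analogous to the JVM expanding a classpath wildcard.
    seen = os.listdir(jars_dir)

    # Sorting restores a deterministic order; listing explicit jar paths in
    # the classpath would do the same for the JVM.
    print(sorted(seen))
```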

> NoClassDefFoundError: InternalFutureFailureAccess when compiling Spark 3.5.0
> 
>
> Key: SPARK-45201
> URL: https://issues.apache.org/jira/browse/SPARK-45201
> Project: Spark
>  Issue Type: Bug
>  Components: Connect
>Affects Versions: 3.5.0
>Reporter: Sebastian Daberdaku
>Priority: Major
> Attachments: Dockerfile, spark-3.5.0.patch
>
>
> I am trying to compile Spark 3.5.0 and make a distribution that supports 
> Spark Connect and Kubernetes. The compilation seems to complete correctly, 
> but when I try to run the Spark Connect server on Kubernetes I get a 
> "NoClassDefFoundError" as follows:
> {code:java}
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> org/sparkproject/guava/util/concurrent/internal/InternalFutureFailureAccess
>     at java.base/java.lang.ClassLoader.defineClass1(Native Method)
>     at java.base/java.lang.ClassLoader.defineClass(ClassLoader.java:1017)
>     at 
> java.base/java.security.SecureClassLoader.defineClass(SecureClassLoader.java:150)
>     at 
> java.base/jdk.internal.loader.BuiltinClassLoader.defineClass(BuiltinClassLoader.java:862)
>     at 
> java.base/jdk.internal.loader.BuiltinClassLoader.findClassOnClassPathOrNull(BuiltinClassLoader.java:760)
>     at 
> java.base/jdk.internal.loader.BuiltinClassLoader.loadClassOrNull(BuiltinClassLoader.java:681)
>     at 
> java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:639)
>     at 
> java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:188)
>     at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:525)
>     at java.base/java.lang.ClassLoader.defineClass1(Native Method)
>     at java.base/java.lang.ClassLoader.defineClass(ClassLoader.java:1017)
>     at 
> java.base/java.security.SecureClassLoader.defineClass(SecureClassLoader.java:150)
>     at 
> java.base/jdk.internal.loader.BuiltinClassLoader.defineClass(BuiltinClassLoader.java:862)
>     at 
> java.base/jdk.internal.loader.BuiltinClassLoader.findClassOnClassPathOrNull(BuiltinClassLoader.java:760)
>     at 
> java.base/jdk.internal.loader.BuiltinClassLoader.loadClassOrNull(BuiltinClassLoader.java:681)
>     at 
> java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:639)
>     at 
> java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:188)
>     at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:525)
>     at java.base/java.lang.ClassLoader.defineClass1(Native Method)
>     at java.base/java.lang.ClassLoader.defineClass(ClassLoader.java:1017)
>     at 
> java.base/java.security.SecureClassLoader.defineClass(SecureClassLoader.java:150)
>     at 
> java.base/jdk.internal.loader.BuiltinClassLoader.defineClass(BuiltinClassLoader.java:862)
>     at 
> java.base/jdk.internal.loader.BuiltinClassLoader.findClassOnClassPathOrNull(BuiltinClassLoader.java:760)
>     at 
> java.base/jdk.internal.loader.BuiltinClassLoader.loadClassOrNull(BuiltinClassLoader.java:681)
>     at 
> java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:639)
>     at 
> java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:188)
>     at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:525)
>     at 
> org.sparkproject.guava.cache.LocalCache$LoadingValueReference.<init>(LocalCache.java:3511)
>     at 
> org.sparkproject.guava.cache.LocalCache$LoadingValueReference.<init>(LocalCache.java:3515)
>     at 
> org.sparkproject.guava.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2168)
>     at 
> org.sparkproject.guava.cache.LocalCache$Segment.get(LocalCache.java:2079)
>     at org.sparkproject.guava.cache.LocalCache.get(LocalCache.java:4011)
>     at org.sparkproject.guava.cache.LocalCache.getOrLoad(LocalCache.java:4034)
>     at 
> org.sparkproject.guava.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:5010)
>     at 
> org.apache.spark.storage.BlockManagerId$.getCachedBlockManagerId(BlockManagerId.scala:146)
>     at 
> 

[jira] [Updated] (SPARK-45468) More transparent proxy handling for HTTP redirects

2023-10-09 Thread Nobuaki Sukegawa (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-45468?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nobuaki Sukegawa updated SPARK-45468:
-
Description: 
Currently, proxies can be made transparent for hyperlinks in Spark web UIs with 
spark.ui.proxyRoot or the X-Forwarded-Context header alone. However, HTTP 
redirects (such as job/stage kill) currently also require an explicit 
spark.ui.proxyRedirectUri to handle the proxy. This is not ideal, as the proxy 
hostname may not be known at the time the Spark apps are configured.

This can be mitigated by 1) always prepending spark.ui.proxyRoot to the redirect 
path for proxies that intelligently rewrite Location headers, and 2) using a 
path without a hostname (/jobs/, not https://example.com/jobs/) for proxies 
that do not rewrite Location headers. Redirect behavior would then be 
essentially the same as for other hyperlinks.

Regarding 2), while the hostname was originally required by RFC 2616 in 1999, 
since RFC 7231 in 2014 the hostname can formally be omitted, as most browsers 
already supported this (it is rather hard to find any browser that doesn't 
support it).

  was:
Currently, proxies can be made transparent for hyperlinks in Spark web UIs with 
spark.ui.proxyRoot or the X-Forwarded-Context header. However, HTTP redirects 
(such as job/stage kill) currently require an explicit spark.ui.proxyRedirectUri 
to handle the proxy. This is not ideal, as the proxy hostname may not be known 
at the time the Spark apps are configured.

This can be mitigated by using a path without a hostname (/jobs/, not 
https://example.com/jobs/). Redirect behavior would then be essentially the 
same as for other hyperlinks.

While the hostname was originally required by RFC 2616 in 1999, since RFC 7231 
in 2014 the hostname can formally be omitted, as most browsers already 
supported this (it is rather hard to find any browser that doesn't support it).


> More transparent proxy handling for HTTP redirects
> --
>
> Key: SPARK-45468
> URL: https://issues.apache.org/jira/browse/SPARK-45468
> Project: Spark
>  Issue Type: Improvement
>  Components: Web UI
>Affects Versions: 3.5.0
>Reporter: Nobuaki Sukegawa
>Priority: Major
>
> Currently, proxies can be made transparent for hyperlinks in Spark web UIs 
> with spark.ui.proxyRoot or the X-Forwarded-Context header alone. However, 
> HTTP redirects (such as job/stage kill) currently also require an explicit 
> spark.ui.proxyRedirectUri to handle the proxy. This is not ideal, as the 
> proxy hostname may not be known at the time the Spark apps are configured.
> This can be mitigated by 1) always prepending spark.ui.proxyRoot to the 
> redirect path for proxies that intelligently rewrite Location headers, and 
> 2) using a path without a hostname (/jobs/, not https://example.com/jobs/) 
> for proxies that do not rewrite Location headers. Redirect behavior would 
> then be essentially the same as for other hyperlinks.
> Regarding 2), while the hostname was originally required by RFC 2616 in 
> 1999, since RFC 7231 in 2014 the hostname can formally be omitted, as most 
> browsers already supported this (it is rather hard to find any browser that 
> doesn't support it).






[jira] [Updated] (SPARK-45468) More transparent proxy handling for HTTP redirects

2023-10-09 Thread Nobuaki Sukegawa (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-45468?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nobuaki Sukegawa updated SPARK-45468:
-
Summary: More transparent proxy handling for HTTP redirects  (was: Add 
option to use path without hostname for redirects)

> More transparent proxy handling for HTTP redirects
> --
>
> Key: SPARK-45468
> URL: https://issues.apache.org/jira/browse/SPARK-45468
> Project: Spark
>  Issue Type: Improvement
>  Components: Web UI
>Affects Versions: 3.5.0
>Reporter: Nobuaki Sukegawa
>Priority: Major
>
> Currently, proxies can be made transparent for hyperlinks in Spark web UIs 
> with spark.ui.proxyRoot or the X-Forwarded-Context header. However, HTTP 
> redirects (such as job/stage kill) currently require an explicit 
> spark.ui.proxyRedirectUri to handle the proxy. This is not ideal, as the 
> proxy hostname may not be known at the time the Spark apps are configured.
> This can be mitigated by using a path without a hostname (/jobs/, not 
> https://example.com/jobs/). Redirect behavior would then be essentially the 
> same as for other hyperlinks.
> While the hostname was originally required by RFC 2616 in 1999, since RFC 
> 7231 in 2014 the hostname can formally be omitted, as most browsers already 
> supported this (it is rather hard to find any browser that doesn't support 
> it).






[jira] [Updated] (SPARK-40065) Executor ConfigMap is not mounted if profile is not default

2022-08-12 Thread Nobuaki Sukegawa (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-40065?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nobuaki Sukegawa updated SPARK-40065:
-
Description: 
When the executor ConfigMap was made optional in SPARK-34316, the volume mount 
was erroneously disabled unconditionally when a non-default profile is used.

When spark.kubernetes.executor.disableConfigMap is false, the expected behavior 
is that the ConfigMap is mounted regardless of the executor's resource profile. 
However, it is not mounted if the resource profile is non-default.
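The expected versus observed decision logic can be sketched as follows. The helper names are hypothetical, chosen only for the illustration; this is not the actual Spark on Kubernetes code.

```python
def should_mount_config_map(disable_config_map: bool, is_default_profile: bool) -> bool:
    """Expected behavior: mount whenever the ConfigMap is not disabled,
    independent of the executor's resource profile."""
    return not disable_config_map

def buggy_should_mount(disable_config_map: bool, is_default_profile: bool) -> bool:
    """Reported (buggy) behavior: a non-default profile unconditionally
    suppresses the mount."""
    return not disable_config_map and is_default_profile

# With disableConfigMap=false and a non-default profile, the two disagree:
print(should_mount_config_map(False, False), buggy_should_mount(False, False))
# → True False
```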

  was:
When the executor ConfigMap was made optional in SPARK-34316, the volume mount 
was erroneously disabled unconditionally when a non-default profile is used.

When spark.kubernetes.executor.disableConfigMap is false, the expected behavior 
is that the ConfigMap is mounted regardless of the executor's resource profile. 
However, it was not mounted if the resource profile is non-default.


> Executor ConfigMap is not mounted if profile is not default
> ---
>
> Key: SPARK-40065
> URL: https://issues.apache.org/jira/browse/SPARK-40065
> Project: Spark
>  Issue Type: Bug
>  Components: Kubernetes
>Affects Versions: 3.2.0, 3.2.1, 3.3.0, 3.2.2
>Reporter: Nobuaki Sukegawa
>Priority: Minor
>
> When the executor ConfigMap was made optional in SPARK-34316, the volume 
> mount was erroneously disabled unconditionally when a non-default profile is 
> used.
> When spark.kubernetes.executor.disableConfigMap is false, the expected 
> behavior is that the ConfigMap is mounted regardless of the executor's 
> resource profile. However, it is not mounted if the resource profile is 
> non-default.






[jira] [Updated] (SPARK-40065) Executor ConfigMap is not mounted if profile is not default

2022-08-12 Thread Nobuaki Sukegawa (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-40065?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nobuaki Sukegawa updated SPARK-40065:
-
Description: 
When executor config map is made optional in SPARK-34316, mount volume is 
unconditionally disabled erroneously when non-default profile is used.

When spark.kubernetes.executor.disableConfigMap is false, expected behavior is 
that the ConfigMap is mounted regardless of executor's resource profile. 
However, it was not mounted if the resource profile is non-default.

  was:When the resource profile is non-default, executor configmap is not 
mounted even if spark.kubernetes.executor.disableConfigMap is false.


> Executor ConfigMap is not mounted if profile is not default
> ---
>
> Key: SPARK-40065
> URL: https://issues.apache.org/jira/browse/SPARK-40065
> Project: Spark
>  Issue Type: Bug
>  Components: Kubernetes
>Affects Versions: 3.2.0, 3.2.1, 3.3.0, 3.2.2
>Reporter: Nobuaki Sukegawa
>Priority: Minor
>
> When the executor ConfigMap was made optional in SPARK-34316, the volume 
> mount was erroneously disabled unconditionally when a non-default profile is 
> used.
> When spark.kubernetes.executor.disableConfigMap is false, the expected 
> behavior is that the ConfigMap is mounted regardless of the executor's 
> resource profile. However, it was not mounted if the resource profile is 
> non-default.






[jira] [Updated] (SPARK-40065) Executor ConfigMap is not mounted if profile is not default

2022-08-12 Thread Nobuaki Sukegawa (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-40065?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nobuaki Sukegawa updated SPARK-40065:
-
Description: When the resource profile is non-default, executor configmap 
is not mounted even if spark.kubernetes.executor.disableConfigMap is false.  
(was: When the resource profile is non-default, executor configmap is not 
created even if spark.kubernetes.executor.disableConfigMap is false.)

> Executor ConfigMap is not mounted if profile is not default
> ---
>
> Key: SPARK-40065
> URL: https://issues.apache.org/jira/browse/SPARK-40065
> Project: Spark
>  Issue Type: Bug
>  Components: Kubernetes
>Affects Versions: 3.2.0, 3.2.1, 3.3.0, 3.2.2
>Reporter: Nobuaki Sukegawa
>Priority: Minor
>
> When the resource profile is non-default, executor configmap is not mounted 
> even if spark.kubernetes.executor.disableConfigMap is false.






[jira] [Updated] (SPARK-40065) Executor ConfigMap is not mounted if profile is not default

2022-08-12 Thread Nobuaki Sukegawa (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-40065?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nobuaki Sukegawa updated SPARK-40065:
-
Summary: Executor ConfigMap is not mounted if profile is not default  (was: 
Executor ConfigMap is not created if profile is not default)

> Executor ConfigMap is not mounted if profile is not default
> ---
>
> Key: SPARK-40065
> URL: https://issues.apache.org/jira/browse/SPARK-40065
> Project: Spark
>  Issue Type: Bug
>  Components: Kubernetes
>Affects Versions: 3.2.0, 3.2.1, 3.3.0, 3.2.2
>Reporter: Nobuaki Sukegawa
>Priority: Minor
>
> When the resource profile is non-default, executor configmap is not created 
> even if spark.kubernetes.executor.disableConfigMap is false.






[jira] [Updated] (SPARK-40065) Executor ConfigMap is not created if profile is not default

2022-08-12 Thread Nobuaki Sukegawa (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-40065?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nobuaki Sukegawa updated SPARK-40065:
-
Affects Version/s: 3.2.1
   3.2.0

> Executor ConfigMap is not created if profile is not default
> ---
>
> Key: SPARK-40065
> URL: https://issues.apache.org/jira/browse/SPARK-40065
> Project: Spark
>  Issue Type: Bug
>  Components: Kubernetes
>Affects Versions: 3.2.0, 3.2.1, 3.3.0, 3.2.2
>Reporter: Nobuaki Sukegawa
>Priority: Minor
>
> When the resource profile is non-default, executor configmap is not created 
> even if spark.kubernetes.executor.disableConfigMap is false.






[jira] [Updated] (SPARK-40065) Executor ConfigMap is not created if profile is not default

2022-08-12 Thread Nobuaki Sukegawa (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-40065?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nobuaki Sukegawa updated SPARK-40065:
-
Affects Version/s: 3.2.2

> Executor ConfigMap is not created if profile is not default
> ---
>
> Key: SPARK-40065
> URL: https://issues.apache.org/jira/browse/SPARK-40065
> Project: Spark
>  Issue Type: Bug
>  Components: Kubernetes
>Affects Versions: 3.3.0, 3.2.2
>Reporter: Nobuaki Sukegawa
>Priority: Minor
>
> When the resource profile is non-default, executor configmap is not created 
> even if spark.kubernetes.executor.disableConfigMap is false.






[jira] [Created] (SPARK-40065) Executor ConfigMap is not created if profile is not default

2022-08-12 Thread Nobuaki Sukegawa (Jira)
Nobuaki Sukegawa created SPARK-40065:


 Summary: Executor ConfigMap is not created if profile is not 
default
 Key: SPARK-40065
 URL: https://issues.apache.org/jira/browse/SPARK-40065
 Project: Spark
  Issue Type: Bug
  Components: Kubernetes
Affects Versions: 3.3.0
Reporter: Nobuaki Sukegawa


When the resource profile is non-default, executor configmap is not created 
even if spark.kubernetes.executor.disableConfigMap is false.


