[jira] [Commented] (SPARK-35124) Local mode fails to start cluster due to configuration value escape issue

2021-04-19 Thread Malthe Borch (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-35124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17325228#comment-17325228
 ] 

Malthe Borch commented on SPARK-35124:
--

The issue seems to stem from the way Python's {{Popen}} handles arguments on 
Windows: it is unaware that "&" has a special meaning to cmd.exe, as do a 
number of other characters ({{&}}, {{|}}, {{(}}, {{)}}, {{<}}, {{>}}, {{^}}).
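For illustration, a minimal sketch of the escaping cmd.exe expects. The helper name is hypothetical and this is not Spark's actual code; it only demonstrates the "^" escape rule for the metacharacters listed above:

```python
# Hypothetical helper illustrating cmd.exe metacharacter escaping.
# NOT Spark's actual code; it only shows the "^" escape rule.
CMD_META = set('&|()<>^')

def escape_cmd_value(value: str) -> str:
    """Prefix each cmd.exe metacharacter with "^" so the shell treats it literally."""
    return ''.join('^' + ch if ch in CMD_META else ch for ch in value)

print(escape_cmd_value('http://proxy?user=a&pass=b'))  # http://proxy?user=a^&pass=b
```

Wherever Spark builds the Windows command line, applying this kind of transformation to configuration values before interpolation would prevent cmd.exe from splitting the command at each "&".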

> Local mode fails to start cluster due to configuration value escape issue
> -
>
> Key: SPARK-35124
> URL: https://issues.apache.org/jira/browse/SPARK-35124
> Project: Spark
>  Issue Type: Bug
>  Components: Java API
>Affects Versions: 3.1.1
>Reporter: Malthe Borch
>Priority: Major
>
> At least on Windows, and perhaps on other systems as well, running Spark in 
> local mode fails to start a cluster when a configuration value contains an 
> ampersand ("&").
> This happens during the "getOrCreate()" call from a Spark session builder.
> The reason seems to be incorrect or insufficient escaping since on Windows, 
> the attempt to start a Spark process actually ends up running multiple 
> commands for each occurrence of the "&" character.
> On Windows specifically, the correct way to escape "&" would be "^&", but I 
> have not yet been able to figure out exactly where the process is started and 
> how the configuration is passed.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Created] (SPARK-35124) Local mode fails to start cluster due to configuration value escape issue

2021-04-18 Thread Malthe Borch (Jira)
Malthe Borch created SPARK-35124:


 Summary: Local mode fails to start cluster due to configuration 
value escape issue
 Key: SPARK-35124
 URL: https://issues.apache.org/jira/browse/SPARK-35124
 Project: Spark
  Issue Type: Bug
  Components: Java API
Affects Versions: 3.1.1
Reporter: Malthe Borch


At least on Windows, and perhaps on other systems as well, running Spark in 
local mode fails to start a cluster when a configuration value contains an 
ampersand ("&").

This happens during the "getOrCreate()" call from a Spark session builder.

The reason seems to be incorrect or insufficient escaping since on Windows, the 
attempt to start a Spark process actually ends up running multiple commands for 
each occurrence of the "&" character.

On Windows specifically, the correct way to escape "&" would be "^&", but I 
have not yet been able to figure out exactly where the process is started and 
how the configuration is passed.






[jira] [Created] (SPARK-34358) Add API for all built-in expression functions

2021-02-04 Thread Malthe Borch (Jira)
Malthe Borch created SPARK-34358:


 Summary: Add API for all built-in expression functions
 Key: SPARK-34358
 URL: https://issues.apache.org/jira/browse/SPARK-34358
 Project: Spark
  Issue Type: Improvement
  Components: Java API, PySpark
Affects Versions: 3.0.1
Reporter: Malthe Borch


From the [SQL 
functions|https://spark.apache.org/docs/latest/api/java/org/apache/spark/sql/functions.html]
 documentation:

{quote}Commonly used functions available for DataFrame operations. Using 
functions defined here provides a little bit more compile-time safety to make 
sure the function exists. {quote}

Functions such as {{inline_outer}} are in fact commonly used, but are not 
currently included in the API, meaning that we lose compile-time safety for 
those invocations. We should add function definitions for the remaining 
built-in functions where applicable.
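To illustrate the compile-time-safety argument with a self-contained sketch: the class below is a stub standing in for a functions module such as pyspark.sql.functions, not real Spark code, and {{inline_outer}} is deliberately left out of it to mirror the current API.

```python
# Stub standing in for a functions module such as pyspark.sql.functions.
# Only explicitly defined helpers exist as attributes.
class Functions:
    def explode(self, col: str) -> str:
        return f"explode({col})"

F = Functions()

# A first-class function: a missing or misspelled name fails immediately.
try:
    F.inline_outer("arr")  # absent from the API, as in Spark 3.0.1
    failed_early = False
except AttributeError:
    failed_early = True

# A string expression: any name is accepted here, so a typo only surfaces
# when the engine parses the expression, typically at job run time.
expression = "inline_outer(arr)"

print(failed_early)  # True
```

With a defined helper, the error surfaces at the call site; with a string expression, nothing checks the name until the query is analysed.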






[jira] [Updated] (SPARK-29356) Stopping Spark doesn't shut down all network connections

2019-10-04 Thread Malthe Borch (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-29356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Malthe Borch updated SPARK-29356:
-
Description: The Spark session's gateway client still has an open network 
connection after a call to `spark.stop()`. This is unexpected and for example 
in a test suite, this triggers a resource warning when tearing down the test 
case.  (was: The Spark session's gateway client still has an open network 
connection after a call to `spark.stop()`. This is unexpected and in for 
example a test suite, this triggers a resource warning when tearing down the 
test case.)

> Stopping Spark doesn't shut down all network connections
> 
>
> Key: SPARK-29356
> URL: https://issues.apache.org/jira/browse/SPARK-29356
> Project: Spark
>  Issue Type: Bug
>  Components: PySpark
>Affects Versions: 2.4.4
>Reporter: Malthe Borch
>Priority: Minor
>
> The Spark session's gateway client still has an open network connection after 
> a call to `spark.stop()`. This is unexpected and for example in a test suite, 
> this triggers a resource warning when tearing down the test case.






[jira] [Created] (SPARK-29356) Stopping Spark doesn't shut down all network connections

2019-10-04 Thread Malthe Borch (Jira)
Malthe Borch created SPARK-29356:


 Summary: Stopping Spark doesn't shut down all network connections
 Key: SPARK-29356
 URL: https://issues.apache.org/jira/browse/SPARK-29356
 Project: Spark
  Issue Type: Bug
  Components: PySpark
Affects Versions: 2.4.4
Reporter: Malthe Borch


The Spark session's gateway client still has an open network connection after a 
call to `spark.stop()`. This is unexpected and, for example in a test suite, 
triggers a resource warning when tearing down the test case.
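The warning itself comes from CPython's unclosed-resource tracking. A minimal reproduction of the pattern with a plain socket (no Spark involved) shows what a test suite sees at teardown:

```python
import gc
import socket
import warnings

def leaky():
    # Simulates a client whose connection is never closed, as with the
    # gateway client after spark.stop() in the affected versions.
    s = socket.socket()
    # no s.close() before the object goes out of scope

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always", ResourceWarning)
    leaky()
    gc.collect()  # force finalization, as test-runner teardown effectively does

print(any(issubclass(w.category, ResourceWarning) for w in caught))  # True
```

Closing the connection before the object is finalized (what `spark.stop()` should arrange for the gateway client) makes the warning disappear.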


