[jira] [Updated] (SPARK-12176) SparkLauncher's setConf() does not support configs containing spaces

2015-12-07 Thread Yuhang Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-12176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuhang Chen updated SPARK-12176:

Summary: SparkLauncher's setConf() does not support configs containing 
spaces  (was: SparkLauncher's setConf() does not support configs with spaces in 
their typing)

> SparkLauncher's setConf() does not support configs containing spaces
> 
>
> Key: SPARK-12176
> URL: https://issues.apache.org/jira/browse/SPARK-12176
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core
>Affects Versions: 1.4.0, 1.4.1, 1.5.0, 1.5.1, 1.5.2
> Environment: All
>Reporter: Yuhang Chen
>Priority: Minor
>
> spark-submit uses the '--conf K=V' pattern for setting configs. According to
> the docs, if the 'V' you set contains spaces, you should wrap the whole
> 'K=V' part in quotes. However, SparkLauncher
> (org.apache.spark.launcher.SparkLauncher) does not wrap the 'K=V' part for
> you, and the API it provides gives you no way to do the wrapping yourself.
> I checked the source: all confs are stored in a Map before the launch
> command is generated. Thus, my advice is to check all values of the conf Map
> and do the wrapping during command building.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-12176) SparkLauncher's setConf() does not support configs containing spaces

2015-12-07 Thread Yuhang Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-12176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuhang Chen updated SPARK-12176:

Description: 
spark-submit uses the '--conf K=V' pattern for setting configs. According to
the docs, if the 'V' you set has spaces in it, the whole 'K=V' part should be
wrapped in quotes.

However, SparkLauncher (org.apache.spark.launcher.SparkLauncher) does not do
that wrapping for you, and the API it provides gives you no way to do the
wrapping yourself.

I checked the source: all confs are stored in a Map before the launch command
is generated. Thus, my advice is to check all values of the conf Map and do
the wrapping during command building.



  was:
spark-submit uses the '--conf K=V' pattern for setting configs. According to
the docs, if the 'V' you set contains spaces, you should wrap the whole 'K=V'
part in quotes. However, SparkLauncher
(org.apache.spark.launcher.SparkLauncher) does not wrap the 'K=V' part for
you, and the API it provides gives you no way to do the wrapping yourself.

I checked the source: all confs are stored in a Map before the launch command
is generated. Thus, my advice is to check all values of the conf Map and do
the wrapping during command building.




> SparkLauncher's setConf() does not support configs containing spaces
> 
>
> Key: SPARK-12176
> URL: https://issues.apache.org/jira/browse/SPARK-12176
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core
>Affects Versions: 1.4.0, 1.4.1, 1.5.0, 1.5.1, 1.5.2
> Environment: All
>Reporter: Yuhang Chen
>Priority: Minor
>
> spark-submit uses the '--conf K=V' pattern for setting configs. According to
> the docs, if the 'V' you set has spaces in it, the whole 'K=V' part should
> be wrapped in quotes.
> However, SparkLauncher (org.apache.spark.launcher.SparkLauncher) does not do
> that wrapping for you, and the API it provides gives you no way to do the
> wrapping yourself.
> I checked the source: all confs are stored in a Map before the launch
> command is generated. Thus, my advice is to check all values of the conf Map
> and do the wrapping during command building.






[jira] [Updated] (SPARK-12176) SparkLauncher's setConf() does not support configs containing spaces

2015-12-07 Thread Yuhang Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-12176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuhang Chen updated SPARK-12176:

Flags: Important

> SparkLauncher's setConf() does not support configs containing spaces
> 
>
> Key: SPARK-12176
> URL: https://issues.apache.org/jira/browse/SPARK-12176
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core
>Affects Versions: 1.4.0, 1.4.1, 1.5.0, 1.5.1, 1.5.2
> Environment: All
>Reporter: Yuhang Chen
>Priority: Minor
>
> spark-submit uses the '--conf K=V' pattern for setting configs. According to
> the docs, if the 'V' you set has spaces in it, the whole 'K=V' part should
> be wrapped in quotes.
> However, SparkLauncher (org.apache.spark.launcher.SparkLauncher) does not do
> that wrapping for you, and the API it provides gives you no way to do the
> wrapping yourself.
> I checked the source: all confs are stored in a Map before the launch
> command is generated. Thus, my advice is to check all values of the conf Map
> and do the wrapping during command building.






[jira] [Updated] (SPARK-12176) SparkLauncher's setConf() does not support configs containing spaces

2015-12-07 Thread Yuhang Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-12176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuhang Chen updated SPARK-12176:

Description: 
spark-submit uses the '--conf K=V' pattern for setting configs. According to
the docs, if the 'V' you set has spaces in it, the whole 'K=V' part should be
wrapped in quotes.

However, SparkLauncher (org.apache.spark.launcher.SparkLauncher) does not do
that wrapping for you, and the API it provides gives you no way to do the
wrapping yourself.

I checked the source: all confs are stored in a Map before the launch command
is generated. Thus, my advice is to check all values of the conf Map and do
the wrapping during command building.

For example, I want to add "-XX:+PrintGCDetails -XX:+PrintGCTimeStamps" for
executors (spark.executor.extraJavaOptions), and the conf contains a space.
For spark-submit, I should wrap the conf in quotes like this:
--conf "spark.executor.extraJavaOptions=-XX:+PrintGCDetails -XX:+PrintGCTimeStamps"

But when I use the setConf() API of SparkLauncher, I write code like this:
launcher.setConf("spark.executor.extraJavaOptions", "-XX:+PrintGCDetails -XX:+PrintGCTimeStamps");
Now, SparkLauncher uses Java's ProcessBuilder to start a sub-process, in
which spark-submit is finally executed. I debugged the source, and it turns
out the command looks like this:
--conf spark.executor.extraJavaOptions=-XX:+PrintGCDetails -XX:+PrintGCTimeStamps

As you can see, the quotes are gone, and the job could not be launched with
this command.
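To make the failure mode concrete, here is a minimal, self-contained sketch (class and method names are invented for illustration; this is not Spark's actual launcher code). Passed as separate argv elements, which is how ProcessBuilder hands arguments to a child process, a value containing a space stays one argument. But if the argument list is flattened into a single command line and re-tokenized on whitespace, as happens when it reaches a shell without quotes, the value splits in two:

```java
import java.util.Arrays;

public class QuotingDemo {
    // The conf value from the report; note the embedded space.
    static final String VALUE =
        "spark.executor.extraJavaOptions=-XX:+PrintGCDetails -XX:+PrintGCTimeStamps";

    // Simulate flattening the argv list into one command line and then
    // splitting it on whitespace, as an unquoted shell invocation would.
    static String[] retokenize() {
        String flattened = String.join(" ", "spark-submit", "--conf", VALUE);
        return flattened.split(" ");
    }

    public static void main(String[] args) {
        // As discrete argv elements the value is a single argument:
        String[] argv = {"spark-submit", "--conf", VALUE};
        System.out.println(argv.length);          // 3 elements, value intact

        // Flattened and re-split, the value of --conf breaks into two
        // tokens, and the second -XX flag becomes a separate argument:
        System.out.println(retokenize().length);  // 4 tokens
    }
}
```

This is why quoting only matters where a shell re-parses the command line; ProcessBuilder itself passes each array element through verbatim.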

  was:
spark-submit uses the '--conf K=V' pattern for setting configs. According to
the docs, if the 'V' you set has spaces in it, the whole 'K=V' part should be
wrapped in quotes.

However, SparkLauncher (org.apache.spark.launcher.SparkLauncher) does not do
that wrapping for you, and the API it provides gives you no way to do the
wrapping yourself.

I checked the source: all confs are stored in a Map before the launch command
is generated. Thus, my advice is to check all values of the conf Map and do
the wrapping during command building.




> SparkLauncher's setConf() does not support configs containing spaces
> 
>
> Key: SPARK-12176
> URL: https://issues.apache.org/jira/browse/SPARK-12176
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core
>Affects Versions: 1.4.0, 1.4.1, 1.5.0, 1.5.1, 1.5.2
> Environment: All
>Reporter: Yuhang Chen
>Priority: Minor
>
> spark-submit uses the '--conf K=V' pattern for setting configs. According to
> the docs, if the 'V' you set has spaces in it, the whole 'K=V' part should
> be wrapped in quotes.
> However, SparkLauncher (org.apache.spark.launcher.SparkLauncher) does not do
> that wrapping for you, and the API it provides gives you no way to do the
> wrapping yourself.
> I checked the source: all confs are stored in a Map before the launch
> command is generated. Thus, my advice is to check all values of the conf Map
> and do the wrapping during command building.
> For example, I want to add "-XX:+PrintGCDetails -XX:+PrintGCTimeStamps" for
> executors (spark.executor.extraJavaOptions), and the conf contains a space.
> For spark-submit, I should wrap the conf in quotes like this:
> --conf "spark.executor.extraJavaOptions=-XX:+PrintGCDetails -XX:+PrintGCTimeStamps"
> But when I use the setConf() API of SparkLauncher, I write code like this:
> launcher.setConf("spark.executor.extraJavaOptions", "-XX:+PrintGCDetails -XX:+PrintGCTimeStamps");
> Now, SparkLauncher uses Java's ProcessBuilder to start a sub-process, in
> which spark-submit is finally executed. I debugged the source, and it turns
> out the command looks like this:
> --conf spark.executor.extraJavaOptions=-XX:+PrintGCDetails -XX:+PrintGCTimeStamps
> As you can see, the quotes are gone, and the job could not be launched with
> this command.






[jira] [Updated] (SPARK-12176) SparkLauncher's setConf() does not support configs containing spaces

2015-12-07 Thread Yuhang Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-12176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuhang Chen updated SPARK-12176:

Description: 
spark-submit uses the '--conf K=V' pattern for setting configs. According to
the docs, if the 'V' you set has spaces in it, the whole 'K=V' part should be
wrapped in quotes.

However, SparkLauncher (org.apache.spark.launcher.SparkLauncher) does not do
that wrapping for you, and the API it provides gives you no way to do the
wrapping yourself.

For example, I want to add "-XX:+PrintGCDetails -XX:+PrintGCTimeStamps" for
executors (spark.executor.extraJavaOptions), and the conf contains a space.

For spark-submit, I should wrap the conf in quotes like this:
--conf "spark.executor.extraJavaOptions=-XX:+PrintGCDetails -XX:+PrintGCTimeStamps"

But when I use the setConf() API of SparkLauncher, I write code like this:
launcher.setConf("spark.executor.extraJavaOptions", "-XX:+PrintGCDetails -XX:+PrintGCTimeStamps");
Now, SparkLauncher uses Java's ProcessBuilder to start a sub-process, in
which spark-submit is finally executed. And it turns out that the final
command looks like this:
--conf spark.executor.extraJavaOptions=-XX:+PrintGCDetails -XX:+PrintGCTimeStamps

As you can see, the quotes are gone, and the job could not be launched with
this command.

Then I checked the source: all confs are stored in a Map before the launch
command is generated. Thus, my advice is to check all values of the conf Map
and do the wrapping during command building.

  was:
spark-submit uses the '--conf K=V' pattern for setting configs. According to
the docs, if the 'V' you set has spaces in it, the whole 'K=V' part should be
wrapped in quotes.

However, SparkLauncher (org.apache.spark.launcher.SparkLauncher) does not do
that wrapping for you, and the API it provides gives you no way to do the
wrapping yourself.

I checked the source: all confs are stored in a Map before the launch command
is generated. Thus, my advice is to check all values of the conf Map and do
the wrapping during command building.

For example, I want to add "-XX:+PrintGCDetails -XX:+PrintGCTimeStamps" for
executors (spark.executor.extraJavaOptions), and the conf contains a space.
For spark-submit, I should wrap the conf in quotes like this:
--conf "spark.executor.extraJavaOptions=-XX:+PrintGCDetails -XX:+PrintGCTimeStamps"

But when I use the setConf() API of SparkLauncher, I write code like this:
launcher.setConf("spark.executor.extraJavaOptions", "-XX:+PrintGCDetails -XX:+PrintGCTimeStamps");
Now, SparkLauncher uses Java's ProcessBuilder to start a sub-process, in
which spark-submit is finally executed. I debugged the source, and it turns
out the command looks like this:
--conf spark.executor.extraJavaOptions=-XX:+PrintGCDetails -XX:+PrintGCTimeStamps

As you can see, the quotes are gone, and the job could not be launched with
this command.


> SparkLauncher's setConf() does not support configs containing spaces
> 
>
> Key: SPARK-12176
> URL: https://issues.apache.org/jira/browse/SPARK-12176
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core
>Affects Versions: 1.4.0, 1.4.1, 1.5.0, 1.5.1, 1.5.2
> Environment: All
>Reporter: Yuhang Chen
>Priority: Minor
>
> spark-submit uses the '--conf K=V' pattern for setting configs. According to
> the docs, if the 'V' you set has spaces in it, the whole 'K=V' part should
> be wrapped in quotes.
> However, SparkLauncher (org.apache.spark.launcher.SparkLauncher) does not do
> that wrapping for you, and the API it provides gives you no way to do the
> wrapping yourself.
> For example, I want to add "-XX:+PrintGCDetails -XX:+PrintGCTimeStamps" for
> executors (spark.executor.extraJavaOptions), and the conf contains a space.
> For spark-submit, I should wrap the conf in quotes like this:
> --conf "spark.executor.extraJavaOptions=-XX:+PrintGCDetails -XX:+PrintGCTimeStamps"
> But when I use the setConf() API of SparkLauncher, I write code like this:
> launcher.setConf("spark.executor.extraJavaOptions", "-XX:+PrintGCDetails -XX:+PrintGCTimeStamps");
> Now, SparkLauncher uses Java's ProcessBuilder to start a sub-process, in
> which spark-submit is finally executed. And it turns out that the final
> command looks like this:
> --conf spark.executor.extraJavaOptions=-XX:+PrintGCDetails -XX:+PrintGCTimeStamps
> As you can see, the quotes are gone, and the job could not be launched with
> this command.
> Then I checked the source: all confs are stored in a Map before the launch
> command is generated. Thus, my advice is to check all values of the conf Map
> and do the wrapping during command building.





[jira] [Updated] (SPARK-12176) SparkLauncher's setConf() does not support configs containing spaces

2015-12-07 Thread Yuhang Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-12176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuhang Chen updated SPARK-12176:

Description: 
spark-submit uses the '--conf K=V' pattern for setting configs. According to
the docs, if the 'V' you set has spaces in it, the whole 'K=V' part should be
wrapped in quotes.

However, SparkLauncher (org.apache.spark.launcher.SparkLauncher) does not do
that wrapping for you, and the API it provides gives you no way to do the
wrapping yourself.

For example, I want to add {{-XX:+PrintGCDetails -XX:+PrintGCTimeStamps}} for
executors (spark.executor.extraJavaOptions), and the conf contains a space.

For spark-submit, I should wrap the conf in quotes like this:
bq. --conf "spark.executor.extraJavaOptions=-XX:+PrintGCDetails -XX:+PrintGCTimeStamps"

But when I use the setConf() API of SparkLauncher, I write code like this:
{{launcher.setConf("spark.executor.extraJavaOptions", "-XX:+PrintGCDetails -XX:+PrintGCTimeStamps");}}

Now, SparkLauncher uses Java's ProcessBuilder to start a sub-process, in which
spark-submit is finally executed. And it turns out that the final command
looks like this:
bq. --conf spark.executor.extraJavaOptions=-XX:+PrintGCDetails -XX:+PrintGCTimeStamps

As you can see, the quotes are gone, and the job could not be launched with
this command.

Then I checked the source: all confs are stored in a Map before the launch
command is generated. Thus, my advice is to check all values of the conf Map
and do the wrapping during command building.

  was:
spark-submit uses the '--conf K=V' pattern for setting configs. According to
the docs, if the 'V' you set has spaces in it, the whole 'K=V' part should be
wrapped in quotes.

However, SparkLauncher (org.apache.spark.launcher.SparkLauncher) does not do
that wrapping for you, and the API it provides gives you no way to do the
wrapping yourself.

For example, I want to add "-XX:+PrintGCDetails -XX:+PrintGCTimeStamps" for
executors (spark.executor.extraJavaOptions), and the conf contains a space.

For spark-submit, I should wrap the conf in quotes like this:
--conf "spark.executor.extraJavaOptions=-XX:+PrintGCDetails -XX:+PrintGCTimeStamps"

But when I use the setConf() API of SparkLauncher, I write code like this:
launcher.setConf("spark.executor.extraJavaOptions", "-XX:+PrintGCDetails -XX:+PrintGCTimeStamps");
Now, SparkLauncher uses Java's ProcessBuilder to start a sub-process, in
which spark-submit is finally executed. And it turns out that the final
command looks like this:
--conf spark.executor.extraJavaOptions=-XX:+PrintGCDetails -XX:+PrintGCTimeStamps

As you can see, the quotes are gone, and the job could not be launched with
this command.

Then I checked the source: all confs are stored in a Map before the launch
command is generated. Thus, my advice is to check all values of the conf Map
and do the wrapping during command building.


> SparkLauncher's setConf() does not support configs containing spaces
> 
>
> Key: SPARK-12176
> URL: https://issues.apache.org/jira/browse/SPARK-12176
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core
>Affects Versions: 1.4.0, 1.4.1, 1.5.0, 1.5.1, 1.5.2
> Environment: All
>Reporter: Yuhang Chen
>Priority: Minor
>
> spark-submit uses the '--conf K=V' pattern for setting configs. According to
> the docs, if the 'V' you set has spaces in it, the whole 'K=V' part should
> be wrapped in quotes.
> However, SparkLauncher (org.apache.spark.launcher.SparkLauncher) does not do
> that wrapping for you, and the API it provides gives you no way to do the
> wrapping yourself.
> For example, I want to add {{-XX:+PrintGCDetails -XX:+PrintGCTimeStamps}} for
> executors (spark.executor.extraJavaOptions), and the conf contains a space.
> For spark-submit, I should wrap the conf in quotes like this:
> bq. --conf "spark.executor.extraJavaOptions=-XX:+PrintGCDetails -XX:+PrintGCTimeStamps"
> But when I use the setConf() API of SparkLauncher, I write code like this:
> {{launcher.setConf("spark.executor.extraJavaOptions", "-XX:+PrintGCDetails -XX:+PrintGCTimeStamps");}}
> Now, SparkLauncher uses Java's ProcessBuilder to start a sub-process, in
> which spark-submit is finally executed. And it turns out that the final
> command looks like this:
> bq. --conf spark.executor.extraJavaOptions=-XX:+PrintGCDetails -XX:+PrintGCTimeStamps
> As you can see, the quotes are gone, and the job could not be launched with
> this command.
> Then I checked the source: all confs are stored in a Map before the launch
> command is generated. Thus, my advice is to check all values of the conf Map
> and do the wrapping during command building.




[jira] [Updated] (SPARK-12176) SparkLauncher's setConf() does not support configs containing spaces

2015-12-07 Thread Yuhang Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-12176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuhang Chen updated SPARK-12176:

Description: 
spark-submit uses the '--conf K=V' pattern for setting configs. According to
the docs, if the 'V' you set has spaces in it, the whole 'K=V' part should be
wrapped in quotes.

However, SparkLauncher (org.apache.spark.launcher.SparkLauncher) does not do
that wrapping for you, and the API it provides gives you no way to do the
wrapping yourself.

For example, I want to add {{-XX:+PrintGCDetails -XX:+PrintGCTimeStamps}} for
executors (spark.executor.extraJavaOptions), and the conf contains a space.

For spark-submit, I should wrap the conf in quotes like this:
bq. --conf "spark.executor.extraJavaOptions=-XX:+PrintGCDetails -XX:+PrintGCTimeStamps"

But when I use the setConf() API of SparkLauncher, I write code like this:
{{launcher.setConf("spark.executor.extraJavaOptions", "-XX:+PrintGCDetails -XX:+PrintGCTimeStamps");}}

Now, SparkLauncher uses Java's ProcessBuilder to start a sub-process, in which
spark-submit is finally executed. And it turns out that the final command
looks like this:
bq. --conf spark.executor.extraJavaOptions=-XX:+PrintGCDetails -XX:+PrintGCTimeStamps

As you can see, the quotes are gone, and the job could not be launched with
this command.

Then I checked the source: all confs are stored in a Map before the launch
command is generated. Thus, my advice is to check all values of the conf Map
and do the wrapping during command building.

  was:
spark-submit uses the '--conf K=V' pattern for setting configs. According to
the docs, if the 'V' you set has spaces in it, the whole 'K=V' part should be
wrapped in quotes.

However, SparkLauncher (org.apache.spark.launcher.SparkLauncher) does not do
that wrapping for you, and the API it provides gives you no way to do the
wrapping yourself.

For example, I want to add {{-XX:+PrintGCDetails -XX:+PrintGCTimeStamps}} for
executors (spark.executor.extraJavaOptions), and the conf contains a space.

For spark-submit, I should wrap the conf in quotes like this:
bq. --conf "spark.executor.extraJavaOptions=-XX:+PrintGCDetails -XX:+PrintGCTimeStamps"

But when I use the setConf() API of SparkLauncher, I write code like this:
{{launcher.setConf("spark.executor.extraJavaOptions", "-XX:+PrintGCDetails -XX:+PrintGCTimeStamps");}}

Now, SparkLauncher uses Java's ProcessBuilder to start a sub-process, in which
spark-submit is finally executed. And it turns out that the final command
looks like this:
bq. --conf spark.executor.extraJavaOptions=-XX:+PrintGCDetails -XX:+PrintGCTimeStamps

As you can see, the quotes are gone, and the job could not be launched with
this command.

Then I checked the source: all confs are stored in a Map before the launch
command is generated. Thus, my advice is to check all values of the conf Map
and do the wrapping during command building.


> SparkLauncher's setConf() does not support configs containing spaces
> 
>
> Key: SPARK-12176
> URL: https://issues.apache.org/jira/browse/SPARK-12176
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core
>Affects Versions: 1.4.0, 1.4.1, 1.5.0, 1.5.1, 1.5.2
> Environment: All
>Reporter: Yuhang Chen
>Priority: Minor
>
> spark-submit uses the '--conf K=V' pattern for setting configs. According to
> the docs, if the 'V' you set has spaces in it, the whole 'K=V' part should
> be wrapped in quotes.
> However, SparkLauncher (org.apache.spark.launcher.SparkLauncher) does not do
> that wrapping for you, and the API it provides gives you no way to do the
> wrapping yourself.
> For example, I want to add {{-XX:+PrintGCDetails -XX:+PrintGCTimeStamps}} for
> executors (spark.executor.extraJavaOptions), and the conf contains a space.
> For spark-submit, I should wrap the conf in quotes like this:
> bq. --conf "spark.executor.extraJavaOptions=-XX:+PrintGCDetails -XX:+PrintGCTimeStamps"
> But when I use the setConf() API of SparkLauncher, I write code like this:
> {{launcher.setConf("spark.executor.extraJavaOptions", "-XX:+PrintGCDetails -XX:+PrintGCTimeStamps");}}
> Now, SparkLauncher uses Java's ProcessBuilder to start a sub-process, in
> which spark-submit is finally executed. And it turns out that the final
> command looks like this:
> bq. --conf spark.executor.extraJavaOptions=-XX:+PrintGCDetails -XX:+PrintGCTimeStamps
> As you can see, the quotes are gone, and the job could not be launched with
> this command.
> Then I checked the source: all confs are stored in a Map before the launch
> command is generated. Thus, my advice is to check all values of the conf Map
> and do the wrapping during command building.




[jira] [Updated] (SPARK-12176) SparkLauncher's setConf() does not support configs containing spaces

2015-12-07 Thread Yuhang Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-12176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuhang Chen updated SPARK-12176:

Description: 
spark-submit uses the '--conf K=V' pattern for setting configs. According to
the docs, if the 'V' you set has spaces in it, the whole 'K=V' part should be
wrapped in quotes.

However, SparkLauncher (org.apache.spark.launcher.SparkLauncher) does not do
that wrapping for you, and the API it provides gives you no way to do the
wrapping yourself.

For example, I want to add {{-XX:+PrintGCDetails -XX:+PrintGCTimeStamps}} for
executors (spark.executor.extraJavaOptions), and the conf contains a space.

For spark-submit, I should wrap the conf in quotes like this:
{code}
--conf "spark.executor.extraJavaOptions=-XX:+PrintGCDetails -XX:+PrintGCTimeStamps"
{code}

But when I use the setConf() API of SparkLauncher, I write code like this:
{code}
launcher.setConf("spark.executor.extraJavaOptions",
    "-XX:+PrintGCDetails -XX:+PrintGCTimeStamps");
{code}

Now, SparkLauncher uses Java's ProcessBuilder to start a sub-process, in which
spark-submit is finally executed. And it turns out that the final command
looks like this:
{code}
--conf spark.executor.extraJavaOptions=-XX:+PrintGCDetails -XX:+PrintGCTimeStamps
{code}

As you can see, the quotes are gone, and the job could not be launched with
this command.

Then I checked the source: all confs are stored in a Map before the launch
command is generated. Thus, my advice is to check all values of the conf Map
and do the wrapping during command building.
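The suggested fix, wrapping any conf value that contains whitespace while the launch command is being built, could be sketched as follows (a hypothetical helper for illustration only, not Spark's actual command builder):

```java
public class ConfQuoting {
    // Wrap a K=V conf pair in double quotes when it contains whitespace,
    // so the pair survives later shell tokenization as a single argument.
    // Hypothetical helper; Spark's real launcher code may differ.
    static String quoteIfNeeded(String kv) {
        if (kv.matches(".*\\s.*") && !kv.startsWith("\"")) {
            return "\"" + kv + "\"";
        }
        return kv;
    }

    public static void main(String[] args) {
        // A value with a space gets wrapped; one without stays untouched.
        System.out.println(quoteIfNeeded(
            "spark.executor.extraJavaOptions=-XX:+PrintGCDetails -XX:+PrintGCTimeStamps"));
        System.out.println(quoteIfNeeded("spark.executor.memory=2g"));
    }
}
```

Applying such a check to every entry of the conf Map during command building would restore the quoting that setConf() currently loses.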

  was:
spark-submit uses the '--conf K=V' pattern for setting configs. According to
the docs, if the 'V' you set has spaces in it, the whole 'K=V' part should be
wrapped in quotes.

However, SparkLauncher (org.apache.spark.launcher.SparkLauncher) does not do
that wrapping for you, and the API it provides gives you no way to do the
wrapping yourself.

For example, I want to add {{-XX:+PrintGCDetails -XX:+PrintGCTimeStamps}} for
executors (spark.executor.extraJavaOptions), and the conf contains a space.

For spark-submit, I should wrap the conf in quotes like this:
bq. --conf "spark.executor.extraJavaOptions=-XX:+PrintGCDetails -XX:+PrintGCTimeStamps"

But when I use the setConf() API of SparkLauncher, I write code like this:
{{launcher.setConf("spark.executor.extraJavaOptions", "-XX:+PrintGCDetails -XX:+PrintGCTimeStamps");}}

Now, SparkLauncher uses Java's ProcessBuilder to start a sub-process, in which
spark-submit is finally executed. And it turns out that the final command
looks like this:
bq. --conf spark.executor.extraJavaOptions=-XX:+PrintGCDetails -XX:+PrintGCTimeStamps

As you can see, the quotes are gone, and the job could not be launched with
this command.

Then I checked the source: all confs are stored in a Map before the launch
command is generated. Thus, my advice is to check all values of the conf Map
and do the wrapping during command building.


> SparkLauncher's setConf() does not support configs containing spaces
> 
>
> Key: SPARK-12176
> URL: https://issues.apache.org/jira/browse/SPARK-12176
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core
>Affects Versions: 1.4.0, 1.4.1, 1.5.0, 1.5.1, 1.5.2
> Environment: All
>Reporter: Yuhang Chen
>Priority: Minor
>
> spark-submit uses the '--conf K=V' pattern for setting configs. According to
> the docs, if the 'V' you set has spaces in it, the whole 'K=V' part should
> be wrapped in quotes.
> However, SparkLauncher (org.apache.spark.launcher.SparkLauncher) does not do
> that wrapping for you, and the API it provides gives you no way to do the
> wrapping yourself.
> For example, I want to add {{-XX:+PrintGCDetails -XX:+PrintGCTimeStamps}} for
> executors (spark.executor.extraJavaOptions), and the conf contains a space.
> For spark-submit, I should wrap the conf in quotes like this:
> {code}
> --conf "spark.executor.extraJavaOptions=-XX:+PrintGCDetails -XX:+PrintGCTimeStamps"
> {code}
> But when I use the setConf() API of SparkLauncher, I write code like this:
> {code}
> launcher.setConf("spark.executor.extraJavaOptions",
>     "-XX:+PrintGCDetails -XX:+PrintGCTimeStamps");
> {code}
> Now, SparkLauncher uses Java's ProcessBuilder to start a sub-process, in
> which spark-submit is finally executed. And it turns out that the final
> command looks like this:
> {code}
> --conf spark.executor.extraJavaOptions=-XX:+PrintGCDetails -XX:+PrintGCTimeStamps
> {code}
> As you can see, the quotes are gone, and the job could not be launched with
> this command.
> Then I checked the source: all confs are stored in a Map before the launch
> command is generated. Thus, my advice is to check all values of the conf Map
> and do the wrapping during command building.