[ https://issues.apache.org/jira/browse/SPARK-26324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16725898#comment-16725898 ]

ASF GitHub Bot commented on SPARK-26324:
----------------------------------------

srowen closed pull request #23342: [SPARK-26324][DOCS] Add Spark docs for Running in Mesos with SSL
URL: https://github.com/apache/spark/pull/23342

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

diff --git a/docs/running-on-mesos.md b/docs/running-on-mesos.md
index 968d668e2c93a..a07773c1c71e1 100644
--- a/docs/running-on-mesos.md
+++ b/docs/running-on-mesos.md
@@ -108,6 +108,19 @@ Please note that if you specify multiple ways to obtain the credentials then the
 
 An equivalent order applies for the secret.  Essentially we prefer the configuration to be specified directly rather than indirectly by files, and we prefer that configuration settings are used over environment variables.
 
+### Deploying to a Mesos cluster running with SSL
+
+If you want to deploy a Spark application into a Mesos cluster that is running in secure mode, there are some environment variables that need to be set.
+
+- `LIBPROCESS_SSL_ENABLED=true` enables SSL communication
+- `LIBPROCESS_SSL_VERIFY_CERT=false` disables verification of the SSL certificate
+- `LIBPROCESS_SSL_KEY_FILE=pathToKeyFile.key` the path to the key file
+- `LIBPROCESS_SSL_CERT_FILE=pathToCRTFile.crt` the certificate file to be used
+
+All options can be found at http://mesos.apache.org/documentation/latest/ssl/
+
+Submission then happens as described in Client mode or Cluster mode below.
+
 ## Uploading Spark Package
 
 When Mesos runs a task on a Mesos slave for the first time, that slave must have a Spark binary
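For reference, the environment documented in the diff above can also be supplied programmatically when driving spark-submit from code: `SparkLauncher` accepts an environment map that is passed to the spark-submit child process. A minimal sketch, assuming placeholder key/cert paths and master URL:

{code:java}
import scala.collection.JavaConverters._
import org.apache.spark.launcher.SparkLauncher

object SubmitOverSsl {
  def main(args: Array[String]): Unit = {
    // Environment for the spark-submit child process; the paths are placeholders.
    val env = Map(
      "LIBPROCESS_SSL_ENABLED" -> "true",
      "LIBPROCESS_SSL_VERIFY_CERT" -> "false", // skip cert verification (e.g. self-signed certs)
      "LIBPROCESS_SSL_KEY_FILE" -> "/path/to/server.key",
      "LIBPROCESS_SSL_CERT_FILE" -> "/path/to/server.crt"
    ).asJava

    val process = new SparkLauncher(env)
      .setMaster("mesos://mesos-master.example.com:5050") // placeholder master URL
      .setMainClass("org.apache.spark.examples.SparkPi")
      .setAppResource("/path/to/spark-examples_2.11-2.4.0.jar")
      .addAppArgs("1000")
      .launch()

    process.waitFor()
  }
}
{code}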

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Spark submit does not work with mesos over ssl [Missing docs]
> -------------------------------------------------------------
>
>                 Key: SPARK-26324
>                 URL: https://issues.apache.org/jira/browse/SPARK-26324
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Submit
>    Affects Versions: 2.4.0
>            Reporter: Jorge Machado
>            Assignee: Jorge Machado
>            Priority: Major
>             Fix For: 3.0.0
>
>
> Hi guys, 
> I was trying to run the examples on a Mesos cluster that uses HTTPS. I tried with the REST endpoint:
> {code:java}
> ./spark-submit \
>   --class org.apache.spark.examples.SparkPi \
>   --master mesos://<mesos_master_with_https>:5050 \
>   --conf spark.master.rest.enabled=true \
>   --deploy-mode cluster \
>   --supervise \
>   --executor-memory 10G \
>   --total-executor-cores 100 \
>   ../examples/jars/spark-examples_2.11-2.4.0.jar 1000
> {code}
> The error that I get on the host where I started spark-submit is:
> {code:java}
> 2018-12-10 15:08:39 WARN NativeCodeLoader:62 - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
> 2018-12-10 15:08:39 INFO RestSubmissionClient:54 - Submitting a request to launch an application in mesos://<mesos_master_with_https>:5050.
> 2018-12-10 15:08:39 WARN RestSubmissionClient:66 - Unable to connect to server mesos://<mesos_master_with_https>:5050.
> Exception in thread "main" org.apache.spark.deploy.rest.SubmitRestConnectionException: Unable to connect to server
>   at org.apache.spark.deploy.rest.RestSubmissionClient$$anonfun$createSubmission$3.apply(RestSubmissionClient.scala:104)
>   at org.apache.spark.deploy.rest.RestSubmissionClient$$anonfun$createSubmission$3.apply(RestSubmissionClient.scala:86)
>   at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
>   at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
>   at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
>   at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
>   at org.apache.spark.deploy.rest.RestSubmissionClient.createSubmission(RestSubmissionClient.scala:86)
>   at org.apache.spark.deploy.rest.RestSubmissionClientApp.run(RestSubmissionClient.scala:443)
>   at org.apache.spark.deploy.rest.RestSubmissionClientApp.start(RestSubmissionClient.scala:455)
>   at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:849)
>   at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:167)
>   at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:195)
>   at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:86)
>   at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:924)
>   at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:933)
>   at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
> Caused by: org.apache.spark.deploy.rest.SubmitRestConnectionException: Unable to connect to server
>   at org.apache.spark.deploy.rest.RestSubmissionClient.readResponse(RestSubmissionClient.scala:281)
>   at org.apache.spark.deploy.rest.RestSubmissionClient.org$apache$spark$deploy$rest$RestSubmissionClient$$postJson(RestSubmissionClient.scala:225)
>   at org.apache.spark.deploy.rest.RestSubmissionClient$$anonfun$createSubmission$3.apply(RestSubmissionClient.scala:90)
>   ... 15 more
> Caused by: java.net.SocketException: Connection reset
> {code}
> I'm pretty sure this is because of the hardcoded http:// here:
> {code:java}
> RestSubmissionClient.scala
> /** Return the base URL for communicating with the server, including the protocol version. */
> private def getBaseUrl(master: String): String = {
>   var masterUrl = master
>   supportedMasterPrefixes.foreach { prefix =>
>     if (master.startsWith(prefix)) {
>       masterUrl = master.stripPrefix(prefix)
>     }
>   }
>   masterUrl = masterUrl.stripSuffix("/")
>   s"http://$masterUrl/$PROTOCOL_VERSION/submissions"; <--- hardcoded http
> }
> {code}
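> For illustration, a protocol-aware variant would thread the scheme through instead of hardcoding it. This is only a sketch: the `useSsl` flag is hypothetical, and the change that was actually merged for this ticket was docs-only:
> {code:java}
> // Sketch only: same logic as above, but the scheme is selected by a
> // hypothetical useSsl flag instead of being hardcoded to http.
> private def getBaseUrl(master: String, useSsl: Boolean): String = {
>   var masterUrl = master
>   supportedMasterPrefixes.foreach { prefix =>
>     if (master.startsWith(prefix)) {
>       masterUrl = master.stripPrefix(prefix)
>     }
>   }
>   masterUrl = masterUrl.stripSuffix("/")
>   val scheme = if (useSsl) "https" else "http"
>   s"$scheme://$masterUrl/$PROTOCOL_VERSION/submissions"
> }
> {code}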
> Then I tried without the _--deploy-mode cluster_ flag and I got:
> {code:java}
> ./spark-submit \
>   --class org.apache.spark.examples.SparkPi \
>   --master mesos://<server_using_https>:5050 \
>   --supervise \
>   --executor-memory 10G \
>   --total-executor-cores 100 \
>   ../examples/jars/spark-examples_2.11-2.4.0.jar 1000
> {code}
> On the Spark console I get:
> {code:java}
> 2018-12-10 15:01:05 INFO SparkUI:54 - Bound SparkUI to 0.0.0.0, and started at http://_host:4040
> 2018-12-10 15:01:05 INFO SparkContext:54 - Added JAR file:/home/<user>/spark-2.4.0-bin-hadoop2.7/bin/../examples/jars/spark-examples_2.11-2.4.0.jar at spark://_host:35719/jars/spark-examples_2.11-2.4.0.jar with timestamp 1544450465799
> I1210 15:01:05.963078 37943 sched.cpp:232] Version: 1.3.2
> I1210 15:01:05.966814 37911 sched.cpp:336] New master detected at master@53.54.195.251:5050
> I1210 15:01:05.967010 37911 sched.cpp:352] No credentials provided. Attempting to register without authentication
> E1210 15:01:05.967347 37942 process.cpp:2455] Failed to shutdown socket with fd 307, address 53.54.195.251:45206: Transport endpoint is not connected
> E1210 15:01:05.968212 37942 process.cpp:2369] Failed to shutdown socket with fd 307, address 53.54.195.251:45212: Transport endpoint is not connected
> E1210 15:01:05.969405 37942 process.cpp:2455] Failed to shutdown socket with fd 307, address 53.54.195.251:45222: Transport endpoint is not connected
> {code}
> On Mesos I get:  
> {code:java}
> E1210 15:01:06.665076  2633 process.cpp:956] Failed to accept socket: Failed accept: connection error: error:1407609C:SSL routines:SSL23_GET_CLIENT_HELLO:http request
> {code}
> I could not find any documentation on how to connect the two. Do I need to set up some ACLs in java_opts for SSL?
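> A quick way to confirm that the master side expects TLS is a plain handshake probe from the JVM. A sketch only; the host and port are placeholders, and note that a self-signed certificate will still fail verification here even though the failure itself proves TLS is in use:
> {code:java}
> import javax.net.ssl.{SSLSocket, SSLSocketFactory}
>
> object TlsProbe {
>   def main(args: Array[String]): Unit = {
>     val host = "mesos-master.example.com" // placeholder
>     val port = 5050
>     val socket = SSLSocketFactory.getDefault
>       .createSocket(host, port).asInstanceOf[SSLSocket]
>     // Throws an SSLException if the endpoint is not speaking TLS at all;
>     // a certificate validation error still indicates TLS is expected.
>     socket.startHandshake()
>     println(s"TLS handshake OK: ${socket.getSession.getProtocol}")
>     socket.close()
>   }
> }
> {code}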
> OK, after setting these env vars it worked:
> {code:java}
> LIBPROCESS_SSL_VERIFY_CERT=false
> LIBPROCESS_SSL_KEY_FILE=/home/machjor/server_2048.key
> LIBPROCESS_SSL_ENABLED=true
> LIBPROCESS_SSL_CERT_FILE=/home/machjor/server.crt
> {code}
> *Should we update the Spark docs?*



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org
