Hi

I am running Spark 1.6.3 along with Spark Jobserver, and I have noticed some
interesting behavior in HttpFileServer.

When I destroy and recreate a SparkContext, HttpFileServer doesn't release
its port. If I don't specify spark.fileserver.port, the next HttpFileServer
binds to a new random port (as expected). However, if I want HttpFileServer
to use a well-known port for firewall purposes, the next HttpFileServer
tries to bind to that port, fails, and then tries port+1, port+2, and so on,
until it either finds an open port or exceeds the maximum number of retries.
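The retry behavior described above can be illustrated with a minimal Python sketch of a bind-and-retry loop of the kind Spark uses (Spark caps the attempts with spark.port.maxRetries, default 16). The helper name bind_with_retry is mine, not a Spark API:

```python
import socket

def bind_with_retry(start_port, max_retries=16):
    """Try to bind start_port; on failure, try start_port+1, +2, ...
    up to max_retries additional ports, mimicking Spark's port retry."""
    for offset in range(max_retries + 1):
        port = start_port + offset
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        try:
            sock.bind(("127.0.0.1", port))
            return sock, port  # caller owns the bound socket
        except OSError:
            sock.close()  # port in use (e.g. held by the old server)
    raise OSError("could not bind after %d retries" % max_retries)
```

If the old HttpFileServer still holds the well-known port, this loop succeeds only on a neighboring port, which is exactly why the recreated context ends up on port+1 instead of failing cleanly or reusing the configured port.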

I feel HttpFileServer should be shut down when its SparkContext is
destroyed. Is this a bug in Spark or in SJS?
