Re: Spark on K8s resource staging server timeout

2018-03-29 Thread Jenna Hoole
From: Jenna Hoole <.ho...@gmail.com>
Date: Thursday, March 29, 2018 at 10:37 AM
To: "user@spark.apache.org" <user@spark.apache.org>
Subject: Re: Spark on K8s resource staging server timeout

I added overkill high timeouts to the OkHttpClient.Builder() in Retrofit

Re: Spark on K8s resource staging server timeout

2018-03-29 Thread Matt Cheah
Date: Thursday, March 29, 2018 at 10:37 AM
To: "user@spark.apache.org" <user@spark.apache.org>
Subject: Re: Spark on K8s resource staging server timeout

I added overkill high timeouts to the OkHttpClient.Builder() in RetrofitClientFactory.scala and I don't seem to be timing

Re: Spark on K8s resource staging server timeout

2018-03-29 Thread Jenna Hoole
I added overkill high timeouts to the OkHttpClient.Builder() in RetrofitClientFactory.scala and I don't seem to be timing out anymore.

    val okHttpClientBuilder = new OkHttpClient.Builder()
      .dispatcher(dispatcher)
      .proxy(resolvedProxy)
      .connectTimeout(120, TimeUnit.SECONDS)
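A fuller version of that builder change might look like the sketch below. It uses the standard OkHttp `OkHttpClient.Builder` timeout methods (`connectTimeout`, `readTimeout`, `writeTimeout`); the `Dispatcher` and no-proxy values are stand-ins for whatever RetrofitClientFactory.scala already has in scope, and the 120-second values are the "overkill" timeouts from the post, not tuned recommendations.

```scala
import java.net.Proxy
import java.util.concurrent.TimeUnit
import okhttp3.{Dispatcher, OkHttpClient}

// Stand-ins for the dispatcher and proxy the factory would normally supply.
val dispatcher = new Dispatcher()
val resolvedProxy = Proxy.NO_PROXY

// Raise all three OkHttp timeouts so long uploads to the resource staging
// server are not cut off by the default limits.
val okHttpClient = new OkHttpClient.Builder()
  .dispatcher(dispatcher)
  .proxy(resolvedProxy)
  .connectTimeout(120, TimeUnit.SECONDS) // time allowed to establish the connection
  .readTimeout(120, TimeUnit.SECONDS)    // max idle time between bytes read
  .writeTimeout(120, TimeUnit.SECONDS)   // max idle time between bytes written
  .build()
```

Raising `readTimeout` and `writeTimeout` alongside `connectTimeout` matters here because staging large application resources is a long transfer, not a slow handshake.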

Spark on K8s resource staging server timeout

2018-03-27 Thread Jenna Hoole
So I'm running into an issue with my resource staging server that's producing a stacktrace like Issue 342, but I don't think for the same reasons. What's happening is that every time after I start up a resource staging server, the first job