When we set

spark.network.crypto.enabled true

to enable AES-based encryption, we see RPC timeout errors sporadically.
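For reference, the pair of properties involved looks like this in spark-defaults.conf form (a sketch; per the Spark security docs, the network-crypto setting only takes effect when authentication is also enabled):

```
spark.authenticate              true
spark.network.crypto.enabled    true
```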
Kind Regards,
Breeta
-Original Message-
From: Marcelo Vanzin
Sent: Tuesday, March 26, 2019 9:10 PM
To: Sinha, Breeta (Nokia - IN/Bangalore)
Cc: user@spark.apache.o
I don't think "spark.authenticate" works properly with k8s in 2.4
(which would make it impossible to enable encryption since it requires
authentication). I'm pretty sure I fixed it in master, though.
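For completeness, on cluster managers where it does work, authentication is enabled with a shared secret (a sketch; the secret value is a placeholder, and on YARN Spark generates a secret automatically instead of using this property):

```
spark.authenticate          true
spark.authenticate.secret   my-shared-secret
```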
On Tue, Mar 26, 2019 at 2:29 AM Sinha, Breeta (Nokia - IN/Bangalore)
wrote:
Hi All,
We are trying to enable RPC encryption between the driver and executors. Currently
we're working with Spark 2.4 on Kubernetes.
According to the Apache Spark security documentation
(https://spark.apache.org/docs/latest/security.html) and our understanding of it,
Spark supports
Sent from the Apache Spark User List mailing list archive at Nabble.com.
Can you please paste the stack trace?
Sudhanshu
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/Help-Get-Timeout-error-and-FileNotFoundException-when-shuffling-large-files-tp25662p25675.html
I am facing the same issue; do you have any solution?
On Mon, Apr 27, 2015 at 9:43 PM, Deepak Gopalakrishnan dgk...@gmail.com
wrote:
Hello All,
I dug a little deeper and found this error :
15/04/27 16:05:39 WARN TransportChannelHandler: Exception in connection from
/10.1.0.90:40590
java.io.IOException: Connection reset by peer
at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
at
Hello All,
I'm trying to process a 3.5GB file in standalone mode using Spark. I could
run my Spark job successfully on a 100MB file and it works as expected. But
when I try to run it on the 3.5GB file, I run into the error below:
15/04/26 12:45:50 INFO BlockManagerMaster: Updated info of block
I'm not sure what the expected performance should be for this amount of
data, but you could try to increase the timeout with the property
spark.akka.timeout to see if that helps.
Bryan
On Sun, Apr 26, 2015 at 6:57 AM, Deepak Gopalakrishnan dgk...@gmail.com
wrote:
Hello,
Just to add a bit more context :
I have done that in the code, but I cannot see it change from 30 seconds in
the log.
.set("spark.executor.memory", "10g")
.set("spark.driver.memory", "20g")
.set("spark.akka.timeout", "6000")
PS : I understand that
The configuration key should be spark.akka.askTimeout for this timeout.
The time unit is seconds.
Best Regards,
Shixiong(Ryan) Zhu
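In spark-defaults.conf terms, the corrected setting would be (a sketch using the key Shixiong names; the value is in seconds):

```
spark.akka.askTimeout   6000
```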