Thanks Steve!
I will study the links you mentioned!
--
Now, if we use saveAsNewAPIHadoopDataset with speculation enabled, it may cause
data loss.
I checked the comment of this API:
 * We should make sure our tasks are idempotent when speculation is enabled, i.e. do
 * not use output committer that writes data directly.
 * There is an example in
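To make the risk concrete, here is a minimal sketch of such a write (the app name, output path, and the tiny example RDD are made up for illustration). It goes through TextOutputFormat, whose FileOutputCommitter writes each task attempt to a temporary directory and only moves the committed attempt into place, so duplicate speculative attempts stay safe; a committer that writes directly to the final path would not be idempotent:

import org.apache.hadoop.fs.Path
import org.apache.hadoop.io.{NullWritable, Text}
import org.apache.hadoop.mapreduce.Job
import org.apache.hadoop.mapreduce.lib.output.{FileOutputFormat, TextOutputFormat}
import org.apache.spark.{SparkConf, SparkContext}

object SpeculativeWriteSketch {
  def main(args: Array[String]): Unit = {
    // Speculation re-launches slow tasks, so the same partition can be
    // written by two task attempts at the same time.
    val conf = new SparkConf()
      .setAppName("speculative-write-sketch")
      .set("spark.speculation", "true")
    val sc = new SparkContext(conf)

    val pairs = sc.parallelize(Seq("a", "b", "c"))
      .map(v => (NullWritable.get(), new Text(v)))

    // TextOutputFormat uses a file output committer, which keeps duplicate
    // speculative attempts from clobbering the final output.
    val job = Job.getInstance(sc.hadoopConfiguration)
    job.setOutputKeyClass(classOf[NullWritable])
    job.setOutputValueClass(classOf[Text])
    job.setOutputFormatClass(classOf[TextOutputFormat[NullWritable, Text]])
    FileOutputFormat.setOutputPath(job, new Path("/tmp/speculative-write-sketch"))

    pairs.saveAsNewAPIHadoopDataset(job.getConfiguration)
    sc.stop()
  }
}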
--
Yes, that's the actual process.
And I think the client is initialized when the executor's NettyRpcEndpointRef is
deserialized, since NettyRpcEndpointRef#readObject is called.
Am I right?
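As a rough illustration of that mechanism (this is a simplified, hypothetical class, not the actual NettyRpcEndpointRef source), a transient client field can be rebuilt inside a readObject hook when the reference is deserialized on the receiving side:

import java.io.{IOException, ObjectInputStream}

// Hypothetical sketch: the real Spark class has more state and obtains the
// client from the active RPC environment; createClient() is just a placeholder.
class EndpointRefDeserSketch(val name: String) extends Serializable {
  @transient private var client: AnyRef = _   // not serialized; rebuilt on read

  @throws(classOf[IOException])
  private def readObject(in: ObjectInputStream): Unit = {
    in.defaultReadObject()     // restore the serializable fields (name)
    client = createClient()    // re-initialize the transport client on this side
  }

  private def createClient(): AnyRef = new Object  // placeholder client
}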
--
After looking into the code of branch-2.1, I found that the driver first handles
executor connections through NettyRpcHandler.
Once a connection arrives, the driver's NettyRpcHandler#receive is called, in which
NettyRpcHandler#internalReceive will be called.
And then NettyRpcEnv#deserialize will be called, which will
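To summarize that chain, here is a heavily condensed sketch (the class and method bodies are invented for illustration; this is not the branch-2.1 source):

import java.nio.ByteBuffer

// Illustrative only: the driver-side handler receives bytes from an executor
// connection, builds a request via internalReceive, and the RPC environment
// performs the actual deserialization of the payload.
class RpcHandlerSketch(env: RpcEnvSketch) {
  def receive(client: AnyRef, message: ByteBuffer): Unit = {
    val request = internalReceive(client, message)
    // ... hand the request to the dispatcher for the target endpoint ...
  }

  private def internalReceive(client: AnyRef, message: ByteBuffer): AnyRef =
    env.deserialize(client, message)
}

class RpcEnvSketch {
  def deserialize(client: AnyRef, bytes: ByteBuffer): AnyRef = {
    // In the real code, Java deserialization of the payload happens here, which
    // is why readObject on any embedded endpoint refs runs at this point.
    new Object  // placeholder for the deserialized message
  }
}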
--
Yes, I looked into branch 1.6. I will check out branch 2.1.
--
The executor will start an actor system for remoting after the RpcEnv has been
created. You can refer to SparkEnv. As for why it does not start a Netty server:
I think the driver must handle all of the executors' connections, but the executor
does not need to, so an actor system alone is enough?
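As a rough sketch of that asymmetry (the names and port below are invented, not the real SparkEnv code): the driver binds a listening port because every executor connects to it, while an executor only needs outbound connections, so its RPC environment can run in client mode without starting a server.

case class RpcEnvSettingsSketch(name: String, port: Int, clientMode: Boolean)

object RpcEnvSetupSketch {
  // Illustrative only: which side needs to accept inbound connections.
  def settingsFor(isDriver: Boolean): RpcEnvSettingsSketch =
    if (isDriver)
      RpcEnvSettingsSketch("sparkDriver", port = 7078, clientMode = false)  // binds a listening port
    else
      RpcEnvSettingsSketch("sparkExecutor", port = 0, clientMode = true)    // no server is started

  def main(args: Array[String]): Unit = {
    println(settingsFor(isDriver = true))
    println(settingsFor(isDriver = false))
  }
}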
--
In NettyRpcEndpointRef#toString, if the RpcAddress is null it will print null, like
what you see. And if the RPC endpoint is not the driver, it will not call
startServer, which is what initializes the RpcAddress.
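For completeness, a minimal illustration (not the actual NettyRpcEndpointRef source) of why the printed value is literally "null": interpolating a null address field simply renders the string "null".

object ToStringSketch {
  // Stand-in for a ref whose address was never initialized because startServer
  // (which would have assigned the RpcAddress) was never called.
  class EndpointRefSketch(val address: AnyRef) {
    override def toString: String = s"NettyRpcEndpointRef($address)"
  }

  def main(args: Array[String]): Unit = {
    println(new EndpointRefSketch(null))   // prints: NettyRpcEndpointRef(null)
  }
}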