Github user Maple-Wang commented on the issue:

    https://github.com/apache/spark/pull/20027
  
    When I use SparkR in the R shell, the master parameter seems outdated; I cannot run the 
equivalent of "spark-submit --master yarn --deploy-mode client". I have installed R on all nodes.
    When I start the session like this:
    if (nchar(Sys.getenv("SPARK_HOME")) < 1) {
      Sys.setenv(SPARK_HOME = "/usr/hdp/2.6.1.0-129/spark2")
    }
    library(SparkR, lib.loc = c(file.path(Sys.getenv("SPARK_HOME"), "R", "lib")))
    sparkR.session(master = "yarn", sparkConfig = list(spark.driver.memory = "10g"))
    it fails with:
    org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 1.0 failed 4 times, most recent failure: Lost task 0.3 in stage 1.0 (TID 4, node23.nuctech.com, executor 1): java.net.SocketTimeoutException: Accept timed out
    at java.net.PlainSocketImpl.socketAccept(Native Method)
    at java.net.AbstractPlainSocketImpl.accept(AbstractPlainSocketImpl.java:409)
    at java.net.ServerSocket.implAccept(ServerSocket.java:545)
    at java.net.ServerSocket.accept(ServerSocket.java:513)
    at org.apache.spark.api.r.RRunner$.createRWorker(RRunner.scala:372)
    at org.apache.spark.api.r.RRunner.compute(RRunner.scala:69)
    at org.apache.spark.api.r.BaseRRDD.compute(RRDD.scala:51)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
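    Not a definitive fix, but a minimal sketch of a workaround, assuming the root cause is that 
the executors cannot launch the R worker process at all (which is what makes the accept() in 
RRunner$.createRWorker time out). Spark's spark.r.command setting chooses the R executable used 
in cluster modes, so pointing it at the actual Rscript path on the worker nodes may help. The 
/usr/bin/Rscript path below is an assumption; verify it on each node first:

    # Assumed Rscript location on the YARN worker nodes -- check with `which Rscript` on each node.
    sparkR.session(
      master = "yarn",
      sparkConfig = list(
        spark.driver.memory = "10g",
        # spark.r.command: R executable used by executors in cluster modes (defaults to Rscript)
        spark.r.command = "/usr/bin/Rscript"
      )
    )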

