[ https://issues.apache.org/jira/browse/LIVY-505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16803315#comment-16803315 ]
shanyu zhao commented on LIVY-505:
----------------------------------

Root cause: SparkR uses the SparkR environment variable ".sparkRjsc" as a singleton for the existing Spark context. However, in Livy's SparkRInterpreter.scala, we create the Spark context and assign it to the SparkR environment variable ".sc":

{code:java}
sendRequest("""assign(".sc", SparkR:::callJStatic("org.apache.livy.repl.SparkRInterpreter", "getSparkContext"), envir = SparkR:::.sparkREnv)""")
{code}

This means that if we execute sparkR.session() in a SparkR notebook, SparkR overwrites the SparkR environment variable ".scStartTime" (which was set by Livy's SparkRInterpreter.scala) and renders previously created jobjs invalid, because each jobj$appId no longer matches the new ".scStartTime". Please see the fix in LIVY-505.patch.

> sparkR.session failed with "invalid jobj 1" error in Spark 2.3
> --------------------------------------------------------------
>
>                 Key: LIVY-505
>                 URL: https://issues.apache.org/jira/browse/LIVY-505
>             Project: Livy
>          Issue Type: Bug
>      Components: Interpreter
> Affects Versions: 0.5.0, 0.5.1
>            Reporter: shanyu zhao
>            Priority: Major
>         Attachments: LIVY-505.patch
>
>
> In a Spark 2.3 cluster, use Zeppelin with the livy2 interpreter and type:
> {code:java}
> %sparkr
> sparkR.session(){code}
> You will see the error:
> [1] "Error in writeJobj(con, object): invalid jobj 1"
> In a successful case with older Livy and Spark versions, we see something
> like this:
> Java ref type org.apache.spark.sql.SparkSession id 1
> This indicates that the isValidJobj() function in the Spark code returned false for
> the SparkSession jobj. This is the isValidJobj() function in the Spark 2.3 code, FYI:
> {code:java}
> isValidJobj <- function(jobj) {
>   if (exists(".scStartTime", envir = .sparkREnv)) {
>     jobj$appId == get(".scStartTime", envir = .sparkREnv)
>   } else {
>     FALSE
>   }
> }{code}

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
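The staleness mechanism described in the comment above can be modeled with a short sketch. This is Python for illustration only, not Spark code: SessionEnv stands in for SparkR's .sparkREnv, and JObj for a SparkR jobj whose appId field records the session start stamp at creation time, as isValidJobj() expects.

```python
import itertools

# Monotonically increasing "start time" stamps (stand-in for .scStartTime values).
_clock = itertools.count(1)


class SessionEnv:
    """Toy model of SparkR's .sparkREnv: holds the current session start stamp."""

    def __init__(self):
        self.sc_start_time = None

    def start_session(self):
        # Mirrors sparkR.session() writing a fresh .scStartTime into .sparkREnv.
        self.sc_start_time = next(_clock)


class JObj:
    """Toy model of a jobj: remembers the session stamp it was created under."""

    def __init__(self, env):
        # Mirrors jobj$appId being taken from .scStartTime at creation.
        self.app_id = env.sc_start_time

    def is_valid(self, env):
        # Mirrors isValidJobj(): valid only while the stamps still match.
        return env.sc_start_time is not None and self.app_id == env.sc_start_time


env = SessionEnv()
env.start_session()           # Livy creates the context and stamps the env
obj = JObj(env)               # jobj created under the first session
assert obj.is_valid(env)      # valid: stamps match

env.start_session()           # user calls sparkR.session(), restamping the env
assert not obj.is_valid(env)  # "invalid jobj": app_id != new start stamp
```

The second start_session() call plays the role of the user's sparkR.session() overwriting ".scStartTime", which is exactly what invalidates the SparkSession jobj Livy created earlier.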