Meethu Mathew created ZEPPELIN-3126:
---------------------------------------
Summary: More than 2 notebooks in R failing with error sparkr
interpreter not responding
Key: ZEPPELIN-3126
URL: https://issues.apache.org/jira/browse/ZEPPELIN-3126
Project: Zeppelin
Issue Type: Bug
Components: r-interpreter
Affects Versions: 0.7.2
Environment: Spark version 1.6.2
Reporter: Meethu Mathew
Priority: Critical
The Spark interpreter is in per-note scoped mode.
Please find the steps below to reproduce the issue:
1. Create a notebook (Note1) and run any R code in a paragraph. I ran the
following code (a commented copy of this snippet appears after the list).
%r
rdf <- data.frame(c(1,2,3,4))
colnames(rdf) <- c("myCol")
sdf <- createDataFrame(sqlContext, rdf)
withColumn(sdf, "newCol", sdf$myCol * 2.0)
2. Create another notebook (Note2) and run any R code in a paragraph. I ran
the same code as above.
Up to this point everything works fine.
3. Create a third notebook (Note3) and run any R code in a paragraph. I ran
the same code. This notebook fails with the error:
org.apache.zeppelin.interpreter.InterpreterException: sparkr is not responding
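For reference, the paragraph body run in Note1, Note2 and Note3 is identical;
the copy below only adds editorial comments mapping the snippet to the
behaviour described above (Spark 1.6.2 SparkR API, with sqlContext provided
by the %r interpreter).
%r
# Build a local R data frame with a single column named "myCol"
rdf <- data.frame(c(1,2,3,4))
colnames(rdf) <- c("myCol")
# Convert it to a SparkR DataFrame (the Spark 1.6.x createDataFrame
# signature takes sqlContext as its first argument)
sdf <- createDataFrame(sqlContext, rdf)
# Derive a new column; this paragraph completes in Note1 and Note2,
# but in Note3 the interpreter fails with "sparkr is not responding"
withColumn(sdf, "newCol", sdf$myCol * 2.0)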
The problem is resolved by restarting the sparkr interpreter, after which
another two notebooks can be executed successfully. But again, on the third
notebook run using the sparkr interpreter, the error is thrown.
Once a notebook throws the error, all further notebooks throw the same
error. Each time we run one of those failed notebooks, a new R shell process
is started, and these processes are not killed even if we delete the failed
notebook, i.e., the interpreter does not reuse the original R shell after a
failure.