[ https://issues.apache.org/jira/browse/SPARK-6840?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Shivaram Venkataraman updated SPARK-6840:
-----------------------------------------
    Target Version/s:   (was: 1.6.0)

> SparkR: private package functions unavailable when using lapplyPartition in package
> -----------------------------------------------------------------------------------
>
>                 Key: SPARK-6840
>                 URL: https://issues.apache.org/jira/browse/SPARK-6840
>             Project: Spark
>          Issue Type: Bug
>          Components: SparkR
>    Affects Versions: 1.4.0
>            Reporter: Shivaram Venkataraman
>
> I am developing a package that imports SparkR. A function in that package 
> calls lapplyPartition with a function argument that uses, in its body, some 
> functions private to the package. When run, the computation fails because R 
> cannot find the private function (details below). If I fully qualify the 
> calls as otherpackage:::private.function, the error just moves to the next 
> private function. This used to work some time ago (I have been working on 
> other things for a while), and it should work under regular R scoping 
> rules. I apologize that I don't have a minimal test case ready: this was 
> discovered while developing plyrmr, and its list of dependencies is long 
> enough that installing it would be a burden. I can put together a toy 
> package to demonstrate the problem, if that helps.
> Error in FUN(part) : could not find function "keys.spark"
> Calls: source ... eval -> eval -> computeFunc -> <Anonymous> -> FUN -> FUN
> Execution halted
> 15/03/19 12:29:16 ERROR Executor: Exception in task 0.0 in stage 0.0 (TID 0)
> org.apache.spark.SparkException: R computation failed with
>  Error in FUN(part) : could not find function "keys.spark"
> Calls: source ... eval -> eval -> computeFunc -> <Anonymous> -> FUN -> FUN
> Execution halted
>       at edu.berkeley.cs.amplab.sparkr.BaseRRDD.compute(RRDD.scala:80)
>       at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
>       at org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
>       at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:62)
>       at org.apache.spark.scheduler.Task.run(Task.scala:54)
>       at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:177)
>       at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>       at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>       at java.lang.Thread.run(Thread.java:745)
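
A minimal sketch of the kind of toy reproduction described above, assuming
the historical AMPLab SparkR RDD API (sparkR.init, parallelize,
lapplyPartition, collect). The names keys.toy and get.keys are hypothetical
stand-ins, not from the report; in the real failure the helper is a
non-exported function of the importing package, which a top-level script can
only approximate:

    library(SparkR)   # the AMPLab-era SparkR package assumed here

    sc <- sparkR.init(master = "local")

    # Stand-in for keys.spark: in the real case this is a function private
    # to the importing package (not exported via its NAMESPACE); defining
    # it at the top level here only mimics that structure.
    keys.toy <- function(part) lapply(part, function(x) x$key)

    # Package-style function that hands a closure to lapplyPartition. When
    # keys.toy lives in a package namespace rather than the global
    # environment, the closure deserialized on the worker fails to resolve
    # it and the job dies with:
    #   Error in FUN(part) : could not find function "keys.toy"
    get.keys <- function(rdd) {
      lapplyPartition(rdd, function(part) keys.toy(part))
    }

    rdd <- parallelize(sc, list(list(key = 1L, val = "a"),
                                list(key = 2L, val = "b")))
    collect(get.keys(rdd))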



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
