Auto reference counting should already be handled by SparkR. Can you elaborate on which object you mean and how it would be used?
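[Editor's note: for context, a minimal sketch of the public JVM-call API referenced below (sparkR.callJMethod and friends). It assumes a running SparkR session; the ArrayList example is illustrative only:

    library(SparkR)
    sparkR.session()

    # sparkR.newJObject returns an R-side handle ("jobj") to a JVM object.
    # The corresponding entry in the backend's JVMObjectTracker is released
    # when the R handle is garbage collected, which is the auto reference
    # counting referred to above.
    jlist <- sparkR.newJObject("java.util.ArrayList")
    sparkR.callJMethod(jlist, "add", 1L)
    sparkR.callJMethod(jlist, "size")   # returns 1

    # Static methods can be called as well:
    sparkR.callJStatic("java.lang.System", "currentTimeMillis")
]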
________________________________
From: Jeremy Liu <[email protected]>
Sent: Thursday, March 29, 2018 8:23:58 AM
To: Reynold Xin
Cc: Felix Cheung; [email protected]
Subject: Re: [Spark R] Proposal: Exposing RBackend in RRunner

Use case is to cache a reference to the JVM object created by SparkR.

On Wed, Mar 28, 2018 at 12:03 PM Reynold Xin <[email protected]> wrote:

If you need the functionality, I would recommend just copying the code over to your project and using it that way.

On Wed, Mar 28, 2018 at 9:02 AM Felix Cheung <[email protected]> wrote:

I think the difference is that py4j is a public library, whereas the R backend is specific to SparkR.

Can you elaborate on what you need JVMObjectTracker for? We have provided convenient R APIs to call into the JVM: sparkR.callJMethod, for example.

_____________________________
From: Jeremy Liu <[email protected]>
Sent: Tuesday, March 27, 2018 12:20 PM
Subject: Re: [Spark R] Proposal: Exposing RBackend in RRunner
To: <[email protected]>

Spark Dev,

On second thought, the topic below seems more appropriate for spark-dev than spark-users:

Spark Users,

In SparkR, RBackend is created in RRunner.main(). This in particular makes it difficult to control or use the RBackend. For my use case, I am looking to access the JVMObjectTracker that RBackend maintains for SparkR dataframes.

Analogously, pyspark starts a py4j.GatewayServer in PythonRunner.main(). It is then possible to start a ClientServer that has access to the object bindings between Python and Java. Is there something similar for SparkR, or a reasonable way to expose RBackend?

Thanks!

--
-----
Jeremy Liu
[email protected]

--
-----
Jeremy Liu
[email protected]
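[Editor's note: for reference, SparkR's own launcher attaches the R process to an already-running RBackend through an environment variable rather than launching a new JVM. A hedged sketch of reusing that hook from an external R process; the port value is a placeholder, and newer Spark versions may additionally require a SPARKR_BACKEND_AUTH_SECRET to be set:

    # EXISTING_SPARKR_BACKEND_PORT is the variable RRunner itself sets for
    # the R script it launches; 12345 below is an assumed placeholder port.
    Sys.setenv(EXISTING_SPARKR_BACKEND_PORT = "12345")
    library(SparkR)
    sparkR.session()   # connects to the existing backend instead of launching one
]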
