PySpark RDDs are, on the Java side, essentially RDDs of pickled objects, and they are mostly (but not entirely) opaque to the JVM. It is possible, by reaching into some internals, to pass a PySpark DataFrame to a Scala library; a rough sketch of the approach is below. You may or may not find the talk I gave at Spark Summit useful (https://www.youtube.com/watch?v=V6DkTVvy9vk), as well as some of the Python examples in https://github.com/high-performance-spark/high-performance-spark-examples. Good luck! :)
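Very roughly, and assuming a hypothetical Scala object com.example.MyScalaLib with a process(df: DataFrame): DataFrame method already on the driver's classpath (e.g. via --jars), the internals look something like this. Note that _jdf and the Py4J gateway are private APIs and can change between Spark versions:

    from pyspark.sql import DataFrame, SparkSession

    spark = SparkSession.builder.appName("scala-interop-sketch").getOrCreate()
    df = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "value"])

    # df._jdf is the JVM Dataset[Row] backing the Python DataFrame (private API).
    # The Py4J gateway lets us walk JVM packages by attribute access;
    # com.example.MyScalaLib is a placeholder for your own Scala code.
    jvm = spark.sparkContext._jvm
    result_jdf = jvm.com.example.MyScalaLib.process(df._jdf)

    # Wrap the returned JVM DataFrame back into a Python DataFrame
    # (on older Spark versions, pass df.sql_ctx instead of the session).
    result = DataFrame(result_jdf, spark)
    result.show()

This works because DataFrames share one JVM representation regardless of the front-end language, which is exactly why DataFrames are a much easier interop point than pickled RDDs.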
On Wed, Jun 22, 2016 at 7:07 PM, Daniel Imberman <daniel.imber...@gmail.com> wrote:
> Hi All,
>
> I've developed a Spark module in Scala that I would like to add a Python
> port for. I want to be able to allow users to create a PySpark RDD and send
> it to my system. I've been looking into the PySpark source code as well as
> Py4J and was wondering if anything like this has been implemented
> before.
>
> Thank you

--
Cell : 425-233-8271
Twitter: https://twitter.com/holdenkarau