Hi, I've posted the same problem on Stack Overflow and can't seem to
find answers.

I have custom Spark PipelineStages written in Scala that are specific to my
organization. They work well in Scala Spark.

However, when I try to wrap them as shown here so I can use them from
PySpark, I run into odd behavior, mostly around the constructors of the
Java objects.

Please refer to the Stack Overflow question
<https://stackoverflow.com/questions/63439162/referencing-a-scala-java-pipelinestage-from-pyspark-constructor-issues-with-ha>;
it's the most thoroughly documented write-up of the issue.

Thanks, any help is appreciated.

-- 
*Aviad Klein*
Director of Data Science
