Hey,

Our ML ETL pipeline has several complex steps that I'd like to implement as
custom Transformers in an ML Pipeline.  Looking at the Tokenizer and HashingTF
transformers, I see some handy shared-param traits (HasInputCol, HasLabelCol,
HasOutputCol, etc.), but they have strict access modifiers.  How can I use
these traits in custom Transformer/Estimator implementations?
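
For concreteness, this is the sort of thing I'd like to write (a minimal
sketch assuming the Spark 2.x Dataset-based Transformer API; the class name
and the transform logic are just placeholders).  As far as I can tell it only
compiles because the file is declared inside org.apache.spark.ml, which makes
the private[ml] shared-param traits visible:

// Sits inside org.apache.spark.ml so HasInputCol/HasOutputCol are visible.
package org.apache.spark.ml

import org.apache.spark.ml.param.ParamMap
import org.apache.spark.ml.param.shared.{HasInputCol, HasOutputCol}
import org.apache.spark.ml.util.Identifiable
import org.apache.spark.sql.{DataFrame, Dataset}
import org.apache.spark.sql.types.StructType

class MyCleaningTransformer(override val uid: String)
    extends Transformer with HasInputCol with HasOutputCol {

  def this() = this(Identifiable.randomUID("myCleaning"))

  def setInputCol(value: String): this.type = set(inputCol, value)
  def setOutputCol(value: String): this.type = set(outputCol, value)

  // Placeholder logic: just copy the input column to the output column.
  override def transform(dataset: Dataset[_]): DataFrame =
    dataset.withColumn($(outputCol), dataset($(inputCol)))

  override def transformSchema(schema: StructType): StructType =
    schema.add($(outputCol), schema($(inputCol)).dataType)

  override def copy(extra: ParamMap): MyCleaningTransformer = defaultCopy(extra)
}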

So for now I'm stuck putting my implementations inside org.apache.spark.ml,
which is tolerable, but I'm wondering whether I'm missing some pattern.
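
The only alternative I've come up with is to re-declare equivalent params by
hand in my own package, roughly like this (again just a sketch; com.example.ml
and the names are made up):

// Hypothetical package outside org.apache.spark.ml; the shared traits
// aren't visible here, so equivalent params are declared by hand.
package com.example.ml

import org.apache.spark.ml.Transformer
import org.apache.spark.ml.param.{Param, ParamMap}
import org.apache.spark.ml.util.Identifiable
import org.apache.spark.sql.{DataFrame, Dataset}
import org.apache.spark.sql.types.StructType

class MyCleaningTransformer(override val uid: String) extends Transformer {

  def this() = this(Identifiable.randomUID("myCleaning"))

  // Hand-rolled stand-ins for HasInputCol / HasOutputCol.
  final val inputCol = new Param[String](this, "inputCol", "input column name")
  final val outputCol = new Param[String](this, "outputCol", "output column name")

  def setInputCol(value: String): this.type = set(inputCol, value)
  def setOutputCol(value: String): this.type = set(outputCol, value)

  // Same placeholder logic as above.
  override def transform(dataset: Dataset[_]): DataFrame =
    dataset.withColumn($(outputCol), dataset($(inputCol)))

  override def transformSchema(schema: StructType): StructType =
    schema.add($(outputCol), schema($(inputCol)).dataType)

  override def copy(extra: ParamMap): MyCleaningTransformer = defaultCopy(extra)
}

Is duplicating the params like this the intended approach for third-party
Transformers, or is there a cleaner way?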

Thanks,
mn