Hi,
I’m currently experimenting with models in Metron and came across this link:
https://hortonworks.com/blog/model-service-modern-streaming-data-science-apache-metron/
The linked presentation shows how to use a Python/REST service to deploy models
that are invoked internally via Stellar. I looked around for more
documentation/examples but couldn’t really find anything. I was wondering:


  1.  Is it possible to use a Spark context from a model, e.g. via PySpark or
something similar?
  2.  One of the “future” points mentioned in the blog is “Automatic
construction of REST endpoints for models that conform to certain
specifications (e.g., Spark-ML models, PMML, sci-kit learn exported pickle
files)”. Is this being actively worked on, or is it something that will be
available in the near future?
  3.  If not, is there a standard pattern for deploying a Spark Streaming
model to Metron?
  4.  Lastly, is there a way to expose the model service URL to external
clients (e.g. to other applications outside Metron)? A use case would be
calling the service to get a score, pretty much like the enrichment bolt does
(see the sketch after this list for the kind of call I mean).
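
For context, here is roughly what I have pieced together so far from the MaaS
documentation. The endpoint path, field names, port, and scoring logic below
are just placeholders from my own experiments, not anything official. The
model itself is a small Flask REST service that MaaS deploys and whose
host/port, as I understand it, gets registered in ZooKeeper:

  from flask import Flask, request, jsonify

  app = Flask(__name__)

  @app.route("/apply", methods=["GET"])
  def apply_model():
      # 'host' is just the field name I pass in from Stellar (placeholder)
      host = request.args.get("host", "")
      # dummy scoring logic, stands in for the real model
      verdict = "malicious" if host.endswith(".xyz") else "legit"
      return jsonify({"is_malicious": verdict})

  if __name__ == "__main__":
      # fixed port only for standalone testing; MaaS records the actual
      # endpoint when it starts the service
      app.run(host="0.0.0.0", port=8080)

On the Stellar side I have been invoking it from the enrichment config with
something like
MAAS_MODEL_APPLY(MAAS_GET_ENDPOINT('dga'), {'host' : domain_without_subdomains}),
which is why I was hoping the URL behind MAAS_GET_ENDPOINT could also be
reached by clients outside Metron.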

Any feedback/code samples will be greatly appreciated.

Best regards,
Sanket

