I should point out that if you don't want to take a polyglot approach and
instead stay entirely on the JVM, you can simply use plain old Java
serialization on the model objects that come out of MLlib's APIs from
Java or Scala, load them up in another process, and call the relevant
.predict() method when it comes time to serve. The same approach would
probably also work for models trained via MLlib's Python API, but I
haven't tried that.
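As a minimal sketch of that round trip, here is plain Java serialization of a model object to disk and deserialization in a "serving" process. `ToyModel` is a hypothetical stand-in for an MLlib model class (e.g. a linear model holding a weight and an intercept), not a real MLlib type; the real model objects returned by MLlib's training APIs would be written and read the same way:

```java
import java.io.*;

// Hypothetical stand-in for an MLlib model object. Real MLlib model
// classes are Serializable, which is what makes this approach work.
class ToyModel implements Serializable {
    private static final long serialVersionUID = 1L;
    private final double weight;
    private final double intercept;

    ToyModel(double weight, double intercept) {
        this.weight = weight;
        this.intercept = intercept;
    }

    // Mimics the .predict() method on MLlib model objects.
    double predict(double feature) {
        return weight * feature + intercept;
    }
}

public class SerializeModel {
    public static void main(String[] args) throws Exception {
        File file = new File("model.bin");

        // "Training" process: serialize the trained model to disk.
        try (ObjectOutputStream out =
                 new ObjectOutputStream(new FileOutputStream(file))) {
            out.writeObject(new ToyModel(2.0, 1.0));
        }

        // "Serving" process: deserialize the model and call predict().
        ToyModel model;
        try (ObjectInputStream in =
                 new ObjectInputStream(new FileInputStream(file))) {
            model = (ToyModel) in.readObject();
        }
        System.out.println(model.predict(3.0)); // prints 7.0
    }
}
```

The usual Java serialization caveat applies: both processes need compatible versions of the model class on the classpath, which is part of why a portable format like PMML is attractive.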

Native PMML serialization would be a nice feature to add to MLlib as a
mechanism to transfer models to other environments for further
analysis/serving. There's a JIRA discussion about this here:
https://issues.apache.org/jira/browse/SPARK-1406
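For context, PMML is just an XML vocabulary, so an exported model is a self-describing document any PMML-aware scorer can evaluate. A minimal hand-written sketch of what a simple linear model (y = 2.0 * x1 + 1.0) might look like in PMML 4.2 is below; the field names and coefficients are illustrative, not output from any actual exporter:

```xml
<PMML version="4.2" xmlns="http://www.dmg.org/PMML-4_2">
  <Header description="Illustrative linear regression model"/>
  <DataDictionary numberOfFields="2">
    <DataField name="x1" optype="continuous" dataType="double"/>
    <DataField name="y" optype="continuous" dataType="double"/>
  </DataDictionary>
  <RegressionModel modelName="example" functionName="regression">
    <MiningSchema>
      <MiningField name="x1"/>
      <MiningField name="y" usageType="target"/>
    </MiningSchema>
    <RegressionTable intercept="1.0">
      <NumericPredictor name="x1" coefficient="2.0"/>
    </RegressionTable>
  </RegressionModel>
</PMML>
```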


On Tue, Jun 10, 2014 at 10:53 AM, filipus <floe...@gmail.com> wrote:

> Thank you very much.
>
> I hadn't come across the Cascading project at all until now; it's very
> interesting.
>
> I also see the appeal of Scala as a language for Spark, because (if I
> understand it right) JVM-based libraries can be integrated very
> easily/naturally.
>
> Hmm... but I could also use Spark as the model engine, Augustus as the
> serializer, and a third-party product like JPMML as the prediction
> engine.
>
> Hmm... I get the feeling I'll need to do Java, Scala, and Python at the
> same time...
>
> First things first -> Augustus for PMML output from Spark :-)
>
>
> --
> View this message in context:
> http://apache-spark-user-list.1001560.n3.nabble.com/pmml-with-augustus-tp7313p7335.html
> Sent from the Apache Spark User List mailing list archive at Nabble.com.
>
