Hi Sean,

On Fri, Jul 22, 2016 at 12:52 PM, Sean Owen <so...@cloudera.com> wrote:
>
> If you mean, how do you distribute a new model in your application,
> then there's no magic to it. Just reference the new model in the
> functions you're executing in your driver.
>
> If you implemented some other manual way of deploying model info, just
> do that again. There's no special thing to know.
>
Well, because some models are huge, we typically package the logic (pipeline/application) and the models separately. Normally we use a shared store (e.g., HDFS) or coordinate the distribution of the models ourselves. But I wanted to know whether there is any infrastructure in Spark that specifically addresses this need.

Thanks.

Cheers,

P.S.: Sorry, Jacek, with "ml" I meant "Machine Learning". I thought it was a fairly widespread acronym. Sorry for any confusion.

--
Sergio Fernández
Partner Technology Manager
Redlink GmbH
m: +43 6602747925
e: sergio.fernan...@redlink.co
w: http://redlink.co