Hi,

I would like to be able to train and run the model on different
machines.  The reason is that, on my dataset, training takes around
16GB of memory while deployment only needs 8GB.  To save money, it
would be better to use only an 8GB machine in production and start a
16GB one, perhaps weekly, just for training.  Is this possible with
PredictionIO + the Universal Recommender?

I have done some searching and found a related guide here:
https://github.com/actionml/docs.actionml.com/blob/master/pio_load_balancing.md
It copies the whole template directory and then runs pio deploy.  But
in their case an HBase and Elasticsearch cluster is used.  In my case
only a single machine is used, with Elasticsearch and PostgreSQL.
Will this work?  (I am flexible about using PostgreSQL, localfs, or
HBase, but I cannot afford a cluster.)
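To make the idea concrete, here is roughly what I imagine the
copy-then-deploy step would look like on my setup.  All hostnames and
paths below are made-up placeholders, and I have not tested this:

```shell
# Sketch of the copy-then-deploy idea from the load-balancing guide.
# "prod-host" and the directory names are hypothetical.

# On the 16GB training machine: train the model.
cd ~/universal-recommender
pio train

# Copy the whole template directory to the 8GB production machine,
# so it picks up the same engine metadata.
rsync -az ~/universal-recommender/ prod-host:~/universal-recommender/

# On the 8GB production machine: deploy without retraining.
cd ~/universal-recommender
pio deploy --port 8000
```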

Perhaps another solution is to make the 16GB machine a Spark slave:
start it before training begins, have the 8GB machine connect to it,
then run pio train; pio deploy on the 8GB machine, and finally shut
down the 16GB machine.  But I have no idea whether this can work.
And if it can, is there any documentation I can look into?
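The steps above could be scripted roughly like this, run from the 8GB
machine.  The hostnames, the Spark master URL, and the memory setting
are my guesses, not something I have verified:

```shell
# Hypothetical orchestration from the 8GB machine.
# "big-host" is the on-demand 16GB machine, "small-host" runs the
# Spark master; all names here are placeholders.

# 1. Boot the 16GB machine and attach it as a Spark worker.
ssh big-host '$SPARK_HOME/sbin/start-slave.sh spark://small-host:7077'

# 2. Train on the cluster, asking Spark to use the big worker.
pio train -- --master spark://small-host:7077 --executor-memory 12g

# 3. Deploy the freshly trained model on the 8GB machine.
pio deploy

# 4. Shut the expensive worker down again.
ssh big-host '$SPARK_HOME/sbin/stop-slave.sh'
```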

Any other method is welcome!  Zero downtime is preferred but not necessary.

Thanks in advance.


Best Regards,
Brian
