We do deployments and customize things for users. When we deploy PredictionIO
we typically have one machine dedicated to the permanent PIO servers. It runs
the PredictionServer (started with `pio deploy`) and the EventServer (started
with `pio eventserver`). These services communicate with
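A minimal sketch of starting the two long-running services described above; the port numbers are assumptions, and `pio deploy` must be run from the deployed engine's directory:

```shell
# Sketch, assuming typical ports; run on the dedicated PIO servers machine.

# Start the EventServer (long-running, receives incoming events):
pio eventserver --port 7070 &

# From the engine's directory, start the PredictionServer for a trained engine:
pio deploy --port 8000
```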
Dear Pat,
Thanks for the detailed guide. It is nice to know it is possible.
But I am not sure if I understand it correctly, so could you please
point out any misunderstanding in the following? (If there is any)
Let's say I have 3 machines.
There is a machine (EventServer and data store)
Yes, this is the recommended config (Postgres is not, but later). Spark is only
needed during training, but the `pio train` process creates a driver and executors
in Spark. The driver runs on the `pio train` machine, so you must install PIO on
it. You should have at least 2 Spark machines because
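A hedged sketch of splitting training and serving across machines, as discussed in this thread. The memory sizes and port are assumptions; `pio train` passes arguments after `--` through to spark-submit:

```shell
# Sketch: train on the big machine, serve on the small one (sizes assumed).

# On the 16GB training machine (this machine becomes the Spark driver):
pio train -- --driver-memory 12g --executor-memory 12g

# Afterwards, on the 8GB serving machine, from the engine's directory:
pio deploy --port 8000
```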
Hi,
I would like to be able to train and run the model on different machines.
The reason is, on my dataset, training takes around 16GB of memory and
deploying only needs 8GB. To save money, it would be better
if only an 8GB machine were used in production, and only start a
16GB one