What you have outlined is exactly how PIO is designed to work. The EventServer 
is multi-tenant: separate datasets and entities can be stored in it, and each 
app can even be granted different permissions. The PredictionServer, however, 
serves one engine per process on a single port, so scaling is heavyweight: one 
PredictionServer process per engine, each on its own port.
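
As a rough sketch of that workflow (the app names, app IDs, event file names, 
and ports below are placeholders, and the exact engine.json parameter depends 
on the template you use), it would look something like this for two datasets 
D1 and D2:

  # one app per dataset; the EventServer stores them side by side
  pio app new AppD1
  pio app new AppD2

  # import each dataset into its own app (use the app IDs printed above)
  pio import --appid <appid-of-AppD1> --input d1_events.json
  pio import --appid <appid-of-AppD2> --input d2_events.json

  # one copy of the engine per model, each pointing at its own app
  cd engine-d1      # engine.json has "appName": "AppD1"
  pio build && pio train && pio deploy --port 8000

  cd ../engine-d2   # engine.json has "appName": "AppD2"
  pio build && pio train && pio deploy --port 8001

Each pio deploy then answers queries for its own model on its own port, which 
is the heavyweight scaling described above.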

What template are you using, and how many datasets do you have?


On May 25, 2017, at 2:40 AM, Ravi Kiran <[email protected]> wrote:

I have the template code ready. Let's say I have multiple datasets D1, D2, ..., Dn.
How do I train models M1, M2, ..., Mn, one per dataset Di, and deploy each of 
these models in one engine? (I am assuming that running multiple engines is not 
an efficient way.)

The only solution I have found so far is to create separate apps and add one 
dataset to each app, then train and deploy an individual model on a different 
port for each. But that won't be scalable, since the number of models is limited 
by the number of ports.

Can someone suggest a better solution?
 
