There are several things to consider here. One is that the next time you train,
the metadata will be re-written from engine.json. This used to happen on
`pio build`, but I think it was moved to train. In any case, if you don't need it
as input to training, it should be a part of the model, right?
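
If the params only need to survive alongside the model, they can simply travel inside the persisted model object. A minimal sketch of that idea; the class and field names here are illustrative, not PredictionIO API:

```scala
// Sketch: keep the tuned parameters inside the model itself, so they are
// saved and loaded together with it, without touching the metadata store.
case class TunedParams(rank: Int, regParam: Double) // illustrative fields

case class MyModel(
  weights: Array[Double],
  tunedParams: TunedParams // serialized along with the model
) extends Serializable
```

Anything the model class carries gets written out when the model is persisted, so no extra storage plumbing is needed for this case.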
Hi Pat,
just wanted to follow up on this. I've modified CoreWorkflow to be able to
store algorithmsParams in the engineInstance:
val engineInstances = Storage.getMetaDataEngineInstances
engineInstances.update(engineInstance.copy(
  status = "COMPLETED",
  endTime = DateTime.now,
  algorithmsParams = algorithmsParams))
That would be fine, since the model can contain anything. But the real question
is where you want to use those params. If you need them the next time you
train, you'll have to persist them to a place that is read during training,
which is usually only the metadata store (and, obviously, the input events).
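
If they are persisted in the metadata store (e.g. on the engine instance, as in the snippet above), a later training run can read them back. A rough sketch, assuming Apache PredictionIO's Storage API and that a `getAll` listing is available; the "latest completed instance" lookup is illustrative:

```scala
// Sketch: at the start of training, look up the params saved by the most
// recent completed run. EngineInstance.algorithmsParams is a String (JSON)
// field in PredictionIO's metadata store.
import org.apache.predictionio.data.storage.Storage

val engineInstances = Storage.getMetaDataEngineInstances
val previousParams: Option[String] =
  engineInstances.getAll
    .filter(_.status == "COMPLETED")
    .sortBy(_.endTime.getMillis)
    .lastOption
    .map(_.algorithmsParams)
```

The returned JSON string would then be parsed and merged with (or used to override) the params coming from engine.json.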
Thank you very much for the answer. I'll try customizing the workflow.
There is a step where a Seq of models is returned; my idea is to return the
model and the model parameters together in this step. I'll let you know if it
works.
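
The wrapping itself can be quite small. A sketch of the idea; the names are illustrative, not PredictionIO classes:

```scala
// Sketch: return the model together with the parameters that produced it,
// so both are persisted in the same step of the workflow.
case class ModelWithParams[M, P](model: M, params: P) extends Serializable

// In train(), return ModelWithParams(trainedModel, chosenParams) instead of
// the bare model; predict() then unwraps .model (and can inspect .params).
```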
Thanks,
Tihomie
On Feb 12, 2018 23:34, "Pat Ferrel" wrote:
This is an interesting question. As we build more mature, full-featured
engines, they will begin to employ hyperparameter search techniques or
reinforcement params. This means there is a new stage in the workflow, or a
feedback loop, not already accounted for.
Short answer is no, unless you want