TheNeuralBit commented on PR #17527:
URL: https://github.com/apache/beam/pull/17527#issuecomment-1137526669

   > > It seems like with the API, selecting batch_elements_kwargs is up to 
implementers of `ModelLoader`/`InferenceRunner`.
   > > What if the implementer wants to enable users to set the values as 
appropriate for their model? Then each implementation would need to decide on a 
way to expose it, right?
   > 
   > Yes, although the motivating context for this is that we have a particular 
ModelLoader (in TFX-BSL) for handling pre-batched inputs, and would like a way 
to limit subsequent batching for all users of that ModelLoader. The goal isn't 
really to expose knobs on a per-model basis, although you might want to (e.g., 
if a model is very big).
   
   Got it, thanks. I suppose we can revisit a shared way to expose this to 
end users if the implementations start doing it independently.
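
   For context, the hook being discussed can be sketched roughly as follows. 
This is an illustrative sketch, not the actual Beam or TFX-BSL code: the 
`ModelLoader` base class shape, the `PreBatchedModelLoader` name, and the 
default implementation are assumptions drawn from this thread; the kwargs 
shown (`max_batch_size`) are among those accepted by `beam.BatchElements`.

   ```python
   # Hedged sketch of the pattern under discussion; class names and the
   # hook's default are illustrative, not the real apache_beam / TFX-BSL code.

   class ModelLoader:
       """Illustrative base class used by RunInference to load a model."""

       def batch_elements_kwargs(self):
           # Default: no constraints, so the internal BatchElements
           # transform falls back to its own defaults.
           return {}


   class PreBatchedModelLoader(ModelLoader):
       """Hypothetical loader for inputs that arrive already batched.

       Overriding batch_elements_kwargs lets this loader cap subsequent
       batching for all of its users, so pre-batched elements are not
       re-batched downstream.
       """

       def batch_elements_kwargs(self):
           return {'max_batch_size': 1}


   loader = PreBatchedModelLoader()
   print(loader.batch_elements_kwargs())  # {'max_batch_size': 1}
   ```

   The point of putting the hook on the loader, rather than on a per-model 
knob, is that every pipeline using that loader inherits the batching limits 
automatically.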


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
