damccorm commented on issue #21444:
URL: https://github.com/apache/beam/issues/21444#issuecomment-1248537960

   I think the difference will largely depend on the workload. For our examples it
is probably not significant because we're pretty much just writing the
inference results immediately, but I can imagine that if (a) your input size is
significantly larger or (b) you're doing more than simple maps post-inference
(especially if you're doing something that breaks pipeline fusion and causes
work to be distributed to new workers), then the costs would be higher.
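   To make the cost concrete, here is a minimal sketch (plain Python, not actual Beam code) of why carrying the original input alongside the inference result matters once fusion breaks: at a shuffle boundary every element is serialized and shipped between workers, so a large input dominates the payload. `pickle` stands in for the runner's coder, and the sizes are illustrative assumptions.

   ```python
   import pickle

   # Stand-in for a large model input (e.g. an image or feature tensor).
   large_input = [float(i) for i in range(10_000)]
   # Stand-in for a small inference result.
   prediction = {"label": 3, "score": 0.97}

   # Bytes crossing a fusion break if the input is kept with the result...
   with_input = len(pickle.dumps((large_input, prediction)))
   # ...versus if the input is dropped before the shuffle.
   without_input = len(pickle.dumps(prediction))

   print(with_input, without_input)  # the input dominates the shuffled bytes
   ```

   If the post-inference steps are simple fused maps, none of this serialization happens, which is why the overhead is negligible in the simple write-immediately case.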
   
   I'm generally still in favor of doing this as a small convenience for users,
since it's cheap.

