damccorm commented on issue #22572:
URL: https://github.com/apache/beam/issues/22572#issuecomment-1239566481

   > The provided function is going to have to take the same arguments in the 
same position as the current inference methods. For the given examples 
discussed this isn't a huge issue (unless HuggingFace users really want to use 
the 30+ optional generate() parameters) and will likely cover a large number of 
use cases, but we'll still have some advanced users who will want more tuning 
and will likely turn to bespoke options.
   
   I'm not 100% sure this is true; for example, I could imagine an approach where we let users pass in some sort of function like `lambda model, batched_tensors, inference_args: model.generate(...)`. Regardless, I think the optional `inference_args` probably give users enough flexibility here, though it would be good to validate that against an existing model example.
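   
   To make the function-passing idea concrete, here's a rough sketch. The `inference_fn` init parameter and the handler shape are hypothetical, made up for illustration rather than taken from the existing Beam API:
   
   ```python
   import torch
   from typing import Callable
   
   def default_inference_fn(model, batched_tensors, inference_args):
       # Mirrors the current behavior: invoke the model directly.
       return model(batched_tensors, **inference_args)
   
   class CustomizableModelHandler:
       def __init__(self, inference_fn: Callable = default_inference_fn):
           # The user-supplied function decides how the model gets invoked.
           self._inference_fn = inference_fn
   
       def run_inference(self, batch, model, inference_args=None):
           inference_args = inference_args or {}
           batched_tensors = torch.stack(batch)
           return self._inference_fn(model, batched_tensors, inference_args)
   
   # A HuggingFace-style user could then pass, e.g.:
   # CustomizableModelHandler(
   #     inference_fn=lambda model, tensors, args: model.generate(tensors, **args))
   ```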
   
   > It also looks like providing the alternate inference function will need to 
be done at run_inference call-time, not handler init-time, since the 
scikit-learn and PyTorch approaches are using functions from specific instances 
of their respective models. Can't specify the function until you have the 
model, unless I'm missing something.
   
   You could probably do something with `getattr` [where you pass in the function name via string](https://docs.python.org/3/library/functions.html#getattr), though I don't love that approach since it's not very flexible with parameters. You could also again let them pass in a function. It's a little more work for the user, but it might be worth the customizability (and for users who don't need it, their function would just be `lambda model, batched_tensors, **inference_args: model.doSomething(batched_tensors, **inference_args)`).
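   
   For illustration, here's a rough sketch of the `getattr` variant (the `model_fn_name` parameter and handler shape are again made up, not existing Beam API). The string is supplied at init time but only resolved against the model instance inside `run_inference`, which sidesteps the "can't specify the function until you have the model" problem:
   
   ```python
   class NamedMethodHandler:
       def __init__(self, model_fn_name: str = "__call__"):
           # The method to call is named up front as a string...
           self._model_fn_name = model_fn_name
   
       def run_inference(self, batch, model, inference_args=None):
           inference_args = inference_args or {}
           # ...and only resolved here, once the loaded model instance exists.
           model_fn = getattr(model, self._model_fn_name)
           return model_fn(batch, **inference_args)
   
   # e.g. NamedMethodHandler(model_fn_name="generate") for a HuggingFace model,
   # at the cost of less flexibility over how arguments are arranged.
   ```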
   
   Thoughts?

