Eliaaazzz opened a new pull request, #37945:
URL: https://github.com/apache/beam/pull/37945
## Summary
Addresses #37531.
This PR completes the smart bucketing integration for Python `RunInference`
by exposing `batch_length_fn` and `batch_bucket_boundaries` on all concrete
`ModelHandler` implementations.
The underlying batching support already exists in the base layer, but most
user-facing handlers did not surface these options, leaving length-aware
batching effectively unavailable across much of the inference API. With this
change, users can enable smart bucketing directly from the handler constructor
on all supported backends.
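Conceptually, `batch_length_fn` maps each element to a length and `batch_bucket_boundaries` partitions elements into buckets that are batched separately, so similarly sized inputs travel together. A stdlib-only sketch of that semantics (Beam's real batching additionally applies size and time limits on top of the bucketing; names here are illustrative):

```python
import bisect
from collections import defaultdict

def bucket_by_length(elements, batch_length_fn, batch_bucket_boundaries):
    """Group elements into buckets so items of similar length batch together.

    Illustrates the semantics only; the actual implementation lives in
    Beam's base-layer batching, not in user code.
    """
    buckets = defaultdict(list)
    for element in elements:
        length = batch_length_fn(element)
        # The index of the first boundary >= length selects the bucket.
        bucket = bisect.bisect_left(batch_bucket_boundaries, length)
        buckets[bucket].append(element)
    return dict(buckets)

batches = bucket_by_length(
    ['hi', 'ok', 'a much longer input string'],
    batch_length_fn=len,
    batch_bucket_boundaries=[8, 64],
)
# Short strings land in bucket 0; the long one lands in bucket 1.
```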
## What Changed
This change adds `batch_length_fn` and `batch_bucket_boundaries` to 16
concrete handlers across the following backends:
- PyTorch
- HuggingFace
- scikit-learn
- TensorFlow
- ONNX
- XGBoost
- TensorRT
- vLLM
- Vertex AI
- Gemini
Implementation details:
- Handlers that inherit from `ModelHandler` now pass the new parameters
through to `super().__init__()`
- Remote handlers that manage batching kwargs directly (`GeminiModelHandler`
and `VertexAIModelHandlerJSON`) now wire the values into `_batching_kwargs`
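The two wiring patterns above can be sketched with stub classes (stdlib only; the real Beam handlers take many more constructor arguments, and the internal kwargs keys shown here are illustrative):

```python
# Stub stand-ins for the Beam classes; names and kwargs keys are illustrative.

class BaseHandler:
    """Plays the role of apache_beam.ml.inference.base.ModelHandler."""
    def __init__(self, *, batch_length_fn=None, batch_bucket_boundaries=None):
        self._batching_kwargs = {}
        if batch_length_fn is not None:
            self._batching_kwargs['batch_length_fn'] = batch_length_fn
        if batch_bucket_boundaries is not None:
            self._batching_kwargs['batch_bucket_boundaries'] = (
                batch_bucket_boundaries)

    def batch_elements_kwargs(self):
        return self._batching_kwargs


class LocalHandler(BaseHandler):
    """Pattern 1: forward the new options through super().__init__()."""
    def __init__(self, model_uri, **kwargs):
        self.model_uri = model_uri
        super().__init__(**kwargs)


class RemoteHandler(BaseHandler):
    """Pattern 2: wire the values into _batching_kwargs directly,
    as the Gemini and Vertex AI handlers do."""
    def __init__(self, endpoint, batch_length_fn=None,
                 batch_bucket_boundaries=None):
        super().__init__()
        self.endpoint = endpoint
        if batch_length_fn is not None:
            self._batching_kwargs['batch_length_fn'] = batch_length_fn
        if batch_bucket_boundaries is not None:
            self._batching_kwargs['batch_bucket_boundaries'] = (
                batch_bucket_boundaries)
```

Either way, the runner-side batching picks the options up through `batch_elements_kwargs()`.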
## Testing
Added test coverage in `base_test.py` for both behavior and wiring:
- an end-to-end `RunInferenceLengthAwareBatchingTest` that verifies short
and long string inputs are bucketed into separate batches under `FnApiRunner`
- a `HandlerBucketingKwargsForwardingTest` that checks each concrete handler
forwards `batch_length_fn` and `batch_bucket_boundaries` into
`batch_elements_kwargs()`
- follow-up fixes to keep the forwarding tests hermetic, especially for
HuggingFace pipeline validation and Vertex AI endpoint liveness checks
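The forwarding check reduces to asserting that the constructor arguments reappear in `batch_elements_kwargs()`. With a stub in place of a concrete handler, the assertion shape looks roughly like:

```python
import unittest

class StubHandler:
    # Minimal stand-in for a concrete ModelHandler; the real tests
    # construct e.g. the PyTorch or sklearn handlers instead.
    def __init__(self, batch_length_fn=None, batch_bucket_boundaries=None):
        self._batching_kwargs = {
            'batch_length_fn': batch_length_fn,
            'batch_bucket_boundaries': batch_bucket_boundaries,
        }

    def batch_elements_kwargs(self):
        return self._batching_kwargs

class BucketingKwargsForwardingTest(unittest.TestCase):
    def test_kwargs_forwarded(self):
        handler = StubHandler(
            batch_length_fn=len, batch_bucket_boundaries=[16, 64])
        kwargs = handler.batch_elements_kwargs()
        self.assertIs(kwargs['batch_length_fn'], len)
        self.assertEqual(kwargs['batch_bucket_boundaries'], [16, 64])

unittest.TextTestRunner().run(
    unittest.TestLoader().loadTestsFromTestCase(BucketingKwargsForwardingTest))
```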
## Context
This is the final integration piece for smart bucketing:
- Part 1: #37532
- Part 2: #37565
Together, these changes make length-aware batching usable through the public
Python inference handlers rather than only at the base implementation layer.
------------------------
Thank you for your contribution! Follow this checklist to help us
incorporate your contribution quickly and easily:
- [x] Mention the appropriate issue in your description (for example:
`addresses #123`), if applicable. This will automatically add a link to the
pull request in the issue. If you would like the issue to automatically close
on merging the pull request, comment `fixes #<ISSUE NUMBER>` instead.
See the [Contributor Guide](https://beam.apache.org/contribute) for more
tips on [how to make review process
smoother](https://github.com/apache/beam/blob/master/CONTRIBUTING.md#make-the-reviewers-job-easier).
To check the build health, please visit
[https://github.com/apache/beam/blob/master/.test-infra/BUILD_STATUS.md](https://github.com/apache/beam/blob/master/.test-infra/BUILD_STATUS.md)