gemini-code-assist[bot] commented on code in PR #18545:
URL: https://github.com/apache/tvm/pull/18545#discussion_r2588855057
##########
python/tvm/relax/pipeline.py:
##########
@@ -177,8 +187,13 @@ def _pipeline(mod: tvm.ir.IRModule, _ctx: tvm.transform.PassContext) -> tvm.ir.I
*pre_tuning_layout_rewrite,
# Skip tuning if total_trials is 0
(
- transform.MetaScheduleTuneIRMod({}, work_dir, total_trials)
- if total_trials > 0
+ transform.MetaScheduleTuneIRMod(
+ params={},
+ work_dir=work_dir,
+ max_trials_global=total_trials,
+ max_trials_per_task=max_trials_per_task,
+ )
+ if total_trials > 0 and max_trials_per_task > 0
Review Comment:

The condition `max_trials_per_task > 0` will raise a `TypeError` when
`max_trials_per_task` is `None`, which is its default value. This will cause
the pipeline to crash if `max_trials_per_task` is not explicitly provided. The
condition should be updated to handle the `None` case correctly.
```suggestion
if total_trials > 0 and (max_trials_per_task is None or max_trials_per_task > 0)
```
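
For illustration, here is a small standalone sketch (not part of the PR) of why the unguarded comparison fails and how the suggested guard avoids it; the values are placeholders:

```python
# Standalone Python 3 sketch: comparing None with an int raises TypeError,
# which is why the bare `max_trials_per_task > 0` check crashes when the
# parameter is left at its default of None.
total_trials = 80            # placeholder value
max_trials_per_task = None   # the parameter's default

try:
    if total_trials > 0 and max_trials_per_task > 0:
        pass
except TypeError as err:
    print(err)  # '>' not supported between instances of 'NoneType' and 'int'

# The suggested guard short-circuits on the None case, so the comparison never runs:
if total_trials > 0 and (max_trials_per_task is None or max_trials_per_task > 0):
    print("tuning pass enabled")
```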
##########
docs/how_to/tutorials/e2e_opt_model.py:
##########
@@ -95,13 +95,38 @@
# leverage MetaSchedule to tune the model and store the tuning logs to the database. We also
# apply the database to the model to get the best performance.
#
+# The ResNet18 model will be divided into 20 independent tuning tasks during compilation.
+# To ensure each task receives adequate tuning resources in one iteration while providing
+# early feedback:
+#
+# - To quickly observe tuning progress, each task is allocated a maximum of 4 trials per
+#   iteration (controlled by ``MAX_TRIALS_PER_TASK=4``). Setting ``TOTAL_TRIALS`` to at least
+#   ``80 (20 tasks * 4 trials)`` ensures every task receives one full iteration of tuning.
Review Comment:

The comment here states that `MAX_TRIALS_PER_TASK` controls the number of
trials "per iteration". This is misleading, as `MAX_TRIALS_PER_TASK` is a cap
on the *total* number of trials for a given task across all tuning iterations.
The current phrasing, along with "one full iteration of tuning", might confuse
users about the parameter's function. I'd suggest rephrasing for clarity.
```suggestion
# - To quickly observe tuning progress, we cap the tuning for each task at 4 trials
#   (controlled by ``MAX_TRIALS_PER_TASK=4``). Setting ``TOTAL_TRIALS`` to
#   ``80 (20 tasks * 4 trials)`` ensures each task can be tuned for up to 4 trials.
```
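
For context, a hedged sketch of how these constants would feed into the tuning pipeline (the constant names and the `max_trials_per_task` argument are taken from this PR's diff; the exact code in `e2e_opt_model.py` may differ):

```python
# Sketch only: assumes the wiring implied by the diff; constructing the
# pipeline object is cheap and does not start tuning.
import tvm
from tvm import relax

NUM_TASKS = 20             # ResNet18 is expected to split into ~20 tuning tasks
MAX_TRIALS_PER_TASK = 4    # cap on the total trials any single task may consume
TOTAL_TRIALS = NUM_TASKS * MAX_TRIALS_PER_TASK  # 80 trials shared by all tasks

pipeline = relax.get_pipeline(
    "static_shape_tuning",
    target=tvm.target.Target("llvm"),          # placeholder; the tutorial targets CUDA
    work_dir="tuning_logs",
    total_trials=TOTAL_TRIALS,
    max_trials_per_task=MAX_TRIALS_PER_TASK,   # parameter introduced by this PR
)
```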
##########
python/tvm/relax/pipeline.py:
##########
@@ -128,6 +129,14 @@ def static_shape_tuning_pipeline(
cpu_weight_prepack : bool
Whether to enable the cpu weight prepack feature.
+ max_trials_per_task : Optional[int]
+ The maximum number of trials to run per task.
+ If not specified, MetaSchedule will use a default value of 64 trials per task during the tuning process.
Review Comment:

The docstring states that if `max_trials_per_task` is not specified, it
defaults to 64. This appears to contradict the implementation in
`python/tvm/meta_schedule/tune.py` (lines 104-105), where `max_trials_per_task`
defaults to `max_trials_global` (which is `total_trials` in this context).
Please update the docstring to reflect the correct default behavior to avoid
confusion.
```suggestion
If not specified, it defaults to the value of `total_trials`.
```
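
For reference, the fallback behavior this comment points at can be paraphrased as follows (a hypothetical helper mirroring what `tune_tasks` in `python/tvm/meta_schedule/tune.py` reportedly does; the function name here is illustrative only):

```python
# Hypothetical paraphrase of the fallback: an unset per-task cap inherits the
# global budget rather than a fixed 64 trials.
def resolve_max_trials_per_task(max_trials_global, max_trials_per_task=None):
    if max_trials_per_task is None:
        max_trials_per_task = max_trials_global
    return max_trials_per_task

assert resolve_max_trials_per_task(80) == 80      # inherits the global budget
assert resolve_max_trials_per_task(80, 4) == 4    # explicit per-task cap wins
```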
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]