@jwfromm Thanks for your support!
I'm using a model architecture customized from the Mask R-CNN model.
Here is my tuning script:
```python
import time

import torch
import tvm
from tvm import relay, auto_scheduler
from tvm.runtime.vm import VirtualMachine

TARGET = tvm.target.Target("llvm -mcpu=broadwell")
log_file = "card_extraction-autoschedule.json"

# `model` is the customized Mask R-CNN model (defined elsewhere).
dummy_input = torch.randn(1, 3, 800, 800, device="cpu", requires_grad=True)
model = torch.jit.trace(model, dummy_input)
mod, params = relay.frontend.from_pytorch(
    model, input_infos=[("input0", dummy_input.shape)]
)

print("Extract tasks...")
tasks, task_weights = auto_scheduler.extract_tasks(mod["main"], params, TARGET)
for idx, task in enumerate(tasks):
    print("========== Task %d (workload key: %s) ==========" % (idx, task.workload_key))
    print(task.compute_dag)


def run_tuning():
    print("Begin tuning...")
    tuner = auto_scheduler.TaskScheduler(tasks, task_weights)
    tune_option = auto_scheduler.TuningOptions(
        num_measure_trials=20000,
        runner=auto_scheduler.LocalRunner(repeat=10, enable_cpu_cache_flush=True),
        measure_callbacks=[auto_scheduler.RecordToFile(log_file)],
    )
    tuner.tune(tune_option)


run_tuning()

# Apply the tuning log here when compiling the model.
with auto_scheduler.ApplyHistoryBest(log_file):
    with tvm.transform.PassContext(
        opt_level=3,
        disabled_pass=["FoldScaleAxis"],
        config={"relay.backend.use_auto_scheduler": True},
    ):
        vm_exec = relay.vm.compile(mod, target=TARGET, params=params)

dev = tvm.cpu()
vm = VirtualMachine(vm_exec, dev)

start_t = time.time()
# `sample` is the preprocessed input tensor (defined elsewhere).
vm.set_input("main", **{"input0": sample.cpu().numpy()})
tvm_res = vm.run()
print(tvm_res[0].numpy().tolist())
print("Inference time of model after tuning: {:0.4f}".format(time.time() - start_t))
```
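One caveat with the timing at the end of the script: it measures a single `vm.run()` call, and the first invocation can include one-off initialization cost, so the reported number may overstate steady-state latency. A minimal, pure-Python sketch of a warmup-plus-average timing helper (the `benchmark` function and its parameters are my own illustration, not part of the TVM API; you would pass a closure around your `vm.run()` call as `run_fn`):

```python
import time


def benchmark(run_fn, warmup=3, repeat=10):
    """Call run_fn `warmup` times untimed, then return the mean of `repeat` timed calls (seconds)."""
    for _ in range(warmup):
        run_fn()
    timings = []
    for _ in range(repeat):
        start = time.perf_counter()
        run_fn()
        timings.append(time.perf_counter() - start)
    return sum(timings) / len(timings)


# Example with a trivial stand-in workload:
avg = benchmark(lambda: sum(range(10000)), warmup=1, repeat=5)
print("Average runtime: {:0.6f}s".format(avg))
```

Using `time.perf_counter()` instead of `time.time()` also gives a monotonic, higher-resolution clock for short intervals.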
---
[Visit Topic](https://discuss.tvm.apache.org/t/how-to-apply-best-history-after-auto-scheduler-for-relay-vm-compile/10908/5) to respond.