[GitHub] [tvm] blackkker commented on pull request #12028: [WIP][Pylint] Making frontend tests pylint compliant

2022-07-26 Thread GitBox


blackkker commented on PR #12028:
URL: https://github.com/apache/tvm/pull/12028#issuecomment-1196289756

   @areusch Basically completed; there are still a few questions that need to be confirmed with you.





[GitHub] [tvm] AndrewZhaoLuo closed pull request #11712: [Docker][Pylint] Use regexes for good names

2022-07-26 Thread GitBox


AndrewZhaoLuo closed pull request #11712: [Docker][Pylint] Use regexes for good 
names
URL: https://github.com/apache/tvm/pull/11712





[GitHub] [tvm] AndrewZhaoLuo commented on pull request #11712: [Docker][Pylint] Use regexes for good names

2022-07-26 Thread GitBox


AndrewZhaoLuo commented on PR #11712:
URL: https://github.com/apache/tvm/pull/11712#issuecomment-1196281584

   @quic-sanirudh I haven't found time to get to this. I think your plan of closing the loophole is a good one. However, it is another major refactor.





[GitHub] [tvm] AndrewZhaoLuo commented on pull request #12145: [TFLite] Fix _test_tflite2_quantized_depthwise_convolution is unused

2022-07-26 Thread GitBox


AndrewZhaoLuo commented on PR #12145:
URL: https://github.com/apache/tvm/pull/12145#issuecomment-1196281676

   I will take a look tomorrow





[GitHub] [tvm] echuraev commented on a diff in pull request #12173: [OpenCL] Fix profiling hang for OpenCL device

2022-07-26 Thread GitBox


echuraev commented on code in PR #12173:
URL: https://github.com/apache/tvm/pull/12173#discussion_r930617226


##
tests/cpp-runtime/opencl/opencl_timer_test.cc:
##
@@ -0,0 +1,60 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+#include 
+#include 
+
+#include "../src/runtime/opencl/opencl_common.h"
+
+using namespace tvm::runtime;
+using namespace tvm::runtime::cl;
+
+#define BUFF_SIZE 1024
+#define NUM_REPEAT 10
+
+TEST(OpenCLTimerNode, nested_timers) {
+  
+  OpenCLWorkspace* workspace = OpenCLWorkspace::Global();
+  OpenCLThreadEntry* thr = workspace->GetThreadEntry();
+  cl_command_queue queue = workspace->GetQueue(thr->device);
+  
+  int err;
+  cl_int* tmp_buf = new cl_int[BUFF_SIZE];
+  int64_t nested_time_sum = 0;
+
+  Timer init_timer = Timer::Start(thr->device);
+  for (int i = 0; i < NUM_REPEAT; ++i) {
+    Timer nested_timer = Timer::Start(thr->device);
+    // create some events
+    cl_event ev = clCreateUserEvent(workspace->context, &err);
+    OPENCL_CHECK_ERROR(err);
+    cl_mem cl_buf = clCreateBuffer(workspace->context, CL_MEM_READ_ONLY, BUFF_SIZE * sizeof(cl_int), NULL, &err);
+    OPENCL_CHECK_ERROR(err);
+    OPENCL_CALL(clEnqueueWriteBuffer(queue, cl_buf, false, 0, BUFF_SIZE * sizeof(cl_int), tmp_buf, 0, NULL, &ev));
+    OPENCL_CALL(clReleaseMemObject(cl_buf));
+    workspace->events[thr->device.device_id].push_back(ev);
+    nested_timer->Stop();
+    nested_time_sum += nested_timer->SyncAndGetElapsedNanos();
+  }
+  init_timer->Stop();
+
+  free(tmp_buf);

Review Comment:
   You have allocated this object with `new`, so you must use `delete` instead 
of `free`.
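
   For reference, a minimal standalone sketch (not taken from the PR's code) of why `free` must not be paired with `new`, and why an array allocation such as `new cl_int[BUFF_SIZE]` specifically needs `delete[]`:

   ```cpp
   // Illustrative only: matching allocation/deallocation pairs in C++.
   #include <cstdlib>

   int main() {
     int* a = new int[16];   // allocated with new[] ...
     delete[] a;             // ... must be released with delete[]

     int* b = new int;       // allocated with new ...
     delete b;               // ... must be released with delete

     void* c = std::malloc(16 * sizeof(int));  // only malloc/calloc/realloc ...
     std::free(c);                             // ... may be paired with free
     return 0;
   }
   ```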






[GitHub] [tvm] sunjiweiswift commented on a diff in pull request #11599: [Bugfix][Runtime] Fix sched_setaffinity in Android

2022-07-26 Thread GitBox


sunjiweiswift commented on code in PR #11599:
URL: https://github.com/apache/tvm/pull/11599#discussion_r930612103


##
src/runtime/threading_backend.cc:
##
@@ -321,12 +350,25 @@ class ThreadGroup::Impl {
 }
   }
 
+#ifndef __hexagon__
+  pid_t Tid() {
+#if defined(_WIN32)
+return GetCurrentThreadId();
+#else
+return syscall(SYS_gettid);
+#endif
+  }
+  void SetTid(size_t index) { threads_tid_[index] = Tid(); }
+  pid_t GetTid(size_t thread_index) { return threads_tid_[thread_index]; }

Review Comment:
   Returns -1, indicating that the affinity setting failed, which is in line with expectations.






[tvm] branch last-successful updated (ea6ea42757 -> 421f9d756a)

2022-07-26 Thread github-bot
This is an automated email from the ASF dual-hosted git repository.

github-bot pushed a change to branch last-successful
in repository https://gitbox.apache.org/repos/asf/tvm.git


from ea6ea42757 TVM Vertical Integration with PyTorch (#11911)
 add eada707a70 [Fix] Fix some errors in unittests (#12170)
 add 421f9d756a [ci] Skip broken android_rpc failures (#12192)

No new revisions were added by this update.

Summary of changes:
 .github/workflows/main.yml |  6 ++
 tests/python/unittest/test_arith_domain_touched.py |  8 
 .../test_tir_analysis_calculate_workspace.py   |  2 +-
 .../test_tir_analysis_get_block_access_region.py   |  2 --
 .../unittest/test_tir_schedule_transform_layout.py |  2 +-
 .../test_tir_transform_compact_buffer_region.py| 22 +++---
 ...test_tir_transform_renormalize_split_pattern.py |  4 ++--
 .../unittest/test_tir_transform_storage_flatten.py |  2 +-
 .../test_tir_usmp_analysis_extract_bufferinfo.py   |  2 +-
 tests/python/unittest/test_tir_usmp_utils.py   |  2 +-
 tests/python/unittest/test_tvmscript_roundtrip.py  | 22 ++
 .../python/unittest/test_tvmscript_syntax_sugar.py |  2 +-
 12 files changed, 39 insertions(+), 37 deletions(-)



[GitHub] [tvm] ganler opened a new pull request, #12197: [UX] highlight tvm script

2022-07-26 Thread GitBox


ganler opened a new pull request, #12197:
URL: https://github.com/apache/tvm/pull/12197

   See https://github.com/tlc-pack/relax/pull/185
   
   cc: @YuchenJin @junrushao1994 @Hzfengsy  





[GitHub] [tvm] sunjiweiswift commented on a diff in pull request #11599: [Bugfix][Runtime] Fix sched_setaffinity in Android

2022-07-26 Thread GitBox


sunjiweiswift commented on code in PR #11599:
URL: https://github.com/apache/tvm/pull/11599#discussion_r930591164


##
src/runtime/threading_backend.cc:
##
@@ -321,12 +350,25 @@ class ThreadGroup::Impl {
 }
   }
 
+#ifndef __hexagon__
+  pid_t Tid() {
+#if defined(_WIN32)
+return GetCurrentThreadId();
+#else
+return syscall(SYS_gettid);
+#endif
+  }
+  void SetTid(size_t index) { threads_tid_[index] = Tid(); }
+  pid_t GetTid(size_t thread_index) { return threads_tid_[thread_index]; }

Review Comment:
   If `GetTid()` is called before `SetTid()`, `GetTid()` will return 0, and calling sched_setaffinity will then return -1.






[GitHub] [tvm] wrongtest-intellif commented on pull request #12196: Use std::move to avoid warnings on clang-13

2022-07-26 Thread GitBox


wrongtest-intellif commented on PR #12196:
URL: https://github.com/apache/tvm/pull/12196#issuecomment-1196197309

   Thank you~ macOS users will be happy :)





[GitHub] [tvm] masahi merged pull request #12192: [skip ci][ci] Skip broken android_rpc failures

2022-07-26 Thread GitBox


masahi merged PR #12192:
URL: https://github.com/apache/tvm/pull/12192





[tvm] branch main updated: [ci] Skip broken android_rpc failures (#12192)

2022-07-26 Thread masahi
This is an automated email from the ASF dual-hosted git repository.

masahi pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git


The following commit(s) were added to refs/heads/main by this push:
 new 421f9d756a [ci] Skip broken android_rpc failures (#12192)
421f9d756a is described below

commit 421f9d756a6ac48b9c3b886f7941a14dae133f5d
Author: driazati <9407960+driaz...@users.noreply.github.com>
AuthorDate: Tue Jul 26 18:52:27 2022 -0700

[ci] Skip broken android_rpc failures (#12192)

See #12191

Co-authored-by: driazati 
---
 .github/workflows/main.yml | 6 ++
 1 file changed, 6 insertions(+)

diff --git a/.github/workflows/main.yml b/.github/workflows/main.yml
index 313c440cbd..eb346e4605 100644
--- a/.github/workflows/main.yml
+++ b/.github/workflows/main.yml
@@ -121,26 +121,31 @@ jobs:
   make jvmpkg
   - name: Build android_rpc
 working-directory: apps/android_rpc
+continue-on-error: true
 run: |
   export PATH="${ANDROID_NDK_HOME}:$PATH"
   gradle clean build
   - name: Upload android_rpc APK
 uses: actions/upload-artifact@v2
+continue-on-error: true
 with:
   name: android_rpc-debug.apk
   path: ./apps/android_rpc/app/build/outputs/apk/debug/app-debug.apk
   - name: Build android_deploy
 working-directory: apps/android_deploy
+continue-on-error: true
 run: |
   export PATH="${ANDROID_NDK_HOME}:$PATH"
   gradle clean build
   - name: Upload android_deploy APK
 uses: actions/upload-artifact@v2
+continue-on-error: true
 with:
   name: android_deploy-debug.apk
   path: ./apps/android_deploy/app/build/outputs/apk/debug/app-debug.apk
   - name: Build android_camera
 working-directory: apps/android_camera
+continue-on-error: true
 run: |
   mkdir -p app/src/main/assets/models/
   export 
TVM_NDK_CC=${ANDROID_NDK_HOME}/toolchains/llvm/prebuilt/linux-x86_64/bin/aarch64-linux-android30-clang++
@@ -156,6 +161,7 @@ jobs:
   gradle clean build
   - name: Upload android_camera APK
 uses: actions/upload-artifact@v2
+continue-on-error: true
 with:
   name: android_camera-debug.apk
   path: ./apps/android_camera/app/build/outputs/apk/debug/app-debug.apk
\ No newline at end of file



[GitHub] [tvm] shingjan commented on a diff in pull request #12141: [Meta Schedule][XGBoost] Update the custom callback function of xgboost in meta schedule

2022-07-26 Thread GitBox


shingjan commented on code in PR #12141:
URL: https://github.com/apache/tvm/pull/12141#discussion_r930543004


##
python/tvm/meta_schedule/cost_model/xgb_model.py:
##
@@ -763,3 +768,162 @@ def callback(env: "xgb.core.CallbackEnv"):
 raise EarlyStopException(best_iteration)
 
 return callback
+
+
+class XGBoostCallback(TrainingCallback):
+    """Base class for XGBoost callbacks."""
+
+    def __call__(self, env: "xgb.core.CallbackEnv"):
+        # Compatibility with xgboost < 1.3
+        return self.after_iteration(env.model, env.iteration, env.evaluation_result_list)
+
+    def after_iteration(self, model: "xgb.Booster", epoch: int, evals_log: Dict):
+        raise NotImplementedError
+
+
+class XGBoostCustomCallback(XGBoostCallback):
+    """Custom callback class for xgboost to support multiple custom evaluation functions"""
+
+    def __init__(
+        self,
+        early_stopping_rounds: int,
+        verbose_eval: int,
+        fevals: List[Callable],
+        evals: List[Tuple["xgb.DMatrix", str]],
+        focused_metric: str = "tr-p-rmse",
+        cvfolds: List["xgb.training.CVPack"] = None,
+    ):
+        self.early_stopping_rounds = early_stopping_rounds
+        self.verbose_eval = verbose_eval
+        self.fevals = fevals
+        self.evals = evals
+        self.state: Dict[str, Any] = {}
+        self.focused_metric = focused_metric
+        self.sort_key = make_metric_sorter(focused_metric=focused_metric)
+        self.cvfolds = cvfolds
+        if cvfolds is not None:
+            self.aggregated_cv = None
+
+    def init(self, model: "xgb.Booster"):
+        """Internal function for intialization"""
+        booster: "xgb.Booster" = model
+        self.state["best_iteration"] = 0
+        self.state["best_score"] = float("inf")
+        if booster is None:
+            assert self.cvfolds is not None
+            return
+        if booster.attr("best_score") is not None:
+            self.state["best_score"] = float(booster.attr("best_score"))
+            self.state["best_iteration"] = int(booster.attr("best_iteration"))
+            self.state["best_msg"] = booster.attr("best_msg")
+        else:
+            booster.set_attr(best_iteration=str(self.state["best_iteration"]))
+            booster.set_attr(best_score=str(self.state["best_score"]))
+
+    def after_iteration(self, model: "xgb.Booster", epoch: int, evals_log: Dict):
+        """Internal function for after_iteration"""
+        # pylint:disable = import-outside-toplevel

Review Comment:
   I guess we will need to keep this one disabled, as there are other imports outside of the top level in this specific function.






[GitHub] [tvm] shingjan commented on a diff in pull request #12141: [Meta Schedule][XGBoost] Update the custom callback function of xgboost in meta schedule

2022-07-26 Thread GitBox


shingjan commented on code in PR #12141:
URL: https://github.com/apache/tvm/pull/12141#discussion_r930542012


##
python/tvm/meta_schedule/cost_model/xgb_model.py:
##
@@ -763,3 +768,162 @@ def callback(env: "xgb.core.CallbackEnv"):
 raise EarlyStopException(best_iteration)
 
 return callback
+
+
+class XGBoostCallback(TrainingCallback):
+    """Base class for XGBoost callbacks."""
+
+    def __call__(self, env: "xgb.core.CallbackEnv"):
+        # Compatibility with xgboost < 1.3
+        return self.after_iteration(env.model, env.iteration, env.evaluation_result_list)

Review Comment:
   Unit test is added @zxybazh @Sunny-Island. One thing that we need to pay attention to is that we may need to remove this test when we bump the xgboost version in CI. Integration test is on its way.






[GitHub] [tvm] cconvey commented on pull request #12195: [hexagon][testing] filesystem-friendly test IDs

2022-07-26 Thread GitBox


cconvey commented on PR #12195:
URL: https://github.com/apache/tvm/pull/12195#issuecomment-1196165877

   @tvm-bot rerun





[GitHub] [tvm] masahi commented on a diff in pull request #12171: [TIR] Asynchronous stage in software pipeline

2022-07-26 Thread GitBox


masahi commented on code in PR #12171:
URL: https://github.com/apache/tvm/pull/12171#discussion_r930523406


##
src/tir/transforms/inject_software_pipeline.cc:
##
@@ -494,6 +512,267 @@ class PipelineRewriter : public StmtExprMutator {
 return Buffer(new_buffer);
   }
 
+  // Per-stage states that need to be tracked across pipeline prologue, body, 
and epilogue.
+  struct AsyncStateGlobal {
+// Buffers that this stage asynchronously writes.
+std::unordered_set dst_buffers;
+// An imaginary index that the latest async operation associated with this 
stage has written
+// into. Only valid if all associated predicates are true, so that we can 
count the number of
+// async invocations exactly. When it is valid, it is the "sum of extents 
of loops that have
+// been executed" - 1, e.g. for epilogue it is prologue extent + body 
extent - 1. This
+// is only needed to compute wait count for epilogue without async 
producers.
+Optional producer_head{PrimExpr(-1)};
+
+bool writes(Buffer buf) const { return dst_buffers.count(buf.get()) > 0; }
+  };
+
+  // Per-stage states that are local to each of pipeline prologue, body, and 
epilogue.
+  struct AsyncStateLocal {
+struct {
+  // The index into a list of blocks, where async_wait_queue should be 
attached at the
+  // beginning.
+  int insert_before;
+  // in_flight_count would be a more precise name, but the implementation 
uses wait_count for
+  // brevity.
+  PrimExpr wait_count{nullptr};
+
+  bool valid() const { return wait_count.defined(); }
+} pending_wait;
+
+// Destination buffers of async operations that have been encountered so 
far in the loop
+//
+// for (size_t i = 0; i < new_blocks.size(); ++i) {
+//...
+// }
+//
+// This is for tracking which async operations have been issued at the 
"current" iteration, up
+// until a point where we encounter a consumer of async result buffers. 
This is used to decide
+// if the producer_head of each buffer points to a copy written in the 
current or previous
+// iteration.
+std::unordered_set seen;
+
+// A symbolic expression representing the index the latest async operation 
associated with this
+// stage has written into, at the "current" iteration.
+Optional producer_head;
+// The predicate of BlockRealize containing the async operation of this 
stage.
+Optional predicate;
+// Indices into a list of blocks, where async_commit_queue scope should be 
attached.
+// If multiple async producers are interleaved with their consumer in 
between, we need separate
+// async_commit_queue for each producer. Thus, we need multiple sets of 
indices.
+std::vector> commit_groups;
+
+// This is set to true when we reach a stage that consumes this async 
stage.
+bool consumed{false};
+  };
+
+  /*! Structure holding intermediate information for pipeline loop rewriting. 
*/
+  struct RewrittenBlockInfo {
+int stage;
+PrimExpr predicate;
+Block block;
+PrimExpr access_index;
+bool is_async;
+  };
+
+  // Determine where to insert async_wait and the corresponding wait count.
+  void PopulateWaitCounts(const std::vector& new_blocks,
+  arith::Analyzer* ana_normalized,
+  const std::unordered_map& 
buffer_to_commit_group,
+  std::map* async_states_local) {
+for (size_t i = 0; i < new_blocks.size(); ++i) {
+  if (new_blocks[i].is_async) {
+// Record the fact that we have encountered these write buffers.
+for (auto write_region : new_blocks[i].block->writes) {
+  
(*async_states_local)[new_blocks[i].stage].seen.insert(write_region->buffer.get());
+}
+  }
+
+  int producer_stage_idx = -1;
+  for (auto read_region : new_blocks[i].block->reads) {
+for (auto kv : async_states) {
+  if (kv.first <= new_blocks[i].stage && 
kv.second.writes(read_region->buffer)) {
+// Found an earlier stage where read_region->buffer was 
asynchronously written
+ICHECK(producer_stage_idx == -1 || producer_stage_idx == kv.first)
+<< "A dependency on multiple async stages is not supported";
+producer_stage_idx = kv.first;
+  }
+}
+  }
+
+  if (producer_stage_idx == -1) continue;
+
+  // The following logic has become complicated to handle case like this:
+  //
+  // for i in range(13):
+  // # Stage 0
+  // async_commit_queue(0):
+  //async_scope:
+  //   A_shared[(i + 3) % 4] = A[...]
+  //
+  //
+  // # Stage 1
+  // async_wait_queue(0, 5):
+  //compute(A_shared[i], B_shared[i])
+  //
+  // # Stage 0
+  // async_commit_queue(0)
+  //async_scope:
+  //   B_shared[(i + 3) % 4] = B[...]
+  //
+  //
+  // 

[GitHub] [tvm] masahi commented on a diff in pull request #12171: [TIR] Asynchronous stage in software pipeline

2022-07-26 Thread GitBox


masahi commented on code in PR #12171:
URL: https://github.com/apache/tvm/pull/12171#discussion_r930522785


##
src/tir/transforms/inject_software_pipeline.cc:
##
@@ -530,18 +620,269 @@ class PipelineRewriter : public StmtExprMutator {
   Block new_block = 
Downcast(PipelineBodyRewriter(buffer_data_to_buffer_, buffer_remap_,
  pipeline_loop_, 
max_stage_ != 1,
  
fragment_info_)(block));
-  Map subst_map;
-  if (is_unit_loop) {
-subst_map.Set(pipeline_loop_->loop_var, skewed_loop_var);
-  } else {
-// normalize loop range
-PrimExpr delta = start - pipeline_loop_->min;
-subst_map.Set(pipeline_loop_->loop_var, skewed_loop_var + delta);
+
+  PrimExpr delta = start - pipeline_loop_->min;
+  // This variable corresponds to
+  // - "producer_head" if this stage is an async producer
+  // - "consumer_head" if this stage reads from asynchronously written 
buffers.
+  PrimExpr normalized_access_index = is_unit_loop ? skewed_loop_var : 
skewed_loop_var + delta;
+
+  // Adjust the block predicate and the body according to the final loop 
bound
+  //  [pipeline_loop_->min, extent).
+  if (!is_unit_loop) {
 Var loop_iter = Downcast(new_loop_var);
-inbound = Substitute(inbound, Map{{loop_iter, loop_iter 
+ delta}});
+inbound = Substitute(inbound, {{loop_iter, loop_iter + delta}});
+  }
+
+  new_block = Downcast(
+  Substitute(new_block, {{pipeline_loop_->loop_var, 
normalized_access_index}}));
+
+  if (pipeline_info_[block].async) {

Review Comment:
   OK, moved the bulk of the logic into two functions. Now `EmitImpl` itself is kept short.






[GitHub] [tvm] github-actions[bot] commented on pull request #12190: Update to 0.10.0

2022-07-26 Thread GitBox


github-actions[bot] commented on PR #12190:
URL: https://github.com/apache/tvm/pull/12190#issuecomment-1196133919

   
   
   Built docs for commit 1d5474230d3ec1cc3d248cb3949f44deb274636b can be found 
[here](https://pr-docs.tlcpack.ai/PR-12190/2/docs/index.html).





[GitHub] [tvm] jwfromm commented on pull request #12124: [Relay][Op] Trilu operator implementation

2022-07-26 Thread GitBox


jwfromm commented on PR #12124:
URL: https://github.com/apache/tvm/pull/12124#issuecomment-1196125814

   I added pytorch testing and integration. Thanks for the recommendation 
@shingjan.





[GitHub] [tvm] jwfromm commented on a diff in pull request #12124: [Relay][Op] Trilu operator implementation

2022-07-26 Thread GitBox


jwfromm commented on code in PR #12124:
URL: https://github.com/apache/tvm/pull/12124#discussion_r930506616


##
python/tvm/relay/op/transform.py:
##
@@ -1889,3 +1889,46 @@ def stft(
 window = _make.ones([n_fft], "int32")
 
 return _make.stft(data, n_fft, hop_length, win_length, window, normalized, 
onesided)
+
+
+def trilu(data, k, upper=True):
+"""
+Given a 2-D matrix or batches of 2-D matrices, returns the
+upper or lower triangular part of the tensor.
+
+Parameters
+--
+data: relay.Expr
+The tensor that trilu will be applied to. Must be either
+a 2D matrix or a tensor of batches of 2D matrices.
+
+k: int
+The number of diagonals above or below the main diagonal
+to exclude or include.
+
+upper: bool, optional
+If True, only upper triangular values of input are kept,
+if False, the lower triangular values are kept.
+
+
+Returns
+---
+ret : relay.Expr
+The new tensor with appropriate diagonals set to zero.
+
+Examples
+
+.. code-block:: python
+
+x = [[0, 1, 2],
+ [3, 4, 5],
+ [6, 7, 8]]
+
+relay.trilu(x, True, 0) =
+[[0, 1, 2],
+ [0, 4, 5],
+ [0, 0, 8]]
+"""
+if not isinstance(k, Expr):

Review Comment:
   If it's an `int`, then it will already be cast as a const (since it's not an `Expr`).






[tvm] branch nightly-docker-update updated (4a74d37ef7 -> 865607305d)

2022-07-26 Thread github-bot
This is an automated email from the ASF dual-hosted git repository.

github-bot pushed a change to branch nightly-docker-update
in repository https://gitbox.apache.org/repos/asf/tvm.git


 discard 4a74d37ef7 [ci][docker] Nightly Docker image update
 add ca2ec5429b [CI][docker] Add comment (#11953)
 add 9963b59ffa fix typo (#12183)
 add 9bef7de9f0 [Doc] Fix link error in pipeline executor tutorial (#12185)
 add ea6ea42757 TVM Vertical Integration with PyTorch (#11911)
 add eada707a70 [Fix] Fix some errors in unittests (#12170)
 add 865607305d [ci][docker] Nightly Docker image update

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version.  This situation occurs
when a user --force pushes a change and generates a repository
containing something like this:

 * -- * -- B -- O -- O -- O   (4a74d37ef7)
\
 N -- N -- N   refs/heads/nightly-docker-update (865607305d)

You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.

Any revisions marked "omit" are not gone; other references still
refer to them.  Any revisions marked "discard" are gone forever.

No new revisions were added by this update.

Summary of changes:
 Jenkinsfile|  16 +-
 apps/pt_tvmdsoop/tests/test_as_torch.py| 257 
 apps/pt_tvmdsoop/tests/test_optimize_torch.py  | 161 +
 ci/jenkins/Jenkinsfile.j2  |  16 +-
 docker/bash.sh |   3 +
 .../work_with_relay/using_pipeline_executor.py |   2 +-
 python/tvm/contrib/torch/__init__.py   |  12 +-
 python/tvm/contrib/torch/as_torch.py   | 124 ++
 python/tvm/contrib/torch/optimize_torch.py | 198 
 python/tvm/script/parser.py|  16 +-
 src/contrib/torch/base64.h |  75 ++
 .../torch/pt_call_tvm/RuntimeModuleWrapper.cc  | 259 +
 src/relay/ir/indexed_graph.cc  |   2 +-
 tests/python/unittest/test_arith_domain_touched.py |   8 +-
 .../test_tir_analysis_calculate_workspace.py   |   2 +-
 .../test_tir_analysis_get_block_access_region.py   |   2 -
 .../unittest/test_tir_schedule_transform_layout.py |   2 +-
 .../test_tir_transform_compact_buffer_region.py|  22 +-
 ...test_tir_transform_renormalize_split_pattern.py |   4 +-
 .../unittest/test_tir_transform_storage_flatten.py |   2 +-
 .../test_tir_usmp_analysis_extract_bufferinfo.py   |   2 +-
 tests/python/unittest/test_tir_usmp_utils.py   |   2 +-
 tests/python/unittest/test_tvmscript_roundtrip.py  |  22 +-
 .../python/unittest/test_tvmscript_syntax_sugar.py |   2 +-
 24 files changed, 1153 insertions(+), 58 deletions(-)
 create mode 100644 apps/pt_tvmdsoop/tests/test_as_torch.py
 create mode 100644 apps/pt_tvmdsoop/tests/test_optimize_torch.py
 create mode 100644 python/tvm/contrib/torch/as_torch.py
 create mode 100644 python/tvm/contrib/torch/optimize_torch.py
 create mode 100644 src/contrib/torch/base64.h
 create mode 100644 src/contrib/torch/pt_call_tvm/RuntimeModuleWrapper.cc



[GitHub] [tvm] shingjan commented on a diff in pull request #12124: [Relay][Op] Trilu operator implementation

2022-07-26 Thread GitBox


shingjan commented on code in PR #12124:
URL: https://github.com/apache/tvm/pull/12124#discussion_r930499403


##
python/tvm/relay/op/transform.py:
##
@@ -1889,3 +1889,46 @@ def stft(
 window = _make.ones([n_fft], "int32")
 
 return _make.stft(data, n_fft, hop_length, win_length, window, normalized, 
onesided)
+
+
+def trilu(data, k, upper=True):
+"""
+Given a 2-D matrix or batches of 2-D matrices, returns the
+upper or lower triangular part of the tensor.
+
+Parameters
+--
+data: relay.Expr
+The tensor that trilu will be applied to. Must be either
+a 2D matrix or a tensor of batches of 2D matrices.
+
+k: int
+The number of diagonals above or below the main diagonal
+to exclude or include.
+
+upper: bool, optional
+If True, only upper triangular values of input are kept,
+if False, the lower triangular values are kept.
+
+
+Returns
+---
+ret : relay.Expr
+The new tensor with appropriate diagonals set to zero.
+
+Examples
+
+.. code-block:: python
+
+x = [[0, 1, 2],
+ [3, 4, 5],
+ [6, 7, 8]]
+
+relay.trilu(x, True, 0) =
+[[0, 1, 2],
+ [0, 4, 5],
+ [0, 0, 8]]
+"""
+if not isinstance(k, Expr):

Review Comment:
   Nit: Should we check if k is `int`, do `const(k, "int32")`, and throw an error otherwise? The reason I am asking is because in the triu/tril op of PyTorch `k` is actually an int instead of a 0D tensor. This could cover the case of wrongly typed user input.






[GitHub] [tvm] jwfromm commented on a diff in pull request #12124: [Relay][Op] Trilu operator implementation

2022-07-26 Thread GitBox


jwfromm commented on code in PR #12124:
URL: https://github.com/apache/tvm/pull/12124#discussion_r930491682


##
tests/python/frontend/onnx/test_forward.py:
##
@@ -5241,23 +5241,7 @@ def verify_eyelike(indata, dynamic=False):
 "test_training_dropout_mask",
 "test_training_dropout_zero_ratio",
 "test_training_dropout_zero_ratio_mask",
-"test_tril",
-"test_tril_pos",
-"test_tril_square",
-"test_tril_square_neg",
-"test_tril_neg",
-"test_tril_one_row_neg",
-"test_tril_out_neg",
-"test_tril_out_pos",
 "test_tril_zero",

Review Comment:
   Seems like it also doesn't work with nvptx due to the same issue with empty tensors. I'll add them here and see how it does in CI.






[GitHub] [tvm] jwfromm commented on a diff in pull request #12124: [Relay][Op] Trilu operator implementation

2022-07-26 Thread GitBox


jwfromm commented on code in PR #12124:
URL: https://github.com/apache/tvm/pull/12124#discussion_r930491682


##
tests/python/frontend/onnx/test_forward.py:
##
@@ -5241,23 +5241,7 @@ def verify_eyelike(indata, dynamic=False):
 "test_training_dropout_mask",
 "test_training_dropout_zero_ratio",
 "test_training_dropout_zero_ratio_mask",
-"test_tril",
-"test_tril_pos",
-"test_tril_square",
-"test_tril_square_neg",
-"test_tril_neg",
-"test_tril_one_row_neg",
-"test_tril_out_neg",
-"test_tril_out_pos",
 "test_tril_zero",

Review Comment:
   Seems like it also doesn't work with nvptx due to the same issue with empty tensors.






[GitHub] [tvm] jwfromm commented on a diff in pull request #12124: [Relay][Op] Trilu operator implementation

2022-07-26 Thread GitBox


jwfromm commented on code in PR #12124:
URL: https://github.com/apache/tvm/pull/12124#discussion_r930487887


##
tests/python/frontend/onnx/test_forward.py:
##
@@ -5241,23 +5241,7 @@ def verify_eyelike(indata, dynamic=False):
 "test_training_dropout_mask",
 "test_training_dropout_zero_ratio",
 "test_training_dropout_zero_ratio_mask",
-"test_tril",
-"test_tril_pos",
-"test_tril_square",
-"test_tril_square_neg",
-"test_tril_neg",
-"test_tril_one_row_neg",
-"test_tril_out_neg",
-"test_tril_out_pos",
 "test_tril_zero",

Review Comment:
   It actually works on llvm and cuda. I was testing on my MacBook, and it seems like the Metal backend in general doesn't support empty tensors. I think for CI we could add these cases.






[GitHub] [tvm] vinx13 opened a new pull request, #12196: Use std::move to avoid warnings on clang-13

2022-07-26 Thread GitBox


vinx13 opened a new pull request, #12196:
URL: https://github.com/apache/tvm/pull/12196

   Returning an instance of a subclass of the return type disables copy elision, so `std::move` is needed in this case.
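
   For readers unfamiliar with the issue, here is a small self-contained sketch (names are illustrative, not from the TVM code) of the pattern the clang warning is about:

   ```cpp
   #include <string>
   #include <utility>

   struct Base {
     std::string payload;
   };

   struct Derived : Base {};

   // Returning a local whose type differs from the declared return type
   // prevents copy elision (NRVO); without std::move the Derived object is
   // copied into the Base return value, and clang (e.g. -Wreturn-std-move)
   // warns about the missed move.
   Base MakeCopied() {
     Derived d;
     d.payload = "large buffer";
     return d;             // copy: types differ, so elision is not possible
   }

   Base MakeMoved() {
     Derived d;
     d.payload = "large buffer";
     return std::move(d);  // move: silences the warning and avoids the copy
   }

   int main() {
     Base a = MakeCopied();
     Base b = MakeMoved();
     return static_cast<int>(a.payload.size() + b.payload.size());
   }
   ```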





[tvm] branch skip_android_rpc updated (9ab27d35d4 -> 9c45394ca3)

2022-07-26 Thread driazati
This is an automated email from the ASF dual-hosted git repository.

driazati pushed a change to branch skip_android_rpc
in repository https://gitbox.apache.org/repos/asf/tvm.git


 discard 9ab27d35d4 [ci] Skip broken android_rpc failures
 add 9c45394ca3 [ci] Skip broken android_rpc failures

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version.  This situation occurs
when a user --force pushes a change and generates a repository
containing something like this:

 * -- * -- B -- O -- O -- O   (9ab27d35d4)
\
 N -- N -- N   refs/heads/skip_android_rpc (9c45394ca3)

You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.

Any revisions marked "omit" are not gone; other references still
refer to them.  Any revisions marked "discard" are gone forever.

No new revisions were added by this update.

Summary of changes:
 .github/workflows/main.yml | 4 
 1 file changed, 4 insertions(+)



[GitHub] [tvm] cconvey commented on pull request #12193: [util] add `tvm.support.dump_for_debug` function

2022-07-26 Thread GitBox


cconvey commented on PR #12193:
URL: https://github.com/apache/tvm/pull/12193#issuecomment-1196041454

   @areusch : This seems like the kind of thing you'd have thoughts on :) 





[GitHub] [tvm] github-actions[bot] commented on pull request #12188: [docs] Update tlcpack-sphinx-addon

2022-07-26 Thread GitBox


github-actions[bot] commented on PR #12188:
URL: https://github.com/apache/tvm/pull/12188#issuecomment-1196033210

   
   
   Built docs for commit 3ae82db33c2d5b186f1a3548bcd22d1f36fc4552 can be found 
[here](https://pr-docs.tlcpack.ai/PR-12188/1/docs/index.html).





[GitHub] [tvm] cconvey commented on issue #12191: [ci] Build android_rpc fails in CI

2022-07-26 Thread GitBox


cconvey commented on issue #12191:
URL: https://github.com/apache/tvm/issues/12191#issuecomment-1196029860

   Thanks for reporting this @driazati !  





[GitHub] [tvm] cconvey commented on pull request #12195: [hexagon][testing] filesystem-friendly test IDs

2022-07-26 Thread GitBox


cconvey commented on PR #12195:
URL: https://github.com/apache/tvm/pull/12195#issuecomment-1196026638

   CC: @mehrdadh 





[GitHub] [tvm] cconvey opened a new pull request, #12195: [hexagon][testing] filesystem-friendly test IDs

2022-07-26 Thread GitBox


cconvey opened a new pull request, #12195:
URL: https://github.com/apache/tvm/pull/12195

   - Change the formula used to compute pytest test-ID strings.
 The previous formula included '(' and ')' characters, which
 can cause the filename to require escaping / quoting on the
 Bash command line.





[GitHub] [tvm] cconvey commented on pull request #12194: [runtime][hexagon] improved file-copy logic

2022-07-26 Thread GitBox


cconvey commented on PR #12194:
URL: https://github.com/apache/tvm/pull/12194#issuecomment-1196023022

   CC: @kparzysz-quic @csullivan 





[GitHub] [tvm] cconvey opened a new pull request, #12194: [runtime][hexagon] improved file-copy logic

2022-07-26 Thread GitBox


cconvey opened a new pull request, #12194:
URL: https://github.com/apache/tvm/pull/12194

   - Add `tvm::runtime::CopyFile` function.
   
   - Change `HexagonModuleNode::SaveToFile` to use new function
 instead of a shell `cp` invocation.
   
 This fixes a problem where the `cp`-based implementation
 couldn't handle certain valid filenames.
   
 This also fixes a bug where `SaveToFile` simply skips the
 file-copying step on Mac OSX.
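
   For illustration, a minimal stream-based copy sketch (hypothetical name and signature; the actual `tvm::runtime::CopyFile` added by this PR may differ) that avoids invoking a shell and therefore handles filenames that a `cp` invocation would mangle:

   ```cpp
   #include <fstream>
   #include <stdexcept>
   #include <string>

   // Hypothetical sketch, not the PR's implementation.
   void CopyFileSketch(const std::string& src_path, const std::string& dst_path) {
     std::ifstream src(src_path, std::ios::binary);
     if (!src) {
       throw std::runtime_error("cannot open source file: " + src_path);
     }
     std::ofstream dst(dst_path, std::ios::binary | std::ios::trunc);
     if (!dst) {
       throw std::runtime_error("cannot open destination file: " + dst_path);
     }
     dst << src.rdbuf();  // stream the whole file; no shell, so odd filenames are fine
     if (dst.bad() || src.bad()) {
       throw std::runtime_error("I/O error while copying " + src_path + " to " + dst_path);
     }
   }
   ```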





[GitHub] [tvm] github-actions[bot] commented on pull request #12192: [ci] Skip broken android_rpc failures

2022-07-26 Thread GitBox


github-actions[bot] commented on PR #12192:
URL: https://github.com/apache/tvm/pull/12192#issuecomment-1196022432

   
   
   Built docs for commit 9ab27d35d4e56e37fea928e73fe989af985fa40a can be found 
[here](https://pr-docs.tlcpack.ai/PR-12192/2/docs/index.html).





[GitHub] [tvm] cconvey commented on pull request #12193: [util] add `tvm.support.dump_for_debug` function

2022-07-26 Thread GitBox


cconvey commented on PR #12193:
URL: https://github.com/apache/tvm/pull/12193#issuecomment-1196019808

   I find this function helpful for my own development work.  I'm not positive 
that `tvm/support.py` is the best home for it, but it would be nice to have it 
around.





[GitHub] [tvm] cconvey opened a new pull request, #12193: [util] add `tvm.support.dump_for_debug` function

2022-07-26 Thread GitBox


cconvey opened a new pull request, #12193:
URL: https://github.com/apache/tvm/pull/12193

   - add a utility function to help with debugging / investigation
   
 The function is particularly geared to help programmers examine
 changes to TVM modules and functions as they progress through
 the scheduling and lowering.
   





[tvm] branch skip_android_rpc updated (815455566e -> 9ab27d35d4)

2022-07-26 Thread driazati
This is an automated email from the ASF dual-hosted git repository.

driazati pushed a change to branch skip_android_rpc
in repository https://gitbox.apache.org/repos/asf/tvm.git


omit 815455566e [ci] Skip broken android_rpc failures
 add 9ab27d35d4 [ci] Skip broken android_rpc failures

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version.  This situation occurs
when a user --force pushes a change and generates a repository
containing something like this:

 * -- * -- B -- O -- O -- O   (815455566e)
\
 N -- N -- N   refs/heads/skip_android_rpc (9ab27d35d4)

You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.

Any revisions marked "omit" are not gone; other references still
refer to them.  Any revisions marked "discard" are gone forever.

No new revisions were added by this update.

Summary of changes:
 .github/workflows/main.yml | 22 --
 1 file changed, 12 insertions(+), 10 deletions(-)



[tvm] branch skip_android_rpc created (now 815455566e)

2022-07-26 Thread driazati
This is an automated email from the ASF dual-hosted git repository.

driazati pushed a change to branch skip_android_rpc
in repository https://gitbox.apache.org/repos/asf/tvm.git


  at 815455566e [ci] Skip broken android_rpc failures

No new revisions were added by this update.



[GitHub] [tvm] driazati opened a new pull request, #12192: [ci] Skip broken android_rpc failures

2022-07-26 Thread GitBox


driazati opened a new pull request, #12192:
URL: https://github.com/apache/tvm/pull/12192

   See #12191





[GitHub] [tvm] driazati opened a new issue, #12191: [ci] Build android_rpc fails in CI

2022-07-26 Thread GitBox


driazati opened a new issue, #12191:
URL: https://github.com/apache/tvm/issues/12191

   This has been failing intermittently but with increasing frequency lately:
   
   https://github.com/apache/tvm/runs/7519810828?check_suite_focus=true
   
   cc @argrento 





[GitHub] [tvm] gromero commented on a diff in pull request #12182: [microTVM][tutorial] AOT host-driven tutorial with TFLite model

2022-07-26 Thread GitBox


gromero commented on code in PR #12182:
URL: https://github.com/apache/tvm/pull/12182#discussion_r930436112


##
gallery/how_to/work_with_microtvm/micro_aot.py:
##
@@ -125,7 +125,7 @@
 #
 # Now that we have the compiled model as an IRModule, we need to create a 
firmware project
 # to use the compiled model with microTVM. To do this, we use Project API. We 
have defined
-# CRT and Zephyr microTVM template projects which are used for x86 CPU and 
Zephyr platforms
+# CRT and Zephyr microTVM template projects which are used for x86 CPU and 
Zephyr boards
 # respectively.

Review Comment:
   haha I missed it previously too! :)






[GitHub] [tvm] driazati opened a new pull request, #12190: Update to 0.10.0

2022-07-26 Thread GitBox


driazati opened a new pull request, #12190:
URL: https://github.com/apache/tvm/pull/12190

   This updates the version numbers after the v0.9.0 release and adds a version 
selector option for the v0.9.0 docs.





[GitHub] [tvm] mehrdadh commented on a diff in pull request #12182: [microTVM][tutorial] AOT host-driven tutorial with TFLite model

2022-07-26 Thread GitBox


mehrdadh commented on code in PR #12182:
URL: https://github.com/apache/tvm/pull/12182#discussion_r930431666


##
gallery/how_to/work_with_microtvm/micro_aot.py:
##
@@ -125,7 +125,7 @@
 #
 # Now that we have the compiled model as an IRModule, we need to create a 
firmware project
 # to use the compiled model with microTVM. To do this, we use Project API. We 
have defined
-# CRT and Zephyr microTVM template projects which are used for x86 CPU and 
Zephyr platforms
+# CRT and Zephyr microTVM template projects which are used for x86 CPU and 
Zephyr boards
 # respectively.

Review Comment:
   There should be; for some reason my eyes were seeing it there, but I didn't actually put it there lol






[GitHub] [tvm] gromero commented on a diff in pull request #12182: [microTVM][tutorial] AOT host-driven tutorial with TFLite model

2022-07-26 Thread GitBox


gromero commented on code in PR #12182:
URL: https://github.com/apache/tvm/pull/12182#discussion_r930403762


##
gallery/how_to/work_with_microtvm/micro_aot.py:
##
@@ -125,7 +125,7 @@
 #
 # Now that we have the compiled model as an IRModule, we need to create a 
firmware project
 # to use the compiled model with microTVM. To do this, we use Project API. We 
have defined
-# CRT and Zephyr microTVM template projects which are used for x86 CPU and 
Zephyr platforms
+# CRT and Zephyr microTVM template projects which are used for x86 CPU and 
Zephyr boards
 # respectively.

Review Comment:
   I'm not a native English speaker, but I think that ideally there must be a 
comma before "respectively", so it's up to you to add it or not :) I won't 
block on this nit, so just saying in case you need to re-spin the PR after some 
other review comment and if you can confirm that is indeed correct ;)






[GitHub] [tvm] mehrdadh commented on pull request #12182: [microTVM][tutorial] AOT host-driven tutorial with TFLite model

2022-07-26 Thread GitBox


mehrdadh commented on PR #12182:
URL: https://github.com/apache/tvm/pull/12182#issuecomment-1195938488

   @gromero thanks for the second review. I addressed the comments.





[GitHub] [tvm] mehrdadh commented on a diff in pull request #12182: [microTVM][tutorial] AOT host-driven tutorial with TFLite model

2022-07-26 Thread GitBox


mehrdadh commented on code in PR #12182:
URL: https://github.com/apache/tvm/pull/12182#discussion_r930375549


##
gallery/how_to/work_with_microtvm/micro_aot.py:
##
@@ -110,11 +121,11 @@
 
 ##
 # Create a microTVM project
-# ---
+# -
 #
-# Now that we have the comipled model as an IRModule, we need to create a 
project
-# with the compiled model in microTVM. To do this, we use Project API. We have 
defined
-# CRT and Zephyr microTVM template projects which are used for X86 CPU and 
Zephyr platforms
+# Now that we have the compiled model as an IRModule, we need to create a 
firmware project
+# to use the compiled model with microTVM. To do this, we use Project API. We 
have defined
+# CRT and Zephyr microTVM template projects which are used for x86 CPU and 
Zephyr platforms

Review Comment:
   boards sounds good






[GitHub] [tvm] guberti opened a new pull request, #12189: [microTVM] Fix timeout of -1 breaking Arduino transport

2022-07-26 Thread GitBox


guberti opened a new pull request, #12189:
URL: https://github.com/apache/tvm/pull/12189

   Fixes a small bug causing `test_arduino_workflow.py` to fail. Also updates a comment in `boards.json` to reflect a GitHub issue that has since been finished.





[GitHub] [tvm] tqchen commented on issue #11707: [Bug][TVMC] Map is not supported by RPC

2022-07-26 Thread GitBox


tqchen commented on issue #11707:
URL: https://github.com/apache/tvm/issues/11707#issuecomment-1195891572

   This is the intended behavior of RPC for now, to restrict the objects that we can support (so we can support minimal cases like uTVM). It would be a good starting point to document the related behavior.





[GitHub] [tvm] zxybazh commented on issue #12135: [Bug] Metaschedule nonexistent error message

2022-07-26 Thread GitBox


zxybazh commented on issue #12135:
URL: https://github.com/apache/tvm/issues/12135#issuecomment-1195889152

   Corrected TIR would not cause the problem; please define the write buffer for the root block as follows:
   ```
   @tvm.script.ir_module
   class Module:
       @T.prim_func
       def bad_message(
           B: T.Buffer[(64), "float32"],
           B_pack: T.Buffer[(8, 8), "float32"],
       ):
           with T.block("root"):
               T.reads()
               T.writes()
               for jo in range(8):
                   for ji in range(8):
                       with T.block():
                           vo, vi = T.axis.remap("SS", [jo, ji])
                           B_pack[vo, vi] = B[vo * 8 + vi]
   ```
   Still, the problem lies in the bad error-reporting message; I will work on that later.





[GitHub] [tvm] tqchen commented on issue #8473: [RFC][Tracking Issue] Meta Schedule (AutoTIR)

2022-07-26 Thread GitBox


tqchen commented on issue #8473:
URL: https://github.com/apache/tvm/issues/8473#issuecomment-1195888767

   Would be good to get a status update @junrushao1994. I would suggest we move the follow-up non-infra parts to separate tracking issues to keep things traceable.





[GitHub] [tvm] tqchen commented on issue #9183: [Tracking Issue] C Device API implementation

2022-07-26 Thread GitBox


tqchen commented on issue #9183:
URL: https://github.com/apache/tvm/issues/9183#issuecomment-1195888170

   We should perhaps close this tracking issue?





[GitHub] [tvm] zxybazh commented on a diff in pull request #12184: [MetaSchedule] Remove Root Block From Collector

2022-07-26 Thread GitBox


zxybazh commented on code in PR #12184:
URL: https://github.com/apache/tvm/pull/12184#discussion_r930334204


##
src/meta_schedule/space_generator/post_order_apply.cc:
##
@@ -55,8 +55,10 @@ class BlockCollector : public tir::StmtVisitor {
     CHECK(block_names_.count(block->name_hint) == 0)
         << "Duplicated block name " << block->name_hint << " in function " << func_name_
         << " not supported!";
-    block_names_.insert(block->name_hint);
-    blocks_to_collect_.push_back(block->name_hint);
+    if (block->name_hint != "root") {

Review Comment:
   Thanks for the suggestion. After digging in some more, I found there is no issue 
with the corrected TIR, so I will simply close the PR.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [tvm] zxybazh closed pull request #12184: [MetaSchedule] Remove Root Block From Collector

2022-07-26 Thread GitBox


zxybazh closed pull request #12184: [MetaSchedule] Remove Root Block From 
Collector
URL: https://github.com/apache/tvm/pull/12184


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [tvm] zxybazh commented on pull request #12184: [MetaSchedule] Remove Root Block From Collector

2022-07-26 Thread GitBox


zxybazh commented on PR #12184:
URL: https://github.com/apache/tvm/pull/12184#issuecomment-1195887993

   After some local discussion with Junru, I found that the problem lies in the TIR 
script. As long as we declare the write buffers for the root block (it should have 
an empty write region), auto-inline is not applied to the root block, so the 
problem does not occur. Therefore, I'm closing the PR.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [tvm] gromero commented on a diff in pull request #12182: [microTVM][tutorial] AOT host-driven tutorial with TFLite model

2022-07-26 Thread GitBox


gromero commented on code in PR #12182:
URL: https://github.com/apache/tvm/pull/12182#discussion_r930304436


##
gallery/how_to/work_with_microtvm/micro_aot.py:
##
@@ -24,8 +24,10 @@
 `Alan MacDonald `_
 
 This tutorial is showcasing microTVM host-driven AoT compilation with
-a TFLite model. This tutorial can be executed on a X86 CPU using C runtime 
(CRT)
-or on Zephyr plarform on a microcontroller that supports Zephyr platform.
+a TFLite model. AoTExecutor reduces the overhead of parsing graph at runtime 
+compared to GraphExecutor. Also, we can have better memory management using 
Ahead 

Review Comment:
   I think using lower case here for "ahead" is better, otherwise looks good!



##
gallery/how_to/work_with_microtvm/micro_aot.py:
##
@@ -110,11 +121,11 @@
 
 ##
 # Create a microTVM project
-# ---
+# -
 #
-# Now that we have the comipled model as an IRModule, we need to create a 
project
-# with the compiled model in microTVM. To do this, we use Project API. We have 
defined
-# CRT and Zephyr microTVM template projects which are used for X86 CPU and 
Zephyr platforms
+# Now that we have the compiled model as an IRModule, we need to create a 
firmware project
+# to use the compiled model with microTVM. To do this, we use Project API. We 
have defined
+# CRT and Zephyr microTVM template projects which are used for x86 CPU and 
Zephyr platforms

Review Comment:
   would "Zephyr platform (singular)" be better here? I know the boards in 
Zephyr by themselves can be considered also "platforms" but since we are using 
the Project API and in the TVMC we consider Zephyr and Arduino platforms. Or 
even s/platforms/boards/ ? 



##
gallery/how_to/work_with_microtvm/micro_aot.py:
##
@@ -24,8 +24,10 @@
 `Alan MacDonald `_
 
 This tutorial is showcasing microTVM host-driven AoT compilation with
-a TFLite model. This tutorial can be executed on a X86 CPU using C runtime 
(CRT)
-or on Zephyr plarform on a microcontroller that supports Zephyr platform.
+a TFLite model. AoTExecutor reduces the overhead of parsing graph at runtime 
+compared to GraphExecutor. Also, we can have better memory management using 
Ahead 
+of time compilation. This tutorial can be executed on a x86 CPU using C 
runtime (CRT)
+or on Zephyr platform on a microcontroller that supports Zephyr platform.

Review Comment:
   @mehrdadh I would say rather: "or on Zephyr platform on a microcontroller 
that is supported by Zephyr" or, shorter: "... or on a microcontroller/board 
supported by Zephyr".



##
gallery/how_to/work_with_microtvm/micro_aot.py:
##
@@ -81,20 +85,27 @@
 #
 # Now we need to define the target, runtime and executor. In this tutorial, we 
focused on
 # using AOT host driven executor. We use the host micro target which is for 
running a model
-# on X86 CPU using CRT runtime or running a model with Zephyr platform on 
qemu_x86 simulator
-# board. In the case of a physical microcontoller, we get the target model for 
the physical
-# board (E.g. nucleo_f746zg) and pass it to `tvm.target.target.micro` to 
create a full
+# on x86 CPU using CRT runtime or running a model with Zephyr platform on 
qemu_x86 simulator
+# board. In the case of a physical microcontroller, we get the target model 
for the physical
+# board (E.g. nucleo_l4r5zi) and pass it to `tvm.target.target.micro` to 
create a full
 # micro target.
 #
+
+# Use the C runtime (crt) and enable static linking by setting system-lib to 
True

Review Comment:
   +1 for adding explanation for `system-lib` : ) 



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [tvm] zxybazh commented on a diff in pull request #12141: [Meta Schedule][XGBoost] Update the custom callback function of xgboost in meta schedule

2022-07-26 Thread GitBox


zxybazh commented on code in PR #12141:
URL: https://github.com/apache/tvm/pull/12141#discussion_r930315114


##
python/tvm/meta_schedule/cost_model/xgb_model.py:
##
@@ -763,3 +768,162 @@ def callback(env: "xgb.core.CallbackEnv"):
             raise EarlyStopException(best_iteration)
 
     return callback
+
+
+class XGBoostCallback(TrainingCallback):
+    """Base class for XGBoost callbacks."""
+
+    def __call__(self, env: "xgb.core.CallbackEnv"):
+        # Compatibility with xgboost < 1.3
+        return self.after_iteration(env.model, env.iteration, env.evaluation_result_list)

Review Comment:
   Hi, thanks for keeping me updated. To clarify the unit test requirement:
   1. In CI we have xgboost pinned to a certain version (1.4.2), which should 
support both the new API and the old API.
   2. The current unit tests will exercise the xgboost model through the new API 
here, because that version is after 1.3.
   3. I would like to suggest an additional standalone unit test to make sure the 
old API works fine as well. For example, directly calling the callback class as a 
function to show it can work with earlier versions (a rough sketch follows below).
   4. Besides unit tests, I also suggested @shingjan conduct a local integration test.
   
   Thanks again for paying close attention! We are prioritizing meta schedule so 
that when we review the auto scheduler changes we will be aligned on how the tests 
are done and how they impact the tuning system. On the auto scheduler side, I would 
suggest the same unit tests to demonstrate compatibility. The auto scheduler's cost 
model unit tests already include a simple regression quality test, which should be 
enough as the test of the new API.
   
   For integration tests, we usually don't put them in CI because they can take a 
lot of time. Let me know if you are interested in how to conduct local tuning and 
integration tests; it takes a bit longer but would be fun. We can also run some 
local integration tests to verify your PR for you if you have limited time to work 
on that : )
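   As a rough illustration of point 3 (a sketch only, not actual TVM test code; 
`RecordingCallback` and `FakeEnv` are hypothetical names), such a standalone test 
could exercise both entry points, assuming xgboost >= 1.3 is installed:
   ```
   from collections import namedtuple

   from xgboost.callback import TrainingCallback


   class RecordingCallback(TrainingCallback):
       """Stand-in for the callback class under test (illustrative only)."""

       def __init__(self):
           super().__init__()
           self.seen_epochs = []

       def after_iteration(self, model, epoch, evals_log):
           # New-style (>= 1.3) hook; returning False lets training continue.
           self.seen_epochs.append(epoch)
           return False

       def __call__(self, env):
           # Old-style (< 1.3) entry point, forwarding to the new-style hook.
           return self.after_iteration(env.model, env.iteration, env.evaluation_result_list)


   def test_callback_supports_old_and_new_api():
       cb = RecordingCallback()

       # New API: xgboost calls after_iteration directly.
       assert cb.after_iteration(model=None, epoch=0, evals_log={}) is False

       # Old API: xgboost calls the object with a CallbackEnv-like record.
       FakeEnv = namedtuple("FakeEnv", ["model", "iteration", "evaluation_result_list"])
       assert cb(FakeEnv(model=None, iteration=1, evaluation_result_list=[])) is False

       assert cb.seen_epochs == [0, 1]
   ```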



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [tvm] driazati opened a new pull request, #12188: [docs] Update tlcpack-sphinx-addon

2022-07-26 Thread GitBox


driazati opened a new pull request, #12188:
URL: https://github.com/apache/tvm/pull/12188

   This includes 
https://github.com/tlc-pack/tlcpack-sphinx-addon/commit/545450acaf0ee4e2932d8c5d9ab6e321d0bc86c8
 which fixes the sphinx-gallery cards and closes #12156


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [tvm] areusch closed issue #12156: [Docs] Clicking on chapters in User Tutorial doesn't do anything

2022-07-26 Thread GitBox


areusch closed issue #12156: [Docs]  Clicking on chapters in User Tutorial 
doesn't do anything
URL: https://github.com/apache/tvm/issues/12156


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [tvm] Sunny-Island commented on a diff in pull request #12141: [Meta Schedule][XGBoost] Update the custom callback function of xgboost in meta schedule

2022-07-26 Thread GitBox


Sunny-Island commented on code in PR #12141:
URL: https://github.com/apache/tvm/pull/12141#discussion_r930296373


##
python/tvm/meta_schedule/cost_model/xgb_model.py:
##
@@ -763,3 +768,162 @@ def callback(env: "xgb.core.CallbackEnv"):
             raise EarlyStopException(best_iteration)
 
     return callback
+
+
+class XGBoostCallback(TrainingCallback):
+    """Base class for XGBoost callbacks."""
+
+    def __call__(self, env: "xgb.core.CallbackEnv"):
+        # Compatibility with xgboost < 1.3
+        return self.after_iteration(env.model, env.iteration, env.evaluation_result_list)

Review Comment:
   Hi, I am working on the auto_scheduler xgboost upgrade and have a question about 
your suggestion 1: do you mean one unit test is necessary for the new API, plus 
another test in CI for it? That is, two unit tests for the new API?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [tvm] shingjan commented on a diff in pull request #12144: [Auto Scheduler] Upgrade autoscheduler xgboost callback

2022-07-26 Thread GitBox


shingjan commented on code in PR #12144:
URL: https://github.com/apache/tvm/pull/12144#discussion_r930273242


##
python/tvm/auto_scheduler/cost_model/xgb_model.py:
##
@@ -539,125 +539,128 @@ def feval(preds, labels):
     return feval
 
 
-def custom_callback(
-    stopping_rounds,
-    metric,
-    fevals,
-    evals=(),
-    log_file=None,
-    maximize=False,
-    verbose_eval=True,
-    skip_every=2,
-):
-    """Callback function for xgboost to support multiple custom evaluation functions"""
-    # pylint: disable=import-outside-toplevel
-    from xgboost.core import EarlyStopException
-    from xgboost.callback import _fmt_metric
-
-    try:
-        from xgboost.training import aggcv
-    except ImportError:
-        from xgboost.callback import _aggcv as aggcv
-
-    state = {}
-    metric_shortname = metric.split("-")[1]
-
-    def init(env):
-        """internal function"""
-        bst = env.model
-
-        state["maximize_score"] = maximize
-        state["best_iteration"] = 0
-        if maximize:
-            state["best_score"] = float("-inf")
-        else:
-            state["best_score"] = float("inf")
+class CustomCallback(callback.TrainingCallback):

Review Comment:
   Some unit tests would be appreciated! I am working on a few on the meta 
schedule side
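   For reference, a minimal sketch (assuming xgboost >= 1.3; the callback class and 
data here are toy illustrations, not the actual auto_scheduler code) of how a 
`TrainingCallback` subclass is passed to `xgb.train` through the `callbacks` argument:
   ```
   import numpy as np
   import xgboost as xgb
   from xgboost.callback import TrainingCallback


   class TrackBestCallback(TrainingCallback):
       """Toy callback: remember the best train-rmse seen so far."""

       def __init__(self):
           super().__init__()
           self.best = float("inf")

       def after_iteration(self, model, epoch, evals_log):
           rmse = evals_log["train"]["rmse"][-1]
           self.best = min(self.best, rmse)
           return False  # returning True would stop training early


   data = np.random.rand(64, 4)
   label = data.sum(axis=1)
   dtrain = xgb.DMatrix(data, label=label)

   tracker = TrackBestCallback()
   bst = xgb.train(
       {"objective": "reg:squarederror"},
       dtrain,
       num_boost_round=5,
       evals=[(dtrain, "train")],
       callbacks=[tracker],  # callback objects instead of plain functions
   )
   print(tracker.best)
   ```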



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [tvm] driazati commented on issue #12156: [Docs] Clicking on chapters in User Tutorial doesn't do anything

2022-07-26 Thread GitBox


driazati commented on issue #12156:
URL: https://github.com/apache/tvm/issues/12156#issuecomment-1195812894

   Thanks for reporting! Once 
https://github.com/tlc-pack/tlcpack-sphinx-addon/pull/8 is merged (that should 
fix this issue), we can update the version of `tlcpack-sphinx-addon` we use in 
tvm and this should be fixed on the live website


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [tvm] driazati closed issue #12156: [Docs] Clicking on chapters in User Tutorial doesn't do anything

2022-07-26 Thread GitBox


driazati closed issue #12156: [Docs]  Clicking on chapters in User Tutorial 
doesn't do anything
URL: https://github.com/apache/tvm/issues/12156


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [tvm] mehrdadh commented on a diff in pull request #12182: [microTVM][tutorial] AOT host-driven tutorial with TFLite model

2022-07-26 Thread GitBox


mehrdadh commented on code in PR #12182:
URL: https://github.com/apache/tvm/pull/12182#discussion_r930240740


##
gallery/how_to/work_with_microtvm/micro_aot.py:
##
@@ -0,0 +1,162 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""
+.. _tutorial-micro-AoT:
+
+microTVM Host-Driven AoT
+===
+**Authors**:
+`Mehrdad Hessar `_,
+`Alan MacDonald `_
+
+This tutorial is showcasing microTVM host-driven AoT compilation with
+a TFLite model. This tutorial can be executed on a X86 CPU using C runtime 
(CRT)
+or on Zephyr plarform on a microcontroller that supports Zephyr platform.

Review Comment:
   done



##
gallery/how_to/work_with_microtvm/micro_aot.py:
##
@@ -0,0 +1,162 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""
+.. _tutorial-micro-AoT:
+
+microTVM Host-Driven AoT
+===
+**Authors**:
+`Mehrdad Hessar `_,
+`Alan MacDonald `_
+
+This tutorial is showcasing microTVM host-driven AoT compilation with
+a TFLite model. This tutorial can be executed on a X86 CPU using C runtime 
(CRT)
+or on Zephyr plarform on a microcontroller that supports Zephyr platform.
+"""
+
+import numpy as np
+import pathlib
+import json
+import os
+
+import tvm
+from tvm import relay
+from tvm.relay.backend import Executor, Runtime
+from tvm.contrib.download import download_testdata
+
+##
+# Import a TFLite model
+# -
+#
+# To begin with, download and import a TFLite model from TinyMLPerf models.

Review Comment:
   I added a comment about the origin of the model and fixed the MLPerf Tiny name.



##
gallery/how_to/work_with_microtvm/micro_aot.py:
##
@@ -0,0 +1,162 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""
+.. _tutorial-micro-AoT:
+
+microTVM Host-Driven AoT
+===
+**Authors**:
+`Mehrdad Hessar `_,
+`Alan MacDonald `_
+
+This tutorial is showcasing microTVM host-driven AoT compilation with

Review Comment:
   added a few comments, please take another look.



##
gallery/how_to/work_with_microtvm/micro_aot.py:
##
@@ -0,0 +1,168 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may 

[GitHub] [tvm] mkatanbaf commented on a diff in pull request #12125: Zephyr fvp support

2022-07-26 Thread GitBox


mkatanbaf commented on code in PR #12125:
URL: https://github.com/apache/tvm/pull/12125#discussion_r930227788


##
apps/microtvm/zephyr/template_project/src/host_driven/main.c:
##
@@ -218,11 +303,42 @@ void uart_irq_cb(const struct device* dev, void* 
user_data) {
   }
 }
 
+#ifdef FVP

Review Comment:
   done



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [tvm] mikepapadim commented on issue #12109: [ci] CPU integration tests are bottleneck-ing CI

2022-07-26 Thread GitBox


mikepapadim commented on issue #12109:
URL: https://github.com/apache/tvm/issues/12109#issuecomment-1195740286

   I am looking into this.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [tvm] TejashShah commented on pull request #11878: [Adreno] Add markup pass of relay tensors for static texture planning

2022-07-26 Thread GitBox


TejashShah commented on PR #11878:
URL: https://github.com/apache/tvm/pull/11878#issuecomment-1195739065

@masahi @csullivan @junrushao1994, please take some time to review this 
code and offer some feedback to @elvin-n. Thanks.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [tvm] driazati commented on a diff in pull request #12178: Build and test TVM under minimal configuration

2022-07-26 Thread GitBox


driazati commented on code in PR #12178:
URL: https://github.com/apache/tvm/pull/12178#discussion_r930197046


##
ci/jenkins/Test.groovy.j2:
##
@@ -199,6 +234,9 @@ stage('Test') {
 {{ method_name }}()
   },
   {% endfor %}
+  'unittest: CPU MINIMAL': {

Review Comment:
   I meant something like 
https://github.com/apache/tvm/blob/main/ci/jenkins/Test.groovy.j2#L202-L218; if 
you do that, it will automatically call the test method.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [tvm] tkonolige commented on a diff in pull request #12138: [FIX,TIR] Handle LetStmt in EstimateTIRFLops

2022-07-26 Thread GitBox


tkonolige commented on code in PR #12138:
URL: https://github.com/apache/tvm/pull/12138#discussion_r930195958


##
tests/python/unittest/test_tir_analysis_estimate_tir_flops.py:
##
@@ -48,5 +50,16 @@ def test_te_workload(workload, flops):
     assert float(flops) == estimate_tir_flops(mod)
 
 
+@T.prim_func
+def flops_with_let(a: T.Buffer[16, "float32"]):
+    for i in range(8):
+        j = i + 8
+        a[j] = a[i]
+
+
+def test_flops_with_let():
+    estimate_tir_flops(IRModule({"main": flops_with_let}))

Review Comment:
   Good point, done.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [tvm] gigiblender commented on a diff in pull request #12178: Build and test TVM under minimal configuration

2022-07-26 Thread GitBox


gigiblender commented on code in PR #12178:
URL: https://github.com/apache/tvm/pull/12178#discussion_r930113155


##
ci/jenkins/Test.groovy.j2:
##
@@ -199,6 +234,9 @@ stage('Test') {
 {{ method_name }}()
   },
   {% endfor %}
+  'unittest: CPU MINIMAL': {

Review Comment:
   Ok, I am adding a new macro



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [tvm] NicolaLancellotti commented on pull request #11519: [TFLite] Support quantized GREATER op in TFLite frontend

2022-07-26 Thread GitBox


NicolaLancellotti commented on PR #11519:
URL: https://github.com/apache/tvm/pull/11519#issuecomment-1195634974

   With tflite 2.9.1 `test_elemwise[_test_greater]` does not fail any more. We 
should wait for #12130 and #12131 to be merged.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [tvm] mbrookhart commented on a diff in pull request #12124: [Relay][Op] Trilu operator implementation

2022-07-26 Thread GitBox


mbrookhart commented on code in PR #12124:
URL: https://github.com/apache/tvm/pull/12124#discussion_r930088467


##
tests/python/frontend/onnx/test_forward.py:
##
@@ -5241,23 +5241,7 @@ def verify_eyelike(indata, dynamic=False):
 "test_training_dropout_mask",
 "test_training_dropout_zero_ratio",
 "test_training_dropout_zero_ratio_mask",
-"test_tril",
-"test_tril_pos",
-"test_tril_square",
-"test_tril_square_neg",
-"test_tril_neg",
-"test_tril_one_row_neg",
-"test_tril_out_neg",
-"test_tril_out_pos",
 "test_tril_zero",

Review Comment:
   I haven't looked at this op at all. How tricky would it be to support the 
zero case? Otherwise LGTM.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [tvm] areusch commented on a diff in pull request #12182: [microTVM][tutorial] AOT host-driven tutorial with TFLite model

2022-07-26 Thread GitBox


areusch commented on code in PR #12182:
URL: https://github.com/apache/tvm/pull/12182#discussion_r930059752


##
gallery/how_to/work_with_microtvm/micro_aot.py:
##
@@ -0,0 +1,168 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""
+.. _tutorial-micro-AoT:
+
+microTVM Host-Driven AoT
+===
+**Authors**:
+`Mehrdad Hessar `_,
+`Alan MacDonald `_
+
+This tutorial is showcasing microTVM host-driven AoT compilation with
+a TFLite model. This tutorial can be executed on a X86 CPU using C runtime 
(CRT)
+or on Zephyr plarform on a microcontroller that supports Zephyr platform.
+"""
+
+# sphinx_gallery_start_ignore
+from tvm import testing
+
+testing.utils.install_request_hook(depth=3)
+# sphinx_gallery_end_ignore
+
+import numpy as np
+import pathlib
+import json
+import os
+
+import tvm
+from tvm import relay
+from tvm.relay.backend import Executor, Runtime
+from tvm.contrib.download import download_testdata
+
+##
+# Import a TFLite model
+# -
+#
+# To begin with, download and import a TFLite model from TinyMLPerf models.
+#
+# **Note:** By default this tutorial runs on X86 CPU using CRT, if you would 
like to run on Zephyr platform
+# you need to export `TVM_MICRO_USE_HW` environment variable.
+#
+use_physical_hw = bool(os.getenv("TVM_MICRO_USE_HW"))
+MODEL_URL = 
"https://github.com/tlc-pack/web-data/raw/main/testdata/microTVM/model/keyword_spotting_quant.tflite;
+MODEL_PATH = download_testdata(MODEL_URL, "keyword_spotting_quant.tflite", 
module="model")
+SAMPLE_URL = 
"https://github.com/tlc-pack/web-data/raw/main/testdata/microTVM/data/keyword_spotting_int8_6.pyc.npy;
+SAMPLE_PATH = download_testdata(SAMPLE_URL, "keyword_spotting_int8_6.pyc.npy", 
module="data")
+
+tflite_model_buf = open(MODEL_PATH, "rb").read()
+try:
+import tflite
+
+tflite_model = tflite.Model.GetRootAsModel(tflite_model_buf, 0)
+except AttributeError:
+import tflite.Model
+
+tflite_model = tflite.Model.Model.GetRootAsModel(tflite_model_buf, 0)
+
+input_shape = (1, 49, 10, 1)
+INPUT_NAME = "input_1"
+relay_mod, params = relay.frontend.from_tflite(
+tflite_model, shape_dict={INPUT_NAME: input_shape}, 
dtype_dict={INPUT_NAME: "int8"}
+)
+
+##
+# Defining the target
+# ---
+#
+# Now we need to define the target, runtime and executor. In this tutorial, we 
focused on
+# using AOT host driven executor. We use the host micro target which is for 
running a model

Review Comment:
   Suggest to make this super clear:
   ```
   # Use the C runtime (crt) and enable static linking by setting system-lib to True
   RUNTIME = Runtime("crt", {"system-lib": True})

   # Simulate a microcontroller on the host machine. Uses the main() from
   # src/runtime/crt/host/main.cc. To use physical hardware, replace "host" with
   # something matching your hardware. See abc location for instructions.
   TARGET = tvm.target.target.micro("host")

   # Use the AOT executor rather than graph or vm executors. Don't use unpacked API or C calling style.
   EXECUTOR = Executor("aot")
   ```
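   For context, a rough sketch of how these objects are then consumed later in the 
tutorial flow (assuming `relay_mod` and `params` from the earlier TFLite import; 
the exact PassContext options shown are illustrative):
   ```
   with tvm.transform.PassContext(opt_level=3, config={"tir.disable_vectorize": True}):
       module = tvm.relay.build(
           relay_mod, target=TARGET, runtime=RUNTIME, executor=EXECUTOR, params=params
       )
   ```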



##
gallery/how_to/work_with_microtvm/micro_aot.py:
##
@@ -0,0 +1,168 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""
+.. _tutorial-micro-AoT:
+
+microTVM Host-Driven AoT
+===

[GitHub] [tvm] gigiblender commented on a diff in pull request #12178: Build and test TVM under minimal configuration

2022-07-26 Thread GitBox


gigiblender commented on code in PR #12178:
URL: https://github.com/apache/tvm/pull/12178#discussion_r930059275


##
ci/jenkins/Test.groovy.j2:
##
@@ -199,6 +234,9 @@ stage('Test') {
 {{ method_name }}()
   },
   {% endfor %}
+  'unittest: CPU MINIMAL': {

Review Comment:
   I'm not sure how to do that. The `test_step` macro inlines the body, while in this 
case I am calling `run_unittest_minimal`. I could try to inline the method and 
use the macro, but that might not work due to the 64KB bytecode size per method 
limit in the JVM.
   
   I guess I could also modify the macros. 



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [tvm] MichaelJKlaiber commented on a diff in pull request #12087: [UMA] UMA v1.0

2022-07-26 Thread GitBox


MichaelJKlaiber commented on code in PR #12087:
URL: https://github.com/apache/tvm/pull/12087#discussion_r930056748


##
tests/scripts/task_python_uma.sh:
##
@@ -0,0 +1,24 @@
+#!/usr/bin/env bash
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+set -euxo pipefail
+
+source tests/scripts/setup-pytest-env.sh
+
+run_pytest ctypes test_uma tests/python/contrib/test_uma
+run_pytest cython3 test_uma  tests/python/contrib/test_uma

Review Comment:
   we removed this file



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [tvm] MichaelJKlaiber commented on a diff in pull request #12087: [UMA] UMA v1.0

2022-07-26 Thread GitBox


MichaelJKlaiber commented on code in PR #12087:
URL: https://github.com/apache/tvm/pull/12087#discussion_r930047570


##
python/tvm/relay/backend/contrib/uma/_template/backend.py:
##
@@ -0,0 +1,53 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.

Review Comment:
   We will move it to apps



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [tvm] MichaelJKlaiber commented on a diff in pull request #12087: [UMA] UMA v1.0

2022-07-26 Thread GitBox


MichaelJKlaiber commented on code in PR #12087:
URL: https://github.com/apache/tvm/pull/12087#discussion_r930044501


##
python/tvm/relay/backend/contrib/uma/_template/run.py:
##
@@ -0,0 +1,88 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+from tvm.micro.testing.aot_test_utils import AOT_DEFAULT_RUNNER
+
+from tvm.testing.aot import compile_and_run, AOTTestModel, AOTTestRunner
+
+import tvm
+from tvm import relay
+from tvm.relay.backend.contrib.uma._template.backend import MyAiHwBackend
+from tvm.relay import transform
+from collections import OrderedDict
+
+import numpy as np
+import tarfile
+from pathlib import Path
+import onnx
+
+from tvm.testing.aot import (
+AOTTestModel,
+AOTTestRunner,
+generate_ref_data,
+compile_and_run,
+)

Review Comment:
   @manupa-arm, this is not supposed to be test code; it is example code for 
the tutorial. We will replace AOTTestRunner with the microTVM Project API.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [tvm] MichaelJKlaiber commented on a diff in pull request #12087: [UMA] UMA v1.0

2022-07-26 Thread GitBox


MichaelJKlaiber commented on code in PR #12087:
URL: https://github.com/apache/tvm/pull/12087#discussion_r930041551


##
python/tvm/relay/backend/contrib/uma/uma_cli.py:
##
@@ -0,0 +1,92 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+"""
+UMA Command Line Interface (CLI)

Review Comment:
   done



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [tvm] MichaelJKlaiber commented on a diff in pull request #12087: [UMA] UMA v1.0

2022-07-26 Thread GitBox


MichaelJKlaiber commented on code in PR #12087:
URL: https://github.com/apache/tvm/pull/12087#discussion_r930040959


##
src/relay/backend/contrib/uma/tir_to_runtime.cc:
##
@@ -0,0 +1,104 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "../../../../runtime/file_utils.h"
+#include "../../../../target/source/codegen_c.h"
+#include "../../../../target/source/codegen_c_host.h"
+
+namespace tvm {
+using namespace tir;
+namespace relay {
+namespace contrib {
+namespace uma {
+
+class UMACodegen : public codegen::CodeGenCHost {
+ public:
+  explicit UMACodegen(String target_str) : target_str_(target_str) {}
+
+  void Init(bool output_ssa, bool emit_asserts) {
+auto includes_pf =
+tvm::runtime::Registry::Get("relay.ext.uma.codegen_c_includes_" + 
target_str_);
+ICHECK(includes_pf);
+String includes = (*includes_pf)();
+decl_stream << includes;
+std::unordered_set devices;
+devices.insert(target_str_);
+CodeGenCHost::Init(output_ssa, emit_asserts, target_str_, devices);
+  }
+
+  /*!
+   * \brief Emit code that offloads a subgraph to the UMA target
+   *
+   * \return string of code that offloads a subgraph to the UMA target
+   */
+  void AddFunction(const PrimFunc& prim_func) { 
CodeGenC::AddFunction(prim_func); }
+
+ private:
+  String target_str_;
+
+  using codegen::CodeGenCHost::VisitStmt_;
+
+  /*!  * \brief Emits target specific APIs for every call_extern */
+  void VisitExpr_(const CallNode* op, std::ostream& os) final {
+if (!op->op.same_as(builtin::call_extern())) {
+  CodeGenCHost::VisitExpr_(op, os);
+  return;
+}
+auto replace_call_extern_pf =

Review Comment:
   We discussed this again and reviewed the options.
   The conclusion is to remove this and also remove 
codegen_c_replace_call_extern_{target_str} from Python



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [tvm] MichaelJKlaiber commented on a diff in pull request #12087: [UMA] UMA v1.0

2022-07-26 Thread GitBox


MichaelJKlaiber commented on code in PR #12087:
URL: https://github.com/apache/tvm/pull/12087#discussion_r930030731


##
src/relay/backend/contrib/uma/relay_to_tir.cc:
##
@@ -0,0 +1,174 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file relay/backend/contrib/uma/codegen.cc
+ *
+ * \brief this file contains the target hooks for the Universal Modular 
Accelerator Interface (UMA).
+ */
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include 
+#include 
+#include 
+#include 
+
+namespace tvm {
+namespace relay {
+namespace contrib {
+namespace uma {
+
+/*!
+ * \brief This mutator outlines functions that are marked with a named
+ * "Compiler" attribute. Functions that do not match this condition remain
+ * unaltered.
+ */
+class OutlineCompilerFunctionsMutator : public MixedModeMutator {
+ public:
+  explicit OutlineCompilerFunctionsMutator(const IRModule& mod, const 
std::string& compiler_name)
+  : mod_(mod), compiler_name_(compiler_name) {}
+
+  Expr VisitExpr_(const LetNode* op) final {
+auto pre_visit = [this](const LetNode* op) {
+  Expr var = this->VisitExpr(op->var);
+  Expr value = this->VisitExpr(op->value);
+
+  // Outlineable function no longer needs let binding
+  if (this->CanOutlineExpr(value)) {
+this->memo_[var] = value;
+  }
+};
+auto post_visit = [this](const LetNode* op) {
+  // Rely on the Memoizer to cache pre-visit values
+  Expr value = this->VisitExpr(op->value);
+  Expr body = this->VisitExpr(op->body);
+  auto expr = GetRef(op);
+
+  // Drop the let binding
+  if (this->CanOutlineExpr(value)) {
+this->memo_[expr] = this->VisitExpr(op->body);
+  } else {
+Var var = Downcast(this->VisitExpr(op->var));
+if (var.same_as(op->var) && value.same_as(op->value) && 
body.same_as(op->body)) {
+  this->memo_[expr] = expr;
+} else {
+  this->memo_[expr] = Let(var, value, body);
+}
+  }
+};
+ExpandANormalForm(op, pre_visit, post_visit);
+return memo_[GetRef(op)];
+  }
+
+  Expr Rewrite_(const CallNode* pre, const Expr& post) override {
+Call call = Downcast(post);
+if (CanOutlineExpr(call->op)) {
+  Function func = Downcast(call->op);
+  auto gv_name = func->GetAttr("global_symbol").value_or("");
+  ICHECK_NE(gv_name, "")
+  << "Function to be outlined must have global_symbol attribute, but 
didn't.";
+  GlobalVar gv(gv_name);
+  if (func->checked_type_.defined()) {
+gv->checked_type_ = func->checked_type();
+  }
+  mod_->Update(gv, func);
+  return Call(gv, call->args, call->attrs, call->type_args);
+}
+return post;
+  }
+
+ private:
+  /*!
+   * \brief Check if the expr is a function and has the same
+   * compiler name as compiler_name_.
+   *
+   * \param expr The input expr.
+   * \return True if is outlineable else False.
+   */
+  bool CanOutlineExpr(const Expr& expr) {
+if (!expr->IsInstance()) {
+  return false;
+}
+Function func = Downcast(expr);
+auto compiler = func->GetAttr(attr::kCompiler);
+if (!compiler.defined()) {
+  return false;
+}
+if (compiler != compiler_name_) {
+  return false;
+}
+return true;
+  }
+
+  /*! \brief The module that the pass will run on. */
+  IRModule mod_;
+  /*! \brief The name of the compiler to enable outlining on external 
functions for. */
+  std::string compiler_name_;
+};
+
+/*!
+ * \brief A pass to outline compiler specific functions.
+ */
+tvm::transform::Pass OutlineCompilerFunctions(const std::string& 
compiler_name) {
+  runtime::TypedPackedFunc 
pass_func =
+  [=](IRModule mod, transform::PassContext ctx) {
+GlobalVar gv = mod->GetGlobalVar("main");
+Function main_func = Downcast(mod->Lookup("main"));
+auto new_main_body =
+OutlineCompilerFunctionsMutator(mod, 
compiler_name).VisitExpr(main_func->body);
+if (!new_main_body.same_as(main_func->body)) {
+  Function new_main_func = WithFields(main_func, main_func->params, 
new_main_body);
+  mod->Update(gv, new_main_func);
+}
+   

[GitHub] [tvm] MichaelJKlaiber commented on a diff in pull request #12087: [UMA] UMA v1.0

2022-07-26 Thread GitBox


MichaelJKlaiber commented on code in PR #12087:
URL: https://github.com/apache/tvm/pull/12087#discussion_r930028351


##
src/relay/backend/contrib/uma/relay_to_tir.cc:
##
@@ -0,0 +1,174 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file relay/backend/contrib/uma/codegen.cc
+ *
+ * \brief this file contains the target hooks for the Universal Modular 
Accelerator Interface (UMA).
+ */
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include 
+#include 
+#include 
+#include 
+
+namespace tvm {
+namespace relay {
+namespace contrib {
+namespace uma {
+
+/*!
+ * \brief This mutator outlines functions that are marked with a named
+ * "Compiler" attribute. Functions that do not match this condition remain
+ * unaltered.
+ */
+class OutlineCompilerFunctionsMutator : public MixedModeMutator {

Review Comment:
   @areusch, should we move this to a separate PR then?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [tvm] MichaelJKlaiber commented on a diff in pull request #12087: [UMA] UMA v1.0

2022-07-26 Thread GitBox


MichaelJKlaiber commented on code in PR #12087:
URL: https://github.com/apache/tvm/pull/12087#discussion_r930025468


##
tests/python/contrib/test_uma/test_partition.py:
##
@@ -0,0 +1,71 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+import pytest
+
+import tvm
+
+from tvm.relay.backend.contrib.uma.api import UMAPartitioner
+from tvm.relay.op.contrib.register import get_pattern_table
+from tvm.relay.testing import resnet, mlp
+
+
+def test_partition_table():
+partitioner = UMAPartitioner("test_partition")
+assert get_pattern_table("test_partition") is None
+
+partitioner.register()
+
+assert get_pattern_table("test_partition") is not None
+
+
+@pytest.mark.parametrize(
+"workload,backend,merge,expected_partitions",
+[
+("resnet", "dnnl", False, 17),
+("resnet", "dnnl", True, 17),

Review Comment:
   @areusch can you elaborate on what you mean? We don't fully understand the 
comment here.



##
python/tvm/relay/backend/contrib/uma/tutorial.md:
##
@@ -0,0 +1,195 @@
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+Making your hardware accelerator TVM-ready with UMA 

Review Comment:
   done



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [tvm] MichaelJKlaiber commented on a diff in pull request #12087: [UMA] UMA v1.0

2022-07-26 Thread GitBox


MichaelJKlaiber commented on code in PR #12087:
URL: https://github.com/apache/tvm/pull/12087#discussion_r930022435


##
tests/python/contrib/test_uma/test_partition.py:
##
@@ -0,0 +1,71 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+import pytest
+
+import tvm
+
+from tvm.relay.backend.contrib.uma.api import UMAPartitioner
+from tvm.relay.op.contrib.register import get_pattern_table
+from tvm.relay.testing import resnet, mlp
+
+
+def test_partition_table():
+partitioner = UMAPartitioner("test_partition")
+assert get_pattern_table("test_partition") is None
+
+partitioner.register()
+
+assert get_pattern_table("test_partition") is not None
+
+
+@pytest.mark.parametrize(
+"workload,backend,merge,expected_partitions",
+[
+("resnet", "dnnl", False, 17),
+("resnet", "dnnl", True, 17),
+("mlp", "dnnl", False, 1),
+("resnet", "cutlass", False, 2),
+("resnet", "cutlass", True, 2),
+("mlp", "cutlass", False, 4),
+("mlp", "cutlass", True, 2),
+],
+)
+def test_existing_pattern_tables(workload, backend, merge, 
expected_partitions):
+partitioner = UMAPartitioner(backend + "_uma", merge)
+pattern_table = get_pattern_table(backend)
+
+for entry in pattern_table:
+partitioner.add_pattern(*entry)
+
+if workload == "resnet":
+net = resnet.get_net(1, 10)
+elif workload == "mlp":
+net = mlp.get_net(1, 10)

Review Comment:
   done



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [tvm] MichaelJKlaiber commented on a diff in pull request #12087: [UMA] UMA v1.0

2022-07-26 Thread GitBox


MichaelJKlaiber commented on code in PR #12087:
URL: https://github.com/apache/tvm/pull/12087#discussion_r930020506


##
python/tvm/relay/backend/contrib/uma/api/codegen.py:
##
@@ -0,0 +1,53 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""Codegen base class of the Universal Modular Accelerator Interface (UMA)"""
+
+from typing import Callable
+import tvm
+
+
+class UMACodegen(object):
+"""
+Codegen base class of the Universal Modular Accelerator Interface (UMA)
+"""
+
+def __init__(self, target_name: str) -> None:
+self.target_name = target_name
+
+def _register_codegen(self, fmt: str = "c", **kwargs) -> None:
+if fmt == "c":
+self._register_c_codegen(**kwargs)
+else:
+raise RuntimeError(f'Unsupported codegen format "{fmt}"')
+
+def _register_c_codegen(
+self,
+includes: Callable[[], str] = None,
+replace_call_extern: Callable[[tvm.ir.container.Array], str] = None,
+) -> None:
+if includes is not None:
+tvm._ffi.register_func(
+
"relay.ext.uma.codegen_c_includes_{}".format(self.target_name), includes

Review Comment:
   done



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [tvm] MichaelJKlaiber commented on a diff in pull request #12087: [UMA] UMA v1.0

2022-07-26 Thread GitBox


MichaelJKlaiber commented on code in PR #12087:
URL: https://github.com/apache/tvm/pull/12087#discussion_r930017268


##
python/tvm/relay/backend/contrib/uma/_template/run.py:
##
@@ -0,0 +1,88 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+from tvm.micro.testing.aot_test_utils import AOT_DEFAULT_RUNNER
+
+from tvm.testing.aot import compile_and_run, AOTTestModel, AOTTestRunner
+
+import tvm
+from tvm import relay
+from tvm.relay.backend.contrib.uma._template.backend import MyAiHwBackend
+from tvm.relay import transform
+from collections import OrderedDict
+
+import numpy as np
+import tarfile
+from pathlib import Path
+import onnx
+
+from tvm.testing.aot import (
+AOTTestModel,
+AOTTestRunner,
+generate_ref_data,
+compile_and_run,
+)
+
+
+def create_conv2d(groups=1, test_runner=AOT_DEFAULT_RUNNER, weight_shape=32):
+dtype = "float32"
+ishape = (1, 32, 14, 14)
+wshape = (32, weight_shape, 3, 3)
+pass_config = {"tir.usmp.enable": True}
+test_runner = AOTTestRunner(
+makefile=test_runner.makefile,
+prologue=test_runner.prologue,
+epilogue=test_runner.epilogue,
+includes=test_runner.includes,
+parameters=test_runner.parameters,
+pass_config=pass_config,
+)
+data0 = relay.var("data", shape=ishape, dtype=dtype)
+weight0 = relay.var("weight", shape=wshape, dtype=dtype)
+out = relay.nn.conv2d(data0, weight0, kernel_size=(3, 3), padding=(1, 1), 
groups=groups)
+main_f = relay.Function([data0, weight0], out)
+mod = tvm.IRModule()
+mod["main"] = main_f
+mod = transform.InferType()(mod)
+i_data = np.random.uniform(0, 1, ishape).astype(dtype)
+w1_data = np.random.uniform(0, 1, wshape).astype(dtype)
+inputs = OrderedDict([("data", i_data), ("weight", w1_data)])
+output_list = generate_ref_data(mod, inputs)
+return mod, inputs, output_list, test_runner
+
+
+def main():
+mod, inputs, output_list, test_runner = create_conv2d()
+
+uma_backend = MyAiHwBackend()
+uma_backend.register()
+mod = uma_backend.partition(mod)
+target = tvm.target.Target("my_ai_hw", host=tvm.target.Target("c"))
+
+export_directory = tvm.contrib.utils.tempdir(keep_for_debug=True).path
+print(f"Generated files are in {export_directory}")
+compile_and_run(
+AOTTestModel(module=mod, inputs=inputs, outputs=output_list),
+test_runner,
+interface_api="c",
+use_unpacked_api=True,
+target=target,
+test_dir=str(export_directory),
+)
+
+
+if __name__ == "__main__":
+main()

Review Comment:
   We would tend to move anything TVMC- and CLI-related to the next PR.



##
python/tvm/relay/backend/contrib/uma/uma_cli.py:
##
@@ -0,0 +1,92 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+"""
+UMA Command Line Interface (CLI)
+
+Tool to create code skeletons for an easy integration of
+new AI hardware accelerators/libraries into TVM using UMA
+"""
+
+import argparse
+import os
+import shutil
+import sys
+from inflection import camelize, underscore
+
+
+def _parse_args():
+parser = argparse.ArgumentParser(description="UMA Interface command line 
interface")
+parser.add_argument(
+"--add_hardware",
+type=str,
+required=True,
+)
+parser.add_argument(
+"--tutorial",
+type=str,
+)
+args = parser.parse_args()
+return args
+
+
+def replace_template_name(
+

[GitHub] [tvm] MichaelJKlaiber commented on a diff in pull request #12087: [UMA] UMA v1.0

2022-07-26 Thread GitBox


MichaelJKlaiber commented on code in PR #12087:
URL: https://github.com/apache/tvm/pull/12087#discussion_r930008614


##
python/tvm/relay/backend/contrib/uma/_template/passes.py:
##
@@ -0,0 +1,137 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""Transform passes for the my_ai_hw accelerator"""
+
+import tvm
+from tvm import relay, tir
+from tvm.relay.backend.contrib.uma.api.utils import add_llvm_to_block
+
+
+@tvm.tir.transform.prim_func_pass(opt_level=2)
+class MyAiHwConv2dPass:
+def transform_function(
+self, func: tvm.tir.PrimFunc, mod: tvm.ir.IRModule, ctx: 
tvm.ir.transform.PassContext
+) -> tvm.tir.PrimFunc:
+return self._my_ai_hw_conv2d_pass(func, mod, ctx)
+
+@staticmethod
+def _my_ai_hw_conv2d_pass(func, mod, ctx):
+_found_blocks = []
+_loops = dict()
+_handles = []
+_entry_node = None
+_external_function_name = "my_ai_hw_conv2dnchw"
+_tvm_block_match_name = "conv2d_nchw"
+
+def _has_block(name: str, func) -> bool:

Review Comment:
   done



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [tvm] MichaelJKlaiber commented on a diff in pull request #12087: [UMA] UMA v1.0

2022-07-26 Thread GitBox


MichaelJKlaiber commented on code in PR #12087:
URL: https://github.com/apache/tvm/pull/12087#discussion_r930006872


##
python/tvm/relay/backend/contrib/uma/_template/passes.py:
##
@@ -0,0 +1,137 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""Transform passes for the my_ai_hw accelerator"""
+
+import tvm
+from tvm import relay, tir
+from tvm.relay.backend.contrib.uma.api.utils import add_llvm_to_block
+
+
+@tvm.tir.transform.prim_func_pass(opt_level=2)
+class MyAiHwConv2dPass:
+def transform_function(
+self, func: tvm.tir.PrimFunc, mod: tvm.ir.IRModule, ctx: 
tvm.ir.transform.PassContext
+) -> tvm.tir.PrimFunc:
+return self._my_ai_hw_conv2d_pass(func, mod, ctx)
+
+@staticmethod
+def _my_ai_hw_conv2d_pass(func, mod, ctx):
+_found_blocks = []
+_loops = dict()
+_handles = []
+_entry_node = None
+_external_function_name = "my_ai_hw_conv2dnchw"

Review Comment:
   done



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [tvm] MichaelJKlaiber commented on a diff in pull request #12087: [UMA] UMA v1.0

2022-07-26 Thread GitBox


MichaelJKlaiber commented on code in PR #12087:
URL: https://github.com/apache/tvm/pull/12087#discussion_r93397


##
python/tvm/relay/backend/contrib/uma/_template/passes.py:
##
@@ -0,0 +1,137 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""Transform passes for the my_ai_hw accelerator"""
+
+import tvm
+from tvm import relay, tir
+from tvm.relay.backend.contrib.uma.api.utils import add_llvm_to_block
+
+
+@tvm.tir.transform.prim_func_pass(opt_level=2)
+class MyAiHwConv2dPass:
+def transform_function(
+self, func: tvm.tir.PrimFunc, mod: tvm.ir.IRModule, ctx: 
tvm.ir.transform.PassContext
+) -> tvm.tir.PrimFunc:
+return self._my_ai_hw_conv2d_pass(func, mod, ctx)
+
+@staticmethod

Review Comment:
   `self` is not used here. The PyCharm lint therefore detects a code smell and proposes making it a staticmethod.
   We could go either way here.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [tvm] MichaelJKlaiber commented on a diff in pull request #12087: [UMA] UMA v1.0

2022-07-26 Thread GitBox


MichaelJKlaiber commented on code in PR #12087:
URL: https://github.com/apache/tvm/pull/12087#discussion_r929993795


##
python/tvm/relay/backend/contrib/uma/_template/conv2dnchw.cc:
##
@@ -0,0 +1,76 @@
+/*
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+*/
+#include 
+
+#ifdef __cplusplus
+extern "C"
+#endif
+int
+my_ai_hw_conv2dnchw(float* ifmap, float* weights, float* result, int oc, 
int iw, int ih, int ic,
+int kh, int kw) {
+
+  int kw_low = kw / 2;
+  int kh_low = kh / 2;
+  int kw_high = iw + kw / 2;
+  int kh_high = ih + kh / 2;
+
+  int padded_iw = iw + 2 * kw_low;
+  int padded_ih = ih + 2 * kh_low;
+
+  float* pad_temp = (float*)malloc(

Review Comment:
   Not really, this should imitate a driver/low-level interface to an accelerator.
   Therefore, C/C++-only is intended to demonstrate that this part is user-specific.
   
   We added an additional comment to clarify this.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [tvm] gigiblender commented on pull request #11938: [ci][docker] Use RFC image tags only

2022-07-26 Thread GitBox


gigiblender commented on PR #11938:
URL: https://github.com/apache/tvm/pull/11938#issuecomment-1195513761

   Thanks @driazati! LGTM!
   
   @areusch 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [tvm] MichaelJKlaiber commented on a diff in pull request #12087: [UMA] UMA v1.0

2022-07-26 Thread GitBox


MichaelJKlaiber commented on code in PR #12087:
URL: https://github.com/apache/tvm/pull/12087#discussion_r929982909


##
python/tvm/relay/backend/contrib/uma/_template/conv2dnchw.cc:
##
@@ -0,0 +1,76 @@
+/*
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+*/
+#include 
+
+#ifdef __cplusplus
+extern "C"
+#endif
+int

Review Comment:
   Done



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [tvm] MichaelJKlaiber commented on a diff in pull request #12087: [UMA] UMA v1.0

2022-07-26 Thread GitBox


MichaelJKlaiber commented on code in PR #12087:
URL: https://github.com/apache/tvm/pull/12087#discussion_r929982499


##
python/tvm/relay/backend/contrib/uma/_template/backend.py:
##
@@ -0,0 +1,53 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""UMA backend for the my_ai_hw accelerator"""
+from .passes import MyAiHwConv2dPass
+from ..api.utils import PassPhase
+from ..backend import UMABackend
+from .codegen import gen_includes, gen_replace_call_extern
+from .patterns import conv2d_pattern
+
+
+class MyAiHwBackend(UMABackend):
+"""UMA backend for the MyAiHw accelerator."""
+
+def __init__(self):
+super().__init__()
+
+###
+# Target configuration
+###
+self._register_target_attr("dimension")
+
+###
+# Relay to Relay function registration
+###
+self._register_pattern("conv2d", conv2d_pattern())
+
+###
+# Relay to TIR function registration
+###
+self._register_tir_pass(PassPhase.TIR_PHASE_0, MyAiHwConv2dPass())

Review Comment:
   @areusch, we don't have a strong opinion here.
   If the way you propose is preferred by the others, too, then we can make this change.
   
   @PaulPalomeroBernardo @cgerum 



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [tvm] elvin-n commented on a diff in pull request #11878: [Adreno] Add markup pass of relay tensors for static texture planning

2022-07-26 Thread GitBox


elvin-n commented on code in PR #11878:
URL: https://github.com/apache/tvm/pull/11878#discussion_r929977746


##
tests/python/relay/test_conv2d_nchw_texture.py:
##
@@ -435,3 +435,641 @@ def test_conv2d_vgg16_winograd_4d():
 graph = build_run_compare(mod, params1, {"data": input_shape}, dtype, 
target)
 matches = re.findall("winograd", graph)
 assert len(matches) > 0
+
+
+@tvm.testing.requires_opencl
+def test_2conv2d():
+target = "opencl --device=adreno"
+dtype = "float16"
+
+input_shape = (1, 32, 40, 40)
+filter_shape1 = (96, 32, 2, 2)
+filter_shape2 = (32, 96, 2, 2)
+bias_shape1 = (1, 96, 1, 1)
+bias_shape2 = (1, 32, 1, 1)
+A = relay.var("data", shape=input_shape, dtype=dtype)
+W1 = relay.var("weight1", shape=filter_shape1, dtype=dtype)
+B1 = relay.var("bias1", shape=bias_shape1, dtype=dtype)
+W2 = relay.var("weight2", shape=filter_shape2, dtype=dtype)
+B2 = relay.var("bias2", shape=bias_shape2, dtype=dtype)
+
+# C = relay.nn.relu(A)
+conv1 = relay.nn.conv2d(
+A,
+W1,
+data_layout="NCHW",
+kernel_layout="OIHW",
+padding=[0, 0, 0, 0],
+strides=[2, 2],
+out_dtype=dtype,
+channels=96,
+kernel_size=(2, 2),
+)
+D = relay.op.add(conv1, B1)
+D = relay.op.nn.relu(D)
+
+conv2 = relay.nn.conv2d(
+D,
+W2,
+data_layout="NCHW",
+kernel_layout="OIHW",
+padding=[0, 0, 0, 0],
+strides=[2, 2],
+out_dtype=dtype,
+channels=32,
+kernel_size=(2, 2),
+)
+D = relay.op.add(conv2, B2)
+D = relay.op.nn.relu(D)
+
+mod = relay.Function([A, W1, B1, W2, B2], D)
+np.random.seed(0)
+initializer = relay.testing.init.Xavier()
+filter_data1 = np.zeros(filter_shape1).astype(dtype)
+bias_data1 = np.zeros(bias_shape1).astype(dtype)
+initializer("weight", filter_data1)
+initializer("bias", bias_data1)
+filter_data2 = np.zeros(filter_shape2).astype(dtype)
+bias_data2 = np.zeros(bias_shape2).astype(dtype)
+initializer("weight", filter_data2)
+initializer("bias", bias_data2)
+params1 = {
+"weight1": tvm.nd.array(filter_data1),
+"bias1": tvm.nd.array(bias_data1),
+"weight2": tvm.nd.array(filter_data2),
+"bias2": tvm.nd.array(bias_data2),
+}
+
+static_memory_scope = [
+"",
+"global",
+"global.texture-weight",
+"global.texture-weight",
+"global.texture-nhwc",
+"global.texture-weight",
+"global.texture-weight",
+"",
+"",
+]
+
+build_run_compare(mod, params1, {"data": input_shape}, dtype, target, 
static_memory_scope)
+
+
+@tvm.testing.requires_opencl
+def test_residual_block():
+target = "opencl --device=adreno"
+dtype = "float16"
+
+input_shape = (1, 32, 40, 40)
+filter_shape1 = (32, 32, 2, 2)
+filter_shape2 = (32, 32, 1, 1)
+filter_shape3 = (32, 32, 2, 2)
+bias_shape1 = (1, 32, 1, 1)
+# bias_shape2 = (1, 32, 1, 1)

Review Comment:
   done
   



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[tvm] 02/03: This is PR #12130.

2022-07-26 Thread leandron
This is an automated email from the ASF dual-hosted git repository.

leandron pushed a commit to branch ci-docker-staging
in repository https://gitbox.apache.org/repos/asf/tvm.git

commit 2bcf79b059cb124f2e035a9ba06386d975bd9925
Author: Leandro Nunes 
AuthorDate: Fri Jul 22 16:41:03 2022 +0100

This is PR #12130.
---
 python/tvm/relay/frontend/keras.py   |  8 ---
 tests/python/frontend/tflite/test_forward.py | 35 +---
 2 files changed, 31 insertions(+), 12 deletions(-)

diff --git a/python/tvm/relay/frontend/keras.py 
b/python/tvm/relay/frontend/keras.py
index 3f7a96544a..8c8a4a1ddc 100644
--- a/python/tvm/relay/frontend/keras.py
+++ b/python/tvm/relay/frontend/keras.py
@@ -635,9 +635,11 @@ def _convert_pooling(
 _op.nn.global_max_pool2d(inexpr, **global_pool_params), 
keras_layer, etab, data_layout
 )
 if pool_type == "GlobalAveragePooling2D":
-return _convert_flatten(
-_op.nn.global_avg_pool2d(inexpr, **global_pool_params), 
keras_layer, etab, data_layout
-)
+global_avg_pool2d = _op.nn.global_avg_pool2d(inexpr, 
**global_pool_params)
+keep_dims = len(keras_layer.input.shape) == 
len(keras_layer.output.shape)
+if keep_dims:
+return global_avg_pool2d
+return _convert_flatten(global_avg_pool2d, keras_layer, etab, 
data_layout)
 pool_h, pool_w = keras_layer.pool_size
 stride_h, stride_w = keras_layer.strides
 params = {
diff --git a/tests/python/frontend/tflite/test_forward.py 
b/tests/python/frontend/tflite/test_forward.py
index 6acc8554b4..709ed3f2bf 100644
--- a/tests/python/frontend/tflite/test_forward.py
+++ b/tests/python/frontend/tflite/test_forward.py
@@ -935,7 +935,11 @@ def _test_tflite2_quantized_convolution(
 )
 
 tflite_output = run_tflite_graph(tflite_model_quant, data)
-tvm_output = run_tvm_graph(tflite_model_quant, data, 
data_in.name.replace(":0", ""))
+if tf.__version__ < LooseVersion("2.9"):
+input_node = data_in.name.replace(":0", "")
+else:
+input_node = "serving_default_" + data_in.name + ":0"
+tvm_output = run_tvm_graph(tflite_model_quant, data, input_node)
 tvm.testing.assert_allclose(
 np.squeeze(tvm_output[0]), np.squeeze(tflite_output[0]), rtol=1e-2, 
atol=1e-2
 )
@@ -1934,10 +1938,12 @@ def _test_abs(data, quantized, int_quant_dtype=tf.int8):
 # TFLite 2.6.x upgrade support
 if tf.__version__ < LooseVersion("2.6.1"):
 in_node = ["serving_default_input_int8"]
-else:
+elif tf.__version__ < LooseVersion("2.9"):
 in_node = (
 ["serving_default_input_int16"] if int_quant_dtype == tf.int16 
else ["tfl.quantize"]
 )
+else:
+in_node = "serving_default_input"
 
 tvm_output = run_tvm_graph(tflite_model_quant, data, in_node)
 tvm.testing.assert_allclose(
@@ -1965,8 +1971,10 @@ def _test_rsqrt(data, quantized, 
int_quant_dtype=tf.int8):
 tf.math.rsqrt, data, int_quant_dtype=int_quant_dtype
 )
 tflite_output = run_tflite_graph(tflite_model_quant, data)
-in_node = ["tfl.quantize"]
-
+if tf.__version__ < LooseVersion("2.9"):
+in_node = ["tfl.quantize"]
+else:
+in_node = "serving_default_input"
 tvm_output = run_tvm_graph(tflite_model_quant, data, in_node)
 tvm.testing.assert_allclose(
 np.squeeze(tvm_output[0]), np.squeeze(tflite_output[0]), 
rtol=1e-5, atol=1e-2
@@ -2047,7 +2055,10 @@ def _test_cos(data, quantized, int_quant_dtype=tf.int8):
 tf.math.cos, data, int_quant_dtype=int_quant_dtype
 )
 tflite_output = run_tflite_graph(tflite_model_quant, data)
-in_node = ["tfl.quantize"]
+if tf.__version__ < LooseVersion("2.9"):
+in_node = ["tfl.quantize"]
+else:
+in_node = "serving_default_input"
 tvm_output = run_tvm_graph(tflite_model_quant, data, in_node)
 tvm.testing.assert_allclose(
 np.squeeze(tvm_output[0]), np.squeeze(tflite_output[0]), 
rtol=1e-5, atol=1e-2
@@ -2955,7 +2966,6 @@ def _test_quantize_dequantize(data):
 add = tf.keras.layers.Add()([data_in, relu])
 concat = tf.keras.layers.Concatenate(axis=0)([relu, add])
 keras_model = tf.keras.models.Model(inputs=data_in, outputs=concat)
-input_name = data_in.name.split(":")[0]
 
 # To create quantized values with dynamic range of activations, needs 
representative dataset
 def representative_data_gen():
@@ -2965,7 +2975,11 @@ def _test_quantize_dequantize(data):
 tflite_model_quant = _quantize_keras_model(keras_model, 
representative_data_gen, True, True)
 
 tflite_output = run_tflite_graph(tflite_model_quant, data)
-tvm_output = run_tvm_graph(tflite_model_quant, data, input_name)
+if tf.__version__ < LooseVersion("2.9"):
+in_node = data_in.name.split(":")[0]
+else:
+  

[tvm] 03/03: Update Jenkins ci_arm, ci_cpu, ci_gpu, ci_qemu to use 20220725-085822-8ae520b1b

2022-07-26 Thread leandron
This is an automated email from the ASF dual-hosted git repository.

leandron pushed a commit to branch ci-docker-staging
in repository https://gitbox.apache.org/repos/asf/tvm.git

commit 112c37eda35ed32f0c8aaf02e7e458a0e3d3935a
Author: Leandro Nunes 
AuthorDate: Tue Jul 26 14:37:02 2022 +0100

Update Jenkins ci_arm, ci_cpu, ci_gpu, ci_qemu to use 
20220725-085822-8ae520b1b
---
 Jenkinsfile   | 10 +-
 ci/jenkins/Jenkinsfile.j2 |  8 
 2 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/Jenkinsfile b/Jenkinsfile
index c2f6407333..058f18531f 100755
--- a/Jenkinsfile
+++ b/Jenkinsfile
@@ -45,17 +45,17 @@
 // 'python3 jenkins/generate.py'
 // Note: This timestamp is here to ensure that updates to the Jenkinsfile are
 // always rebased on main before merging:
-// Generated at 2022-07-15T13:35:24.676914
+// Generated at 2022-07-26T14:34:48.559792
 
 import org.jenkinsci.plugins.pipeline.modeldefinition.Utils
 // NOTE: these lines are scanned by docker/dev_common.sh. Please update the 
regex as needed. -->
 ci_lint = 'tlcpack/ci-lint:20220715-060127-37f9d3c49'
-ci_gpu = 'tlcpack/ci-gpu:20220715-060127-37f9d3c49'
-ci_cpu = 'tlcpack/ci-cpu:20220715-060127-37f9d3c49'
+ci_gpu = 'tlcpackstaging/ci-gpu:20220725-085822-8ae520b1b'
+ci_cpu = 'tlcpackstaging/ci-cpu:20220725-085822-8ae520b1b'
 ci_wasm = 'tlcpack/ci-wasm:20220715-060127-37f9d3c49'
 ci_i386 = 'tlcpack/ci-i386:20220715-060127-37f9d3c49'
-ci_qemu = 'tlcpack/ci-qemu:20220630-060117-558ba99c7'
-ci_arm = 'tlcpack/ci-arm:20220715-060127-37f9d3c49'
+ci_qemu = 'tlcpackstaging/ci-qemu:20220725-085822-8ae520b1b'
+ci_arm = 'tlcpackstaging/ci-arm:20220725-085822-8ae520b1b'
 ci_hexagon = 'tlcpack/ci-hexagon:20220715-060127-37f9d3c49'
 // <--- End of regex-scanned config.
 
diff --git a/ci/jenkins/Jenkinsfile.j2 b/ci/jenkins/Jenkinsfile.j2
index 45b7565bf5..779f45fbd1 100644
--- a/ci/jenkins/Jenkinsfile.j2
+++ b/ci/jenkins/Jenkinsfile.j2
@@ -52,12 +52,12 @@ import org.jenkinsci.plugins.pipeline.modeldefinition.Utils
 
 // NOTE: these lines are scanned by docker/dev_common.sh. Please update the 
regex as needed. -->
 ci_lint = 'tlcpack/ci-lint:20220715-060127-37f9d3c49'
-ci_gpu = 'tlcpack/ci-gpu:20220715-060127-37f9d3c49'
-ci_cpu = 'tlcpack/ci-cpu:20220715-060127-37f9d3c49'
+ci_gpu = 'tlcpackstaging/ci-gpu:20220725-085822-8ae520b1b'
+ci_cpu = 'tlcpackstaging/ci-cpu:20220725-085822-8ae520b1b'
 ci_wasm = 'tlcpack/ci-wasm:20220715-060127-37f9d3c49'
 ci_i386 = 'tlcpack/ci-i386:20220715-060127-37f9d3c49'
-ci_qemu = 'tlcpack/ci-qemu:20220630-060117-558ba99c7'
-ci_arm = 'tlcpack/ci-arm:20220715-060127-37f9d3c49'
+ci_qemu = 'tlcpackstaging/ci-qemu:20220725-085822-8ae520b1b'
+ci_arm = 'tlcpackstaging/ci-arm:20220725-085822-8ae520b1b'
 ci_hexagon = 'tlcpack/ci-hexagon:20220715-060127-37f9d3c49'
 // <--- End of regex-scanned config.
 



[tvm] 01/03: [TFLite][CI] Update TensorFlow dependency to 2.9.1

2022-07-26 Thread leandron
This is an automated email from the ASF dual-hosted git repository.

leandron pushed a commit to branch ci-docker-staging
in repository https://gitbox.apache.org/repos/asf/tvm.git

commit bd4c900925f6a29ac95fd1e17bf6c07903e43230
Author: Leandro Nunes 
AuthorDate: Mon May 23 11:55:08 2022 +0100

[TFLite][CI] Update TensorFlow dependency to 2.9.1

This updates the TF version to be used in TVM CI to 2.9.1,
which brings improvements so that more platforms are supported by
official packages.

ethos-u-vela dependency is also updated, from version 3.2.0 to 3.4.0
so that it is closer to the TensorFlow version being proposed here.

This PR updates the Docker images scripting to install TF and TFLite.
---
 cmake/modules/contrib/TFLite.cmake  |  2 ++
 docker/Dockerfile.ci_cpu|  4 
 docker/Dockerfile.ci_gpu|  3 +++
 docker/Dockerfile.ci_qemu   |  3 +++
 docker/install/ubuntu_install_cmake_source.sh   |  4 ++--
 docker/install/ubuntu_install_python_package.sh |  2 +-
 docker/install/ubuntu_install_tensorflow.sh |  5 ++---
 docker/install/ubuntu_install_tensorflow_aarch64.sh | 11 ++-
 docker/install/ubuntu_install_tflite.sh | 13 +++--
 docker/install/ubuntu_install_vela.sh   |  2 +-
 tests/scripts/task_config_build_cpu.sh  |  2 +-
 11 files changed, 32 insertions(+), 19 deletions(-)

diff --git a/cmake/modules/contrib/TFLite.cmake 
b/cmake/modules/contrib/TFLite.cmake
index 3159710909..b8d6a0daff 100644
--- a/cmake/modules/contrib/TFLite.cmake
+++ b/cmake/modules/contrib/TFLite.cmake
@@ -38,8 +38,10 @@ if(NOT USE_TFLITE STREQUAL "OFF")
 set(USE_TFLITE ${USE_TENSORFLOW_PATH}/tensorflow/lite/tools/make/gen/*/lib)
   endif()
   find_library(TFLITE_CONTRIB_LIB libtensorflow-lite.a ${USE_TFLITE})
+  file(GLOB_RECURSE TFLITE_DEPS "${USE_TFLITE}/*.a")
 
   list(APPEND TVM_RUNTIME_LINKER_LIBS ${TFLITE_CONTRIB_LIB})
+  list(APPEND TVM_RUNTIME_LINKER_LIBS ${TFLITE_DEPS})
 
   if (NOT USE_FLATBUFFERS_PATH STREQUAL "none")
 include_directories(${USE_FLATBUFFERS_PATH}/include)
diff --git a/docker/Dockerfile.ci_cpu b/docker/Dockerfile.ci_cpu
index 2dc075d29b..28a7d89154 100644
--- a/docker/Dockerfile.ci_cpu
+++ b/docker/Dockerfile.ci_cpu
@@ -40,6 +40,9 @@ RUN bash /install/ubuntu_install_python_package.sh
 COPY install/ubuntu1804_install_llvm.sh /install/ubuntu1804_install_llvm.sh
 RUN bash /install/ubuntu1804_install_llvm.sh
 
+COPY install/ubuntu_install_cmake_source.sh 
/install/ubuntu_install_cmake_source.sh
+RUN bash /install/ubuntu_install_cmake_source.sh
+
 COPY install/ubuntu_install_dnnl.sh /install/ubuntu_install_dnnl.sh
 RUN bash /install/ubuntu_install_dnnl.sh
 
@@ -152,3 +155,4 @@ ENV PATH /opt/sccache:$PATH
 # Libxsmm deps
 COPY install/ubuntu_install_libxsmm.sh /install
 RUN bash /install/ubuntu_install_libxsmm.sh
+
diff --git a/docker/Dockerfile.ci_gpu b/docker/Dockerfile.ci_gpu
index f04d8515b8..6f02ab97c0 100644
--- a/docker/Dockerfile.ci_gpu
+++ b/docker/Dockerfile.ci_gpu
@@ -32,6 +32,9 @@ RUN apt-get update --fix-missing
 COPY install/ubuntu_install_core.sh /install/ubuntu_install_core.sh
 RUN bash /install/ubuntu_install_core.sh
 
+COPY install/ubuntu_install_cmake_source.sh 
/install/ubuntu_install_cmake_source.sh
+RUN bash /install/ubuntu_install_cmake_source.sh
+
 COPY install/ubuntu_install_googletest.sh /install/ubuntu_install_googletest.sh
 RUN bash /install/ubuntu_install_googletest.sh
 
diff --git a/docker/Dockerfile.ci_qemu b/docker/Dockerfile.ci_qemu
index 63089f3d65..06e6bd1154 100644
--- a/docker/Dockerfile.ci_qemu
+++ b/docker/Dockerfile.ci_qemu
@@ -26,6 +26,9 @@ RUN apt-get update --fix-missing
 COPY install/ubuntu_install_core.sh /install/ubuntu_install_core.sh
 RUN bash /install/ubuntu_install_core.sh
 
+COPY install/ubuntu_install_cmake_source.sh 
/install/ubuntu_install_cmake_source.sh
+RUN bash /install/ubuntu_install_cmake_source.sh
+
 COPY install/ubuntu_install_googletest.sh /install/ubuntu_install_googletest.sh
 RUN bash /install/ubuntu_install_googletest.sh
 
diff --git a/docker/install/ubuntu_install_cmake_source.sh 
b/docker/install/ubuntu_install_cmake_source.sh
index 18335c98c4..e90ca3d8f1 100755
--- a/docker/install/ubuntu_install_cmake_source.sh
+++ b/docker/install/ubuntu_install_cmake_source.sh
@@ -20,8 +20,8 @@ set -e
 set -u
 set -o pipefail
 
-v=3.14
-version=3.14.7
+v=3.22
+version=3.22.4
 wget https://cmake.org/files/v${v}/cmake-${version}.tar.gz
 tar xvf cmake-${version}.tar.gz
 cd cmake-${version}
diff --git a/docker/install/ubuntu_install_python_package.sh 
b/docker/install/ubuntu_install_python_package.sh
index 8fa9d0c058..fca4bb68e7 100755
--- a/docker/install/ubuntu_install_python_package.sh
+++ b/docker/install/ubuntu_install_python_package.sh
@@ -27,7 +27,7 @@ pip3 install --upgrade \
 cython \
 decorator \
 mypy \
-numpy~=1.19.5 \
+

[tvm] branch ci-docker-staging updated (8ae520b1be -> 112c37eda3)

2022-07-26 Thread leandron
This is an automated email from the ASF dual-hosted git repository.

leandron pushed a change to branch ci-docker-staging
in repository https://gitbox.apache.org/repos/asf/tvm.git


 discard 8ae520b1be This is PR #12130.
 discard 15cf56e7df [TFLite][CI] Update TensorFlow dependency to 2.9.1
 add 75ec1cffa9 [TVMC] Workspace Pools Parameters (#11427)
 add 19e5ec6576 [hexagon][testing] sequential input tensors (#12168)
 add 21d54f9880 [PyTorch] Add aten::numpy_T (#12179)
 add ca2ec5429b [CI][docker] Add comment (#11953)
 add 9963b59ffa fix typo (#12183)
 add 9bef7de9f0 [Doc] Fix link error in pipeline executor tutorial (#12185)
 add ea6ea42757 TVM Vertical Integration with PyTorch (#11911)
 add eada707a70 [Fix] Fix some errors in unittests (#12170)
 new bd4c900925 [TFLite][CI] Update TensorFlow dependency to 2.9.1
 new 2bcf79b059 This is PR #12130.
 new 112c37eda3 Update Jenkins ci_arm, ci_cpu, ci_gpu, ci_qemu to use 
20220725-085822-8ae520b1b

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version.  This situation occurs
when a user --force pushes a change and generates a repository
containing something like this:

 * -- * -- B -- O -- O -- O   (8ae520b1be)
\
 N -- N -- N   refs/heads/ci-docker-staging (112c37eda3)

You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.

Any revisions marked "omit" are not gone; other references still
refer to them.  Any revisions marked "discard" are gone forever.

The 3 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 Jenkinsfile|  10 +-
 apps/pt_tvmdsoop/tests/test_as_torch.py| 257 +
 apps/pt_tvmdsoop/tests/test_optimize_torch.py  | 161 
 ci/jenkins/Jenkinsfile.j2  |   8 +-
 docker/bash.sh |   3 +
 .../work_with_relay/using_pipeline_executor.py |   2 +-
 include/tvm/ir/memory_pools.h  |   1 +
 python/tvm/contrib/torch/__init__.py   |  12 +-
 python/tvm/contrib/torch/as_torch.py   | 124 +++
 python/tvm/contrib/torch/optimize_torch.py | 198 ++
 python/tvm/driver/tvmc/compiler.py |  30 +-
 python/tvm/driver/tvmc/workspace_pools.py  | 237 
 python/tvm/ir/memory_pools.py  |   2 +-
 python/tvm/relay/frontend/pytorch.py   |  14 +-
 python/tvm/script/parser.py|  16 +-
 src/contrib/torch/base64.h |  75 
 .../torch/pt_call_tvm/RuntimeModuleWrapper.cc  | 259 +
 src/relay/backend/contrib/cmsisnn/target.cc|   4 +-
 src/relay/ir/indexed_graph.cc  |   2 +-
 tests/python/contrib/test_hexagon/pytest_util.py   |  33 +-
 tests/python/driver/tvmc/test_command_line.py  |  22 ++
 tests/python/driver/tvmc/test_compiler.py  |  22 ++
 tests/python/driver/tvmc/test_workspace_pools.py   | 404 +
 tests/python/frontend/pytorch/test_forward.py  |  12 +
 tests/python/unittest/test_arith_domain_touched.py |   8 +-
 .../test_tir_analysis_calculate_workspace.py   |   2 +-
 .../test_tir_analysis_get_block_access_region.py   |   2 -
 .../unittest/test_tir_schedule_transform_layout.py |   2 +-
 .../test_tir_transform_compact_buffer_region.py|  22 +-
 ...test_tir_transform_renormalize_split_pattern.py |   4 +-
 .../unittest/test_tir_transform_storage_flatten.py |   2 +-
 .../test_tir_usmp_analysis_extract_bufferinfo.py   |   2 +-
 tests/python/unittest/test_tir_usmp_utils.py   |   2 +-
 tests/python/unittest/test_tvmscript_roundtrip.py  |  22 +-
 .../python/unittest/test_tvmscript_syntax_sugar.py |   2 +-
 35 files changed, 1912 insertions(+), 66 deletions(-)
 create mode 100644 apps/pt_tvmdsoop/tests/test_as_torch.py
 create mode 100644 apps/pt_tvmdsoop/tests/test_optimize_torch.py
 create mode 100644 python/tvm/contrib/torch/as_torch.py
 create mode 100644 python/tvm/contrib/torch/optimize_torch.py
 create mode 100644 python/tvm/driver/tvmc/workspace_pools.py
 create mode 100644 src/contrib/torch/base64.h
 create mode 100644 src/contrib/torch/pt_call_tvm/RuntimeModuleWrapper.cc
 create mode 100644 tests/python/driver/tvmc/test_workspace_pools.py



[GitHub] [tvm] MichaelJKlaiber commented on a diff in pull request #12087: [UMA] UMA v1.0

2022-07-26 Thread GitBox


MichaelJKlaiber commented on code in PR #12087:
URL: https://github.com/apache/tvm/pull/12087#discussion_r929975111


##
python/tvm/relay/backend/contrib/uma/api/utils.py:
##
@@ -0,0 +1,52 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""Utility methods for the Universal Modular Accelerator Interface (UMA)"""
+
+from enum import Enum, auto
+import uuid
+import tvm.tir
+from tvm.contrib import utils, clang
+
+
+class PassPhase(Enum):
+"""UMA pass phases."""
+
+PRE_PARTITIONING = auto()
+POST_PARTITIONING_0 = auto()

Review Comment:
   done: see code
   
   ```
   """
   UMA pass phases:
   
   PRE_PARTITIONING: prior to UMA partitioning
   POST_PARTITIONING_0: after UMA partitioning, before Defunctionalization
   POST_PARTITIONING_1: after UMA partitioning and after Defunctionalization
   TIR_PHASE_0: Generates the raw IR and loop levels.
   TIR_PHASE_1: Flattens the array storage.
   TIR_PHASE_2: Transforms loops, like unroll, vectorization and 
thread-binding.
   TIR_PHASE_3: Does some cleanup work.
   
   Reference to TIR phases: src/driver/driver_api.c
   """
   ```
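   
   To make the phase semantics concrete, here is a condensed sketch of how a phase is used when registering a pass (modeled on the `_template` backend in this PR; some members of the real class are omitted, and the phase-1 line is purely hypothetical):
   
   ```
   from tvm.relay.backend.contrib.uma.backend import UMABackend
   from tvm.relay.backend.contrib.uma.api.utils import PassPhase
   from tvm.relay.backend.contrib.uma._template.passes import MyAiHwConv2dPass
   
   
   class MyAiHwBackend(UMABackend):
       def __init__(self):
           super().__init__()
           # TIR_PHASE_0: the pass sees the raw loop-level IR, before array
           # flattening (phase 1) and loop transformations (phase 2).
           self._register_tir_pass(PassPhase.TIR_PHASE_0, MyAiHwConv2dPass())
           # A pass that needs flattened buffers would be registered later, e.g.:
           # self._register_tir_pass(PassPhase.TIR_PHASE_1, MyFlattenedBufferPass())
   ```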
   



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [tvm] kparzysz-quic opened a new pull request, #12187: [JIT] Force finalization of JITed code, expose sf/hf runtime functions

2022-07-26 Thread GitBox


kparzysz-quic opened a new pull request, #12187:
URL: https://github.com/apache/tvm/pull/12187

   Code that handles fp16 and fp32 may end up calling builtins that do the conversions between these types. LLVM emits calls to `__truncsfhf2` and `__extendhfsf2`, which are not present in TVM or the TVM runtime. This creates two problems:
   - Problem 1: JITed code that does the conversions will fail because it calls 
non-existent functions.
   
   Adding these functions to libtvm.so/libtvm_runtime.so solves this part, but 
there is another issue:
   - Problem 2: JITed code may still not call these functions, because the 
generated object may not be fully resolved.
   
   To force full resolution, try to obtain an address of a non-existent symbol.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [tvm] MichaelJKlaiber commented on a diff in pull request #12087: [UMA] UMA v1.0

2022-07-26 Thread GitBox


MichaelJKlaiber commented on code in PR #12087:
URL: https://github.com/apache/tvm/pull/12087#discussion_r929962126


##
python/tvm/relay/backend/contrib/uma/api/codegen.py:
##
@@ -0,0 +1,53 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""Codegen base class of the Universal Modular Accelerator Interface (UMA)"""
+
+from typing import Callable
+import tvm
+
+
+class UMACodegen(object):
+"""
+Codegen base class of the Universal Modular Accelerator Interface (UMA)
+"""
+
+def __init__(self, target_name: str) -> None:
+self.target_name = target_name
+
+def _register_codegen(self, fmt: str = "c", **kwargs) -> None:
+if fmt == "c":
+self._register_c_codegen(**kwargs)
+else:
+raise RuntimeError(f'Unsupported codegen format "{fmt}"')
+
+def _register_c_codegen(
+self,
+includes: Callable[[], str] = None,

Review Comment:
   To generate dynamic strings more easily, we would be in favor of passing a Callable.
   @PaulPalomeroBernardo  @cgerum 
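   
   As a rough usage sketch (assuming the `gen_includes` helper from the `_template` backend in this PR; the include path in the body is purely illustrative), a zero-argument Callable lets the include block be built lazily at codegen time, e.g. from target attributes:
   
   ```
   def gen_includes() -> str:
       # Assemble the C include block that is emitted into the generated
       # C source; being a callable, it can depend on runtime configuration.
       includes = '#include "my_ai_hw_driver.h"\n'  # illustrative path only
       return includes
   
   
   # Registered from the backend's codegen, e.g.:
   # self._register_codegen(fmt="c", includes=gen_includes)
   ```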



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


