[GitHub] [tvm-rfcs] Meteorix commented on a change in pull request #25: [RFC]PyTorchTVM

2021-08-29 Thread GitBox


Meteorix commented on a change in pull request #25:
URL: https://github.com/apache/tvm-rfcs/pull/25#discussion_r698206877



##
File path: rfcs/0025-add-pytorch-tvm.md
##
@@ -0,0 +1,265 @@
+- Feature Name: PyTorchTVM
+- Start Date: 2021-08-24
+- RFC PR: [apache/tvm-rfcs#0025](https://github.com/apache/tvm-rfcs/pull/25)
+- GitHub Issue: TODO
+
+# Summary
+[summary]: #summary
+
+This RFC adds a `PyTorchTVM` module that compiles TorchScript to TVM and makes the accelerated module usable inside PyTorch.
+
+To make TVM more accessible to PyTorch users, we propose a `PyTorchTVM` module that supports the following workflow:
+1. convert a TorchScript module to a TVM graph
+2. build and tune the TVM graph
+3. export the well-tuned TVM graph as a PyTorch op
+4. torch jit trace the TVM PyTorch op together with other PyTorch modules, then save/load/serve as a normal PyTorch model
+
+
+
+# Motivation
+[motivation]: #motivation
+
+The PyTorch framework is increasingly being adopted for research and production. At the same time, PyTorch lacks an effective inference acceleration toolchain, which is a major concern in industry. Existing acceleration paths include:
+
+* PyTorch → ONNX → TensorRT/TVM
+* PyTorch → torchscript → TensorRT/TVM
+
+From our perspective, both ONNX and TensorRT have limitations:
+
+* ONNX cannot cover all models with dynamic control flow (e.g. `for` loops)
+* TensorRT can only accelerate some standard networks
+
+So we hope to use TVM to accelerate PyTorch model inference.
+
+
+# Guide-level explanation
+[guide-level-explanation]: #guide-level-explanation
+
+
+For example, we have an end-to-end ResNet classification model consisting of 3 parts:
+
+1. Image reader
+2. Image transforms
+3. ResNet model inference

Review comment:
   the first step reads the images and decodes the image binary (in `png/jpeg/...`) to a tensor
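
   For concreteness, a minimal sketch of that decode step, using torchvision's `read_image` (the helper the RFC's `Predictor.forward` already calls); the file name is illustrative:

   ```
   from torchvision.io import read_image

   # Decodes the compressed image bytes (png/jpeg/...) into a uint8 tensor
   # of shape (C, H, W); no resizing or normalization happens at this step.
   img = read_image("sample.jpg")
   print(img.dtype, img.shape)  # torch.uint8, e.g. (3, 480, 640)
   ```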




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm-rfcs] Meteorix commented on a change in pull request #25: [RFC]PyTorchTVM

2021-08-29 Thread GitBox


Meteorix commented on a change in pull request #25:
URL: https://github.com/apache/tvm-rfcs/pull/25#discussion_r698201626



##
File path: rfcs/0025-add-pytorch-tvm.md
##
@@ -0,0 +1,265 @@
+# Motivation
+[motivation]: #motivation
+
+The PyTorch framework is increasingly being adopted for research and production. At the same time, PyTorch lacks an effective inference acceleration toolchain, which is a major concern in industry. Existing acceleration paths include:
+
+* PyTorch → ONNX → TensorRT/TVM
+* PyTorch → torchscript → TensorRT/TVM
+
+From our perspective, both ONNX and TensorRT have limitations:
+
+* ONNX cannot cover all models with dynamic control flow (e.g. `for` loops)
+* TensorRT can only accelerate some standard networks

Review comment:
   1. is `workflows introduce workflow introduce` a typo?
   2. `dynamic models` is a little confusing; maybe we can just use `models with dynamic control flow`
   3. `real-world irregular models`: actually, most models other than `conv` models cannot be accelerated very well, like `bert` or `wide and deep` models, which are not irregular.
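
   To make point 2 concrete, a toy example (an illustration, not taken from the RFC) of dynamic control flow that `torch.jit.script` preserves but a trace-based export would freeze into a single path:

   ```
   from typing import List

   import torch
   from torch import nn

   class DynamicSum(nn.Module):
       def forward(self, xs: List[torch.Tensor]) -> torch.Tensor:
           out = torch.zeros_like(xs[0])
           for x in xs:  # the iteration count is only known at runtime
               out = out + x
           return out

   scripted = torch.jit.script(DynamicSum())  # the loop survives scripting
   ```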








[GitHub] [tvm-rfcs] Meteorix commented on a change in pull request #25: [RFC]PyTorchTVM

2021-08-29 Thread GitBox


Meteorix commented on a change in pull request #25:
URL: https://github.com/apache/tvm-rfcs/pull/25#discussion_r698200065



##
File path: rfcs/0025-add-pytorch-tvm.md
##
@@ -0,0 +1,265 @@
+To make TVM more accessible to PyTorch users, we propose a `PyTorchTVM` module that supports the following workflow:
+1. convert a TorchScript module to a TVM graph
+2. build and tune the TVM graph
+3. export the well-tuned TVM graph as a PyTorch op
+4. torch jit trace the TVM PyTorch op together with other PyTorch modules, then save/load/serve as a normal PyTorch model

Review comment:
   Actually, it works with both the `torch.jit.trace` and `torch.jit.script` APIs.
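
   A minimal sketch of the two entry points (illustrative):

   ```
   import torch
   from torchvision.models import resnet18

   model = resnet18().eval()
   example = torch.randn(1, 3, 224, 224)

   traced = torch.jit.trace(model, example)  # records one execution path
   scripted = torch.jit.script(model)        # compiles the Python source
   # Either way we obtain a ScriptModule for the conversion to start from.
   ```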








[GitHub] [tvm-rfcs] Meteorix commented on a change in pull request #25: [RFC]PyTorchTVM

2021-08-29 Thread GitBox


Meteorix commented on a change in pull request #25:
URL: https://github.com/apache/tvm-rfcs/pull/25#discussion_r698199540



##
File path: rfcs/0025-add-pytorch-tvm.md
##
@@ -0,0 +1,265 @@
+To make TVM more accessible to PyTorch users, we propose a `PyTorchTVM` module that supports the following workflow:
+1. convert a TorchScript module to a TVM graph

Review comment:
   A TorchScript module can contain other TorchScript modules, so users can choose a module at any level to do the conversion.
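
   For example (a sketch reusing the `Predictor` class from the RFC's guide-level example):

   ```
   import torch

   scripted = torch.jit.script(Predictor())

   # A ScriptModule is a tree of submodules; any node can be chosen for
   # conversion, e.g. only the backbone instead of the whole predictor:
   resnet_only = scripted.resnet18
   ```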








[GitHub] [tvm] liangfu commented on pull request #8870: [Community] @manupa-arm -> Committer

2021-08-29 Thread GitBox


liangfu commented on pull request #8870:
URL: https://github.com/apache/tvm/pull/8870#issuecomment-908042484


   Congrats @manupa-arm






[tvm] branch main updated (06fc788 -> 421dbf1)

2021-08-29 Thread liangfu
This is an automated email from the ASF dual-hosted git repository.

liangfu pushed a change to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git.


from 06fc788  [RISCV] Add support for llvm parameter -mabi (-target-abi) 
(#8860)
 add 421dbf1  [Community] @manupa-arm -> Committer (#8870)

No new revisions were added by this update.

Summary of changes:
 CONTRIBUTORS.md | 1 +
 1 file changed, 1 insertion(+)


[GitHub] [tvm] liangfu merged pull request #8870: [Community] @manupa-arm -> Committer

2021-08-29 Thread GitBox


liangfu merged pull request #8870:
URL: https://github.com/apache/tvm/pull/8870


   






[GitHub] [tvm-rfcs] Meteorix commented on pull request #25: [RFC]PyTorchTVM

2021-08-29 Thread GitBox


Meteorix commented on pull request #25:
URL: https://github.com/apache/tvm-rfcs/pull/25#issuecomment-908041324


   > I wonder whether this would make the torch fallback op 
([apache/tvm#7401](https://github.com/apache/tvm/pull/7401)) more or less 
useful (it would depend on what you (plan to) do with unsupported ops). I am 
still pondering whether to close it or dust it off.
   
   @t-vi It will help a lot. If we have torch fallback op support, users will be less likely to get stuck in the frontend conversion phase. We are looking forward to this feature. Could you complete the PR?
   
   > I should note that as far as I know NVidia has a TensorRT front end 
serving a similar goal and there also is one for ONNXRuntime-as-a-module 
(recently featured in the PyTorch blog). There may be useful design insights in 
how they work (or maybe they're too different to what you have in mind).
   
   And thanks for the information. Actually, we were inspired by another NVIDIA project: [TRTorch](https://github.com/NVIDIA/TRTorch).






[GitHub] [tvm-rfcs] Meteorix commented on a change in pull request #25: [RFC]PyTorchTVM

2021-08-29 Thread GitBox


Meteorix commented on a change in pull request #25:
URL: https://github.com/apache/tvm-rfcs/pull/25#discussion_r698181240



##
File path: rfcs/0025-add-pytorch-tvm.md
##
@@ -0,0 +1,265 @@
+# Guide-level explanation
+[guide-level-explanation]: #guide-level-explanation
+
+
+For example, we have an end-to-end ResNet classification model consisting of 3 parts:
+
+1. Image reader
+2. Image transforms
+3. ResNet model inference
+
+```
+from typing import List
+
+import torch
+from torch import nn
+from torchvision import transforms as T
+from torchvision.io import read_image
+from torchvision.models import resnet18
+
+
+class Predictor(nn.Module):
+
+    def __init__(self, tvm_module=None):
+        super().__init__()
+        self.resnet18 = resnet18(pretrained=True, progress=False).eval()
+        self.transforms = nn.Sequential(
+            T.Resize([256, ]),  # We use a single int value inside a list due to torchscript type restrictions
+            T.CenterCrop(224),
+            T.ConvertImageDtype(torch.half),
+            T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
+        )
+
+    def forward(self, image_path: List[str]) -> torch.Tensor:
+        with torch.no_grad():
+            images: List[torch.Tensor] = []
+            for path in image_path:
+                img = read_image(path)
+                images.append(img)
+            x = torch.stack(images).cuda().half()
+            x = self.transforms(x)
+            print(x.shape)
+            y_pred = self.resnet18(x)
+            return y_pred.argmax(dim=1)
+```
+
+We choose to accelerate the ResNet model with PyTorchTVM:
+
+```
+from tvm.contrib.pt_op import PyTorchTVMModule, compile
+
+print("compile...")
+option = {
+    "input_infos": [
+        ("x", (1, 3, 224, 224)),
+    ],
+    "default_dtype": "float16",
+    "export_dir": "pytorch_compiled",
+    "num_outputs": 1,
+    "tuning_n_trials": 0,  # set zero to skip tuning
+    "tuning_log_file": "tuning.log",
+}
+model = Predictor().cuda().half()  # the Predictor defined above (instantiation assumed here)
+x = torch.randn(1, 3, 224, 224).cuda().half()
+resnet_jit = torch.jit.trace(model.resnet18, x)
+resnet_tvm = compile(resnet_jit, option)
+```
+
+Then we can use the accelerated TVM module directly in PyTorch, and also script it together with the other 2 parts using `torch.jit.script`.
+
+```
+import time
+
+resnet_tvm = torch.jit.script(resnet_tvm)
+print(resnet_tvm.graph)
+
+
+class PredictorTVM(nn.Module):
+
+    def __init__(self):
+        super().__init__()
+        self.resnet18 = resnet_tvm
+        self.transforms = nn.Sequential(
+            T.Resize([256, ]),  # We use a single int value inside a list due to torchscript type restrictions
+            T.CenterCrop(224),
+            T.ConvertImageDtype(torch.half),
+            T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
+        )
+
+    def forward(self, image_path: List[str]) -> torch.Tensor:
+        with torch.no_grad():
+            images: List[torch.Tensor] = []
+            for path in image_path:
+                img = read_image(path)
+                images.append(img)
+            x = torch.stack(images).cuda().half()
+            x = self.transforms(x)
+            # y_pred = self.resnet18(x)
+            y_pred = self.resnet18([x])[0]  # the TVM op takes/returns a list of tensors
+            return y_pred.argmax(dim=1)
+
+
+print("run tvm...")
+model_tvm = PredictorTVM().cuda().half()
+for i in range(20):
+    t = time.time()
+    model_tvm([image_path])  # image_path: a sample image path defined elsewhere
+    torch.cuda.synchronize()
+    print(time.time() - t)
+
+torch.jit.script(model_tvm).save("model_tvm.pt")
+```
+
+Finally, we get a TVM accelerated model, which can be loaded and served in 
production.
+
+
+# Reference-level explanation
+[reference-level-explanation]: #reference-level-explanation
+
+We have opened an initial PR: https://github.com/apache/tvm/pull/8777
+
+The essential C++ code is as follows:
+
+```
+// This is just a wrapper class of tvm graph runtime module
+class 

[GitHub] [tvm-rfcs] Meteorix commented on a change in pull request #25: [RFC]PyTorchTVM

2021-08-29 Thread GitBox


Meteorix commented on a change in pull request #25:
URL: https://github.com/apache/tvm-rfcs/pull/25#discussion_r698179944



##
File path: rfcs/0025-add-pytorch-tvm.md
##
@@ -0,0 +1,265 @@
+            x = torch.stack(images).cuda().half()
+            x = self.transforms(x)
+            # y_pred = self.resnet18(x)
+            y_pred = self.resnet18([x])[0]

Review comment:
   Actually, it's a limitation of our custom op interface. So maybe we should keep the diff more obvious to users.
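
   If one instead wanted to hide the list-based calling convention, a thin wrapper would do; a sketch under the assumption that the compiled module behaves as in the example above:

   ```
   import torch
   from torch import nn

   class TVMResNetWrapper(nn.Module):
       def __init__(self, tvm_mod):
           super().__init__()
           self.tvm_mod = tvm_mod

       def forward(self, x: torch.Tensor) -> torch.Tensor:
           # Adapt the op's list-of-tensors interface back to plain tensors.
           return self.tvm_mod([x])[0]
   ```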








[GitHub] [tvm] junrushao1994 commented on pull request #8863: [TensorIR][M2a] CacheRead/Write

2021-08-29 Thread GitBox


junrushao1994 commented on pull request #8863:
URL: https://github.com/apache/tvm/pull/8863#issuecomment-908030789


   Thanks @Hzfengsy for the PR! I did a pass on the schedule primitives and overall it looks pretty good. I nitpicked quite a bit; let's get it merged quickly once those comments are addressed. Thanks a lot!
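
   For context, roughly how these primitives are driven from the Python side (a sketch in the TVMScript syntax of this era; the workload and names are illustrative, not taken from the PR):

   ```
   import tvm
   from tvm import tir
   from tvm.script import ty

   @tvm.script.tir
   def elementwise(a: ty.handle, b: ty.handle) -> None:
       A = tir.match_buffer(a, (128, 128))
       B = tir.match_buffer(b, (128, 128))
       with tir.block([128, 128], "B") as [vi, vj]:
           B[vi, vj] = A[vi, vj] * 2.0

   sch = tir.Schedule(elementwise)
   block_b = sch.get_block("B")
   sch.cache_read(block_b, 0, "local")    # stage the 0th read buffer (A) through a "local" cache
   sch.cache_write(block_b, 0, "local")   # route the 0th write buffer (B) through a "local" cache
   print(tvm.script.asscript(sch.mod["main"]))
   ```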






[GitHub] [tvm] junrushao1994 commented on a change in pull request #8863: [TensorIR][M2a] CacheRead/Write

2021-08-29 Thread GitBox


junrushao1994 commented on a change in pull request #8863:
URL: https://github.com/apache/tvm/pull/8863#discussion_r698173196



##
File path: src/tir/schedule/primitive/cache_read_write.cc
##
@@ -0,0 +1,792 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+#include "../utils.h"
+
+namespace tvm {
+namespace tir {
+
+/******** Error Classes ********/
+
+class NotSingleWriteBlock : public ScheduleError {
+ public:
+  explicit NotSingleWriteBlock(IRModule mod, Buffer buffer, Array<StmtSRef> write_blocks)
+      : mod_(std::move(mod)), buffer_(std::move(buffer)) {
+    ICHECK_GT(write_blocks.size(), 1);
+    write_blocks_.reserve(write_blocks.size());
+    for (const StmtSRef& block_sref : write_blocks) {
+      const BlockNode* block = TVM_SREF_TO_BLOCK(block, block_sref);
+      write_blocks_.push_back(GetRef<Block>(block));
+    }
+  }
+
+  String FastErrorString() const final {
+    return "ScheduleError: The buffer is only allowed to be written by a single block.";
+  }
+
+  String DetailRenderTemplate() const final {
+    size_t k = write_blocks_.size();
+    return "The buffer " + buffer_->name + " is expected to be written by a single block, but got " +
+           std::to_string(k) + " blocks that write it.";
+  }
+
+  IRModule mod() const final { return mod_; }
+  Array<ObjectRef> LocationsOfInterest() const final {
+    return {write_blocks_.begin(), write_blocks_.end()};
+  }
+
+ private:
+  IRModule mod_;
+  Buffer buffer_;
+  Array<Block> write_blocks_;
+};
+
+/******** Helper Functions/Classes ********/
+
+/*! \brief The auxiliary info used for the insertion point and content of the cache stage. */
+struct CacheStageInfo {
+  /*! \brief The buffer to be read. */
+  Buffer read_buffer;
+  /*! \brief The buffer to be written. */
+  Buffer write_buffer;
+  /*! \brief The buffer allocation statement to be inserted. */
+  Buffer alloc;
+  /*! \brief The AST node whose body is where the cache stage should be inserted. */
+  StmtSRef loc_sref;
+  /*! \brief The index to insert the cache_read/cache_write stage. */
+  size_t loc_pos;
+  /*! \brief The cache_read/cache_write stage to be inserted. */
+  Stmt cache_stage;
+  /*! \brief The map used for ScheduleStateNode::Replace. */
+  Map<Block, Block> block_map;
+};
+
+/*! \brief Return the buffer region related with the buffer */
+Optional<BufferRegion> RelatedBufferRegion(const Array<BufferRegion>& buffer_regions,
+                                           const Buffer& buffer) {
+  Optional<BufferRegion> res = NullOpt;
+  for (const auto& region : buffer_regions) {
+    if (region->buffer.same_as(buffer)) {
+      ICHECK(!res.defined());
+      res = region;
+    }
+  }
+  return res;
+}
+
+/*!
+ * \brief Create a loop nest that represents cache copy (cache_read / cache_write) from read buffer
+ *        to write buffer.
+ * \note This function will store the stmt with loop nesting to the CacheStageInfo, but only return
+ *       the inside block.
+ * \param cache_region The cached copy region.
+ * \param info The cache stage information, which will be updated in the function.
+ * \param storage_scope The storage scope of the cached buffer (only used in naming here)
+ * \returns A block indicating the body of the loop nesting.
+ */
+Block MakeCacheStage(const BufferRegion& cache_region, CacheStageInfo* info,
+                     const String& storage_scope) {
+  // loop variables
+  std::vector<Var> loop_vars;
+  // bindings in block realize
+  std::vector<PrimExpr> iter_values;
+  // Create loop vars and block vars' binding_value
+  for (const Range& axis_range : cache_region->region) {
+    Var loop_var("ax" + std::to_string(loop_vars.size()));
+    loop_vars.push_back(loop_var);
+    iter_values.push_back(axis_range->min + loop_var);
+  }
+  // block variables
+  Array<IterVar> block_vars;
+  // block access region for read/write buffers
+  Region access_region;
+  // indices used in block body
+  Array<PrimExpr> access_indices;
+  // Create block vars, block's accessed region and accessing indices
+  for (const PrimExpr& dim : cache_region->buffer->shape) {
+    Var var("v" + std::to_string(access_indices.size()));
+    block_vars.push_back(IterVar(/*dom=*/Range::FromMinExtent(0, dim),
+                                 /*var=*/var,

[GitHub] [tvm] junrushao1994 commented on a change in pull request #8863: [TensorIR][M2a] CacheRead/Write

2021-08-29 Thread GitBox


junrushao1994 commented on a change in pull request #8863:
URL: https://github.com/apache/tvm/pull/8863#discussion_r698171770




[GitHub] [tvm] junrushao1994 commented on a change in pull request #8863: [TensorIR][M2a] CacheRead/Write

2021-08-29 Thread GitBox


junrushao1994 commented on a change in pull request #8863:
URL: https://github.com/apache/tvm/pull/8863#discussion_r698170273



##
File path: src/tir/schedule/primitive/cache_read_write.cc
##
@@ -0,0 +1,792 @@
+/*! \brief Return the buffer region related with the buffer */
+Optional<BufferRegion> RelatedBufferRegion(const Array<BufferRegion>& buffer_regions,

Review comment:
   I'm looking for a better name, and this might make more sense:
   
   ```suggestion
   Optional<BufferRegion> GetBufferRegionFromBuffer(const Array<BufferRegion>& buffer_regions,
   ```








[GitHub] [tvm] junrushao1994 commented on a change in pull request #8863: [TensorIR][M2a] CacheRead/Write

2021-08-29 Thread GitBox


junrushao1994 commented on a change in pull request #8863:
URL: https://github.com/apache/tvm/pull/8863#discussion_r698149724



##
File path: src/tir/schedule/primitive/cache_read_write.cc
##
@@ -0,0 +1,792 @@
+/*! \brief The auxiliary info used for the insertion point and content of the cache stage. */
+struct CacheStageInfo {
+  /*! \brief The buffer to be read. */
+  Buffer read_buffer;
+  /*! \brief The buffer to be written. */
+  Buffer write_buffer;
+  /*! \brief The buffer allocation statement to be inserted. */

Review comment:
   To clarify a bit, given it's not a statement:
   
   ```suggestion
     /*! \brief The buffer allocation to be inserted into the block signature. */
   ```
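
   To illustrate the point (a sketch, not part of the PR): after e.g. `cache_read`, the cache buffer is declared via `alloc_buffer` in the enclosing scope rather than by a standalone allocation statement, roughly:

   ```
   import tvm
   from tvm import tir
   from tvm.script import ty

   # Roughly what the transformed function looks like: A_local is declared
   # with alloc_buffer, and a copy block stages A into it.
   @tvm.script.tir
   def cached(a: ty.handle, b: ty.handle) -> None:
       A = tir.match_buffer(a, (128, 128))
       B = tir.match_buffer(b, (128, 128))
       A_local = tir.alloc_buffer((128, 128))  # part of the block signature, not a body statement
       with tir.block([128, 128], "A_local") as [vi, vj]:
           A_local[vi, vj] = A[vi, vj]
       with tir.block([128, 128], "B") as [vi, vj]:
           B[vi, vj] = A_local[vi, vj] * 2.0
   ```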








[GitHub] [tvm] junrushao1994 commented on a change in pull request #8863: [TensorIR][M2a] CacheRead/Write

2021-08-29 Thread GitBox


junrushao1994 commented on a change in pull request #8863:
URL: https://github.com/apache/tvm/pull/8863#discussion_r698146835




[GitHub] [tvm] junrushao1994 commented on a change in pull request #8863: [TensorIR][M2a] CacheRead/Write

2021-08-29 Thread GitBox


junrushao1994 commented on a change in pull request #8863:
URL: https://github.com/apache/tvm/pull/8863#discussion_r698146419




[GitHub] [tvm] junrushao1994 commented on a change in pull request #8863: [TensorIR][M2a] CacheRead/Write

2021-08-29 Thread GitBox


junrushao1994 commented on a change in pull request #8863:
URL: https://github.com/apache/tvm/pull/8863#discussion_r698146015




[GitHub] [tvm] junrushao1994 commented on a change in pull request #8863: [TensorIR][M2a] CacheRead/Write

2021-08-29 Thread GitBox


junrushao1994 commented on a change in pull request #8863:
URL: https://github.com/apache/tvm/pull/8863#discussion_r698141962




[GitHub] [tvm] junrushao1994 commented on a change in pull request #8863: [TensorIR][M2a] CacheRead/Write

2021-08-29 Thread GitBox


junrushao1994 commented on a change in pull request #8863:
URL: https://github.com/apache/tvm/pull/8863#discussion_r698141716



##
File path: src/tir/schedule/primitive/cache_read_write.cc
##

[GitHub] [tvm] junrushao1994 commented on a change in pull request #8863: [TensorIR][M2a] CacheRead/Write

2021-08-29 Thread GitBox


junrushao1994 commented on a change in pull request #8863:
URL: https://github.com/apache/tvm/pull/8863#discussion_r698141528



##
File path: src/tir/schedule/primitive/cache_read_write.cc
##

[GitHub] [tvm] junrushao1994 commented on a change in pull request #8863: [TensorIR][M2a] CacheRead/Write

2021-08-29 Thread GitBox


junrushao1994 commented on a change in pull request #8863:
URL: https://github.com/apache/tvm/pull/8863#discussion_r698141420



##
File path: src/tir/schedule/primitive/cache_read_write.cc
##

[GitHub] [tvm] junrushao1994 commented on a change in pull request #8863: [TensorIR][M2a] CacheRead/Write

2021-08-29 Thread GitBox


junrushao1994 commented on a change in pull request #8863:
URL: https://github.com/apache/tvm/pull/8863#discussion_r698141000



##
File path: src/tir/schedule/primitive/cache_read_write.cc
##

[GitHub] [tvm] junrushao1994 commented on a change in pull request #8863: [TensorIR][M2a] CacheRead/Write

2021-08-29 Thread GitBox


junrushao1994 commented on a change in pull request #8863:
URL: https://github.com/apache/tvm/pull/8863#discussion_r698140436



##
File path: src/tir/schedule/primitive/cache_read_write.cc
##

[GitHub] [tvm] junrushao1994 commented on a change in pull request #8863: [TensorIR][M2a] CacheRead/Write

2021-08-29 Thread GitBox


junrushao1994 commented on a change in pull request #8863:
URL: https://github.com/apache/tvm/pull/8863#discussion_r698138163



##
File path: src/tir/schedule/primitive/cache_read_write.cc
##
@@ -0,0 +1,792 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+#include "../utils.h"
+
+namespace tvm {
+namespace tir {
+
+/ Error Classes /
+
+class NotSingleWriteBlock : public ScheduleError {
+ public:
+  explicit NotSingleWriteBlock(IRModule mod, Buffer buffer, Array 
write_blocks)
+  : mod_(std::move(mod)), buffer_(std::move(buffer)) {
+ICHECK_GT(write_blocks.size(), 1);
+write_blocks_.reserve(write_blocks.size());
+for (const StmtSRef& block_sref : write_blocks) {
+  const BlockNode* block = TVM_SREF_TO_BLOCK(block, block_sref);
+  write_blocks_.push_back(GetRef(block));
+}
+  }
+
+  String FastErrorString() const final {
+return "ScheduleError: The buffer is allowed to be written by single 
block.";
+  }
+
+  String DetailRenderTemplate() const final {
+size_t k = write_blocks_.size();
+return "The buffer " + buffer_->name + " is expected to be written by 
single block, but got " +
+   std::to_string(k) + " blocks who write it.";
+  }
+
+  IRModule mod() const final { return mod_; }
+  Array LocationsOfInterest() const final {
+return {write_blocks_.begin(), write_blocks_.end()};
+  }
+
+ private:
+  IRModule mod_;
+  Buffer buffer_;
+  Array write_blocks_;
+};
+
+/ Helper Functions/Classes /
+
+/*! \brief The auxiliary info used for the insertion point and content of the 
cache stage. */
+struct CacheStageInfo {
+  /*! \brief The buffer to be read. */
+  Buffer read_buffer;
+  /*! \brief The buffer to be written. */
+  Buffer write_buffer;
+  /*! \brief The buffer allocation statement to be inserted. */
+  Buffer alloc;
+  /*! \brief The AST node whose body is where the cache stage should be 
inserted. */
+  StmtSRef loc_sref;
+  /*! \brief The index to insert the cache_read/cache_write stage. */
+  size_t loc_pos;
+  /*! \brief The cache_read/cache_write stage to be inserted. */
+  Stmt cache_stage;
+  /*! \brief The map used for ScheduleStateNode::Replace. */
+  Map block_map;

Review comment:
   Let's use `block_reuse` instead to be consistent with other schedule primitives
   
   ```suggestion
  Map<Block, Block> block_reuse;
   ```




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] junrushao1994 commented on a change in pull request #8863: [TensorIR][M2a] CacheRead/Write

2021-08-29 Thread GitBox


junrushao1994 commented on a change in pull request #8863:
URL: https://github.com/apache/tvm/pull/8863#discussion_r698136293



##
File path: src/tir/schedule/transform.cc
##
@@ -31,5 +31,44 @@ Block WithAnnotation(const BlockNode* block, const String& attr_key, const ObjectRef& attr_value) {
   return Block(new_block);
 }
 
+/******** Buffer Related ********/
+Buffer WithScope(const Buffer& buffer, const String& scope) {
+  auto n = make_object<BufferNode>(*buffer.get());
+  auto new_ptr = make_object<VarNode>(*n->data.get());
+  const auto* ptr_type = new_ptr->type_annotation.as<PointerTypeNode>();
+  ICHECK(ptr_type);

Review comment:
   ```suggestion
  ObjectPtr<BufferNode> new_buffer = make_object<BufferNode>(*buffer.get());
  ObjectPtr<VarNode> new_var = make_object<VarNode>(*buffer->data.get());
  const auto* ptr_type = TVM_TYPE_AS(ptr_type, buffer->data->type_annotation, PointerTypeNode);
   ```




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] junrushao1994 commented on a change in pull request #8863: [TensorIR][M2a] CacheRead/Write

2021-08-29 Thread GitBox


junrushao1994 commented on a change in pull request #8863:
URL: https://github.com/apache/tvm/pull/8863#discussion_r698136293



##
File path: src/tir/schedule/transform.cc
##
@@ -31,5 +31,44 @@ Block WithAnnotation(const BlockNode* block, const String& attr_key, const ObjectRef& attr_value) {
   return Block(new_block);
 }
 
+/******** Buffer Related ********/
+Buffer WithScope(const Buffer& buffer, const String& scope) {
+  auto n = make_object<BufferNode>(*buffer.get());
+  auto new_ptr = make_object<VarNode>(*n->data.get());
+  const auto* ptr_type = new_ptr->type_annotation.as<PointerTypeNode>();
+  ICHECK(ptr_type);

Review comment:
   ```suggestion
  ObjectPtr<BufferNode> n = make_object<BufferNode>(*buffer.get());
  ObjectPtr<VarNode> new_ptr = make_object<VarNode>(*n->data.get());
  const auto* ptr_type = TVM_TYPE_AS(ptr_type, buffer->data->type_annotation, PointerTypeNode);
   ```
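As a rough Python-level analogue of what `WithScope` constructs (a sketch, not the C++ helper itself; the `decl_buffer` arguments here are illustrative):

```python
import tvm
from tvm import tir

# A buffer in the default "global" storage scope.
buf = tir.decl_buffer((128, 128), "float32", name="A")
# The same buffer signature re-homed to shared memory; the C++ helper
# rewrites the underlying BufferNode/VarNode pair to the same effect.
buf_shared = tir.decl_buffer(buf.shape, buf.dtype, name=buf.name + "_shared", scope="shared")
```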




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] coder-sjn opened a new issue #8879: v0.7 document?

2021-08-29 Thread GitBox


coder-sjn opened a new issue #8879:
URL: https://github.com/apache/tvm/issues/8879


   Is there a v0.7 version of the documentation for reference? Thanks!
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] junrushao1994 commented on a change in pull request #8863: [TensorIR][M2a] CacheRead/Write

2021-08-29 Thread GitBox


junrushao1994 commented on a change in pull request #8863:
URL: https://github.com/apache/tvm/pull/8863#discussion_r698110223



##
File path: src/tir/schedule/analysis.h
##
@@ -56,6 +56,14 @@ void VerifyCachedFlags(const ScheduleState& self);
 const PrimFuncNode* GetRootPrimFunc(const IRModule& mod, const StmtNode* root_block,
                                     GlobalVar* result_g_var);
 
+/******** SRef Tree Related ********/
+/*!
+ * \brief Get the root node of the sref tree, which is the root block of the 
PrimFunc.
+ * \param sref The given sref.
+ * \return The root node of the sref tree which contains the given node.
+ */
+StmtSRef GetSRefTreeRoot(const StmtSRef& sref);

Review comment:
   Move this into section "IR Module" given it is IR related




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] junrushao1994 commented on a change in pull request #8863: [TensorIR][M2a] CacheRead/Write

2021-08-29 Thread GitBox


junrushao1994 commented on a change in pull request #8863:
URL: https://github.com/apache/tvm/pull/8863#discussion_r698108905



##
File path: include/tvm/tir/schedule/state.h
##
@@ -128,6 +128,11 @@ class ScheduleStateNode : public Object {
*/
   TVM_DLL void Replace(const tir::StmtSRef& src_sref, const Stmt& tgt_stmt,
                        const Map<Block, Block>& block_sref_reuse);
+  /*!
+   * \brief Recalculate the `affine_binding` flag of the scope block info.
+   * \param scope_sref The sref to the interested scope block.
+   */
+  TVM_DLL void UpdateAffineFlag(const StmtSRef& scope_sref);

Review comment:
   TQ and I intentionally removed this method, because in most of the schedule primitives it is almost always known whether a block binding is affine or not. In our particular case, trivial bindings are always affine AFAICT.
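   For illustration, a sketch of a trivially bound block (hypothetical kernel, in the TVMScript dialect of this era): the sugared block below binds each block var 1:1 to an implicitly generated loop var, which is exactly the trivial, and therefore affine, case.
   
   ```python
   import tvm
   from tvm import tir
   from tvm.script import ty
   
   
   @tvm.script.tir
   def add(a: ty.handle, b: ty.handle, c: ty.handle) -> None:
       A = tir.match_buffer(a, (128, 128), "float32")
       B = tir.match_buffer(b, (128, 128), "float32")
       C = tir.match_buffer(c, (128, 128), "float32")
       # Each block var is bound 1:1 to an implicit loop var -- a trivial,
       # hence affine, binding.
       with tir.block([128, 128], "C") as [vi, vj]:
           C[vi, vj] = A[vi, vj] + B[vi, vj]
   ```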




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] junrushao1994 edited a comment on pull request #8816: [ONNX][TOPI] Support select_last_index for argmin/max

2021-08-29 Thread GitBox


junrushao1994 edited a comment on pull request #8816:
URL: https://github.com/apache/tvm/pull/8816#issuecomment-907938696


   Would be nice if someone could help review the PR next week. Thanks a lot! 
CC @jcf94 @mbrookhart @jwfromm @masahi 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] junrushao1994 edited a comment on pull request #8816: [ONNX][TOPI] Support select_last_index for argmin/max

2021-08-29 Thread GitBox


junrushao1994 edited a comment on pull request #8816:
URL: https://github.com/apache/tvm/pull/8816#issuecomment-907938696


   Would be nice if someone could help review the PR next week. CC @jcf94 
@mbrookhart @jwfromm


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] junrushao1994 commented on pull request #8816: [ONNX][TOPI] Support select_last_index for argmin/max

2021-08-29 Thread GitBox


junrushao1994 commented on pull request #8816:
URL: https://github.com/apache/tvm/pull/8816#issuecomment-907938696


   Would be nice if someone could help review the PR! CC @jcf94 @mbrookhart 
@jwfromm


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] junrushao1994 commented on a change in pull request #8835: Change target string to Target object in the TE compiler and interpreter

2021-08-29 Thread GitBox


junrushao1994 commented on a change in pull request #8835:
URL: https://github.com/apache/tvm/pull/8835#discussion_r698103474



##
File path: include/tvm/target/target.h
##
@@ -203,5 +204,59 @@ void CheckAndUpdateHostConsistency(Map<Target, IRModule>* target, Target* host);
  * \param host The Target typed object for target host to be updated
  */
 void CheckAndUpdateHostConsistency(Map<Integer, Target>* target, Target* host);
+
+// TODO(@electriclilies): Move to somewhere in backend and add note about appropriate use

Review comment:
   Hey, what about moving these methods temporarily to `src/relay/backend/utils.h` instead? Given these are only used in the relay backend right now, I think it would help to prevent future developers from using them :-)




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[tvm] branch main updated (27d3d60 -> 06fc788)

2021-08-29 Thread junrushao
This is an automated email from the ASF dual-hosted git repository.

junrushao pushed a change to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git.


from 27d3d60  [TIR] GetBlockReadWriteRegion (#8875)
 add 06fc788  [RISCV] Add support for llvm parameter -mabi (-target-abi) 
(#8860)

No new revisions were added by this update.

Summary of changes:
 python/tvm/target/__init__.py  | 17 +-
 python/tvm/target/target.py| 50 ++
 src/target/llvm/llvm_common.cc |  6 +
 src/target/llvm/llvm_module.cc | 14 ++--
 src/target/target_kind.cc  |  1 +
 5 files changed, 85 insertions(+), 3 deletions(-)
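
For reference, a target string exercising the new option might look like this (a sketch; the triple, CPU, and ABI values are illustrative, not taken from the commit):

```python
import tvm

# Hypothetical RISC-V target using the newly added -mabi parameter.
target = tvm.target.Target(
    "llvm -mtriple=riscv64-unknown-linux-gnu -mcpu=generic-rv64 -mabi=lp64d"
)
print(target)
```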


[GitHub] [tvm] junrushao1994 merged pull request #8860: [RISCV] Add support for llvm parameter -mabi (aka -target-abi)

2021-08-29 Thread GitBox


junrushao1994 merged pull request #8860:
URL: https://github.com/apache/tvm/pull/8860


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm-rfcs] junrushao1994 commented on pull request #23: [RFC][TIR] TIR Pinned Memory Representation

2021-08-29 Thread GitBox


junrushao1994 commented on pull request #23:
URL: https://github.com/apache/tvm-rfcs/pull/23#issuecomment-907922556


   To clarify a bit, @tqchen and I are referring to a solution in which we still use "global" as the storage scope, which serves as a property of the storage, but add extra annotations (e.g. "sram") to further indicate the exact place where the buffer is stored. This way, the system is able to schedule correctly.
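   A hypothetical sketch of that idea (the attribute key `physical_placement` and the placement string are made up for illustration; only the `scope="global"` part reflects the proposal):
   
   ```python
   import tvm
   from tvm import tir
   
   # The buffer itself stays in the "global" storage scope ...
   buf = tir.decl_buffer((1024,), "int8", name="weights", scope="global")
   # ... while an extra function-level annotation records the physical placement.
   func = tvm.tir.PrimFunc(params=[buf.data], body=tir.Evaluate(0))
   func = func.with_attr("physical_placement", {"weights": "sram"})
   ```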


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm-rfcs] junrushao1994 commented on pull request #22: [RFC][TIR] TIR Non-scalar Constants

2021-08-29 Thread GitBox


junrushao1994 commented on pull request #22:
URL: https://github.com/apache/tvm-rfcs/pull/22#issuecomment-907921273


   Thanks @manupa-arm for the nice RFC! I left a few questions, and happy to 
discuss more!


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm-rfcs] junrushao1994 commented on a change in pull request #22: [RFC][TIR] TIR Non-scalar Constants

2021-08-29 Thread GitBox


junrushao1994 commented on a change in pull request #22:
URL: https://github.com/apache/tvm-rfcs/pull/22#discussion_r698099374



##
File path: rfcs/0022-tir-non-scalar-constants.md
##
@@ -0,0 +1,107 @@
+
+- Feature Name: tir_non_scalar_constants
+- Start Date: 2021-06-01
+- RFC PR: https://github.com/apache/tvm-rfcs/pull/22
+- GitHub Issue: TBD
+
+# 1. Summary
+
+This RFC proposes how non-scalar constants could be represented in TIR and 
used by passes in the lowering process.
+
+# 2. Motivation 
+
+Currently, the non-scalar constants could be represented in Relay 
(relay.Constant) to be used by relay passes but not in TIR. Therefore, when 
performing lowering using TIR passes, we have to maintain a side-channel of 
tir::Var to constant non-scalar data mapping to perform transformations that 
could use the knowledge where some of the data are constants.
+
+Few example scenarios as further motivation :
+
+## Weight compression
+
+When lowering for accelerators (E.g. : [Arm(R) Ethos(TM)-U 
NPU](https://github.com/apache/tvm-rfcs/pull/11)), certain operations will need 
to get tiled to co-optimize performance and memory utilization. Such tiling 
patterns create slices of weights that need compressing that will end up with 
varying sizes. Therefore, the knowledge of some tir::Vars refer to constants 
are critical in the level of TIR to perform this.
+
+## Memory Planning
+
+The TIR program has the ability to express both inter and intra operator 
memory requirement, post-scheduling as explained further by [Unified Static 
Memory Planning RFC](https://github.com/apache/tvm-rfcs/pull/9). It would be 
better if the constants could be embedded to the TIR PrimFunc. Moreover, this 
allows various [target-dependent 
lowerings](https://github.com/apache/tvm-rfcs/pull/10), to produce TIR 
PrimFuncs with constants in it.
+
+## Winograd Constants
+
+The Winograd transformation (used for fast GEMMs) involves multiplication by a 
hard-coded constant tensor. This is currently accomplished in TE using a 
complicated TE compute expression with many nested selects. Being able to 
directly express a constant tensor here would significantly simplify this code.
+
+
+# 3. Guide-level explanation
+
+This is not particularly a user-facing feature and this will allow constants 
to be 'linked' to TIR. Initially, we are planning to use this with gated on 
'-link-params' argument for relay.build and TVMC.
+
+# 4. Reference-level explanation
+
+The proposal is quite simple and it could be explained as follows :
+
+```
+@tvm.script.tir
+def myfunc():   
+   param = tir.allocate_const([1, 1, 1, 1, 1, 1, 1, 1, 1, 1], "int32", [10])
+```
+
+This follows closely the semantics of tir.allocate and the difference being it 
represent a buffer filled with constants.

Review comment:
   Perhaps it is worthwhile to discuss the semantics a bit more :-)
   
   Will the constants be allocated on stack or on heap? Is this designed for 
small matrices (e.g. the small matrix in winograd), or relatively larger 
matrices (e.g. the weight that needs prefetching)? How will lowering and code 
generation be affected? Does it work for GPU and other devices? How does it 
affect linkers' job?




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm-rfcs] junrushao1994 commented on a change in pull request #22: [RFC][TIR] TIR Non-scalar Constants

2021-08-29 Thread GitBox


junrushao1994 commented on a change in pull request #22:
URL: https://github.com/apache/tvm-rfcs/pull/22#discussion_r698099022



##
File path: rfcs/0022-tir-non-scalar-constants.md
##
@@ -0,0 +1,107 @@
+
+- Feature Name: tir_non_scalar_constants
+- Start Date: 2021-06-01
+- RFC PR: https://github.com/apache/tvm-rfcs/pull/22
+- GitHub Issue: TBD
+
+# 1. Summary
+
+This RFC proposes how non-scalar constants could be represented in TIR and 
used by passes in the lowering process.
+
+# 2. Motivation 
+
+Currently, the non-scalar constants could be represented in Relay 
(relay.Constant) to be used by relay passes but not in TIR. Therefore, when 
performing lowering using TIR passes, we have to maintain a side-channel of 
tir::Var to constant non-scalar data mapping to perform transformations that 
could use the knowledge where some of the data are constants.
+
+Few example scenarios as further motivation :
+
+## Weight compression
+
+When lowering for accelerators (E.g. : [Arm(R) Ethos(TM)-U 
NPU](https://github.com/apache/tvm-rfcs/pull/11)), certain operations will need 
to get tiled to co-optimize performance and memory utilization. Such tiling 
patterns create slices of weights that need compressing that will end up with 
varying sizes. Therefore, the knowledge of some tir::Vars refer to constants 
are critical in the level of TIR to perform this.
+
+## Memory Planning
+
+The TIR program has the ability to express both inter and intra operator 
memory requirement, post-scheduling as explained further by [Unified Static 
Memory Planning RFC](https://github.com/apache/tvm-rfcs/pull/9). It would be 
better if the constants could be embedded to the TIR PrimFunc. Moreover, this 
allows various [target-dependent 
lowerings](https://github.com/apache/tvm-rfcs/pull/10), to produce TIR 
PrimFuncs with constants in it.
+
+## Winograd Constants
+
+The Winograd transformation (used for fast GEMMs) involves multiplication by a 
hard-coded constant tensor. This is currently accomplished in TE using a 
complicated TE compute expression with many nested selects. Being able to 
directly express a constant tensor here would significantly simplify this code.
+
+
+# 3. Guide-level explanation

Review comment:
   Perhaps it is worthwhile to discuss the semantics of a TIR constant as well :-)
   
   Will the constants be allocated on stack or on heap? Is this designed for 
small matrices (e.g. the small matrix in winograd), or relatively larger 
matrices (e.g. the weight that needs prefetching)? How will lowering and code 
generation be affected? Does it work for GPU and other devices? 




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm-rfcs] junrushao1994 commented on a change in pull request #22: [RFC][TIR] TIR Non-scalar Constants

2021-08-29 Thread GitBox


junrushao1994 commented on a change in pull request #22:
URL: https://github.com/apache/tvm-rfcs/pull/22#discussion_r698098329



##
File path: rfcs/0022-tir-non-scalar-constants.md
##
@@ -0,0 +1,107 @@
+
+- Feature Name: tir_non_scalar_constants
+- Start Date: 2021-06-01
+- RFC PR: https://github.com/apache/tvm-rfcs/pull/22
+- GitHub Issue: TBD
+
+# 1. Summary
+
+This RFC proposes how non-scalar constants could be represented in TIR and 
used by passes in the lowering process.
+
+# 2. Motivation 
+
+Currently, the non-scalar constants could be represented in Relay 
(relay.Constant) to be used by relay passes but not in TIR. Therefore, when 
performing lowering using TIR passes, we have to maintain a side-channel of 
tir::Var to constant non-scalar data mapping to perform transformations that 
could use the knowledge where some of the data are constants.
+
+Few example scenarios as further motivation :
+
+## Weight compression
+
+When lowering for accelerators (E.g. : [Arm(R) Ethos(TM)-U 
NPU](https://github.com/apache/tvm-rfcs/pull/11)), certain operations will need 
to get tiled to co-optimize performance and memory utilization. Such tiling 
patterns create slices of weights that need compressing that will end up with 
varying sizes. Therefore, the knowledge of some tir::Vars refer to constants 
are critical in the level of TIR to perform this.
+
+## Memory Planning
+
+The TIR program has the ability to express both inter and intra operator 
memory requirement, post-scheduling as explained further by [Unified Static 
Memory Planning RFC](https://github.com/apache/tvm-rfcs/pull/9). It would be 
better if the constants could be embedded to the TIR PrimFunc. Moreover, this 
allows various [target-dependent 
lowerings](https://github.com/apache/tvm-rfcs/pull/10), to produce TIR 
PrimFuncs with constants in it.
+
+## Winograd Constants
+
+The Winograd transformation (used for fast GEMMs) involves multiplication by a 
hard-coded constant tensor. This is currently accomplished in TE using a 
complicated TE compute expression with many nested selects. Being able to 
directly express a constant tensor here would significantly simplify this code.
+
+
+# 3. Guide-level explanation
+
+This is not particularly a user-facing feature and this will allow constants 
to be 'linked' to TIR. Initially, we are planning to use this with gated on 
'-link-params' argument for relay.build and TVMC.

Review comment:
   add `tvm.build` as well (i suppose)




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm-rfcs] junrushao1994 commented on a change in pull request #22: [RFC][TIR] TIR Non-scalar Constants

2021-08-29 Thread GitBox


junrushao1994 commented on a change in pull request #22:
URL: https://github.com/apache/tvm-rfcs/pull/22#discussion_r698098329



##
File path: rfcs/0022-tir-non-scalar-constants.md
##
@@ -0,0 +1,107 @@
+
+- Feature Name: tir_non_scalar_constants
+- Start Date: 2021-06-01
+- RFC PR: https://github.com/apache/tvm-rfcs/pull/22
+- GitHub Issue: TBD
+
+# 1. Summary
+
+This RFC proposes how non-scalar constants could be represented in TIR and 
used by passes in the lowering process.
+
+# 2. Motivation 
+
+Currently, the non-scalar constants could be represented in Relay 
(relay.Constant) to be used by relay passes but not in TIR. Therefore, when 
performing lowering using TIR passes, we have to maintain a side-channel of 
tir::Var to constant non-scalar data mapping to perform transformations that 
could use the knowledge where some of the data are constants.
+
+Few example scenarios as further motivation :
+
+## Weight compression
+
+When lowering for accelerators (E.g. : [Arm(R) Ethos(TM)-U 
NPU](https://github.com/apache/tvm-rfcs/pull/11)), certain operations will need 
to get tiled to co-optimize performance and memory utilization. Such tiling 
patterns create slices of weights that need compressing that will end up with 
varying sizes. Therefore, the knowledge of some tir::Vars refer to constants 
are critical in the level of TIR to perform this.
+
+## Memory Planning
+
+The TIR program has the ability to express both inter and intra operator 
memory requirement, post-scheduling as explained further by [Unified Static 
Memory Planning RFC](https://github.com/apache/tvm-rfcs/pull/9). It would be 
better if the constants could be embedded to the TIR PrimFunc. Moreover, this 
allows various [target-dependent 
lowerings](https://github.com/apache/tvm-rfcs/pull/10), to produce TIR 
PrimFuncs with constants in it.
+
+## Winograd Constants
+
+The Winograd transformation (used for fast GEMMs) involves multiplication by a 
hard-coded constant tensor. This is currently accomplished in TE using a 
complicated TE compute expression with many nested selects. Being able to 
directly express a constant tensor here would significantly simplify this code.
+
+
+# 3. Guide-level explanation
+
+This is not particularly a user-facing feature and this will allow constants 
to be 'linked' to TIR. Initially, we are planning to use this with gated on 
'-link-params' argument for relay.build and TVMC.

Review comment:
   + `tvm.build` (i suppose)




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm-rfcs] junrushao1994 commented on a change in pull request #22: [RFC][TIR] TIR Non-scalar Constants

2021-08-29 Thread GitBox


junrushao1994 commented on a change in pull request #22:
URL: https://github.com/apache/tvm-rfcs/pull/22#discussion_r698098179



##
File path: rfcs/0022-tir-non-scalar-constants.md
##
@@ -0,0 +1,107 @@
+
+- Feature Name: tir_non_scalar_constants
+- Start Date: 2021-06-01
+- RFC PR: https://github.com/apache/tvm-rfcs/pull/22
+- GitHub Issue: TBD
+
+# 1. Summary
+
+This RFC proposes how non-scalar constants could be represented in TIR and 
used by passes in the lowering process.
+
+# 2. Motivation 
+
+Currently, the non-scalar constants could be represented in Relay 
(relay.Constant) to be used by relay passes but not in TIR. Therefore, when 
performing lowering using TIR passes, we have to maintain a side-channel of 
tir::Var to constant non-scalar data mapping to perform transformations that 
could use the knowledge where some of the data are constants.
+
+Few example scenarios as further motivation :
+
+## Weight compression
+
+When lowering for accelerators (E.g. : [Arm(R) Ethos(TM)-U 
NPU](https://github.com/apache/tvm-rfcs/pull/11)), certain operations will need 
to get tiled to co-optimize performance and memory utilization. Such tiling 
patterns create slices of weights that need compressing that will end up with 
varying sizes. Therefore, the knowledge of some tir::Vars refer to constants 
are critical in the level of TIR to perform this.
+
+## Memory Planning
+
+The TIR program has the ability to express both inter and intra operator 
memory requirement, post-scheduling as explained further by [Unified Static 
Memory Planning RFC](https://github.com/apache/tvm-rfcs/pull/9). It would be 
better if the constants could be embedded to the TIR PrimFunc. Moreover, this 
allows various [target-dependent 
lowerings](https://github.com/apache/tvm-rfcs/pull/10), to produce TIR 
PrimFuncs with constants in it.

Review comment:
   I am sorry, but I am not sure I understand this claim correctly. Do we need to know the actual values inside the constants to do memory planning? My understanding is that if we know the shapes/dtypes, and the addresses at which the constants are stored in memory, then we can do all sorts of memory planning and prefetching. Is that correct? Thanks a lot!




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm-rfcs] junrushao1994 commented on a change in pull request #22: [RFC][TIR] TIR Non-scalar Constants

2021-08-29 Thread GitBox


junrushao1994 commented on a change in pull request #22:
URL: https://github.com/apache/tvm-rfcs/pull/22#discussion_r698097817



##
File path: rfcs/0022-tir-non-scalar-constants.md
##
@@ -0,0 +1,107 @@
+
+- Feature Name: tir_non_scalar_constants
+- Start Date: 2021-06-01
+- RFC PR: https://github.com/apache/tvm-rfcs/pull/22
+- GitHub Issue: TBD
+
+# 1. Summary
+
+This RFC proposes how non-scalar constants could be represented in TIR and 
used by passes in the lowering process.
+
+# 2. Motivation 
+
+Currently, the non-scalar constants could be represented in Relay 
(relay.Constant) to be used by relay passes but not in TIR. Therefore, when 
performing lowering using TIR passes, we have to maintain a side-channel of 
tir::Var to constant non-scalar data mapping to perform transformations that 
could use the knowledge where some of the data are constants.
+
+Few example scenarios as further motivation :
+
+## Weight compression
+
+When lowering for accelerators (E.g. : [Arm(R) Ethos(TM)-U 
NPU](https://github.com/apache/tvm-rfcs/pull/11)), certain operations will need 
to get tiled to co-optimize performance and memory utilization. Such tiling 
patterns create slices of weights that need compressing that will end up with 
varying sizes. Therefore, the knowledge of some tir::Vars refer to constants 
are critical in the level of TIR to perform this.

Review comment:
   Hey, I have a question here. I wonder whether the values inside these constant weights actually matter, or whether what matters is their shapes/dtypes? If we only care about the shapes/dtypes, we don't need to link the actual values into TIR, is that correct? Thanks!




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm-rfcs] junrushao1994 commented on a change in pull request #22: [RFC][TIR] TIR Non-scalar Constants

2021-08-29 Thread GitBox


junrushao1994 commented on a change in pull request #22:
URL: https://github.com/apache/tvm-rfcs/pull/22#discussion_r698097515



##
File path: rfcs/0022-tir-non-scalar-constants.md
##
@@ -0,0 +1,107 @@
+
+- Feature Name: tir_non_scalar_constants
+- Start Date: 2021-06-01
+- RFC PR: https://github.com/apache/tvm-rfcs/pull/22
+- GitHub Issue: TBD
+
+# 1. Summary
+
+This RFC proposes how non-scalar constants could be represented in TIR and 
used by passes in the lowering process.
+
+# 2. Motivation 
+
+Currently, the non-scalar constants could be represented in Relay 
(relay.Constant) to be used by relay passes but not in TIR. Therefore, when 
performing lowering using TIR passes, we have to maintain a side-channel of 
tir::Var to constant non-scalar data mapping to perform transformations that 
could use the knowledge where some of the data are constants.
+
+Few example scenarios as further motivation :
+
+## Weight compression
+
+When lowering for accelerators (E.g. : [Arm(R) Ethos(TM)-U 
NPU](https://github.com/apache/tvm-rfcs/pull/11)), certain operations will need 
to get tiled to co-optimize performance and memory utilization. Such tiling 
patterns create slices of weights that need compressing that will end up with 
varying sizes. Therefore, the knowledge of some tir::Vars refer to constants 
are critical in the level of TIR to perform this.
+
+## Memory Planning
+
+The TIR program has the ability to express both inter and intra operator 
memory requirement, post-scheduling as explained further by [Unified Static 
Memory Planning RFC](https://github.com/apache/tvm-rfcs/pull/9). It would be 
better if the constants could be embedded to the TIR PrimFunc. Moreover, this 
allows various [target-dependent 
lowerings](https://github.com/apache/tvm-rfcs/pull/10), to produce TIR 
PrimFuncs with constants in it.
+
+## Winograd Constants
+
+The Winograd transformation (used for fast GEMMs) involves multiplication by a 
hard-coded constant tensor. This is currently accomplished in TE using a 
complicated TE compute expression with many nested selects. Being able to 
directly express a constant tensor here would significantly simplify this code.

Review comment:
   The constants in Winograd convolutions form a relatively small matrix, which is always unrolled in TIR scheduling and inlined into the generated binary, so it won't incur any extra storage the way `relay.Constant` does. So my question is: how does this RFC handle inlining such constants into the corresponding unrolled operations?
   
   As a concrete example, suppose we have the code snippet below:
   
   ```python
   tir.constant(c, [1, 2, 3])  # a temporary syntax for declaring constants
   for i in tir.unroll(3):
       a[i] = b[i] * c[i]
   ```
   
   Does this RFC consider a pass that inlines these constants into the code and transforms it into the following TIR:
   
   ```python
   a[0] = b[0] * 1
   a[1] = b[1] * 2
   a[2] = b[2] * 3
   ```
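
   A toy Python analogue of the intended fold (plain strings, outside TIR, purely for illustration):

   ```python
   c = (1, 2, 3)  # the declared constant
   # After unrolling, every index into c is a literal, so each load
   # of c[i] can be replaced by the known value:
   unrolled = [f"a[{i}] = b[{i}] * c[{i}]" for i in range(3)]
   folded = [f"a[{i}] = b[{i}] * {c[i]}" for i in range(3)]
   print(folded)  # ['a[0] = b[0] * 1', 'a[1] = b[1] * 2', 'a[2] = b[2] * 3']
   ```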








[GitHub] [tvm-rfcs] junrushao1994 removed a comment on pull request #22: [RFC][TIR] TIR Non-scalar Constants

2021-08-29 Thread GitBox


junrushao1994 removed a comment on pull request #22:
URL: https://github.com/apache/tvm-rfcs/pull/22#issuecomment-900483549


   Thank you so much for the RFC!! Will read tomorrow ❤️






[GitHub] [tvm-rfcs] junrushao1994 commented on pull request #25: [RFC]PyTorchTVM

2021-08-29 Thread GitBox


junrushao1994 commented on pull request #25:
URL: https://github.com/apache/tvm-rfcs/pull/25#issuecomment-907911081


   Thanks again for the contribution! I have no doubt that it is going to be a hugely important piece of work. I just made some suggestions in terms of wording.






[GitHub] [tvm-rfcs] junrushao1994 commented on a change in pull request #25: [RFC]PyTorchTVM

2021-08-29 Thread GitBox


junrushao1994 commented on a change in pull request #25:
URL: https://github.com/apache/tvm-rfcs/pull/25#discussion_r698094948



##
File path: rfcs/0025-add-pytorch-tvm.md
##
@@ -0,0 +1,265 @@
+- Feature Name: PyTorchTVM
+- Start Date: 2021-08-24
+- RFC PR: [apache/tvm-rfcs#0025](https://github.com/apache/tvm-rfcs/pull/25)
+- GitHub Issue: TODO
+
+# Summary
+[summary]: #summary
+
+This RFC add a `PyTorchTVM` module to support: compile TorchScript to TVM and 
use accelerated module in PyTorch.
+
+To increase the TVM accessibility for PyTorch users, we propose `PyTorchTVM` 
module to support the following workflow:
+1. convert a torchscript module to tvm graph
+2. build and tune tvm graph
+3. export well-tuned tvm graph as a pytorch op
+4. torch jit trace the tvm pytorch op with other pytorch modules, then 
save/load/serve as normal pytorch model
+
+
+
+# Motivation
+[motivation]: #motivation
+
+PyTorch framework is increasingly being adopted for research and production. 
At the same time, PyTorch lacks an effective inference acceleration toolchain, 
which is the main concern in the industry. Existing acceleration includes:
+
+* PyTorch → ONNX → TensorRT/TVM
+* PyTorch → torchscript → TensorRT/TVM
+
+From our perspective, there are some limitations for both ONNX and TensorRT:
+
+* Onnx cannot cover all models with dynamic control flow (e.g. for loop)
+* TensorRT can only accelerate some standard networks
+
+So we hope to use TVM to accelerate PyTorch model inference.
+
+
+# Guide-level explanation
+[guide-level-explanation]: #guide-level-explanation
+
+
+For example, we have an end-to-end resnet classification model, consisting of 
3 parts:
+
+1. Image reader
+2. Image transforms
+3. Resnet model inference
+
+```
+class Predictor(nn.Module):
+
+def __init__(self, tvm_module=None):
+super().__init__()
+self.resnet18 = resnet18(pretrained=True, progress=False).eval()
+self.transforms = nn.Sequential(
+T.Resize([256, ]),  # We use single int value inside a list due to 
torchscript type restrictions
+T.CenterCrop(224),
+T.ConvertImageDtype(torch.half),
+T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
+)
+
+def forward(self, image_path: List[str]) -> torch.Tensor:
+with torch.no_grad():
+images: List[torch.Tensor] = []
+for path in image_path:
+img = read_image(path)
+images.append(img)
+x = torch.stack(images).cuda().half()
+x = self.transforms(x)
+print(x.shape)
+y_pred = self.resnet18(x)
+return y_pred.argmax(dim=1)
+```
+
+We choose to accelerate resnet model with PyTorchTVM
+
+```
+from tvm.contrib.pt_op import PyTorchTVMModule, compile
+
+print("compile...")
+option = {
+"input_infos": [
+("x", (1, 3, 224, 224)),
+],
+"default_dtype": "float16",
+"export_dir": "pytorch_compiled",
+"num_outputs": 1,
+"tuning_n_trials": 0,  # set zero to skip tuning
+"tuning_log_file": "tuning.log",
+}
+x = torch.randn(1, 3, 224, 224).cuda().half()
+resnet_jit = torch.jit.trace(model.resnet18, x)
+resnet_tvm = compile(resnet_jit, option)
+```
+
+Then we can use the accelerated tvm module directly in pytorch, and also use 
`torch.jit.script` together with the other 2 parts.
+
+```
+resnet_tvm = torch.jit.script(resnet_tvm)
+print(resnet_tvm.graph)
+
+
+class PredictorTVM(nn.Module):
+
+def __init__(self):
+super().__init__()
+self.resnet18 = resnet_tvm
+self.transforms = nn.Sequential(
+T.Resize([256, ]),  # We use single int value inside a list due to 
torchscript type restrictions
+T.CenterCrop(224),
+T.ConvertImageDtype(torch.half),
+T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
+)
+
+def forward(self, image_path: List[str]) -> torch.Tensor:
+with torch.no_grad():
+images: List[torch.Tensor] = []
+for path in image_path:
+img = read_image(path)
+images.append(img)
+x = torch.stack(images).cuda().half()
+x = self.transforms(x)
+# y_pred = self.resnet18(x)
+y_pred = self.resnet18([x])[0]
+return y_pred.argmax(dim=1)
+
+
+print("run tvm...")
+model_tvm = PredictorTVM().cuda().half()
+for i in range(20):
+t = time.time()
+model_tvm([image_path])
+torch.cuda.synchronize()
+print(time.time() - t)
+
+torch.jit.script(model_tvm).save("model_tvm.pt")
+```
+
+Finally, we get a TVM accelerated model, which can be loaded and served in 
production.

Review comment:
   ```suggestion
   Note that the script above provides a seamless, serializable solution that allows TVM acceleration to be embedded into TorchScript and thus served in online production without extra effort.
   ```
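
   (A minimal sketch of the serving side under that claim, reusing the `model_tvm.pt` artifact saved above; the image path is hypothetical:)

   ```python
   import torch

   # Loading the TorchScript model also restores the embedded TVM module;
   # no TVM-specific loading code is needed at serving time.
   model = torch.jit.load("model_tvm.pt")
   pred = model(["example.jpg"])  # hypothetical image path
   ```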





[GitHub] [tvm-rfcs] junrushao1994 commented on a change in pull request #25: [RFC]PyTorchTVM

2021-08-29 Thread GitBox


junrushao1994 commented on a change in pull request #25:
URL: https://github.com/apache/tvm-rfcs/pull/25#discussion_r698094649



##
File path: rfcs/0025-add-pytorch-tvm.md
##
@@ -0,0 +1,265 @@
+- Feature Name: PyTorchTVM
+- Start Date: 2021-08-24
+- RFC PR: [apache/tvm-rfcs#0025](https://github.com/apache/tvm-rfcs/pull/25)
+- GitHub Issue: TODO
+
+# Summary
+[summary]: #summary
+
+This RFC add a `PyTorchTVM` module to support: compile TorchScript to TVM and 
use accelerated module in PyTorch.
+
+To increase the TVM accessibility for PyTorch users, we propose `PyTorchTVM` 
module to support the following workflow:
+1. convert a torchscript module to tvm graph
+2. build and tune tvm graph
+3. export well-tuned tvm graph as a pytorch op
+4. torch jit trace the tvm pytorch op with other pytorch modules, then 
save/load/serve as normal pytorch model
+
+
+
+# Motivation
+[motivation]: #motivation
+
+PyTorch framework is increasingly being adopted for research and production. 
At the same time, PyTorch lacks an effective inference acceleration toolchain, 
which is the main concern in the industry. Existing acceleration includes:
+
+* PyTorch → ONNX → TensorRT/TVM
+* PyTorch → torchscript → TensorRT/TVM
+
+From our perspective, there are some limitations for both ONNX and TensorRT:
+
+* Onnx cannot cover all models with dynamic control flow (e.g. for loop)
+* TensorRT can only accelerate some standard networks
+
+So we hope to use TVM to accelerate PyTorch model inference.
+
+
+# Guide-level explanation
+[guide-level-explanation]: #guide-level-explanation
+
+
+For example, we have an end-to-end resnet classification model, consisting of 
3 parts:
+
+1. Image reader
+2. Image transforms
+3. Resnet model inference
+
+```
+class Predictor(nn.Module):
+
+def __init__(self, tvm_module=None):
+super().__init__()
+self.resnet18 = resnet18(pretrained=True, progress=False).eval()
+self.transforms = nn.Sequential(
+T.Resize([256, ]),  # We use single int value inside a list due to 
torchscript type restrictions
+T.CenterCrop(224),
+T.ConvertImageDtype(torch.half),
+T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
+)
+
+def forward(self, image_path: List[str]) -> torch.Tensor:
+with torch.no_grad():
+images: List[torch.Tensor] = []
+for path in image_path:
+img = read_image(path)
+images.append(img)
+x = torch.stack(images).cuda().half()
+x = self.transforms(x)
+print(x.shape)
+y_pred = self.resnet18(x)
+return y_pred.argmax(dim=1)
+```
+
+We choose to accelerate resnet model with PyTorchTVM
+
+```
+from tvm.contrib.pt_op import PyTorchTVMModule, compile
+
+print("compile...")
+option = {
+"input_infos": [
+("x", (1, 3, 224, 224)),
+],
+"default_dtype": "float16",
+"export_dir": "pytorch_compiled",
+"num_outputs": 1,
+"tuning_n_trials": 0,  # set zero to skip tuning
+"tuning_log_file": "tuning.log",
+}
+x = torch.randn(1, 3, 224, 224).cuda().half()
+resnet_jit = torch.jit.trace(model.resnet18, x)
+resnet_tvm = compile(resnet_jit, option)
+```
+
+Then we can use the accelerated tvm module directly in pytorch, and also use 
`torch.jit.script` together with the other 2 parts.
+
+```
+resnet_tvm = torch.jit.script(resnet_tvm)
+print(resnet_tvm.graph)
+
+
+class PredictorTVM(nn.Module):
+
+def __init__(self):
+super().__init__()
+self.resnet18 = resnet_tvm
+self.transforms = nn.Sequential(
+T.Resize([256, ]),  # We use single int value inside a list due to 
torchscript type restrictions
+T.CenterCrop(224),
+T.ConvertImageDtype(torch.half),
+T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
+)
+
+def forward(self, image_path: List[str]) -> torch.Tensor:
+with torch.no_grad():
+images: List[torch.Tensor] = []
+for path in image_path:
+img = read_image(path)
+images.append(img)
+x = torch.stack(images).cuda().half()
+x = self.transforms(x)
+# y_pred = self.resnet18(x)
+y_pred = self.resnet18([x])[0]

Review comment:
   To make it read more naturally:
   
   ```suggestion
   y_pred, = self.resnet18([x])
   ```
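
   (For clarity: single-element unpacking also asserts the output arity at runtime. A toy illustration:)

   ```python
   y_pred, = [42]      # unpacks the single element of the list
   # y_pred, = [1, 2]  # would raise ValueError: too many values to unpack
   ```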








[GitHub] [tvm-rfcs] junrushao1994 commented on a change in pull request #25: [RFC]PyTorchTVM

2021-08-29 Thread GitBox


junrushao1994 commented on a change in pull request #25:
URL: https://github.com/apache/tvm-rfcs/pull/25#discussion_r698094475



##
File path: rfcs/0025-add-pytorch-tvm.md
##
@@ -0,0 +1,265 @@
+- Feature Name: PyTorchTVM
+- Start Date: 2021-08-24
+- RFC PR: [apache/tvm-rfcs#0025](https://github.com/apache/tvm-rfcs/pull/25)
+- GitHub Issue: TODO
+
+# Summary
+[summary]: #summary
+
+This RFC add a `PyTorchTVM` module to support: compile TorchScript to TVM and 
use accelerated module in PyTorch.
+
+To increase the TVM accessibility for PyTorch users, we propose `PyTorchTVM` 
module to support the following workflow:
+1. convert a torchscript module to tvm graph
+2. build and tune tvm graph
+3. export well-tuned tvm graph as a pytorch op
+4. torch jit trace the tvm pytorch op with other pytorch modules, then 
save/load/serve as normal pytorch model
+
+
+
+# Motivation
+[motivation]: #motivation
+
+PyTorch framework is increasingly being adopted for research and production. 
At the same time, PyTorch lacks an effective inference acceleration toolchain, 
which is the main concern in the industry. Existing acceleration includes:
+
+* PyTorch → ONNX → TensorRT/TVM
+* PyTorch → torchscript → TensorRT/TVM
+
+From our perspective, there are some limitations for both ONNX and TensorRT:
+
+* Onnx cannot cover all models with dynamic control flow (e.g. for loop)
+* TensorRT can only accelerate some standard networks
+
+So we hope to use TVM to accelerate PyTorch model inference.
+
+
+# Guide-level explanation
+[guide-level-explanation]: #guide-level-explanation
+
+
+For example, we have an end-to-end resnet classification model, consisting of 
3 parts:
+
+1. Image reader
+2. Image transforms
+3. Resnet model inference
+
+```
+class Predictor(nn.Module):
+
+def __init__(self, tvm_module=None):
+super().__init__()
+self.resnet18 = resnet18(pretrained=True, progress=False).eval()
+self.transforms = nn.Sequential(
+T.Resize([256, ]),  # We use single int value inside a list due to 
torchscript type restrictions
+T.CenterCrop(224),
+T.ConvertImageDtype(torch.half),
+T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
+)
+
+def forward(self, image_path: List[str]) -> torch.Tensor:
+with torch.no_grad():
+images: List[torch.Tensor] = []
+for path in image_path:
+img = read_image(path)
+images.append(img)
+x = torch.stack(images).cuda().half()
+x = self.transforms(x)
+print(x.shape)
+y_pred = self.resnet18(x)
+return y_pred.argmax(dim=1)
+```
+
+We choose to accelerate resnet model with PyTorchTVM
+
+```
+from tvm.contrib.pt_op import PyTorchTVMModule, compile
+
+print("compile...")
+option = {
+"input_infos": [
+("x", (1, 3, 224, 224)),
+],
+"default_dtype": "float16",
+"export_dir": "pytorch_compiled",
+"num_outputs": 1,
+"tuning_n_trials": 0,  # set zero to skip tuning
+"tuning_log_file": "tuning.log",
+}
+x = torch.randn(1, 3, 224, 224).cuda().half()
+resnet_jit = torch.jit.trace(model.resnet18, x)
+resnet_tvm = compile(resnet_jit, option)
+```
+
+Then we can use the accelerated tvm module directly in pytorch, and also use 
`torch.jit.script` together with the other 2 parts.

Review comment:
   ```suggestion
   The TVM-accelerated `resnet_tvm` module can be used directly in PyTorch, or 
integrated into TorchScript with `torch.jit.script` along with all other 
PyTorch-native operations.
   ```








[GitHub] [tvm-rfcs] junrushao1994 commented on a change in pull request #25: [RFC]PyTorchTVM

2021-08-29 Thread GitBox


junrushao1994 commented on a change in pull request #25:
URL: https://github.com/apache/tvm-rfcs/pull/25#discussion_r698094294



##
File path: rfcs/0025-add-pytorch-tvm.md
##
@@ -0,0 +1,265 @@
+- Feature Name: PyTorchTVM
+- Start Date: 2021-08-24
+- RFC PR: [apache/tvm-rfcs#0025](https://github.com/apache/tvm-rfcs/pull/25)
+- GitHub Issue: TODO
+
+# Summary
+[summary]: #summary
+
+This RFC add a `PyTorchTVM` module to support: compile TorchScript to TVM and 
use accelerated module in PyTorch.
+
+To increase the TVM accessibility for PyTorch users, we propose `PyTorchTVM` 
module to support the following workflow:
+1. convert a torchscript module to tvm graph
+2. build and tune tvm graph
+3. export well-tuned tvm graph as a pytorch op
+4. torch jit trace the tvm pytorch op with other pytorch modules, then 
save/load/serve as normal pytorch model
+
+
+
+# Motivation
+[motivation]: #motivation
+
+PyTorch framework is increasingly being adopted for research and production. 
At the same time, PyTorch lacks an effective inference acceleration toolchain, 
which is the main concern in the industry. Existing acceleration includes:
+
+* PyTorch → ONNX → TensorRT/TVM
+* PyTorch → torchscript → TensorRT/TVM
+
+From our perspective, there are some limitations for both ONNX and TensorRT:
+
+* Onnx cannot cover all models with dynamic control flow (e.g. for loop)
+* TensorRT can only accelerate some standard networks
+
+So we hope to use TVM to accelerate PyTorch model inference.
+
+
+# Guide-level explanation
+[guide-level-explanation]: #guide-level-explanation
+
+
+For example, we have an end-to-end resnet classification model, consisting of 
3 parts:
+
+1. Image reader
+2. Image transforms
+3. Resnet model inference
+
+```
+class Predictor(nn.Module):
+
+def __init__(self, tvm_module=None):
+super().__init__()
+self.resnet18 = resnet18(pretrained=True, progress=False).eval()
+self.transforms = nn.Sequential(
+T.Resize([256, ]),  # We use single int value inside a list due to 
torchscript type restrictions
+T.CenterCrop(224),
+T.ConvertImageDtype(torch.half),
+T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
+)
+
+def forward(self, image_path: List[str]) -> torch.Tensor:
+with torch.no_grad():
+images: List[torch.Tensor] = []
+for path in image_path:
+img = read_image(path)
+images.append(img)
+x = torch.stack(images).cuda().half()
+x = self.transforms(x)
+print(x.shape)
+y_pred = self.resnet18(x)
+return y_pred.argmax(dim=1)
+```
+
+We choose to accelerate resnet model with PyTorchTVM

Review comment:
   ```suggestion
   With PyTorchTVM, we are able to compile the ResNet with TVM and embed it back into PyTorch seamlessly with a few lines of code:
   ```








[GitHub] [tvm-rfcs] junrushao1994 commented on a change in pull request #25: [RFC]PyTorchTVM

2021-08-29 Thread GitBox


junrushao1994 commented on a change in pull request #25:
URL: https://github.com/apache/tvm-rfcs/pull/25#discussion_r698093978



##
File path: rfcs/0025-add-pytorch-tvm.md
##
@@ -0,0 +1,265 @@
+- Feature Name: PyTorchTVM
+- Start Date: 2021-08-24
+- RFC PR: [apache/tvm-rfcs#0025](https://github.com/apache/tvm-rfcs/pull/25)
+- GitHub Issue: TODO
+
+# Summary
+[summary]: #summary
+
+This RFC add a `PyTorchTVM` module to support: compile TorchScript to TVM and 
use accelerated module in PyTorch.
+
+To increase the TVM accessibility for PyTorch users, we propose `PyTorchTVM` 
module to support the following workflow:
+1. convert a torchscript module to tvm graph
+2. build and tune tvm graph
+3. export well-tuned tvm graph as a pytorch op
+4. torch jit trace the tvm pytorch op with other pytorch modules, then 
save/load/serve as normal pytorch model
+
+
+
+# Motivation
+[motivation]: #motivation
+
+PyTorch framework is increasingly being adopted for research and production. 
At the same time, PyTorch lacks an effective inference acceleration toolchain, 
which is the main concern in the industry. Existing acceleration includes:
+
+* PyTorch → ONNX → TensorRT/TVM
+* PyTorch → torchscript → TensorRT/TVM
+
+From our perspective, there are some limitations for both ONNX and TensorRT:
+
+* Onnx cannot cover all models with dynamic control flow (e.g. for loop)
+* TensorRT can only accelerate some standard networks
+
+So we hope to use TVM to accelerate PyTorch model inference.
+
+
+# Guide-level explanation
+[guide-level-explanation]: #guide-level-explanation
+
+
+For example, we have an end-to-end resnet classification model, consisting of 
3 parts:
+
+1. Image reader
+2. Image transforms
+3. Resnet model inference
+
+```
+class Predictor(nn.Module):
+
+def __init__(self, tvm_module=None):
+super().__init__()
+self.resnet18 = resnet18(pretrained=True, progress=False).eval()
+self.transforms = nn.Sequential(
+T.Resize([256, ]),  # We use single int value inside a list due to 
torchscript type restrictions
+T.CenterCrop(224),
+T.ConvertImageDtype(torch.half),
+T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
+)
+
+def forward(self, image_path: List[str]) -> torch.Tensor:
+with torch.no_grad():
+images: List[torch.Tensor] = []
+for path in image_path:
+img = read_image(path)
+images.append(img)
+x = torch.stack(images).cuda().half()

Review comment:
   Shall we condense these lines into an abstract data-loading call, to be consistent with our description? E.g., `self.load_data()`
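
   A hypothetical sketch of that refactor, with `load_data` as an illustrative name (same logic as the loop above):

   ```python
   def load_data(self, image_path: List[str]) -> torch.Tensor:
       # Read each image from disk and batch them on the GPU in fp16.
       images = [read_image(path) for path in image_path]
       return torch.stack(images).cuda().half()
   ```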








[GitHub] [tvm-rfcs] junrushao1994 commented on a change in pull request #25: [RFC]PyTorchTVM

2021-08-29 Thread GitBox


junrushao1994 commented on a change in pull request #25:
URL: https://github.com/apache/tvm-rfcs/pull/25#discussion_r698093866



##
File path: rfcs/0025-add-pytorch-tvm.md
##
@@ -0,0 +1,265 @@
+- Feature Name: PyTorchTVM
+- Start Date: 2021-08-24
+- RFC PR: [apache/tvm-rfcs#0025](https://github.com/apache/tvm-rfcs/pull/25)
+- GitHub Issue: TODO
+
+# Summary
+[summary]: #summary
+
+This RFC add a `PyTorchTVM` module to support: compile TorchScript to TVM and 
use accelerated module in PyTorch.
+
+To increase the TVM accessibility for PyTorch users, we propose `PyTorchTVM` 
module to support the following workflow:
+1. convert a torchscript module to tvm graph
+2. build and tune tvm graph
+3. export well-tuned tvm graph as a pytorch op
+4. torch jit trace the tvm pytorch op with other pytorch modules, then 
save/load/serve as normal pytorch model
+
+
+
+# Motivation
+[motivation]: #motivation
+
+PyTorch framework is increasingly being adopted for research and production. 
At the same time, PyTorch lacks an effective inference acceleration toolchain, 
which is the main concern in the industry. Existing acceleration includes:
+
+* PyTorch → ONNX → TensorRT/TVM
+* PyTorch → torchscript → TensorRT/TVM
+
+From our perspective, there are some limitations for both ONNX and TensorRT:
+
+* Onnx cannot cover all models with dynamic control flow (e.g. for loop)
+* TensorRT can only accelerate some standard networks
+
+So we hope to use TVM to accelerate PyTorch model inference.
+
+
+# Guide-level explanation
+[guide-level-explanation]: #guide-level-explanation
+
+
+For example, we have an end-to-end resnet classification model, consisting of 
3 parts:
+
+1. Image reader
+2. Image transforms
+3. Resnet model inference
+
+```
+class Predictor(nn.Module):
+
+def __init__(self, tvm_module=None):
+super().__init__()
+self.resnet18 = resnet18(pretrained=True, progress=False).eval()
+self.transforms = nn.Sequential(
+T.Resize([256, ]),  # We use single int value inside a list due to 
torchscript type restrictions
+T.CenterCrop(224),
+T.ConvertImageDtype(torch.half),
+T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
+)
+
+def forward(self, image_path: List[str]) -> torch.Tensor:
+with torch.no_grad():
+images: List[torch.Tensor] = []
+for path in image_path:
+img = read_image(path)
+images.append(img)
+x = torch.stack(images).cuda().half()
+x = self.transforms(x)
+print(x.shape)

Review comment:
   Shall we remove the print here?








[GitHub] [tvm-rfcs] junrushao1994 commented on a change in pull request #25: [RFC]PyTorchTVM

2021-08-29 Thread GitBox


junrushao1994 commented on a change in pull request #25:
URL: https://github.com/apache/tvm-rfcs/pull/25#discussion_r698093806



##
File path: rfcs/0025-add-pytorch-tvm.md
##
@@ -0,0 +1,265 @@
+- Feature Name: PyTorchTVM
+- Start Date: 2021-08-24
+- RFC PR: [apache/tvm-rfcs#0025](https://github.com/apache/tvm-rfcs/pull/25)
+- GitHub Issue: TODO
+
+# Summary
+[summary]: #summary
+
+This RFC add a `PyTorchTVM` module to support: compile TorchScript to TVM and 
use accelerated module in PyTorch.
+
+To increase the TVM accessibility for PyTorch users, we propose `PyTorchTVM` 
module to support the following workflow:
+1. convert a torchscript module to tvm graph
+2. build and tune tvm graph
+3. export well-tuned tvm graph as a pytorch op
+4. torch jit trace the tvm pytorch op with other pytorch modules, then 
save/load/serve as normal pytorch model
+
+
+
+# Motivation
+[motivation]: #motivation
+
+PyTorch framework is increasingly being adopted for research and production. 
At the same time, PyTorch lacks an effective inference acceleration toolchain, 
which is the main concern in the industry. Existing acceleration includes:
+
+* PyTorch → ONNX → TensorRT/TVM
+* PyTorch → torchscript → TensorRT/TVM
+
+From our perspective, there are some limitations for both ONNX and TensorRT:
+
+* Onnx cannot cover all models with dynamic control flow (e.g. for loop)
+* TensorRT can only accelerate some standard networks
+
+So we hope to use TVM to accelerate PyTorch model inference.
+
+
+# Guide-level explanation
+[guide-level-explanation]: #guide-level-explanation
+
+
+For example, we have an end-to-end resnet classification model, consisting of 
3 parts:
+
+1. Image reader
+2. Image transforms
+3. Resnet model inference
+

Review comment:
   ```suggestion
   Below is a snippet that illustrates the workflow of this pipeline:
   ```








[GitHub] [tvm-rfcs] junrushao1994 commented on a change in pull request #25: [RFC]PyTorchTVM

2021-08-29 Thread GitBox


junrushao1994 commented on a change in pull request #25:
URL: https://github.com/apache/tvm-rfcs/pull/25#discussion_r698093680



##
File path: rfcs/0025-add-pytorch-tvm.md
##
@@ -0,0 +1,265 @@
+- Feature Name: PyTorchTVM
+- Start Date: 2021-08-24
+- RFC PR: [apache/tvm-rfcs#0025](https://github.com/apache/tvm-rfcs/pull/25)
+- GitHub Issue: TODO
+
+# Summary
+[summary]: #summary
+
+This RFC add a `PyTorchTVM` module to support: compile TorchScript to TVM and 
use accelerated module in PyTorch.
+
+To increase the TVM accessibility for PyTorch users, we propose `PyTorchTVM` 
module to support the following workflow:
+1. convert a torchscript module to tvm graph
+2. build and tune tvm graph
+3. export well-tuned tvm graph as a pytorch op
+4. torch jit trace the tvm pytorch op with other pytorch modules, then 
save/load/serve as normal pytorch model
+
+
+
+# Motivation
+[motivation]: #motivation
+
+PyTorch framework is increasingly being adopted for research and production. 
At the same time, PyTorch lacks an effective inference acceleration toolchain, 
which is the main concern in the industry. Existing acceleration includes:
+
+* PyTorch → ONNX → TensorRT/TVM
+* PyTorch → torchscript → TensorRT/TVM
+
+From our perspective, there are some limitations for both ONNX and TensorRT:
+
+* Onnx cannot cover all models with dynamic control flow (e.g. for loop)
+* TensorRT can only accelerate some standard networks
+
+So we hope to use TVM to accelerate PyTorch model inference.
+
+
+# Guide-level explanation
+[guide-level-explanation]: #guide-level-explanation
+
+
+For example, we have an end-to-end resnet classification model, consisting of 
3 parts:
+
+1. Image reader
+2. Image transforms
+3. Resnet model inference

Review comment:
   ```suggestion
   As an example, an end-to-end ResNet-based image classifier contains 3 major parts in its pipeline:
   1. A data loader that reads the input images from disk, memory or network
   2. A sequence of image transformations that normalizes the input images, including resize, crop, type conversions, etc.
   3. Finally, a ResNet that maps a batch of input images to their class labels
   ```








[GitHub] [tvm-rfcs] junrushao1994 commented on a change in pull request #25: [RFC]PyTorchTVM

2021-08-29 Thread GitBox


junrushao1994 commented on a change in pull request #25:
URL: https://github.com/apache/tvm-rfcs/pull/25#discussion_r698092596



##
File path: rfcs/0025-add-pytorch-tvm.md
##
@@ -0,0 +1,265 @@
+- Feature Name: PyTorchTVM
+- Start Date: 2021-08-24
+- RFC PR: [apache/tvm-rfcs#0025](https://github.com/apache/tvm-rfcs/pull/25)
+- GitHub Issue: TODO
+
+# Summary
+[summary]: #summary
+
+This RFC add a `PyTorchTVM` module to support: compile TorchScript to TVM and 
use accelerated module in PyTorch.
+
+To increase the TVM accessibility for PyTorch users, we propose `PyTorchTVM` 
module to support the following workflow:
+1. convert a torchscript module to tvm graph
+2. build and tune tvm graph
+3. export well-tuned tvm graph as a pytorch op
+4. torch jit trace the tvm pytorch op with other pytorch modules, then 
save/load/serve as normal pytorch model
+
+
+
+# Motivation
+[motivation]: #motivation
+
+PyTorch framework is increasingly being adopted for research and production. 
At the same time, PyTorch lacks an effective inference acceleration toolchain, 
which is the main concern in the industry. Existing acceleration includes:
+
+* PyTorch → ONNX → TensorRT/TVM
+* PyTorch → torchscript → TensorRT/TVM
+
+From our perspective, there are some limitations for both ONNX and TensorRT:
+
+* Onnx cannot cover all models with dynamic control flow (e.g. for loop)
+* TensorRT can only accelerate some standard networks
+
+So we hope to use TVM to accelerate PyTorch model inference.

Review comment:
   We should discuss the benefits of PyTorchTVM in more detail here.








[GitHub] [tvm-rfcs] junrushao1994 commented on a change in pull request #25: [RFC]PyTorchTVM

2021-08-29 Thread GitBox


junrushao1994 commented on a change in pull request #25:
URL: https://github.com/apache/tvm-rfcs/pull/25#discussion_r698092449



##
File path: rfcs/0025-add-pytorch-tvm.md
##
@@ -0,0 +1,265 @@
+- Feature Name: PyTorchTVM
+- Start Date: 2021-08-24
+- RFC PR: [apache/tvm-rfcs#0025](https://github.com/apache/tvm-rfcs/pull/25)
+- GitHub Issue: TODO
+
+# Summary
+[summary]: #summary
+
+This RFC add a `PyTorchTVM` module to support: compile TorchScript to TVM and 
use accelerated module in PyTorch.
+
+To increase the TVM accessibility for PyTorch users, we propose `PyTorchTVM` 
module to support the following workflow:
+1. convert a torchscript module to tvm graph
+2. build and tune tvm graph
+3. export well-tuned tvm graph as a pytorch op
+4. torch jit trace the tvm pytorch op with other pytorch modules, then 
save/load/serve as normal pytorch model
+
+
+
+# Motivation
+[motivation]: #motivation
+
+PyTorch framework is increasingly being adopted for research and production. 
At the same time, PyTorch lacks an effective inference acceleration toolchain, 
which is the main concern in the industry. Existing acceleration includes:
+
+* PyTorch → ONNX → TensorRT/TVM
+* PyTorch → torchscript → TensorRT/TVM
+
+From our perspective, there are some limitations for both ONNX and TensorRT:
+
+* Onnx cannot cover all models with dynamic control flow (e.g. for loop)
+* TensorRT can only accelerate some standard networks

Review comment:
   ```suggestion
   Below are the two classic acceleration workflows as the status quo:
   - PyTorch -> ONNX -> TensorRT/TVM
   - PyTorch -> TorchScript -> TensorRT/TVM
   
   However, both workflows introduce one level of indirection, which means the flaws of either level are inherited by the pipeline. For example:
   - ONNX offers no support for models with dynamic control flow, so the first workflow is unable to support dynamic models
   - The coverage of TensorRT is often limited to a range of standard neural networks, so both workflows, if offloaded to TensorRT, are unlikely to be effective on real-world irregular models.
   
   Furthermore, neither of the existing workflows provides an interface practical enough for researchers to widely adopt and reuse. For example, it requires deep knowledge of TVM runtime modules to load the exported binary artifacts back into Python and use them together with PyTorch.
   ```
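
   For context, a minimal sketch of the status quo this refers to, assuming TVM's standard graph-executor API and an illustrative artifact name:

   ```python
   import numpy as np
   import tvm
   from tvm.contrib import graph_executor

   # Loading an exported artifact today requires TVM runtime-module knowledge.
   dev = tvm.cuda(0)
   lib = tvm.runtime.load_module("deploy_lib.so")  # hypothetical artifact
   module = graph_executor.GraphModule(lib["default"](dev))
   module.set_input("x", np.random.randn(1, 3, 224, 224).astype("float16"))
   module.run()
   out = module.get_output(0).numpy()
   ```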








[GitHub] [tvm-rfcs] junrushao1994 commented on a change in pull request #25: [RFC]PyTorchTVM

2021-08-29 Thread GitBox


junrushao1994 commented on a change in pull request #25:
URL: https://github.com/apache/tvm-rfcs/pull/25#discussion_r698090652



##
File path: rfcs/0025-add-pytorch-tvm.md
##
@@ -0,0 +1,265 @@
+- Feature Name: PyTorchTVM
+- Start Date: 2021-08-24
+- RFC PR: [apache/tvm-rfcs#0025](https://github.com/apache/tvm-rfcs/pull/25)
+- GitHub Issue: TODO
+
+# Summary
+[summary]: #summary
+
+This RFC add a `PyTorchTVM` module to support: compile TorchScript to TVM and 
use accelerated module in PyTorch.
+
+To increase the TVM accessibility for PyTorch users, we propose `PyTorchTVM` 
module to support the following workflow:
+1. convert a torchscript module to tvm graph
+2. build and tune tvm graph
+3. export well-tuned tvm graph as a pytorch op
+4. torch jit trace the tvm pytorch op with other pytorch modules, then 
save/load/serve as normal pytorch model
+
+
+
+# Motivation
+[motivation]: #motivation
+
+PyTorch framework is increasingly being adopted for research and production. 
At the same time, PyTorch lacks an effective inference acceleration toolchain, 
which is the main concern in the industry. Existing acceleration includes:

Review comment:
   ```suggestion
   PyTorch enjoys increasing popularity in the machine learning research community as well as in industrial production environments. However, a generic, comprehensive and effective toolchain to accelerate real-world PyTorch models and workloads is still missing, which raises a primary concern in performance-critical production environments.
   ```








[GitHub] [tvm-rfcs] junrushao1994 commented on a change in pull request #25: [RFC]PyTorchTVM

2021-08-29 Thread GitBox


junrushao1994 commented on a change in pull request #25:
URL: https://github.com/apache/tvm-rfcs/pull/25#discussion_r698092449



##
File path: rfcs/0025-add-pytorch-tvm.md
##
@@ -0,0 +1,265 @@
+- Feature Name: PyTorchTVM
+- Start Date: 2021-08-24
+- RFC PR: [apache/tvm-rfcs#0025](https://github.com/apache/tvm-rfcs/pull/25)
+- GitHub Issue: TODO
+
+# Summary
+[summary]: #summary
+
+This RFC add a `PyTorchTVM` module to support: compile TorchScript to TVM and 
use accelerated module in PyTorch.
+
+To increase the TVM accessibility for PyTorch users, we propose `PyTorchTVM` 
module to support the following workflow:
+1. convert a torchscript module to tvm graph
+2. build and tune tvm graph
+3. export well-tuned tvm graph as a pytorch op
+4. torch jit trace the tvm pytorch op with other pytorch modules, then 
save/load/serve as normal pytorch model
+
+
+
+# Motivation
+[motivation]: #motivation
+
+PyTorch framework is increasingly being adopted for research and production. 
At the same time, PyTorch lacks an effective inference acceleration toolchain, 
which is the main concern in the industry. Existing acceleration includes:
+
+* PyTorch → ONNX → TensorRT/TVM
+* PyTorch → torchscript → TensorRT/TVM
+
+From our perspective, there are some limitations for both ONNX and TensorRT:
+
+* Onnx cannot cover all models with dynamic control flow (e.g. for loop)
+* TensorRT can only accelerate some standard networks

Review comment:
   Below are the two classic acceleration workflows as the status quo:
   - PyTorch -> ONNX -> TensorRT/TVM
   - PyTorch -> TorchScript -> TensorRT/TVM
   
   However, both workflows introduce workflow introduce one level of 
indirection, which means flaws of either levels are inherited in the pipeline. 
For example:
   - ONNX offers no support for models with dynamic control flow, so the first 
workflow is unable to support dynamic models
   - The coverage of TensorRT is often limited to a range of standard neural 
networks, so both of the workflows, if offloaded to TensorRT, are hard to be 
effective on real-world irregular models.
   
   Furthermore, both of the existing workflows don't provide any benefit of an 
interface that is practical enough for researchers to widely adopt and reuse. 
For example, it requires deep knowledge of TVM runtime modules to load the 
exported binary artifacts back to python and use it together with PyTorch.
   




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm-rfcs] junrushao1994 commented on a change in pull request #25: [RFC]PyTorchTVM

2021-08-29 Thread GitBox


junrushao1994 commented on a change in pull request #25:
URL: https://github.com/apache/tvm-rfcs/pull/25#discussion_r698090652



##
File path: rfcs/0025-add-pytorch-tvm.md
##
@@ -0,0 +1,265 @@
+- Feature Name: PyTorchTVM
+- Start Date: 2021-08-24
+- RFC PR: [apache/tvm-rfcs#0025](https://github.com/apache/tvm-rfcs/pull/25)
+- GitHub Issue: TODO
+
+# Summary
+[summary]: #summary
+
+This RFC add a `PyTorchTVM` module to support: compile TorchScript to TVM and 
use accelerated module in PyTorch.
+
+To increase the TVM accessibility for PyTorch users, we propose `PyTorchTVM` 
module to support the following workflow:
+1. convert a torchscript module to tvm graph
+2. build and tune tvm graph
+3. export well-tuned tvm graph as a pytorch op
+4. torch jit trace the tvm pytorch op with other pytorch modules, then 
save/load/serve as normal pytorch model
+
+
+
+# Motivation
+[motivation]: #motivation
+
+PyTorch framework is increasingly being adopted for research and production. 
At the same time, PyTorch lacks an effective inference acceleration toolchain, 
which is the main concern in the industry. Existing acceleration includes:

Review comment:
   PyTorch enjoys increasing popularity in the machine learning research 
community as well as in industrial production environments. However, it still 
lacks a generic, comprehensive, and effective toolchain to accelerate 
real-world models and workloads in PyTorch, which raises primary concerns in 
performance-critical production environments.








[GitHub] [tvm-rfcs] junrushao1994 commented on a change in pull request #25: [RFC]PyTorchTVM

2021-08-29 Thread GitBox


junrushao1994 commented on a change in pull request #25:
URL: https://github.com/apache/tvm-rfcs/pull/25#discussion_r698088691



##
File path: rfcs/0025-add-pytorch-tvm.md
##
@@ -0,0 +1,265 @@
+- Feature Name: PyTorchTVM
+- Start Date: 2021-08-24
+- RFC PR: [apache/tvm-rfcs#0025](https://github.com/apache/tvm-rfcs/pull/25)
+- GitHub Issue: TODO
+
+# Summary
+[summary]: #summary
+
+This RFC add a `PyTorchTVM` module to support: compile TorchScript to TVM and 
use accelerated module in PyTorch.
+
+To increase the TVM accessibility for PyTorch users, we propose `PyTorchTVM` 
module to support the following workflow:

Review comment:
   ```suggestion
   To help boost model performance and enhance TVM adoption among machine 
learning practitioners who often use PyTorch, `PyTorchTVM` is proposed for 
seamless integration of TVM into TorchScript. Its workflow is demonstrated 
as follows:
   ```

##
File path: rfcs/0025-add-pytorch-tvm.md
##
@@ -0,0 +1,265 @@
+- Feature Name: PyTorchTVM
+- Start Date: 2021-08-24
+- RFC PR: [apache/tvm-rfcs#0025](https://github.com/apache/tvm-rfcs/pull/25)
+- GitHub Issue: TODO
+
+# Summary
+[summary]: #summary
+
+This RFC add a `PyTorchTVM` module to support: compile TorchScript to TVM and 
use accelerated module in PyTorch.

Review comment:
   Just wanted to make sure I understand exactly what it means; let me know 
if the sentence below is consistent with the proposal. Thanks!
   
   ```suggestion
   This RFC adds a `PyTorchTVM` module to support offloading subgraphs of 
TorchScript to TVM, and then embedding those TVM-accelerated subgraphs back 
into TorchScript for runtime execution.
   ```

##
File path: rfcs/0025-add-pytorch-tvm.md
##
@@ -0,0 +1,265 @@
+- Feature Name: PyTorchTVM
+- Start Date: 2021-08-24
+- RFC PR: [apache/tvm-rfcs#0025](https://github.com/apache/tvm-rfcs/pull/25)
+- GitHub Issue: TODO
+
+# Summary
+[summary]: #summary
+
+This RFC add a `PyTorchTVM` module to support: compile TorchScript to TVM and 
use accelerated module in PyTorch.
+
+To increase the TVM accessibility for PyTorch users, we propose `PyTorchTVM` 
module to support the following workflow:
+1. convert a torchscript module to tvm graph
+2. build and tune tvm graph
+3. export well-tuned tvm graph as a pytorch op

Review comment:
   ```suggestion
   3. Export and embed the optimized TVM module as a PyTorch custom op
   ```

##
File path: rfcs/0025-add-pytorch-tvm.md
##
@@ -0,0 +1,265 @@
+- Feature Name: PyTorchTVM
+- Start Date: 2021-08-24
+- RFC PR: [apache/tvm-rfcs#0025](https://github.com/apache/tvm-rfcs/pull/25)
+- GitHub Issue: TODO
+
+# Summary
+[summary]: #summary
+
+This RFC add a `PyTorchTVM` module to support: compile TorchScript to TVM and 
use accelerated module in PyTorch.
+
+To increase the TVM accessibility for PyTorch users, we propose `PyTorchTVM` 
module to support the following workflow:
+1. convert a torchscript module to tvm graph
+2. build and tune tvm graph

Review comment:
   ```suggestion
   2. Optimize and compile the TVM graph with auto-tuning
   ```

##
File path: rfcs/0025-add-pytorch-tvm.md
##
@@ -0,0 +1,265 @@
+- Feature Name: PyTorchTVM
+- Start Date: 2021-08-24
+- RFC PR: [apache/tvm-rfcs#0025](https://github.com/apache/tvm-rfcs/pull/25)
+- GitHub Issue: TODO
+
+# Summary
+[summary]: #summary
+
+This RFC add a `PyTorchTVM` module to support: compile TorchScript to TVM and 
use accelerated module in PyTorch.
+
+To increase the TVM accessibility for PyTorch users, we propose `PyTorchTVM` 
module to support the following workflow:
+1. convert a torchscript module to tvm graph

Review comment:
   To clarify, is it an entire module (full graph) or a subgraph?
   
   ```suggestion
   1. Convert a TorchScript module to TVM graph (Relay)
   ```

##
File path: rfcs/0025-add-pytorch-tvm.md
##
@@ -0,0 +1,265 @@
+- Feature Name: PyTorchTVM
+- Start Date: 2021-08-24
+- RFC PR: [apache/tvm-rfcs#0025](https://github.com/apache/tvm-rfcs/pull/25)
+- GitHub Issue: TODO
+
+# Summary
+[summary]: #summary
+
+This RFC add a `PyTorchTVM` module to support: compile TorchScript to TVM and 
use accelerated module in PyTorch.
+
+To increase the TVM accessibility for PyTorch users, we propose `PyTorchTVM` 
module to support the following workflow:
+1. convert a torchscript module to tvm graph
+2. build and tune tvm graph
+3. export well-tuned tvm graph as a pytorch op
+4. torch jit trace the tvm pytorch op with other pytorch modules, then 
save/load/serve as normal pytorch model

Review comment:
   ```suggestion
   4. The embedded custom op works smoothly with TorchScript (the 
`torch.jit.trace` API), with no tangible difference from normal PyTorch 
models; i.e., it can be saved to disk, loaded back, and served online with no 
change to the overall workflow
   ```
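   
   For concreteness, a minimal sketch of what step 4 could look like in user 
code (the module and file names are hypothetical; the save/load calls mirror 
the example later in the RFC):
   ```python
   import torch

   # `model_with_tvm_op` is an assumed nn.Module embedding the TVM custom op
   scripted = torch.jit.script(model_with_tvm_op)
   scripted.save("model_tvm.pt")              # save to disk as usual
   reloaded = torch.jit.load("model_tvm.pt")  # load back and serve unchanged
   ```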





[GitHub] [tvm] tqchen edited a comment on issue #8876: Unit Test java GPU failed - java.io.IOException: java.lang.RuntimeException: Failed to serialize

2021-08-29 Thread GitBox


tqchen edited a comment on issue #8876:
URL: https://github.com/apache/tvm/issues/8876#issuecomment-907877041


   This seems to be a known flaky error; we can dig a bit more. In the 
meantime, please retrigger.






[GitHub] [tvm] tqchen commented on issue #8876: Unit Test java GPU failed - java.io.IOException: java.lang.RuntimeException: Failed to serialize

2021-08-29 Thread GitBox


tqchen commented on issue #8876:
URL: https://github.com/apache/tvm/issues/8876#issuecomment-907877041


   This seems to be a known flaky error; we can dig a bit more.






[GitHub] [tvm] huajsj commented on a change in pull request #8702: [Runtime] Pipeline Executor Initial patch.

2021-08-29 Thread GitBox


huajsj commented on a change in pull request #8702:
URL: https://github.com/apache/tvm/pull/8702#discussion_r698068143



##
File path: python/tvm/contrib/pipeline_executor.py
##
@@ -0,0 +1,352 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""Pipeline executor that executes pipeline containing TVM PackedFunc."""
+import json
+import tvm._ffi
+from tvm import relay
+from tvm.contrib import graph_executor
+
+
+def pipeline_executor_enabled():
+"""check if pipeline executor enabled.
+Return
+--
+enable: bool
+return pipeline executor get enabled or not
+"""
+pipeline_enabled = False
+try:
+pipelinecreate = 
tvm._ffi.get_global_func("tvm.pipeline_executor.create")
+assert pipelinecreate
+pipeline_enabled = True
+except ValueError:
+print("pipeline executor not enabled!")
+
+return pipeline_enabled
+
+
+def build_pipeline(mod_n_configs):
+"""build module list that can use for pipeline execution.
+
+Parameters
+--
+mod_n_configs: Dict[IRModule, Dict[str, Any]]
+build configuration informaton, structure like following.
+{IRModule: {"target":target,
+"target_host":target_host,
+"params":params,
+"mod_name"mod_name,
+"build":build}}
+
+Returns
+---
+ret: List[IRModule]
+list of IRModule
+string_config: Dict[int, Dict[str, any]]
+pipeline configuration
+"""
+mods = {}
+config_len = len(mod_n_configs)
+string_config = [{} for _ in range(config_len)]
+for _, (ir_mod, mod_config) in enumerate(mod_n_configs.items()):
+# init lib_name and json_name params with empty
+lib_name = ""
+json_name = ""
+params_name = ""
+# Get module configuration
+assert "pipeline" in mod_config and "mod_indx" in 
mod_config["pipeline"]
+# Get module index in pipeline configuration
+mconf = mod_config["pipeline"].copy()
+# Get mod device config
+dev = mod_config["dev"]
+mod_indx = mconf["mod_indx"] - 1
+target = mod_config["target"]
+assert mod_indx < config_len
+build_func = relay.build
+# if there is a self defined build function then use it.
+if "build" in mod_config and mod_config["build"]:
+build_func = mod_config["build"]
+
+# build IRModule
+mod = build_func(
+ir_mod,
+target,
+params=mod_config["params"],
+target_host=mod_config["target_host"],
+mod_name=mod_config["mod_name"],
+)
+
+mconf["lib_name"] = lib_name
+mconf["json_name"] = json_name
+mconf["params_name"] = params_name
+mconf["dev"] = "{},{}".format(dev.device_type, dev.device_id)
+# Create pipeline configuration
+string_config[mod_indx] = mconf
+# associate mod with device
+mods[mod] = {"dev": dev}
+
+# return IRModule list and pipeline configuration
+return mods, string_config
+
+
+def create(pipeline_mods, mod_config):
+"""Create a pipeline runtime executor.
+
+Parameters
+--
+pipeline_mods : List[IRModule]
+list of IRModule
+
+mod_config : Dict[int, Dict[str, Any]]
+modules and modules dependency configuration informaiton.
+
+Returns
+---
+submodule : PipelineModule
+Runtime pipeline module.
+"""
+
+submodule = PipelineModule(pipeline_mods, mod_config)
+return submodule
+
+
+class PipelineModule(object):
+"""Wrapper runtime module. This is a thin wrapper of the underlying TVM 
module.
+Parameters
+--
+pipeline_mods : List[GraphModule]
+The internal tvm module that holds the actual graph functions.
+
+pipeline_config : Dict[IRModule, Dict[str, Any]]
+modules and modules dependency configuration informaiton.
+
+"""
+
+def __init__(self, pipeline_mods, pipeline_config):
+self.pipeline_mods = pipeline_mods
+self.mod_config = pipeline_config
+mods, config = self.graph_executor_create(pipeline_mods, 
pipeline_config)
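
For context while reviewing, here is a hypothetical end-to-end usage sketch of 
the API in this patch (the Relay modules are stand-ins, the config keys follow 
the `build_pipeline` docstring quoted above, and details may still change 
during review):
```python
import tvm
from tvm import relay
from tvm.contrib import pipeline_executor

def make_stage():
    # A one-op Relay module used as a stand-in pipeline stage.
    x = relay.var("x", shape=(1, 16), dtype="float32")
    return tvm.IRModule.from_expr(relay.Function([x], relay.nn.relu(x)))

mod_n_configs = {
    make_stage(): {
        "target": "llvm",
        "target_host": None,
        "params": {},
        "mod_name": "stage0",
        "build": None,
        "dev": tvm.cpu(0),
        "pipeline": {"mod_indx": 1},
    },
    make_stage(): {
        "target": "llvm",
        "target_host": None,
        "params": {},
        "mod_name": "stage1",
        "build": None,
        "dev": tvm.cpu(0),
        "pipeline": {"mod_indx": 2},
    },
}
mods, string_config = pipeline_executor.build_pipeline(mod_n_configs)
executor = pipeline_executor.create(mods, string_config)
```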
+

[GitHub] [tvm] huajsj commented on a change in pull request #8702: [Runtime] Pipeline Executor Initial patch.

2021-08-29 Thread GitBox


huajsj commented on a change in pull request #8702:
URL: https://github.com/apache/tvm/pull/8702#discussion_r698058090



##
File path: CMakeLists.txt
##
@@ -388,6 +388,21 @@ if(GTEST_INCLUDE_DIR AND GTEST_LIB)
   include(GoogleTest)
 endif()
 
+if(USE_PIPELINE_EXECUTOR)
+  message(STATUS "Build with Pipeline Executor support...")
+  file(GLOB RUNTIME_PIPELINE_SRCS src/runtime/pipeline/*.cc)
+  list(APPEND RUNTIME_SRCS ${RUNTIME_PIPELINE_SRCS})
+endif(USE_PIPELINE_EXECUTOR)
+
+# Enable ctest if gtest is available
+find_path(GTEST_INCLUDE_DIR gtest/gtest.h)
+find_library(GTEST_LIB gtest "$ENV{GTEST_LIB}")
+if(GTEST_INCLUDE_DIR AND GTEST_LIB)
+  enable_testing()
+  include(CTest)
+  include(GoogleTest)
+endif()

Review comment:
   fixed.








[GitHub] [tvm] huajsj commented on a change in pull request #8702: [Runtime] Pipeline Executor Initial patch.

2021-08-29 Thread GitBox


huajsj commented on a change in pull request #8702:
URL: https://github.com/apache/tvm/pull/8702#discussion_r698057295



##
File path: python/tvm/contrib/pipeline_executor.py
##
@@ -0,0 +1,352 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""Pipeline executor that executes pipeline containing TVM PackedFunc."""
+import json
+import tvm._ffi
+from tvm import relay
+from tvm.contrib import graph_executor
+
+
+def pipeline_executor_enabled():
+"""check if pipeline executor enabled.
+Return
+--
+enable: bool
+return pipeline executor get enabled or not
+"""
+pipeline_enabled = False
+try:
+pipelinecreate = 
tvm._ffi.get_global_func("tvm.pipeline_executor.create")
+assert pipelinecreate
+pipeline_enabled = True
+except ValueError:
+print("pipeline executor not enabled!")
+
+return pipeline_enabled
+
+
+def build_pipeline(mod_n_configs):
+"""build module list that can use for pipeline execution.
+
+Parameters
+--
+mod_n_configs: Dict[IRModule, Dict[str, Any]]
+build configuration informaton, structure like following.
+{IRModule: {"target":target,
+"target_host":target_host,
+"params":params,
+"mod_name"mod_name,
+"build":build}}
+
+Returns
+---
+ret: List[IRModule]
+list of IRModule
+string_config: Dict[int, Dict[str, any]]
+pipeline configuration
+"""
+mods = {}
+config_len = len(mod_n_configs)
+string_config = [{} for _ in range(config_len)]
+for _, (ir_mod, mod_config) in enumerate(mod_n_configs.items()):
+# init lib_name and json_name params with empty
+lib_name = ""
+json_name = ""
+params_name = ""
+# Get module configuration
+assert "pipeline" in mod_config and "mod_indx" in 
mod_config["pipeline"]
+# Get module index in pipeline configuration
+mconf = mod_config["pipeline"].copy()
+# Get mod device config
+dev = mod_config["dev"]
+mod_indx = mconf["mod_indx"] - 1
+target = mod_config["target"]
+assert mod_indx < config_len
+build_func = relay.build
+# if there is a self defined build function then use it.
+if "build" in mod_config and mod_config["build"]:
+build_func = mod_config["build"]
+
+# build IRModule
+mod = build_func(
+ir_mod,
+target,
+params=mod_config["params"],
+target_host=mod_config["target_host"],
+mod_name=mod_config["mod_name"],
+)
+
+mconf["lib_name"] = lib_name
+mconf["json_name"] = json_name
+mconf["params_name"] = params_name
+mconf["dev"] = "{},{}".format(dev.device_type, dev.device_id)
+# Create pipeline configuration
+string_config[mod_indx] = mconf
+# associate mod with device
+mods[mod] = {"dev": dev}
+
+# return IRModule list and pipeline configuration
+return mods, string_config

Review comment:
   fixed








[GitHub] [tvm] huajsj commented on a change in pull request #8702: [Runtime] Pipeline Executor Initial patch.

2021-08-29 Thread GitBox


huajsj commented on a change in pull request #8702:
URL: https://github.com/apache/tvm/pull/8702#discussion_r698057236



##
File path: python/tvm/contrib/pipeline_executor.py
##
@@ -0,0 +1,352 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""Pipeline executor that executes pipeline containing TVM PackedFunc."""
+import json
+import tvm._ffi
+from tvm import relay
+from tvm.contrib import graph_executor
+
+
+def pipeline_executor_enabled():
+"""check if pipeline executor enabled.
+Return
+--
+enable: bool
+return pipeline executor get enabled or not
+"""
+pipeline_enabled = False
+try:
+pipelinecreate = 
tvm._ffi.get_global_func("tvm.pipeline_executor.create")
+assert pipelinecreate
+pipeline_enabled = True
+except ValueError:
+print("pipeline executor not enabled!")
+
+return pipeline_enabled
+
+
+def build_pipeline(mod_n_configs):
+"""build module list that can use for pipeline execution.
+
+Parameters
+--
+mod_n_configs: Dict[IRModule, Dict[str, Any]]
+build configuration informaton, structure like following.
+{IRModule: {"target":target,
+"target_host":target_host,
+"params":params,
+"mod_name"mod_name,
+"build":build}}
+
+Returns
+---
+ret: List[IRModule]
+list of IRModule
+string_config: Dict[int, Dict[str, any]]
+pipeline configuration
+"""
+mods = {}
+config_len = len(mod_n_configs)
+string_config = [{} for _ in range(config_len)]
+for _, (ir_mod, mod_config) in enumerate(mod_n_configs.items()):
+# init lib_name and json_name params with empty
+lib_name = ""
+json_name = ""
+params_name = ""
+# Get module configuration
+assert "pipeline" in mod_config and "mod_indx" in 
mod_config["pipeline"]
+# Get module index in pipeline configuration
+mconf = mod_config["pipeline"].copy()
+# Get mod device config
+dev = mod_config["dev"]
+mod_indx = mconf["mod_indx"] - 1
+target = mod_config["target"]
+assert mod_indx < config_len
+build_func = relay.build
+# if there is a self defined build function then use it.
+if "build" in mod_config and mod_config["build"]:
+build_func = mod_config["build"]
+
+# build IRModule
+mod = build_func(
+ir_mod,
+target,
+params=mod_config["params"],
+target_host=mod_config["target_host"],
+mod_name=mod_config["mod_name"],
+)
+
+mconf["lib_name"] = lib_name
+mconf["json_name"] = json_name
+mconf["params_name"] = params_name
+mconf["dev"] = "{},{}".format(dev.device_type, dev.device_id)
+# Create pipeline configuration
+string_config[mod_indx] = mconf
+# associate mod with device
+mods[mod] = {"dev": dev}
+
+# return IRModule list and pipeline configuration
+return mods, string_config
+
+
+def create(pipeline_mods, mod_config):
+"""Create a pipeline runtime executor.
+
+Parameters
+--
+pipeline_mods : List[IRModule]
+list of IRModule
+
+mod_config : Dict[int, Dict[str, Any]]
+modules and modules dependency configuration informaiton.
+
+Returns
+---
+submodule : PipelineModule
+Runtime pipeline module.
+"""
+
+submodule = PipelineModule(pipeline_mods, mod_config)
+return submodule
+
+
+class PipelineModule(object):
+"""Wrapper runtime module. This is a thin wrapper of the underlying TVM 
module.
+Parameters
+--
+pipeline_mods : List[GraphModule]
+The internal tvm module that holds the actual graph functions.
+
+pipeline_config : Dict[IRModule, Dict[str, Any]]
+modules and modules dependency configuration informaiton.
+
+"""
+
+def __init__(self, pipeline_mods, pipeline_config):

Review comment:
   fixed.

##
File path: src/runtime/pipeline/pipeline_executor.h
##
@@ -0,0 +1,72 @@
+/*
+ * Licensed to the Apache Software Foundation 

[GitHub] [tvm] huajsj commented on a change in pull request #8702: [Runtime] Pipeline Executor Initial patch.

2021-08-29 Thread GitBox


huajsj commented on a change in pull request #8702:
URL: https://github.com/apache/tvm/pull/8702#discussion_r698057179



##
File path: python/tvm/contrib/pipeline_executor.py
##
@@ -0,0 +1,352 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""Pipeline executor that executes pipeline containing TVM PackedFunc."""
+import json
+import tvm._ffi
+from tvm import relay
+from tvm.contrib import graph_executor
+
+
+def pipeline_executor_enabled():
+"""check if pipeline executor enabled.
+Return
+--
+enable: bool
+return pipeline executor get enabled or not
+"""
+pipeline_enabled = False
+try:
+pipelinecreate = 
tvm._ffi.get_global_func("tvm.pipeline_executor.create")
+assert pipelinecreate
+pipeline_enabled = True
+except ValueError:
+print("pipeline executor not enabled!")
+
+return pipeline_enabled
+
+
+def build_pipeline(mod_n_configs):
+"""build module list that can use for pipeline execution.
+
+Parameters
+--
+mod_n_configs: Dict[IRModule, Dict[str, Any]]
+build configuration informaton, structure like following.
+{IRModule: {"target":target,
+"target_host":target_host,
+"params":params,
+"mod_name"mod_name,
+"build":build}}
+
+Returns
+---
+ret: List[IRModule]
+list of IRModule
+string_config: Dict[int, Dict[str, any]]
+pipeline configuration
+"""
+mods = {}
+config_len = len(mod_n_configs)
+string_config = [{} for _ in range(config_len)]
+for _, (ir_mod, mod_config) in enumerate(mod_n_configs.items()):
+# init lib_name and json_name params with empty
+lib_name = ""
+json_name = ""
+params_name = ""
+# Get module configuration
+assert "pipeline" in mod_config and "mod_indx" in 
mod_config["pipeline"]
+# Get module index in pipeline configuration
+mconf = mod_config["pipeline"].copy()
+# Get mod device config
+dev = mod_config["dev"]
+mod_indx = mconf["mod_indx"] - 1
+target = mod_config["target"]
+assert mod_indx < config_len

Review comment:
   this is old logic, removed








[tvm-vta] branch ci-docker-staging updated: add devcommon

2021-08-29 Thread vega
This is an automated email from the ASF dual-hosted git repository.

vega pushed a commit to branch ci-docker-staging
in repository https://gitbox.apache.org/repos/asf/tvm-vta.git


The following commit(s) were added to refs/heads/ci-docker-staging by this push:
 new 81850c7  add devcommon
81850c7 is described below

commit 81850c7089c345fa663d02eaffbb9eb35d36210e
Author: Luis Vega 
AuthorDate: Sun Aug 29 18:50:26 2021 +

add devcommon
---
 tests/scripts/dev_common.sh | 69 +
 1 file changed, 69 insertions(+)

diff --git a/tests/scripts/dev_common.sh b/tests/scripts/dev_common.sh
new file mode 100644
index 000..a15c855
--- /dev/null
+++ b/tests/scripts/dev_common.sh
@@ -0,0 +1,69 @@
+#!/bin/bash -e
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+
+if [ -z "${BASH_SOURCE[0]}" ]; then
+echo "NOTE: This script must be source'd from another bash script--it 
cannot be run directly"
+exit 2
+fi
+
+INVOCATION_PWD="$(pwd)"
+
+
+GIT_TOPLEVEL=$(cd $(dirname ${BASH_SOURCE[0]}) && git rev-parse 
--show-toplevel)
+
+
+function filter_jenkinsfile() {
+local echo_on=0;
+while read line; do
+if [ "${line}" == "// NOTE: these lines are scanned by 
tests/scripts/dev_common.sh. Please update the regex as needed. -->" ]; then
+echo_on=1
+elif [ "${line}" == "// <--- End of regex-scanned config." ]; then
+break
+elif [ ${echo_on} -eq 1 ]; then
+echo "$line"
+fi
+done
+}
+
+
+function lookup_image_spec() {
+img_line=$(cat "${GIT_TOPLEVEL}/Jenkinsfile" | filter_jenkinsfile | grep 
-E "^${1} = ")
+if [ -n "${img_line}" ]; then
+img_spec=$(echo "${img_line}" | sed -E "s/${1} = \"([^\"]*)\"/\1/")
+has_similar_docker_image=1
+docker inspect "${1}" &>/dev/null || has_similar_docker_image=0
+if [ ${has_similar_docker_image} -ne 0 ]; then
+echo "WARNING: resolved docker image through Jenkinsfile to 
\"${img_spec}\"" >&2
+fi
+echo "${img_spec}"
+fi
+}
+
+function run_docker() {
+image_name="$1"  # Name of the Jenkinsfile var to find
+shift
+
+image_spec=$(lookup_image_spec "${image_name}")
+if [ -z "${image_spec}" ]; then
+echo "${image_name}: not found in ${GIT_TOPLEVEL}/Jenkinsfile" >&2
+exit 2
+fi
+
+"${GIT_TOPLEVEL}/tests/scripts/docker_bash.sh" "${image_spec}" "$@"
+}


[tvm-vta] branch ci-docker-staging updated: update docker bash

2021-08-29 Thread vega
This is an automated email from the ASF dual-hosted git repository.

vega pushed a commit to branch ci-docker-staging
in repository https://gitbox.apache.org/repos/asf/tvm-vta.git


The following commit(s) were added to refs/heads/ci-docker-staging by this push:
 new 112eb31  update docker bash
112eb31 is described below

commit 112eb3159b53db002ef201ecc25e097ac48acd52
Author: Luis Vega 
AuthorDate: Sun Aug 29 18:45:18 2021 +

update docker bash
---
 tests/scripts/docker_bash.sh | 437 ++-
 1 file changed, 385 insertions(+), 52 deletions(-)

diff --git a/tests/scripts/docker_bash.sh b/tests/scripts/docker_bash.sh
index cdda5d4..2a05abf 100755
--- a/tests/scripts/docker_bash.sh
+++ b/tests/scripts/docker_bash.sh
@@ -17,79 +17,412 @@
 # specific language governing permissions and limitations
 # under the License.
 
-# Docker bash script used to execute a command within a container.
 #
-if [ "$#" -lt 1 ]; then
-echo "Usage: tests/script/docker_bash.sh  [COMMAND]"
-exit -1
+# Start a bash, mount REPO_MOUNT_POINT to be current directory.
+#
+# Usage: docker/bash.sh [-i|--interactive] [--net=host] [-t|--tty]
+#  [--mount MOUNT_DIR] [--repo-mount-point REPO_MOUNT_POINT]
+#  [--dry-run]
+#   <DOCKER_IMAGE_NAME> [--] [COMMAND]
+#
+# Usage: docker/bash.sh <DOCKER_IMAGE_NAME>
+# Starts an interactive session
+#
+# Usage2: docker/bash.sh [-i] <DOCKER_IMAGE_NAME> [COMMAND]
+# Execute command in the docker image, default non-interactive
+# With -i, execute interactively.
+#
+
+set -euo pipefail
+
+
+function show_usage() {
+cat <<EOF
+Usage: docker/bash.sh [-i|--interactive] [--net=host] [-t|--tty]
+[--mount MOUNT_DIR] [--repo-mount-point REPO_MOUNT_POINT]
+[--dry-run] <DOCKER_IMAGE_NAME> [--] [COMMAND]
+
+-h, --help
+
+Display this help message.
+
+-i, --interactive
+
+Start the docker session in interactive mode.
+
+-t, --tty
+
+Start the docker session with a pseudo terminal (tty).
+
+--net=host
+
+Expose servers run into the container to the host, passing the
+"--net=host" argument through to docker.  On MacOS, this is
+instead passed as "-p :" since the host networking driver
+isn't supported.
+
+--mount MOUNT_DIR
+
+Expose MOUNT_DIR as an additional mount point inside the docker
+container.  The mount point inside the container is the same as
+the folder location outside the container.  This option can be
+specified multiple times.
+
+--repo-mount-point REPO_MOUNT_POINT
+
+The directory inside the docker container at which the TVM
+repository should be mounted, and is used as the workspace inside
+the docker container.
+
+If unspecified, the mount location depends on the environment.  If
+running inside Jenkins, the mount location will be /workspace.
+Otherwise, the mount location of the repository will be the same
+as the external location of the repository, to maintain
+compatibility with git-worktree.
+
+--dry-run
+
+Print the docker command to be run, but do not execute it.
+
+DOCKER_IMAGE_NAME
+
+The name of the docker container to be run.  This can be an
+explicit name of a docker image (e.g. "tlcpack/ci-gpu:v0.76") or
+can be a shortcut as defined in the TVM Jenkinsfile
+(e.g. "ci_gpu").
+
+COMMAND
+
+The command to be run inside the docker container.  If this is set
+to "bash", both the --interactive and --net=host flags are set.
+If no command is specified, defaults to "bash".  If the command
+contains dash-prefixed arguments, the command should be preceded
+by -- to indicate arguments that are not intended for bash.sh.
+
+EOF
+}
+
+
+#
+### Start of argument parsing ###
+#
+
+SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd -P)"
+REPO_DIR="$(dirname "${SCRIPT_DIR}")"
+
+DRY_RUN=false
+INTERACTIVE=false
+TTY=false
+USE_NET_HOST=false
+DOCKER_IMAGE_NAME=
+COMMAND=bash
+MOUNT_DIRS=( )
+
+# TODO(Lunderberg): Remove this if statement and always set to
+# "${REPO_DIR}".  The consistent directory for Jenkins is currently
+# necessary to allow cmake build commands to run in CI after the build
+# steps.
+if [[ -n "${JENKINS_HOME:-}" ]]; then
+REPO_MOUNT_POINT=/workspace
+else
+REPO_MOUNT_POINT="${REPO_DIR}"
 fi
 
-DOCKER_IMAGE_NAME=("$1")
 
-if [ "$#" -eq 1 ]; then
-COMMAND="bash"
+function parse_error() {
+echo "$@" >&2
+show_usage >&2
+exit 1
+}
+
+# Handle joined flags, such as interpreting -ih as -i -h.  Either rewrites
+# the current argument if it is a joined argument, or shifts all arguments
+# otherwise.  Should be called as "eval $break_joined_flag" where joined
+# flags are possible.  Can't use a function definition, because it needs
+# to overwrite the parent scope's behavior.
+break_joined_flag='if (( ${#1} == 2 )); then shift; else set -- -"${1#-i}" 
"${@:2}"; fi'
+
+
+while (( $# )); do
+case "$1" in
+-h|--help)
+show_usage
+exit 0
+;;
+
+-i*|--interactive)
+INTERACTIVE=true
+eval $break_joined_flag
+;;
+
+ 

[tvm-vta] branch ci-docker-staging updated: update ci files

2021-08-29 Thread vega
This is an automated email from the ASF dual-hosted git repository.

vega pushed a commit to branch ci-docker-staging
in repository https://gitbox.apache.org/repos/asf/tvm-vta.git


The following commit(s) were added to refs/heads/ci-docker-staging by this push:
 new 9bb632c  update ci files
9bb632c is described below

commit 9bb632c812e98f4ba4ec345e3d88eb968e07ee2e
Author: Luis Vega 
AuthorDate: Sun Aug 29 18:10:02 2021 +

update ci files
---
 tests/scripts/task_tvm_build.sh|  2 +-
 tests/scripts/task_tvm_config_build_cpu.sh | 13 +++--
 2 files changed, 12 insertions(+), 3 deletions(-)

diff --git a/tests/scripts/task_tvm_build.sh b/tests/scripts/task_tvm_build.sh
index 2b73221..796bc2a 100755
--- a/tests/scripts/task_tvm_build.sh
+++ b/tests/scripts/task_tvm_build.sh
@@ -18,4 +18,4 @@
 
 
 export VTA_HW_PATH=`pwd`
-cd $1 && cmake .. && make $2 && cd -
+cd $1 && cmake .. -DCMAKE_BUILD_TYPE=RelWithDebInfo && make $2 && cd ..
diff --git a/tests/scripts/task_tvm_config_build_cpu.sh 
b/tests/scripts/task_tvm_config_build_cpu.sh
index 995ca27..e43f9bf 100755
--- a/tests/scripts/task_tvm_config_build_cpu.sh
+++ b/tests/scripts/task_tvm_config_build_cpu.sh
@@ -26,9 +26,9 @@ cp ../cmake/config.cmake .
 echo set\(USE_SORT ON\) >> config.cmake
 echo set\(USE_MICRO ON\) >> config.cmake
 echo set\(USE_MICRO_STANDALONE_RUNTIME ON\) >> config.cmake
-echo set\(USE_GRAPH_RUNTIME_DEBUG ON\) >> config.cmake
 echo set\(USE_PROFILER ON\) >> config.cmake
-echo set\(USE_EXAMPLE_EXT_RUNTIME ON\) >> config.cmake
+echo set\(USE_DNNL_CODEGEN ON\) >> config.cmake
+echo set\(USE_ARM_COMPUTE_LIB ON\) >> config.cmake
 echo set\(USE_LLVM llvm-config-11\) >> config.cmake
 echo set\(USE_NNPACK ON\) >> config.cmake
 echo set\(NNPACK_PATH /NNPACK/build/\) >> config.cmake
@@ -38,3 +38,12 @@ echo set\(CMAKE_CXX_FLAGS -Werror\) >> config.cmake
 echo set\(HIDE_PRIVATE_SYMBOLS ON\) >> config.cmake
 echo set\(USE_VTA_TSIM ON\) >> config.cmake
 echo set\(USE_VTA_FSIM ON\) >> config.cmake
+echo set\(USE_TFLITE ON\) >> config.cmake
+echo set\(USE_TENSORFLOW_PATH \"/tensorflow\"\) >> config.cmake
+echo set\(USE_FLATBUFFERS_PATH \"/flatbuffers\"\) >> config.cmake
+echo set\(USE_ETHOSN /opt/arm/ethosn-driver\) >> config.cmake
+echo set\(USE_ETHOSN_HW OFF\) >> config.cmake
+echo set\(USE_VITIS_AI ON\) >> config.cmake
+echo set\(USE_VERILATOR ON\) >> config.cmake
+echo set\(USE_LIBBACKTRACE ON\) >> config.cmake
+echo set\(USE_CCACHE OFF\) >> config.cmake


[tvm] branch main updated (2545e9c -> 27d3d60)

2021-08-29 Thread junrushao
This is an automated email from the ASF dual-hosted git repository.

junrushao pushed a change to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git.


from 2545e9c  [Frontend][Onnx] Simplify onnx input since name accesses are 
not reliable. (#8867)
 add 27d3d60  [TIR] GetBlockReadWriteRegion (#8875)

No new revisions were added by this update.

Summary of changes:
 include/tvm/tir/analysis.h | 19 +---
 python/tvm/tir/analysis/analysis.py| 24 ++-
 src/tir/analysis/block_access_region_detector.cc   | 34 +-
 src/tir/schedule/primitive/compute_inline.cc   |  2 +-
 .../plan_update_buffer_allocation_location.cc  | 16 +++---
 .../test_tir_analysis_get_block_access_region.py   | 29 ++
 .../python/unittest/test_tir_schedule_reduction.py |  1 -
 7 files changed, 105 insertions(+), 20 deletions(-)


[GitHub] [tvm] junrushao1994 merged pull request #8875: [TIR] GetBlockReadWriteRegion

2021-08-29 Thread GitBox


junrushao1994 merged pull request #8875:
URL: https://github.com/apache/tvm/pull/8875


   






[GitHub] [tvm-rfcs] Meteorix commented on a change in pull request #25: [RFC]PyTorchTVM

2021-08-29 Thread GitBox


Meteorix commented on a change in pull request #25:
URL: https://github.com/apache/tvm-rfcs/pull/25#discussion_r697981075



##
File path: rfcs/0025-add-pytorch-tvm.md
##
@@ -0,0 +1,265 @@
+- Feature Name: PyTorchTVM
+- Start Date: 2021-08-24
+- RFC PR: [apache/tvm-rfcs#0025](https://github.com/apache/tvm-rfcs/pull/25)
+- GitHub Issue: TODO
+
+# Summary
+[summary]: #summary
+
+This RFC add a `PyTorchTVM` module to support: compile TorchScript to TVM and 
use accelerated module in PyTorch.
+
+To increase the TVM accessibility for PyTorch users, we propose `PyTorchTVM` 
module to support the following workflow:
+1. convert a torchscript module to tvm graph
+2. build and tune tvm graph
+3. export well-tuned tvm graph as a pytorch op
+4. torch jit trace the tvm pytorch op with other pytorch modules, then 
save/load/serve as normal pytorch model
+
+
+
+# Motivation
+[motivation]: #motivation
+
+PyTorch framework is increasingly being adopted for research and production. 
At the same time, PyTorch lacks an effective inference acceleration toolchain, 
which is the main concern in the industry. Existing acceleration includes:
+
+* PyTorch → ONNX → TensorRT/TVM
+* PyTorch → torchscript → TensorRT/TVM
+
+From our perspective, there are some limitations for both ONNX and TensorRT:
+
+* Onnx cannot cover all models with dynamic control flow (e.g. for loop)
+* TensorRT can only accelerate some standard networks
+
+So we hope to use TVM to accelerate PyTorch model inference.
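
(As an aside for readers: a tiny sketch, not part of the RFC, of the kind of 
data-dependent control flow that TorchScript preserves but ONNX export 
struggles with:)
```
import torch

class Loop(torch.nn.Module):
    def forward(self, x: torch.Tensor, n: int) -> torch.Tensor:
        for _ in range(n):  # trip count depends on a runtime input
            x = torch.relu(x)
        return x

scripted = torch.jit.script(Loop())  # scripting keeps the loop intact
print(scripted(torch.randn(4), 3))
```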
+
+
+# Guide-level explanation
+[guide-level-explanation]: #guide-level-explanation
+
+
+For example, we have an end-to-end resnet classification model, consisting of 
3 parts:
+
+1. Image reader
+2. Image transforms
+3. Resnet model inference
+
+```
+class Predictor(nn.Module):
+
+def __init__(self, tvm_module=None):
+super().__init__()
+self.resnet18 = resnet18(pretrained=True, progress=False).eval()
+self.transforms = nn.Sequential(
+T.Resize([256, ]),  # We use single int value inside a list due to 
torchscript type restrictions
+T.CenterCrop(224),
+T.ConvertImageDtype(torch.half),
+T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
+)
+
+def forward(self, image_path: List[str]) -> torch.Tensor:
+with torch.no_grad():
+images: List[torch.Tensor] = []
+for path in image_path:
+img = read_image(path)
+images.append(img)
+x = torch.stack(images).cuda().half()
+x = self.transforms(x)
+print(x.shape)
+y_pred = self.resnet18(x)
+return y_pred.argmax(dim=1)
+```
+
+We choose to accelerate resnet model with PyTorchTVM
+
+```
+from tvm.contrib.pt_op import PyTorchTVMModule, compile
+
+print("compile...")
+option = {
+"input_infos": [
+("x", (1, 3, 224, 224)),
+],
+"default_dtype": "float16",
+"export_dir": "pytorch_compiled",
+"num_outputs": 1,
+"tuning_n_trials": 0,  # set zero to skip tuning
+"tuning_log_file": "tuning.log",
+}
+x = torch.randn(1, 3, 224, 224).cuda().half()
+resnet_jit = torch.jit.trace(model.resnet18, x)
+resnet_tvm = compile(resnet_jit, option)
+```
+
+Then we can use the accelerated tvm module directly in pytorch, and also use 
`torch.jit.script` together with the other 2 parts.
+
+```
+resnet_tvm = torch.jit.script(resnet_tvm)
+print(resnet_tvm.graph)
+
+
+class PredictorTVM(nn.Module):
+
+def __init__(self):
+super().__init__()
+self.resnet18 = resnet_tvm
+self.transforms = nn.Sequential(
+T.Resize([256, ]),  # We use single int value inside a list due to 
torchscript type restrictions
+T.CenterCrop(224),
+T.ConvertImageDtype(torch.half),
+T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
+)
+
+def forward(self, image_path: List[str]) -> torch.Tensor:
+with torch.no_grad():
+images: List[torch.Tensor] = []
+for path in image_path:
+img = read_image(path)
+images.append(img)
+x = torch.stack(images).cuda().half()
+x = self.transforms(x)
+# y_pred = self.resnet18(x)
+y_pred = self.resnet18([x])[0]
+return y_pred.argmax(dim=1)
+
+
+print("run tvm...")
+model_tvm = PredictorTVM().cuda().half()
+for i in range(20):
+t = time.time()
+model_tvm([image_path])
+torch.cuda.synchronize()
+print(time.time() - t)
+
+torch.jit.script(model_tvm).save("model_tvm.pt")
+```
+
+Finally, we get a TVM accelerated model, which can be loaded and served in 
production.
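
(A hedged serving-side sketch, assuming a hypothetical custom-op library name; 
the actual loading mechanism is defined by the RFC's initial PR:)
```
import torch

# The TVM custom op must be registered before loading the model;
# "libpt_tvm_dsoop.so" is a placeholder for the actual library name.
torch.classes.load_library("libpt_tvm_dsoop.so")

model = torch.jit.load("model_tvm.pt").cuda().half()
pred = model(["path/to/image.jpg"])  # same interface as the original predictor
```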
+
+
+# Reference-level explanation
+[reference-level-explanation]: #reference-level-explanation
+
+We have opened an initial PR: https://github.com/apache/tvm/pull/8777
+
+The essential cpp code is as follows:
+
+```
+// This is just a wrapper class of tvm graph runtime module
+class 

[GitHub] [tvm] MasterJH5574 commented on a change in pull request #8875: [TIR] GetBlockReadWriteRegion

2021-08-29 Thread GitBox


MasterJH5574 commented on a change in pull request #8875:
URL: https://github.com/apache/tvm/pull/8875#discussion_r697969791



##
File path: src/tir/analysis/block_access_region_detector.cc
##
@@ -285,7 +285,39 @@ Array<Array<BufferRegion>> GetBlockAccessRegion(const 
Block& block,
   return {detector.CollectReads(), detector.CollectWrites(), 
detector.CollectOpaques()};
 }
 
-TVM_REGISTER_GLOBAL("tir.analysis.get_block_access_region").set_body_typed(GetBlockAccessRegion);
+Array<Array<BufferRegion>> GetBlockReadWriteRegion(Block block, Map<Var, Buffer> buffer_var_map) {

Review comment:
   Thanks! It now gets fixed :-)








[GitHub] [tvm] Hzfengsy commented on a change in pull request #8875: [TIR] GetBlockReadWriteRegion

2021-08-29 Thread GitBox


Hzfengsy commented on a change in pull request #8875:
URL: https://github.com/apache/tvm/pull/8875#discussion_r697968712



##
File path: src/tir/analysis/block_access_region_detector.cc
##
@@ -285,7 +285,39 @@ Array<Array<BufferRegion>> GetBlockAccessRegion(const 
Block& block,
   return {detector.CollectReads(), detector.CollectWrites(), 
detector.CollectOpaques()};
 }
 
-TVM_REGISTER_GLOBAL("tir.analysis.get_block_access_region").set_body_typed(GetBlockAccessRegion);
+Array<Array<BufferRegion>> GetBlockReadWriteRegion(Block block, Map<Var, Buffer> buffer_var_map) {

Review comment:
   ```suggestion
   Array<Array<BufferRegion>> GetBlockReadWriteRegion(const Block& block, const 
Map<Var, Buffer>& buffer_var_map) {
   ```








[GitHub] [tvm] junrushao1994 commented on pull request #8878: Deploy the Pretrained Model on Jetson Nano

2021-08-29 Thread GitBox


junrushao1994 commented on pull request #8878:
URL: https://github.com/apache/tvm/pull/8878#issuecomment-907741517


   I like the idea of this tutorial, but it would be helpful if someone could 
coach on the writing. @ganler @hogepodge would you guys like to help?






[GitHub] [tvm] BBuf opened a new pull request #8878: Deploy the Pretrained Model on Jetson Nano

2021-08-29 Thread GitBox


BBuf opened a new pull request #8878:
URL: https://github.com/apache/tvm/pull/8878


   This PR shows an example of running inference with a pre-trained ResNet 
model on Jetson Nano, and we hope it can be accepted. Both inference modes 
(local and RPC remote) can successfully complete the prediction. The following 
is the test record.
   
   remote rpc:
   
   
![image](https://user-images.githubusercontent.com/35585791/131240928-a1b29800-0cc2-4d94-b17e-53cac1e72c16.png)
   
   local_demo:
   
   
![image](https://user-images.githubusercontent.com/35585791/131240943-e5274dc8-f95b-49cc-b426-49d0d63ddae9.png)
   
   

