[GitHub] [tvm] apivovarov commented on issue #8876: Unit Test java GPU failed - java.io.IOException: java.lang.RuntimeException: Failed to serialize

2021-08-28 Thread GitBox


apivovarov commented on issue #8876:
URL: https://github.com/apache/tvm/issues/8876#issuecomment-907730988


   CI Build 9 failed too - 
https://ci.tlcpack.ai/blue/organizations/jenkins/tvm/detail/PR-8860/9/pipeline
   Looks like TVM CI is broken.
   @tqchen 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] huajsj commented on a change in pull request #8702: [Runtime] Pipeline Executor Initial patch.

2021-08-28 Thread GitBox


huajsj commented on a change in pull request #8702:
URL: https://github.com/apache/tvm/pull/8702#discussion_r697954723



##
File path: python/tvm/contrib/pipeline_executor.py
##
@@ -0,0 +1,352 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""Pipeline executor that executes a pipeline containing TVM PackedFunc."""
+import json
+import tvm._ffi
+from tvm import relay
+from tvm.contrib import graph_executor
+
+
+def pipeline_executor_enabled():
+    """Check if the pipeline executor is enabled.
+
+    Returns
+    -------
+    enable: bool
+        Whether the pipeline executor is enabled.
+    """
+    pipeline_enabled = False
+    try:
+        pipelinecreate = tvm._ffi.get_global_func("tvm.pipeline_executor.create")
+        assert pipelinecreate
+        pipeline_enabled = True
+    except ValueError:
+        print("pipeline executor not enabled!")
+
+    return pipeline_enabled
+
+
+def build_pipeline(mod_n_configs):
+    """Build a module list that can be used for pipeline execution.
+
+    Parameters
+    ----------
+    mod_n_configs: Dict[IRModule, Dict[str, Any]]
+        Build configuration information, structured as follows:
+        {IRModule: {"target": target,
+                    "target_host": target_host,
+                    "params": params,
+                    "mod_name": mod_name,
+                    "build": build}}
+
+    Returns
+    -------
+    ret: List[IRModule]
+        List of IRModule.
+    string_config: Dict[int, Dict[str, Any]]
+        Pipeline configuration.
+    """
+    mods = {}
+    config_len = len(mod_n_configs)
+    string_config = [{} for _ in range(config_len)]
+    for _, (ir_mod, mod_config) in enumerate(mod_n_configs.items()):
+        # init lib_name and json_name params with empty
+        lib_name = ""
+        json_name = ""
+        params_name = ""
+        # Get module configuration
+        assert "pipeline" in mod_config and "mod_indx" in mod_config["pipeline"]
+        # Get module index in pipeline configuration
+        mconf = mod_config["pipeline"].copy()
+        # Get mod device config
+        dev = mod_config["dev"]
+        mod_indx = mconf["mod_indx"] - 1
+        target = mod_config["target"]
+        assert mod_indx < config_len
+        build_func = relay.build
+        # if there is a self-defined build function then use it.
+        if "build" in mod_config and mod_config["build"]:
+            build_func = mod_config["build"]

Review comment:
   For example, with VTA the build involves custom logic such as quantization and graph_pack (and RPC). For such a scenario the use case looks like the following:
   ```python
   def vta_build(mod, target, params, target_host):
       mod = relay.quantize.quantize(mod, params=params)
       mod = graph_pack(mod, ...)
       with vta.build_config(opt_level=3, disabled_pass={"AlterOpLayout"}):
           lib = relay.build(mod, target, params=params, target_host=ENV.target_host)
       return lib
   ```
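
   A hypothetical sketch (not from the PR) of wiring such a custom build function into the `mod_n_configs` structure documented above; the keys follow the `build_pipeline` docstring, while the module, params, and device values are placeholders:
   ```python
   mod_n_configs = {
       ir_mod: {
           "target": "llvm",
           "target_host": "llvm",
           "params": params,
           "mod_name": "vta_mod",
           "dev": tvm.cpu(0),
           "build": vta_build,           # overrides the default relay.build
           "pipeline": {"mod_indx": 1},  # position of this module in the pipeline
       }
   }
   mods, string_config = build_pipeline(mod_n_configs)
   ```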








[GitHub] [tvm] huajsj commented on a change in pull request #8702: [Runtime] Pipeline Executor Initial patch.

2021-08-28 Thread GitBox


huajsj commented on a change in pull request #8702:
URL: https://github.com/apache/tvm/pull/8702#discussion_r697946280



##
File path: python/tvm/contrib/pipeline_executor.py
##
@@ -0,0 +1,352 @@
[...]
+    for _, (ir_mod, mod_config) in enumerate(mod_n_configs.items()):

Review comment:
   removed enumerate.








[GitHub] [tvm] huajsj commented on a change in pull request #8702: [Runtime] Pipeline Executor Initial patch.

2021-08-28 Thread GitBox


huajsj commented on a change in pull request #8702:
URL: https://github.com/apache/tvm/pull/8702#discussion_r697945743



##
File path: python/tvm/contrib/pipeline_executor.py
##
@@ -0,0 +1,352 @@
[...]
+def build_pipeline(mod_n_configs):

Review comment:
   fixed.








[GitHub] [tvm] huajsj commented on a change in pull request #8702: [Runtime] Pipeline Executor Initial patch.

2021-08-28 Thread GitBox


huajsj commented on a change in pull request #8702:
URL: https://github.com/apache/tvm/pull/8702#discussion_r697945477



##
File path: python/tvm/contrib/pipeline_executor.py
##
@@ -0,0 +1,352 @@
[...]
+
+        # build IRModule
+        mod = build_func(
+            ir_mod,
+            target,
+            params=mod_config["params"],
+            target_host=mod_config["target_host"],
+            mod_name=mod_config["mod_name"],
+        )
+
+        mconf["lib_name"] = lib_name
+        mconf["json_name"] = json_name
+        mconf["params_name"] = params_name
+        mconf["dev"] = "{},{}".format(dev.device_type, dev.device_id)
+        # Create pipeline configuration
+        string_config[mod_indx] = mconf
+        # associate mod with device
+        mods[mod] = {"dev": dev}
+
+    # return IRModule list and pipeline configuration
+    return mods, string_config
+
+
+def create(pipeline_mods, mod_config):
+    """Create a pipeline runtime executor.
+
+    Parameters
+    ----------
+    pipeline_mods : List[IRModule]
+        List of IRModule.
+
+    mod_config : Dict[int, Dict[str, Any]]
+        Modules and module dependency configuration information.
+
+    Returns
+    -------
+    submodule : PipelineModule
+        Runtime pipeline module.
+    """
+    submodule = PipelineModule(pipeline_mods, mod_config)
+    return submodule
+
+
+class PipelineModule(object):
+    """Wrapper runtime module. This is a thin wrapper of the underlying TVM module.
+
+    Parameters
+    ----------
+    pipeline_mods : List[GraphModule]
+        The internal tvm modules that hold the actual graph functions.
+
+    pipeline_config : Dict[IRModule, Dict[str, Any]]
+        Modules and module dependency configuration information.
+    """
+
+    def __init__(self, pipeline_mods, pipeline_config):
+        self.pipeline_mods = pipeline_mods
+        self.mod_config = pipeline_config
+        mods, config = self.graph_executor_create(pipeline_mods, pipeline_config)
+

[GitHub] [tvm] huajsj commented on a change in pull request #8702: [Runtime] Pipeline Executor Initial patch.

2021-08-28 Thread GitBox


huajsj commented on a change in pull request #8702:
URL: https://github.com/apache/tvm/pull/8702#discussion_r697945255



##
File path: src/runtime/pipeline/pipeline_executor.cc
##
@@ -0,0 +1,49 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file pipeline_executor.cc
+ */
+#include "pipeline_executor.h"
+
+namespace tvm {
+namespace runtime {
+
+void SubGraphRuntime::Init(const Array<Module>& modules,
+                           const std::string& pipeline_json) {
+  return;
+}
+
+PackedFunc SubGraphRuntime::GetFunction(const std::string& name,
+                                        const ObjectPtr<Object>& sptr_to_self) {
+  return PackedFunc();

Review comment:
   removed.








[GitHub] [tvm] huajsj commented on a change in pull request #8702: [Runtime] Pipeline Executor Initial patch.

2021-08-28 Thread GitBox


huajsj commented on a change in pull request #8702:
URL: https://github.com/apache/tvm/pull/8702#discussion_r697945164



##
File path: tests/python/relay/test_pipeline_executor.py
##
@@ -0,0 +1,256 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+import numpy as np
+import tvm
+import tvm.testing
+from tvm import relay
+from tvm.relay import transform
+from tvm.contrib import graph_executor, pipeline_executor
+
+
+def get_mannual_mod():
+    """Get a list of modules, each of which represents a subgraph."""
+    mods = []
+    dshape = (3, 3)
+    data = relay.var("data_0", relay.TensorType(dshape, "float32"))
+    data21 = relay.var("data_1", relay.TensorType(dshape, "float32"))
+    data_net1_output_1 = relay.var("data_0", relay.TensorType(dshape, "float32"))
+    data_net1_output_2 = relay.var("data_1", relay.TensorType(dshape, "float32"))
+    data_net2_output_1 = relay.var("data_0", relay.TensorType(dshape, "float32"))
+    mvalue1 = np.full((1), 1).astype("float32")
+    mvalue2 = np.full((1), 2).astype("float32")
+    mvalue3 = np.full((1), 3).astype("float32")
+    mv1 = relay.Constant(tvm.nd.array(mvalue1))
+    mv2 = relay.Constant(tvm.nd.array(mvalue2))
+    mv3 = relay.Constant(tvm.nd.array(mvalue3))
+
+    # net1 has three outputs; output3 is the final output.
+    net_output1 = relay.add(data, mv1)
+    net_output2 = relay.subtract(data, mv2)
+    net_output3 = relay.multiply(data, mv3)
+
+    # net2 uses net1 output1 as input.
+    net2 = relay.add(data_net1_output_1, mv2)
+    net2 = relay.add(net2, data21)
+    net2 = relay.add(net2, mv3)
+
+    # net3 uses net2 output1 and net1 output2 as input.
+    net3 = relay.multiply(data_net2_output_1, mv3)
+    net3 = relay.add(net3, data_net1_output_2)
+
+    mods.append(
+        tvm.IRModule.from_expr(
+            relay.Function([data], relay.Tuple([net_output1, net_output2, net_output3]))
+        )
+    )
+    mods.append(tvm.IRModule.from_expr(relay.Function([data_net1_output_1, data21], net2)))
+    mods.append(
+        tvm.IRModule.from_expr(relay.Function([data_net1_output_2, data_net2_output_1], net3))
+    )
+
+    return mods, dshape
+
+
+def get_manual_conf(mods):
+    """Generate the pipeline configuration manually; the result is used to
+    verify that the pipeline configuration produces the correct result."""
+    mod_config = {}
+    # set configuration
+    mconfig1 = {}
+    # The third output is the final output; the second output feeds mod3,
+    # and the first output feeds mod2 as input.
+    mconfig1["pipeline"] = {
+        "mod_indx": 1,
+        "output": [
+            {"output_indx": 0, "dependent": [{"mod_indx": 2, "input_name": "data_0"}]},
+            {"output_indx": 1, "dependent": [{"mod_indx": 3, "input_name": "data_0"}]},
+            {"output_indx": 2, "dependent": [{"mod_indx": 0, "input_name": "0"}]},
+        ],
+    }
+    mod_config[mods[0]] = mconfig1
+
+    mconfig2 = {}
+    mconfig2["pipeline"] = {
+        "mod_indx": 2,
+        "output": [
+            {"output_indx": 0, "dependent": [{"mod_indx": 3, "input_name": "data_1"}]},
+        ],
+    }
+    mod_config[mods[1]] = mconfig2
+
+    mconfig3 = {}
+    mconfig3["pipeline"] = {
+        "mod_indx": 3,
+        "output": [{"output_indx": 0, "dependent": [{"mod_indx": 0, "input_name": "1"}]}],
+    }
+    mod_config[mods[2]] = mconfig3
+    return mod_config
+
+
+def pipeline_module_create(target):
+    """Get three pipeline modules."""
+    (mod1, mod2, mod3), dshape = get_mannual_mod()
+
+    # Prepare batch data for pipeline feeding
+    datas = []
+    for i in range(5):
+        datas.append(np.full(dshape, 3 + i).astype("float32"))
+
+    pipe_config = pipeline_executor.PipelineModuleConfig([mod1, mod2, mod3])
+
+    # Create pipeline compute input/output and subgraph dependency relations.
+
+    # pipeline compute input "data_0" is forwarded to mod1 as input "data_0"
+    pipe_config.connect(pipe_config.pipe_input("data_0"), pipe_config[mod1].input("data_0"))
+
+    # pipeline compute input "data_1" is forwarded to mod2 as input "data_1"
+

[GitHub] [tvm] huajsj commented on a change in pull request #8702: [Runtime] Pipeline Executor Initial patch.

2021-08-28 Thread GitBox


huajsj commented on a change in pull request #8702:
URL: https://github.com/apache/tvm/pull/8702#discussion_r697945139



##
File path: tests/python/relay/test_pipeline_executor.py
##
@@ -0,0 +1,256 @@
[...]

[GitHub] [tvm] comaniac commented on a change in pull request #8702: [Runtime] Pipeline Executor Initial patch.

2021-08-28 Thread GitBox


comaniac commented on a change in pull request #8702:
URL: https://github.com/apache/tvm/pull/8702#discussion_r697926981



##
File path: python/tvm/contrib/pipeline_executor.py
##
@@ -0,0 +1,352 @@
[...]
+        mods, config = self.graph_executor_create(pipeline_mods, pipeline_config)

[GitHub] [tvm] junrushao1994 commented on pull request #8875: [TIR] GetBlockReadWriteRegion

2021-08-28 Thread GitBox


junrushao1994 commented on pull request #8875:
URL: https://github.com/apache/tvm/pull/8875#issuecomment-907698556


   CC @Hzfengsy please review :-)






[GitHub] [tvm] huajsj commented on a change in pull request #8702: [Runtime] Pipeline Executor Initial patch.

2021-08-28 Thread GitBox


huajsj commented on a change in pull request #8702:
URL: https://github.com/apache/tvm/pull/8702#discussion_r697923152



##
File path: tests/python/relay/test_pipeline_executor.py
##
@@ -0,0 +1,256 @@
[...]

[GitHub] [tvm] huajsj commented on a change in pull request #8702: [Runtime] Pipeline Executor Initial patch.

2021-08-28 Thread GitBox


huajsj commented on a change in pull request #8702:
URL: https://github.com/apache/tvm/pull/8702#discussion_r697915244



##
File path: tests/python/relay/test_pipeline_executor.py
##
@@ -0,0 +1,256 @@
[...]

[GitHub] [tvm] huajsj commented on a change in pull request #8702: [Runtime] Pipeline Executor Initial patch.

2021-08-28 Thread GitBox


huajsj commented on a change in pull request #8702:
URL: https://github.com/apache/tvm/pull/8702#discussion_r697914291



##
File path: tests/python/relay/test_pipeline_executor.py
##
@@ -0,0 +1,256 @@
[...]

[tvm] branch main updated (0961b65 -> 2545e9c)

2021-08-28 Thread junrushao
This is an automated email from the ASF dual-hosted git repository.

junrushao pushed a change to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git.


from 0961b65  [Tutorial][Executor] Fix the usage of executors in tutorials 
(#8586)
 add 2545e9c  [Frontend][Onnx] Simplify onnx input since name accesses are 
not reliable. (#8867)

No new revisions were added by this update.

Summary of changes:
 python/tvm/relay/frontend/onnx.py | 82 +++
 1 file changed, 22 insertions(+), 60 deletions(-)


[GitHub] [tvm] junrushao1994 merged pull request #8867: [Frontend][Onnx] Simplify onnx input since name accesses are not reliable.

2021-08-28 Thread GitBox


junrushao1994 merged pull request #8867:
URL: https://github.com/apache/tvm/pull/8867


   






[GitHub] [tvm] junrushao1994 commented on pull request #8863: [TensorIR][M2a] CacheRead/Write

2021-08-28 Thread GitBox


junrushao1994 commented on pull request #8863:
URL: https://github.com/apache/tvm/pull/8863#issuecomment-907675668


   Will review tomorrow. Thanks a lot!






[GitHub] [tvm] thecooltechguy opened a new issue #8877: NotImplementedError: The following operators are not implemented: ['aten::stft']

2021-08-28 Thread GitBox


thecooltechguy opened a new issue #8877:
URL: https://github.com/apache/tvm/issues/8877


   Hi,
   
   When I try to compile a PyTorch model that uses `torch.stft`, I'm getting the following error.
   
   `NotImplementedError: The following operators are not implemented: 
['aten::stft']`
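
   A minimal hypothetical repro (model, names, and shapes are illustrative):
   ```python
   import torch
   from tvm import relay

   class STFTModel(torch.nn.Module):
       def forward(self, x):
           return torch.stft(x, n_fft=512, return_complex=False)

   # Trace the model and hand it to the TVM PyTorch frontend; the
   # unsupported aten::stft op raises NotImplementedError here.
   inp = torch.randn(1, 16000)
   scripted = torch.jit.trace(STFTModel().eval(), inp)
   mod, params = relay.frontend.from_pytorch(scripted, [("input", inp.shape)])
   ```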
   
   Will this operator be supported within Apache TVM?
   
   Thanks!
   






[GitHub] [tvm] apivovarov opened a new issue #8876: Unit Test java GPU failed - java.io.IOException: java.lang.RuntimeException: Failed to serialize

2021-08-28 Thread GitBox


apivovarov opened a new issue #8876:
URL: https://github.com/apache/tvm/issues/8876


   I tried to build my PR several times. CI is unstable.
   Last error: `java.io.IOException: java.lang.RuntimeException: Failed to serialize hudson.model.Actionable#actions for class org.jenkinsci.plugins.workflow.job.WorkflowRun`
   CI link: https://ci.tlcpack.ai/blue/organizations/jenkins/tvm/detail/PR-8860/8/pipeline
   ```
   using credential 9a13f4da-9dbc-4e5b-89ef-9839b7b18337
   
   Fetching changes from the remote Git repository
   
   Fetching without tags
   
   Merging remotes/origin/main commit 1df6c273f0fb1242d0b399614616635cef38bc15 
into PR head commit bf2c28ba07661c93221458cea692f23fceb64cf5
   
> git rev-parse --is-inside-work-tree # timeout=10
   
> git config remote.origin.url https://github.com/apache/tvm.git # 
timeout=10
   
   Fetching upstream changes from https://github.com/apache/tvm.git
   
> git --version # timeout=10
   
   using GIT_ASKPASS to set credentials citlcpack
   
> git fetch --no-tags --force --progress -- 
https://github.com/apache/tvm.git 
+refs/pull/8860/head:refs/remotes/origin/PR-8860 
+refs/heads/main:refs/remotes/origin/main # timeout=10
   
> git config core.sparsecheckout # timeout=10
   
> git checkout -f bf2c28ba07661c93221458cea692f23fceb64cf5 # timeout=10
   
   Merge succeeded, producing b9cdccb18bf4f7162774cd40a055c09e9ca3d6c9
   
   Checking out Revision b9cdccb18bf4f7162774cd40a055c09e9ca3d6c9 (PR-8860)
   
   Commit message: "Merge commit '1df6c273f0fb1242d0b399614616635cef38bc15' 
into HEAD"
   
> git remote # timeout=10
   
> git config --get remote.origin.url # timeout=10
   
   using GIT_ASKPASS to set credentials citlcpack
   
> git merge 1df6c273f0fb1242d0b399614616635cef38bc15 # timeout=10
   
> git rev-parse HEAD^{commit} # timeout=10
   
> git config core.sparsecheckout # timeout=10
   
> git checkout -f b9cdccb18bf4f7162774cd40a055c09e9ca3d6c9 # timeout=10
   
   java.io.IOException: java.lang.RuntimeException: Failed to serialize 
hudson.model.Actionable#actions for class 
org.jenkinsci.plugins.workflow.job.WorkflowRun
   ```






[GitHub] [tvm] jwfromm edited a comment on pull request #8867: [Frontend][Onnx] Simplify onnx input since name accesses are not reliable.

2021-08-28 Thread GitBox


jwfromm edited a comment on pull request #8867:
URL: https://github.com/apache/tvm/pull/8867#issuecomment-907659234


   Onnx has strict conventions around input ordering. Any optional input that isn't used, but has other inputs after it that are used, must be provided and have its name set to an empty string. If this convention isn't followed it would be impossible to tell which input is which. This is the key point that @mbrookhart and I weren't aware of when we tried to use input names. See [here](https://github.com/onnx/onnx/blob/master/docs/IR.md) for more info.
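
   A minimal sketch of the convention using a hypothetical `Clip` node: the optional `min` input is omitted by passing an empty string, so `max_val` still lands in the third (max) input slot:
   ```python
   from onnx import helper

   node = helper.make_node(
       "Clip",
       inputs=["x", "", "max_val"],  # "" marks the omitted optional "min" input
       outputs=["y"],
   )
   ```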






[GitHub] [tvm] jwfromm edited a comment on pull request #8867: [Frontend][Onnx] Simplify onnx input since name accesses are not reliable.

2021-08-28 Thread GitBox


jwfromm edited a comment on pull request #8867:
URL: https://github.com/apache/tvm/pull/8867#issuecomment-907659234


   Onnx has strict conventions around input ordering. Any optional input that isn't used, but has other inputs after it that are used, must explicitly be set to None. If this convention isn't followed it would be impossible to tell which input is which. This is the key point that @mbrookhart and I weren't aware of when we tried to use input names. See [here](https://github.com/onnx/onnx/blob/master/docs/IR.md) for more info.






[GitHub] [tvm] jwfromm commented on pull request #8867: [Frontend][Onnx] Simplify onnx input since name accesses are not reliable.

2021-08-28 Thread GitBox


jwfromm commented on pull request #8867:
URL: https://github.com/apache/tvm/pull/8867#issuecomment-907659234


   Onnx has strict conventions around input ordering. Any optional input that isn't used, but has other inputs after it that are used, must explicitly be set to None. If this convention isn't followed it would be impossible to tell which input is which. This is the key point that @mbrookhart and I weren't aware of when we tried to use input names.






[GitHub] [tvm] MasterJH5574 opened a new pull request #8875: [TIR] GetBlockReadWriteRegion

2021-08-28 Thread GitBox


MasterJH5574 opened a new pull request #8875:
URL: https://github.com/apache/tvm/pull/8875


   This PR adds a new analysis function named `GetBlockReadWriteRegion` for 
TIR, which collects all buffer regions that a given block reads from and writes 
to.
   
   Before this PR, there was an existing function named `GetBlockAccessRegion` which collects three kinds of accesses: read accesses, write accesses, and opaque accesses. This function is only responsible for _reflecting the true access patterns of the given block_. For example, suppose that `A` is a buffer of shape `[128]`, and there's an external function call `tir.call_extern("test", A.data, A[vi])` in the input block. The result of `GetBlockAccessRegion` says "the input block has a read access to `A[vi:vi + 1]` and an opaque access to `A[0:128]`".
   
   To create a block's `read`/`write` fields, we need to merge the read/write accesses with the opaque accesses of the block respectively. Continuing the example above, merging accesses means saying that "the block reads from `A[0:128]` and writes to `A[0:128]`". We leave out the read access `A[vi:vi + 1]` in the merging, as it's covered by the opaque access. This "merging" is exactly what `GetBlockReadWriteRegion` does.
   
   After introducing `GetBlockReadWriteRegion`, whenever people want to 
generate the `read`/`write` fields of a block, or want to collect the 
read/write regions regarding opaque accesses as both read and write accesses, 
they can just invoke `GetBlockReadWriteRegion` - no need to call 
`GetBlockAccessRegion` and then merge the accesses.
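
   A minimal sketch of how the two functions relate, assuming the Python bindings `tvm.tir.analysis.get_block_access_region` and `tvm.tir.analysis.get_block_read_write_region` for these C++ functions, with `block` and `buffer_var_map` taken from an existing TIR module:
   ```python
   from tvm import tir

   # Raw access patterns: three lists of BufferRegions (read/write/opaque).
   reads, writes, opaques = tir.analysis.get_block_access_region(block, buffer_var_map)

   # Merged view: opaque accesses are folded into both the read and the
   # write side, dropping regions they already cover.
   reads, writes = tir.analysis.get_block_read_write_region(block, buffer_var_map)
   ```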
   
   --
   
   cc @Hzfengsy @junrushao1994 @tqchen 






[GitHub] [tvm] areusch commented on a change in pull request #8738: [microTVM][RVM] Improve base-box-tool 'build' command

2021-08-28 Thread GitBox


areusch commented on a change in pull request #8738:
URL: https://github.com/apache/tvm/pull/8738#discussion_r697882277



##
File path: apps/microtvm/reference-vm/base-box-tool.py
##
@@ -226,18 +227,33 @@ def generate_packer_config(file_path, providers):
 
 
 def build_command(args):
+    this_dir = pathlib.Path(THIS_DIR)
+    base_box_dir = this_dir / args.platform / "base-box"
+
     generate_packer_config(
-        os.path.join(THIS_DIR, args.platform, "base-box", PACKER_FILE_NAME),
+        os.path.join(base_box_dir, PACKER_FILE_NAME),
         args.provider or ALL_PROVIDERS,
     )
     env = copy.copy(os.environ)
-    packer_args = ["packer", "build"]
+    packer_args = ["packer", "build", "-force"]

Review comment:
   Maybe we should just rm the artifacts ourselves rather than letting the packer vagrant builder choose how to proceed. wdyt?
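
   A hypothetical sketch of that suggestion (the `output-packer` directory name is an assumption about where the packer artifacts land):
   ```python
   import shutil

   # Remove any stale build artifacts ourselves so packer starts clean,
   # instead of passing -force and letting the vagrant builder decide.
   shutil.rmtree(base_box_dir / "output-packer", ignore_errors=True)
   packer_args = ["packer", "build"]
   ```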








[GitHub] [tvm-rfcs] areusch commented on pull request #15: [RFC] Use CMSIS-NN with TVM

2021-08-28 Thread GitBox


areusch commented on pull request #15:
URL: https://github.com/apache/tvm-rfcs/pull/15#issuecomment-907639183


   Merging as Cody said LGTM in his last review






[GitHub] [tvm-rfcs] areusch merged pull request #15: [RFC] Use CMSIS-NN with TVM

2021-08-28 Thread GitBox


areusch merged pull request #15:
URL: https://github.com/apache/tvm-rfcs/pull/15


   






[tvm-rfcs] branch main updated: [RFC] Use CMSIS-NN with TVM (#15)

2021-08-28 Thread areusch
This is an automated email from the ASF dual-hosted git repository.

areusch pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/tvm-rfcs.git


The following commit(s) were added to refs/heads/main by this push:
 new 39124b1  [RFC] Use CMSIS-NN with TVM (#15)
39124b1 is described below

commit 39124b1aff4fd0de9d2422315c9a39df3aaeaba5
Author: Ashutosh Parkhi <86472128+ashutosh-...@users.noreply.github.com>
AuthorDate: Sat Aug 28 16:06:19 2021 +0100

[RFC] Use CMSIS-NN with TVM (#15)

* Markdown for CMSIS-NN integration

Change-Id: I3b0954f3fdb4d54b3e38a84de0ab649c1e79bca8

* Title changed to use of CMSIS-NN with TVM

Change-Id: I6142c001175cdf41c58b5bb555a39e07c834254f

* Added acronyms and fixed few spellings

Change-Id: Id63d1866cd783f5e59b568f36c9177ee8715bc4d

* Changed name of the markdown to match PR number

Change-Id: Id54bae4bd2ca4bd9c3ab734e8cae966ebbe332b2

* Cody's comments about python APIs and config.cmake

Change-Id: I56a5f9bf319576d342a5bdc3771402262584e8c4

* Andrew's comments: more details about CMSIS-NN ops and fixed mistakes 
with some terminologies

Change-Id: I002be9cc67b72444ea27fe0a31769549fb6fd452

* Andrew's comments II: restructuring testing, guide level explanations

* Upstreaming plan misses line separator

* Upstreaming plan misses line separator
---
 rfcs/0015_Arm_CMSIS-NN_Integration.md | 219 ++
 1 file changed, 219 insertions(+)

diff --git a/rfcs/0015_Arm_CMSIS-NN_Integration.md 
b/rfcs/0015_Arm_CMSIS-NN_Integration.md
new file mode 100644
index 0000000..71fa494
--- /dev/null
+++ b/rfcs/0015_Arm_CMSIS-NN_Integration.md
@@ -0,0 +1,219 @@
+- Feature Name: [RFC] Use CMSIS-NN with TVM
+- Start Date: July 2021
+- RFC PR: https://github.com/apache/tvm-rfcs/pull/15
+- GitHub Issue: https://github.com/apache/tvm/issues/8646
+
+# Acronyms
+CMSIS: Common Microcontroller Software Interface Standard
+ACL: The Compute Library for the Arm® Architecture
+MLF: Model Library Format
+Cortex-M: Arm® Cortex®-M processor
+
+# Summary
+
+This RFC introduces a plan for integrating the CMSIS-NN library into TVM. The 
library consists of efficient kernels targeted at the Cortex-M architecture.
+
+Please refer to the following pages for more details on CMSIS-NN.
+* [CMSIS-NN user manual](https://arm-software.github.io/CMSIS_5/NN/html/index.html)
+* [GITHUB CMSIS-NN Source](https://github.com/ARM-software/CMSIS_5/tree/develop/CMSIS/NN)
+
+The first PR in this series will be a graph partitioner for int8 softmax. A 
detailed plan can be found below in this RFC.
+
+
+# Motivation
+
+The CMSIS-NN library consists of hand-tuned kernels that are suitable for 
Cortex-M and are compliant with the quantization scheme used in TensorFlow 
Lite. They have been optimized for performance and for the small memory 
footprint required on these embedded devices, so it makes sense for TVM to 
reuse them when generating code for Cortex-M. They have already been 
integrated into the TensorFlow Lite Micro project. In this work, we plan to 
map TFLite operators to the existing CMSIS-NN APIs.
+
+
+# Guide-level explanation
+
+We will enable this integration by considering TFLite networks, but it is 
equally applicable to any other network that can be translated into Relay IR.
+
+TVM's BYOC infrastructure allows partitioning and code generation using an 
external compiler. Partitioned subgraphs containing operator(s) targeted at 
Cortex-M can then be translated into calls to the CMSIS-NN C APIs, which 
eventually become part of the MLF.
+
+If a user runs tvmc, they will get an MLF-format archive that calls out to 
the CMSIS-NN operators. The CMSIS-NN source is not included in the MLF. The 
support should also remain up to date as the library evolves, since we expect 
minimal changes to the CMSIS-NN API interface. The test setup that allows 
execution on Cortex-M links the MLF against the source code from GitHub.
+
+```
+tvmc --target=cmsisnn,c --output-format=mlf --executor=aot
+```
+In the absence of tvmc support, the following Python APIs can be used to 
generate the C code; eventually tvmc will support CMSIS-NN as mentioned above.
+
+```python
+import tvm
+from tvm.relay.op.contrib import cmsisnn
+
+# Partition the Relay module for CMSIS-NN; here `module` is the Relay module
+cmsisnn_module = cmsisnn.partition_for_cmsisnn(module)
+
+# Invoke the AOT compiler to get the MLF containing calls to CMSIS-NN APIs
+with tvm.target.Target(
+    "c -runtime=c --link-params -mcpu=cortex-m55 --executor=aot --unpacked-api=1"
+):
+    factory = tvm.relay.build(cmsisnn_module)
+```
+
+
+
+# Reference-level explanation
+
+This section details how TFLite int8 softmax is converted into C code. The 
TFLite frontend first translates int8 softmax into the following sequence of 
Relay operations: *dequantize -> softmax -> quantize*.
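
For illustration, a hedged sketch of that operator sequence built directly in 
Relay; the shape, scales, and zero points below are made up, not taken from 
the RFC:

```python
import tvm
from tvm import relay

x = relay.var("x", shape=(1, 16), dtype="int8")
# dequantize -> softmax -> quantize, as produced by the TFLite frontend
deq = relay.qnn.op.dequantize(x, relay.const(0.02, "float32"), relay.const(0, "int32"))
sm = relay.nn.softmax(deq, axis=-1)
out = relay.qnn.op.quantize(
    sm, relay.const(1.0 / 256.0, "float32"), relay.const(-128, "int32"), out_dtype="int8"
)
mod = tvm.IRModule.from_expr(relay.Function([x], out))
print(mod)  # shows the dequantize -> softmax -> quantize chain
```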

[GitHub] [tvm-rfcs] areusch commented on a change in pull request #7: [UnitTests] Parametrized Unit Tests

2021-08-28 Thread GitBox


areusch commented on a change in pull request #7:
URL: https://github.com/apache/tvm-rfcs/pull/7#discussion_r697878666



##
File path: rfcs/0007-parametrized-unit-tests.md
##
@@ -0,0 +1,568 @@
+- Feature Name: Parametrized Unit Tests
+- Start Date: 2021-05-10
+- RFC PR: [apache/tvm-rfcs#0007](https://github.com/apache/tvm-rfcs/pull/0007)
+- GitHub PR: [apache/tvm#8010](https://github.com/apache/tvm/issues/8010)
+
+# Summary
+[summary]: #summary
+
+This RFC documents how to implement unit tests that depend on input
+parameters, or have setup that depends on input parameters.
+
+# Motivation
+[motivation]: #motivation
+
+Some unit tests should be tested along a variety of parameters for
+better coverage.  For example, a unit test that does not depend on
+target-specific features should be tested on all targets that the test
+platform supports.  Alternatively, a unit test may need to pass
+different array sizes to a function, in order to exercise different
+code paths within that function.
+
+The simplest implementation would be to write a test function that
+loops over all parameters, throwing an exception if any parameter
+fails the test.  However, this does not give full information to a
+developer, as a failure from any parameter results in the entire test
+to be marked as failing.  A unit-test that fails for all targets
+requires different debugging than a unit-test that fails on a single
+specific target, and so this information should be exposed.
+
+This RFC adds functionality for implementing parameterized unit tests,
+such that each set of parameters appears as a separate test result in
+the final output.
+
+# Guide-level explanation
+[guide-level-explanation]: #guide-level-explanation
+
+## Parameters
+
+To make a new parameter for unit tests to use, define it with the
+`tvm.testing.parameter` function.  For example, the following will

Review comment:
   ```suggestion
   Before you can use a parameter in a test case, you need to register it with `pytest`.
   Do this using the `tvm.testing.parameter` function.  For example, the following will
   ```
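   As an illustration, a short sketch of the registration being described 
(using the RFC's `tvm.testing.parameter` API; the values here are made up):
   
   ```python
   import tvm.testing
   
   # Register `array_size` so pytest can inject it into test functions below.
   array_size = tvm.testing.parameter(8, 256, 1024)
   
   def test_function(array_size):
       # pytest reports a separate pass/fail for each value: 8, 256, and 1024.
       assert array_size > 0
   ```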

##
File path: rfcs/0007-parametrized-unit-tests.md
##
@@ -0,0 +1,568 @@
+- Feature Name: Parametrized Unit Tests
+- Start Date: 2021-05-10
+- RFC PR: [apache/tvm-rfcs#0007](https://github.com/apache/tvm-rfcs/pull/0007)
+- GitHub PR: [apache/tvm#8010](https://github.com/apache/tvm/issues/8010)
+
+# Summary
+[summary]: #summary
+
+This RFC documents how to implement unit tests that depend on input
+parameters, or have setup that depends on input parameters.
+
+# Motivation
+[motivation]: #motivation
+
+Some unit tests should be tested along a variety of parameters for
+better coverage.  For example, a unit test that does not depend on
+target-specific features should be tested on all targets that the test

Review comment:
   nit: "could be tested"

##
File path: rfcs/0007-parametrized-unit-tests.md
##
@@ -0,0 +1,568 @@
+- Feature Name: Parametrized Unit Tests
+- Start Date: 2021-05-10
+- RFC PR: [apache/tvm-rfcs#0007](https://github.com/apache/tvm-rfcs/pull/0007)
+- GitHub PR: [apache/tvm#8010](https://github.com/apache/tvm/issues/8010)
+
+# Summary
+[summary]: #summary
+
+This RFC documents how to implement unit tests that depend on input
+parameters, or have setup that depends on input parameters.
+
+# Motivation
+[motivation]: #motivation
+
+Some unit tests should be tested along a variety of parameters for
+better coverage.  For example, a unit test that does not depend on
+target-specific features should be tested on all targets that the test
+platform supports.  Alternatively, a unit test may need to pass
+different array sizes to a function, in order to exercise different
+code paths within that function.
+
+The simplest implementation would be to write a test function that
+loops over all parameters, throwing an exception if any parameter
+fails the test.  However, this does not give full information to a
+developer, as a failure from any parameter results in the entire test
+to be marked as failing.  A unit-test that fails for all targets

Review comment:
   ```suggestion
   The simplest implementation would be to write a test function that
   internally loops over all parameters and throws an exception when the
   test fails.  However, this does not give full information to a
   developer because `pytest` does not necessarily include the parameter
   value in the test report (even when it does, the value is in a different
   place depending on the way the internal loop is written). A unit-test
   that fails for all targets
   ```

##
File path: rfcs/0007-parametrized-unit-tests.md
##
@@ -0,0 +1,568 @@
+- Feature Name: Parametrized Unit Tests
+- Start Date: 2021-05-10(fill me in with today's date, -MM-DD)
+- RFC PR: 

[GitHub] [tvm] manupa-arm commented on issue #8717: Fusion of operations and cast in mobilenet v1 conv2d causing large feature maps

2021-08-28 Thread GitBox


manupa-arm commented on issue #8717:
URL: https://github.com/apache/tvm/issues/8717#issuecomment-907621658


   Hi All,
   
   Thanks for looking at this. For the Relay, please check the Relay in 
'Primitive' form (not the Relay that goes into relay.build). You can find it 
by observing the Relay coming out of Optimize in build_module.cc; this is 
because the legalizations lower the qnn dialect to primitive Relay operators.
   
   Hope this helps. 
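   For illustration, a minimal sketch of one way to inspect that 'Primitive' 
Relay from Python, assuming `relay.optimize` exposes the same Optimize step 
(`mod` and `params` stand in for your model):
   
   ```python
   import tvm
   from tvm import relay
   
   with tvm.transform.PassContext(opt_level=3):
       opt_mod, _ = relay.optimize(mod, target="llvm", params=params)
   print(opt_mod["main"])  # fused calls are annotated with the Primitive attribute
   ```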


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] junrushao1994 commented on pull request #8586: [Tutorial][Executor] Fix the usage of executors in tutorials

2021-08-28 Thread GitBox


junrushao1994 commented on pull request #8586:
URL: https://github.com/apache/tvm/pull/8586#issuecomment-907600180


   Thanks @ganler! It is finally merged


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[tvm] branch main updated: [Tutorial][Executor] Fix the usage of executors in tutorials (#8586)

2021-08-28 Thread junrushao
This is an automated email from the ASF dual-hosted git repository.

junrushao pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git


The following commit(s) were added to refs/heads/main by this push:
 new 0961b65  [Tutorial][Executor] Fix the usage of executors in tutorials 
(#8586)
0961b65 is described below

commit 0961b65cbf0d6e1c5f51e0e88dd17886d6111522
Author: Jiawei Liu 
AuthorDate: Sat Aug 28 04:28:07 2021 -0500

[Tutorial][Executor] Fix the usage of executors in tutorials (#8586)

* fix: executor usage for keras tutorial

* fix: executor usage for onnx tutorial

* [Tutorial][Executor] Fix executors in tutorials
---
 tutorials/dev/bring_your_own_datatypes.py | 3 ++-
 tutorials/frontend/from_keras.py  | 4 ++--
 tutorials/frontend/from_onnx.py   | 6 --
 3 files changed, 8 insertions(+), 5 deletions(-)

diff --git a/tutorials/dev/bring_your_own_datatypes.py 
b/tutorials/dev/bring_your_own_datatypes.py
index a5e8e28..1cf556d 100644
--- a/tutorials/dev/bring_your_own_datatypes.py
+++ b/tutorials/dev/bring_your_own_datatypes.py
@@ -257,8 +257,9 @@ module, params = get_mobilenet()
 ##
 # It's easy to execute MobileNet with native TVM:
 
+ex = tvm.relay.create_executor("graph", mod=module, params=params)
 input = get_cat_image()
-result = tvm.relay.create_executor("graph", mod=module).evaluate()(input, **params).numpy()
+result = ex.evaluate()(input).numpy()
 # print first 10 elements
 print(result.flatten()[:10])
 
diff --git a/tutorials/frontend/from_keras.py b/tutorials/frontend/from_keras.py
index e62836d..182e769 100644
--- a/tutorials/frontend/from_keras.py
+++ b/tutorials/frontend/from_keras.py
@@ -103,14 +103,14 @@ dev = tvm.cuda(0)
 # due to a latent bug. Note that the pass context only has an effect within
 # evaluate() and is not captured by create_executor().
 with tvm.transform.PassContext(opt_level=0):
-model = relay.build_module.create_executor("graph", mod, dev, target).evaluate()
+model = relay.build_module.create_executor("graph", mod, dev, target, params).evaluate()
 
 
 ##
 # Execute on TVM
 # ---
 dtype = "float32"
-tvm_out = model(tvm.nd.array(data.astype(dtype)), **params)
+tvm_out = model(tvm.nd.array(data.astype(dtype)))
 top1_tvm = np.argmax(tvm_out.numpy()[0])
 
 #
diff --git a/tutorials/frontend/from_onnx.py b/tutorials/frontend/from_onnx.py
index 890bfba..fd51d7a 100644
--- a/tutorials/frontend/from_onnx.py
+++ b/tutorials/frontend/from_onnx.py
@@ -92,13 +92,15 @@ shape_dict = {input_name: x.shape}
 mod, params = relay.frontend.from_onnx(onnx_model, shape_dict)
 
 with tvm.transform.PassContext(opt_level=1):
-compiled = relay.build_module.create_executor("graph", mod, tvm.cpu(0), target).evaluate()
+executor = relay.build_module.create_executor(
+"graph", mod, tvm.cpu(0), target, params
+).evaluate()
 
 ##
 # Execute on TVM
 # -
 dtype = "float32"
-tvm_output = compiled(tvm.nd.array(x.astype(dtype)), **params).numpy()
+tvm_output = executor(tvm.nd.array(x.astype(dtype))).numpy()
 
 ##
 # Display results


[GitHub] [tvm] junrushao1994 merged pull request #8586: [Tutorial][Executor] Fix the usage of executors in tutorials

2021-08-28 Thread GitBox


junrushao1994 merged pull request #8586:
URL: https://github.com/apache/tvm/pull/8586


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[tvm] branch main updated (7214f52 -> 5ab527a)

2021-08-28 Thread junrushao
This is an automated email from the ASF dual-hosted git repository.

junrushao pushed a change to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git.


from 7214f52  [TIR] Fix opaque access in buffer locator pass and 
match_buffer in region detector (#8855)
 add 5ab527a  [Autoscheduler] Configurable workload keys (#8862)

No new revisions were added by this update.

Summary of changes:
 python/tvm/auto_scheduler/compute_dag.py   | 15 ++--
 python/tvm/auto_scheduler/relay_integration.py | 11 +-
 .../relay/test_auto_scheduler_task_extraction.py   | 44 +-
 3 files changed, 63 insertions(+), 7 deletions(-)


[tvm] branch main updated: [TIR] Fix opaque access in buffer locator pass and match_buffer in region detector (#8855)

2021-08-28 Thread junrushao
This is an automated email from the ASF dual-hosted git repository.

junrushao pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git


The following commit(s) were added to refs/heads/main by this push:
 new 7214f52  [TIR] Fix opaque access in buffer locator pass and 
match_buffer in region detector (#8855)
7214f52 is described below

commit 7214f5239dbb8da4585d4d10fbc8c65c8f155b12
Author: Siyuan Feng 
AuthorDate: Sat Aug 28 17:23:43 2021 +0800

[TIR] Fix opaque access in buffer locator pass and match_buffer in region 
detector (#8855)

* init

* fix

* Update src/tir/transforms/plan_update_buffer_allocation_location.cc

Co-authored-by: Ruihang Lai 

* Update src/tir/transforms/plan_update_buffer_allocation_location.cc

Co-authored-by: Ruihang Lai 

* address

Co-authored-by: Junru Shao 
Co-authored-by: Ruihang Lai 
---
 src/tir/analysis/block_access_region_detector.cc   |  7 ++-
 .../plan_update_buffer_allocation_location.cc  | 39 +-
 .../test_tir_analysis_get_block_access_region.py   | 21 +---
 ...sform_plan_update_buffer_allocation_location.py | 62 ++
 4 files changed, 109 insertions(+), 20 deletions(-)

diff --git a/src/tir/analysis/block_access_region_detector.cc 
b/src/tir/analysis/block_access_region_detector.cc
index 8f87ef9..dd01aed 100644
--- a/src/tir/analysis/block_access_region_detector.cc
+++ b/src/tir/analysis/block_access_region_detector.cc
@@ -110,8 +110,11 @@ void BlockReadWriteDetector::operator()(const Stmt& stmt) {
   ICHECK(block != nullptr) << "Only visiting Blocks is allowed, but got " << 
stmt->GetTypeKey();
   for (const MatchBufferRegion& match_buffer : block->match_buffers) {
 const Var& target_var = match_buffer->buffer->data;
-match_buffers_[target_var.get()] = match_buffer;
-buffer_var_map_.Set(target_var, match_buffer->buffer);
+const Var& source_var = match_buffer->source->buffer->data;
+if (buffer_var_map_.find(source_var) != buffer_var_map_.end()) {
+  match_buffers_[target_var.get()] = match_buffer;
+  buffer_var_map_.Set(target_var, match_buffer->buffer);
+}
   }
   StmtExprVisitor::operator()(stmt);
 }
diff --git a/src/tir/transforms/plan_update_buffer_allocation_location.cc 
b/src/tir/transforms/plan_update_buffer_allocation_location.cc
index bee11ad..59f9170 100644
--- a/src/tir/transforms/plan_update_buffer_allocation_location.cc
+++ b/src/tir/transforms/plan_update_buffer_allocation_location.cc
@@ -75,8 +75,6 @@ class BufferAllocationLocator : public StmtExprMutator {
 
   Stmt VisitStmt_(const BlockNode* op) final {
 ICHECK(!op->init.defined());
-bool is_root = is_root_;
-is_root_ = false;
    Array<Buffer> alloc_buffers;
 auto it = alloc_buffers_.find(op);
 if (it != alloc_buffers_.end()) {
@@ -85,11 +83,23 @@ class BufferAllocationLocator : public StmtExprMutator {
 buffer_data_to_buffer_.Set(buf->data, buf);
   }
 }
+for (const MatchBufferRegion match_buffer : op->match_buffers) {
+  const Var& target_var = match_buffer->buffer->data;
+  const Var& source_var = match_buffer->source->buffer->data;
+  ICHECK(buffer_data_to_buffer_.count(source_var));
+  buffer_data_to_buffer_.Set(target_var, match_buffer->buffer);
+}
 Stmt stmt = StmtMutator::VisitStmt_(op);
    op = stmt.as<BlockNode>();
 ICHECK(op != nullptr);
 
-// Ignore buffer allocated inside the block when getting access region.
+// No longer consider buffers created by match_buffer inside the block when updating access
+// region.
+for (const MatchBufferRegion match_buffer : op->match_buffers) {
+  const Var& target_var = match_buffer->buffer->data;
+  buffer_data_to_buffer_.erase(target_var);
+}
+// No longer consider buffers allocated inside the block when updating access region.
 if (it != alloc_buffers_.end()) {
   for (const Buffer& buf : it->second) {
 buffer_data_to_buffer_.erase(buf->data);
@@ -98,12 +108,9 @@ class BufferAllocationLocator : public StmtExprMutator {
 
    ObjectPtr<BlockNode> n = CopyOnWrite(op);
 n->alloc_buffers = std::move(alloc_buffers);
-// The read/write regions of root block are always empty.
-if (!is_root) {
-  // Recalculate block access region
-  CollectReadWrite(GetRef<Block>(op), &n->reads, &n->writes);
-}
-
+// Erase buffer allocated inside the block from access region.
+n->reads = RemoveRedundantBufferRegion(n->reads);
+n->writes = RemoveRedundantBufferRegion(n->writes);
 return Stmt(n);
   }
 
@@ -127,8 +134,18 @@ class BufferAllocationLocator : public StmtExprMutator {
 return std::move(realize);
   }
 
+  Array<BufferRegion> RemoveRedundantBufferRegion(const Array<BufferRegion>& region) const {
+    Array<BufferRegion> result;
+    for (const BufferRegion& buffer_region : region) {
+      if (buffer_data_to_buffer_.count(buffer_region->buffer->data)) {
+        result.push_back(buffer_region);
+      }
+ 

[GitHub] [tvm] junrushao1994 merged pull request #8862: [Autoscheduler] Configurable workload keys

2021-08-28 Thread GitBox


junrushao1994 merged pull request #8862:
URL: https://github.com/apache/tvm/pull/8862


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] junrushao1994 merged pull request #8855: [TIR] Fix opaque access in buffer locator pass and match_buffer in region detector

2021-08-28 Thread GitBox


junrushao1994 merged pull request #8855:
URL: https://github.com/apache/tvm/pull/8855


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] junrushao1994 commented on pull request #8871: [Build] Generate libinfo.cc

2021-08-28 Thread GitBox


junrushao1994 commented on pull request #8871:
URL: https://github.com/apache/tvm/pull/8871#issuecomment-907596657


   I believe @antinucleon has some experience (painfully) migrating Python to 
C++ :-)


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] AriMIR commented on issue #8717: Fusion of operations and cast in mobilenet v1 conv2d causing large feature maps

2021-08-28 Thread GitBox


AriMIR commented on issue #8717:
URL: https://github.com/apache/tvm/issues/8717#issuecomment-907596350


   Hello @Mousius,
   
   Sorry for the large number of questions. I have just started digging into 
the mechanics of TVM, so it is quite possible that some of them are naive.
   
   I cut a depthwise Conv2D with output (1, 56, 56, 128) from the quantized 
mobilenet_v1 frozen graph (mobilenet_v1_1.0_224_quant_frozen.pb), then 
converted it to TFLite and inspected the Relay and TIR.
   
   Here is my Relay for the Conv2D:
   
   def @main(%MobilenetV1/MobilenetV1/Conv2d_2_pointwise/act_quant/FakeQuantWithMinMaxVars: Tensor[(1, 56, 56, 128), float32], %v_param_2: Tensor[(1, 3, 3, 128), float32], %v_param_3: Tensor[(128), float32]) {
     %0 = qnn.quantize(%v_param_2, 0.0605373f, 160, out_dtype="uint8");
     %1 = qnn.dequantize(%0, 0.0605373f, 160);
     %2 = reshape(%1, newshape=[3, 3, 128, 1]);
     %3 = nn.conv2d(%MobilenetV1/MobilenetV1/Conv2d_2_pointwise/act_quant/FakeQuantWithMinMaxVars, %2, padding=[1, 1, 1, 1], groups=128, channels=128, kernel_size=[3, 3], data_layout="NHWC", kernel_layout="HWOI");
     %4 = nn.bias_add(%3, %v_param_3, axis=3);
     %5 = clip(%4, a_min=0f, a_max=6f);
     %6 = qnn.quantize(%5, 0.0235285f, 0, out_dtype="uint8");
     qnn.dequantize(%6, 0.0235285f, 0)
   }
   
   But it differs from the Relay in your bug report.
   I then checked the Relay from the whole mobilenet (from 
mobilenet_v1_1.0_224_quant_frozen.pb, and then from 
mobilenet_v1_1.0_224_quant.tflite as well), but I have not found a Relay like 
yours anywhere in the whole-mobilenet Relay on my side (I tried to find 
“int16” or “fixed_point_multiply” lines).
   
   Another difference is that we are using the TFLite quantizer 
(“qnn.quantize” and “qnn.dequantize” appear in the Relay on our side).
   I suspect you used a different quantizer, because there are no “qnn” lines 
in your relay.
   
   My questions on the above problem are:
   1. It seems that you have a different version of TVM than mine; could you 
indicate at which commit you got this error?
   2. Which quantizer did you use?
   3. Could you tell me why the Relays are different?
   
   The next part of my question concerns the large allocations:
   1. Where in the code do you check the TIR? (The TIR primfn on my side also 
differs, not entirely, but in some parts.)
   2. Could you please give me an example of when op fusion is correct?
   
   Thank you!
   
   Best regards,
   Arina Naumova,
   Software developer, Grovety
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org