[GitHub] [tvm] UniverseFly opened a new issue #8264: [TIR] Potential bug: type mismatch makes TVM core dumped, which cannot be captured as a Python exception

2021-06-15 Thread GitBox


UniverseFly opened a new issue #8264:
URL: https://github.com/apache/tvm/issues/8264


   The following code triggers the problem, where I deliberately pass an `int`
   to the `name` argument of `Var`. Should such behavior be considered a bug?
   It's common for people to misuse APIs, but we may still want to capture the
   error at the Python level instead of crashing the process.
   
   ```python
   # OS: CentOS 7 & TVM: 0.8.dev0
   from tvm import tir
   
   if __name__ == '__main__':
       try:
           tir.Var(name=1, dtype='int')
       except:
           print("Should be captured")
   ```
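
   Until the FFI boundary validates argument types, one can at least guard at
   the Python level so misuse surfaces as an ordinary exception. A minimal
   sketch (the `make_var` wrapper is hypothetical, not a TVM API):

   ```python
   from tvm import tir

   def make_var(name, dtype):
       # Validate eagerly so a bad `name` raises TypeError in Python
       # instead of crashing inside the C++ runtime.
       if not isinstance(name, str):
           raise TypeError(f"Var name must be str, got {type(name).__name__}")
       return tir.Var(name, dtype)

   try:
       make_var(1, "int32")
   except TypeError as e:
       print("Captured:", e)
   ```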


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[tvm] branch main updated (a8663d2 -> d05fdc5)

2021-06-15 Thread masahi
This is an automated email from the ASF dual-hosted git repository.

masahi pushed a change to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git.


from a8663d2  [Metal] Fix run metal model when non first device is selected (#8261)
 add d05fdc5  Fix docstrings in tvm.relay.cast_like (#8262)

No new revisions were added by this update.

Summary of changes:
 python/tvm/relay/op/transform.py | 3 +++
 1 file changed, 3 insertions(+)


[GitHub] [tvm] masahi merged pull request #8262: Fix docstrings in tvm.relay.cast_like

2021-06-15 Thread GitBox


masahi merged pull request #8262:
URL: https://github.com/apache/tvm/pull/8262


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] SamKG opened a new issue #8263: [RELAY] Support different PRNG algorithms

2021-06-15 Thread GitBox


SamKG opened a new issue #8263:
URL: https://github.com/apache/tvm/issues/8263


   It is well established that the performance of PRNG algorithms differs
   greatly across hardware [1]. Currently, TVM appears to support only Threefry
   random number generation, yet a user deploying to a GPU may benefit greatly
   from Philox instead (the paper linked below demonstrates a 3x performance
   difference!). In addition, decreasing the precision of the PRNG algorithm can
   also improve performance where suitable. Ideally, TVM would allow the choice
   of PRNG to be user-selectable or tunable in some way.

   I'd be happy to take a stab at this issue myself if someone could offer some
   pointers.
   
   [1] http://www.thesalmons.org/john/random123/papers/random123sc11.pdf
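
   For context, a sketch of the Threefry API as it exists on main today, plus a
   hypothetical `algorithm` knob of the kind proposed here (the knob does not
   exist; only Threefry is implemented):

   ```python
   from tvm import relay

   # Today: Threefry is the only counter-based PRNG exposed in Relay.
   key = relay.random.threefry_key(0)
   rand = relay.random.threefry_generate(key, (8, 8))
   new_key = relay.TupleGetItem(rand, 0)  # updated key for the next call
   values = relay.TupleGetItem(rand, 1)   # tensor of random bits

   # Hypothetical shape of this proposal: make the generator selectable,
   # e.g. relay.random.generate(key, (8, 8), algorithm="philox"),
   # so GPU deployments can pick the faster algorithm.
   ```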


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] comaniac commented on a change in pull request #8069: [Relay] [Pass] Add mixed precision (e.g. FP16) model conversion pass

2021-06-15 Thread GitBox


comaniac commented on a change in pull request #8069:
URL: https://github.com/apache/tvm/pull/8069#discussion_r652216408



##
File path: src/relay/transforms/to_mixed_precision.cc
##
@@ -0,0 +1,420 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ *
+ * \file to_mixed_precision.cc
+ * \brief Automatic mixed floating point precision for relay graphs. i.e. turn 
a graph into fp16.
+ *
+ */
+
+#include 
+#include 
+#include 
+#include 
+
+#include 
+
+#include "pattern_utils.h"
+
+namespace tvm {
+namespace relay {
+
+// A callable which hashes std::pair
+struct pair_hash {
+  template <class T1, class T2>
+  std::size_t operator()(const std::pair<T1, T2>& pair) const {
+    auto h1 = std::hash<T1>()(pair.first);
+    auto h2 = std::hash<T2>()(pair.second);
+
+    // Use boost's combine_hash strategy
+    return h1 ^ (h1 + 0x9e3779b9 + (h2 << 6) + (h2 >> 2));
+  }
+};
+
+// MIXED_PRECISION_ALWAYS ops should always be done in lower precision due to 
the speed and memory
+// savings. MIXED_PRECISION_FOLLOW ops can be done in lower precision but 
don't have speedups to
+// justify a cast. MIXED_PRECISION_NEVER colored ops should not be done in 
lower precision due to
+// numerical reasons.
+enum MixedTypeConversionCategory : int {
+  MIXED_PRECISION_ALWAYS = 0,
+  MIXED_PRECISION_FOLLOW = 1,
+  MIXED_PRECISION_NEVER = 2
+};
+
+// A map of a parent node and a wanted dtype to existing nodes casted to the 
wanted dtype
+using CachedCastNodes = std::unordered_map, Expr, pair_hash>;
+
+// Return array is of type : [MixedTypeConversionCategory (int), String, 
String]
+// The fields are  : [ConversionCategory, accumulation_datatype, 
output_datatype]
+// Call is a call node, DataType is the mixed precision type
+using FTVMMixedPrecisionConversionType = 
runtime::TypedPackedFunc(
+const Call& call_node, const std::string& target_dtype_str)>;
+
+class MixedPrecisionPass : public MixedModeMutator {
+ private:
+  CachedCastNodes cast_nodes_cache_;
+
+  /*! \brief The target datatype we want to convert to e.g. FP16 */
+  const DataType mixed_precision_type;

Review comment:
   Please check through all changes you've made.
   ```suggestion
 const DataType mixed_precision_type_;
   ```

##
File path: src/relay/transforms/to_mixed_precision.cc
##
@@ -0,0 +1,420 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ *
+ * \file to_mixed_precision.cc
+ * \brief Automatic mixed floating point precision for relay graphs. i.e. turn 
a graph into fp16.
+ *
+ */
+
+#include 
+#include 
+#include 
+#include 
+
+#include 
+
+#include "pattern_utils.h"
+
+namespace tvm {
+namespace relay {
+
+// A callable which hashes std::pair
+struct pair_hash {
+  template <class T1, class T2>
+  std::size_t operator()(const std::pair<T1, T2>& pair) const {
+    auto h1 = std::hash<T1>()(pair.first);
+    auto h2 = std::hash<T2>()(pair.second);
+
+    // Use boost's combine_hash strategy
+    return h1 ^ (h1 + 0x9e3779b9 + (h2 << 6) + (h2 >> 2));
+  }
+};
+
+// MIXED_PRECISION_ALWAYS ops should always be done in lower precision due to 
the speed and memory
+// savings. MIXED_PRECISION_FOLLOW ops can be done in lower precision but 
don't have speedups to
+// justify a cast. MIXED_PRECISION_NEVER colored ops should not be done in 
lower precision due to
+// numerical reasons.
+enum MixedTypeConversionCategory : int {
+  MIXED_PRECISION_ALWAYS = 0,
+  MIXED_PRECISION_FOLLOW = 1,
+  MIXED_PRECISION_NEVER = 2
+};
+
+// A map of a 

[GitHub] [tvm] AndrewZhaoLuo commented on a change in pull request #8069: [Relay] [Pass] Add mixed precision (e.g. FP16) model conversion pass

2021-06-15 Thread GitBox


AndrewZhaoLuo commented on a change in pull request #8069:
URL: https://github.com/apache/tvm/pull/8069#discussion_r652211945



##
File path: src/relay/transforms/to_mixed_precision.cc
##
@@ -0,0 +1,409 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ *
+ * \file to_mixed_precision.cc
+ * \brief Automatic mixed floating point precision for relay graphs. i.e. turn 
a graph into fp16.
+ *
+ */
+
+#include 
+#include 
+#include 
+#include 
+
+#include 
+
+#include "pattern_utils.h"
+
+namespace tvm {
+namespace relay {
+
+// A callable which hashes std::pair
+struct pair_hash {
+  template <class T1, class T2>
+  std::size_t operator()(const std::pair<T1, T2>& pair) const {
+    auto h1 = std::hash<T1>()(pair.first);
+    auto h2 = std::hash<T2>()(pair.second);
+
+    // Use boost's combine_hash strategy
+    return h1 ^ (h1 + 0x9e3779b9 + (h2 << 6) + (h2 >> 2));
+  }
+};
+
+// MIXED_PRECISION_ALWAYS ops should always be done in lower precision due to 
the speed and memory
+// savings. MIXED_PRECISION_FOLLOW ops can be done in lower precision but 
don't have speedups to
+// justify a cast. MIXED_PRECISION_NEVER colored ops should not be done in 
lower precision due to
+// numerical reasons.
+enum MixedTypeConversionCategory : int {
+  MIXED_PRECISION_ALWAYS = 0,
+  MIXED_PRECISION_FOLLOW = 1,
+  MIXED_PRECISION_NEVER = 2
+};
+
+// A map of a parent node and a wanted dtype to existing nodes casted to the 
wanted dtype
+using CachedCastNodes = std::unordered_map, Expr, pair_hash>;
+
+// Return array is of type : [MixedTypeConversionCategory (int), String, 
String]
+// The fields are  : [ConversionCategory, accumulation_datatype, 
output_datatype]
+// Call is a call node, DataType is the mixed precision type
+using FTVMMixedPrecisionConversionType = 
runtime::TypedPackedFunc(
+const Call& call_node, const std::string& target_dtype_str)>;
+
+class MixedPrecisionPass : public MixedModeMutator {
+ private:
+  CachedCastNodes cast_nodes_cache;
+
+  // The target datatype we want to convert to e.g. FP16
+  const DataType mixed_precision_type;
+
+  // If false, throws a fatal error if an op which is not registered with a
+  // FTVMMixedPrecisionConversionType is encountered.
+  bool ignore_missing_ops;
+
+  // If true, emits a warning if an op which is not registered with a
+  // FTVMMixedPrecisionConversionType is encountered.
+  bool warn_missing_ops;
+
+  Attrs GetNewAttrs(const CallNode* call, const DataType& accumulation_dtype) 
const {
+/* If the accumulation dtype is in the attributes make a copy and mutate 
the field. */
+Attrs cur_attrs = call->attrs;
+if (cur_attrs.get() != nullptr) {
+  // TODO(AndrewZhaoLuo): Figure out a better way to do this
+  // modify output_dtype attributes (accumulation dtypes for ops)
+  if (auto attrs = cur_attrs.as()) {
+return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = cur_attrs.as()) {
+return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = cur_attrs.as()) {
+return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = cur_attrs.as()) {
+return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = cur_attrs.as()) {
+return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = 
cur_attrs.as()) {
+return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = cur_attrs.as()) {
+return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = cur_attrs.as()) {
+return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = cur_attrs.as()) {
+return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = cur_attrs.as()) {
+return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = cur_attrs.as()) {
+return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = cur_attrs.as()) {
+return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  }
+
+  // modify dtype attributes 

[GitHub] [tvm] AndrewZhaoLuo commented on a change in pull request #8069: [Relay] [Pass] Add mixed precision (e.g. FP16) model conversion pass

2021-06-15 Thread GitBox


AndrewZhaoLuo commented on a change in pull request #8069:
URL: https://github.com/apache/tvm/pull/8069#discussion_r652211415



##
File path: src/relay/transforms/to_mixed_precision.cc
##
@@ -0,0 +1,409 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ *
+ * \file to_mixed_precision.cc
+ * \brief Automatic mixed floating point precision for relay graphs. i.e. turn 
a graph into fp16.
+ *
+ */
+
+#include 
+#include 
+#include 
+#include 
+
+#include 
+
+#include "pattern_utils.h"
+
+namespace tvm {
+namespace relay {
+
+// A callable which hashes std::pair
+struct pair_hash {
+  template <class T1, class T2>
+  std::size_t operator()(const std::pair<T1, T2>& pair) const {
+    auto h1 = std::hash<T1>()(pair.first);
+    auto h2 = std::hash<T2>()(pair.second);
+
+    // Use boost's combine_hash strategy
+    return h1 ^ (h1 + 0x9e3779b9 + (h2 << 6) + (h2 >> 2));
+  }
+};
+
+// MIXED_PRECISION_ALWAYS ops should always be done in lower precision due to 
the speed and memory
+// savings. MIXED_PRECISION_FOLLOW ops can be done in lower precision but 
don't have speedups to
+// justify a cast. MIXED_PRECISION_NEVER colored ops should not be done in 
lower precision due to
+// numerical reasons.
+enum MixedTypeConversionCategory : int {
+  MIXED_PRECISION_ALWAYS = 0,
+  MIXED_PRECISION_FOLLOW = 1,
+  MIXED_PRECISION_NEVER = 2
+};
+
+// A map of a parent node and a wanted dtype to existing nodes casted to the 
wanted dtype
+using CachedCastNodes = std::unordered_map, Expr, pair_hash>;
+
+// Return array is of type : [MixedTypeConversionCategory (int), String, 
String]
+// The fields are  : [ConversionCategory, accumulation_datatype, 
output_datatype]
+// Call is a call node, DataType is the mixed precision type
+using FTVMMixedPrecisionConversionType = 
runtime::TypedPackedFunc(
+const Call& call_node, const std::string& target_dtype_str)>;
+
+class MixedPrecisionPass : public MixedModeMutator {
+ private:
+  CachedCastNodes cast_nodes_cache;
+
+  // The target datatype we want to convert to e.g. FP16
+  const DataType mixed_precision_type;
+
+  // If false, throws a fatal error if an op which is not registered with a
+  // FTVMMixedPrecisionConversionType is encountered.
+  bool ignore_missing_ops;
+
+  // If true, emits a warning if an op which is not registered with a
+  // FTVMMixedPrecisionConversionType is encountered.
+  bool warn_missing_ops;
+
+  Attrs GetNewAttrs(const CallNode* call, const DataType& accumulation_dtype) 
const {
+/* If the accumulation dtype is in the attributes make a copy and mutate 
the field. */
+Attrs cur_attrs = call->attrs;
+if (cur_attrs.get() != nullptr) {
+  // TODO(AndrewZhaoLuo): Figure out a better way to do this
+  // modify output_dtype attributes (accumulation dtypes for ops)
+  if (auto attrs = cur_attrs.as()) {
+return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = cur_attrs.as()) {
+return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = cur_attrs.as()) {
+return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = cur_attrs.as()) {
+return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = cur_attrs.as()) {
+return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = 
cur_attrs.as()) {
+return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = cur_attrs.as()) {
+return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = cur_attrs.as()) {
+return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = cur_attrs.as()) {
+return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = cur_attrs.as()) {
+return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = cur_attrs.as()) {
+return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = cur_attrs.as()) {
+return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  }
+
+  // modify dtype attributes 

[GitHub] [tvm] rohanmukh commented on a change in pull request #8251: [Frontend, Tensorflow] Support for broadcasting in batch_matmul when shapes differ

2021-06-15 Thread GitBox


rohanmukh commented on a change in pull request #8251:
URL: https://github.com/apache/tvm/pull/8251#discussion_r652209025



##
File path: python/tvm/relay/frontend/tensorflow_ops.py
##
@@ -1162,6 +1163,9 @@ def _impl(inputs, attr, params, mod):
         adj_x = attr["adj_x"]
         adj_y = attr["adj_y"]
         input_x = _op.transpose(input_x, axes=[0, 2, 1]) if adj_x else input_x
+        shape_y = _infer_shape(input_y, mod)
+        if len(shape_y) < 3:
+            input_y = _op.reshape(input_y, (1, orig_shape_y[-2], orig_shape_y[-1]))

Review comment:
   Thanks @comaniac. It is needed for cases where `ndim = len(shape_x)` is <= 3;
   I have test cases that fail without this line, e.g.
   `_test_batch_matmul((1, 8, 64), (64, 1), "float32", False, False)`. However,
   the case you mentioned can also happen for certain input configurations, so I
   refactored the logic to avoid it.
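
   To illustrate the rank promotion outside of TVM, a minimal numpy sketch of
   the same idea (the helper is hypothetical, not the frontend code itself):

   ```python
   import numpy as np

   def batch_matmul_promote(x, y):
       # Promote a rank-2 RHS to rank-3 so the batch dimension broadcasts.
       if y.ndim < 3:
           y = y.reshape((1,) + y.shape[-2:])  # (p, q) -> (1, p, q)
       return np.matmul(x, y)

   x = np.ones((1, 8, 64), dtype="float32")
   y = np.ones((64, 1), dtype="float32")
   print(batch_matmul_promote(x, y).shape)  # (1, 8, 1)
   ```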




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] comaniac commented on a change in pull request #8251: [Frontend, Tensorflow] Support for broadcasting in batch_matmul when shapes differ

2021-06-15 Thread GitBox


comaniac commented on a change in pull request #8251:
URL: https://github.com/apache/tvm/pull/8251#discussion_r652201290



##
File path: python/tvm/relay/frontend/tensorflow_ops.py
##
@@ -1162,6 +1163,9 @@ def _impl(inputs, attr, params, mod):
         adj_x = attr["adj_x"]
         adj_y = attr["adj_y"]
         input_x = _op.transpose(input_x, axes=[0, 2, 1]) if adj_x else input_x
+        shape_y = _infer_shape(input_y, mod)
+        if len(shape_y) < 3:
+            input_y = _op.reshape(input_y, (1, orig_shape_y[-2], orig_shape_y[-1]))

Review comment:
   Is this required for static shape? It seems to me that you'll get two 
reshapes, although it should be simplified later by the SimplifyExpr pass.
   ```
   %1 = reshape(%y, (1, p, q));
   %2 = reshape(%1, (1, p, q));
   ```
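
   A quick way to check is to run the pass directly; a sketch (whether the fold
   actually fires depends on the SimplifyExpr patterns on current main):

   ```python
   import tvm
   from tvm import relay

   y = relay.var("y", shape=(4, 5), dtype="float32")
   e = relay.reshape(relay.reshape(y, (1, 4, 5)), (1, 4, 5))
   mod = tvm.IRModule.from_expr(relay.Function([y], e))
   mod = relay.transform.InferType()(mod)
   mod = relay.transform.SimplifyExpr()(mod)
   print(mod)  # the back-to-back reshapes should fold into one
   ```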




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] comaniac commented on a change in pull request #8069: [Relay] [Pass] Add mixed precision (e.g. FP16) model conversion pass

2021-06-15 Thread GitBox


comaniac commented on a change in pull request #8069:
URL: https://github.com/apache/tvm/pull/8069#discussion_r652195571



##
File path: src/relay/transforms/to_mixed_precision.cc
##
@@ -0,0 +1,409 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ *
+ * \file to_mixed_precision.cc
+ * \brief Automatic mixed floating point precision for relay graphs. i.e. turn 
a graph into fp16.
+ *
+ */
+
+#include 
+#include 
+#include 
+#include 
+
+#include 
+
+#include "pattern_utils.h"
+
+namespace tvm {
+namespace relay {
+
+// A callable which hashes std::pair
+struct pair_hash {
+  template <class T1, class T2>
+  std::size_t operator()(const std::pair<T1, T2>& pair) const {
+    auto h1 = std::hash<T1>()(pair.first);
+    auto h2 = std::hash<T2>()(pair.second);
+
+    // Use boost's combine_hash strategy
+    return h1 ^ (h1 + 0x9e3779b9 + (h2 << 6) + (h2 >> 2));
+  }
+};
+
+// MIXED_PRECISION_ALWAYS ops should always be done in lower precision due to 
the speed and memory
+// savings. MIXED_PRECISION_FOLLOW ops can be done in lower precision but 
don't have speedups to
+// justify a cast. MIXED_PRECISION_NEVER colored ops should not be done in 
lower precision due to
+// numerical reasons.
+enum MixedTypeConversionCategory : int {
+  MIXED_PRECISION_ALWAYS = 0,
+  MIXED_PRECISION_FOLLOW = 1,
+  MIXED_PRECISION_NEVER = 2
+};
+
+// A map of a parent node and a wanted dtype to existing nodes casted to the 
wanted dtype
+using CachedCastNodes = std::unordered_map, Expr, pair_hash>;
+
+// Return array is of type : [MixedTypeConversionCategory (int), String, 
String]
+// The fields are  : [ConversionCategory, accumulation_datatype, 
output_datatype]
+// Call is a call node, DataType is the mixed precision type
+using FTVMMixedPrecisionConversionType = 
runtime::TypedPackedFunc(
+const Call& call_node, const std::string& target_dtype_str)>;
+
+class MixedPrecisionPass : public MixedModeMutator {
+ private:
+  CachedCastNodes cast_nodes_cache;
+
+  // The target datatype we want to convert to e.g. FP16
+  const DataType mixed_precision_type;
+
+  // If false, throws a fatal error if an op which is not registered with a
+  // FTVMMixedPrecisionConversionType is encountered.
+  bool ignore_missing_ops;
+
+  // If true, emits a warning if an op which is not registered with a
+  // FTVMMixedPrecisionConversionType is encountered.
+  bool warn_missing_ops;
+
+  Attrs GetNewAttrs(const CallNode* call, const DataType& accumulation_dtype) 
const {
+/* If the accumulation dtype is in the attributes make a copy and mutate 
the field. */
+Attrs cur_attrs = call->attrs;
+if (cur_attrs.get() != nullptr) {
+  // TODO(AndrewZhaoLuo): Figure out a better way to do this
+  // modify output_dtype attributes (accumulation dtypes for ops)
+  if (auto attrs = cur_attrs.as()) {
+return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = cur_attrs.as()) {
+return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = cur_attrs.as()) {
+return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = cur_attrs.as()) {
+return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = cur_attrs.as()) {
+return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = 
cur_attrs.as()) {
+return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = cur_attrs.as()) {
+return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = cur_attrs.as()) {
+return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = cur_attrs.as()) {
+return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = cur_attrs.as()) {
+return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = cur_attrs.as()) {
+return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = cur_attrs.as()) {
+return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  }
+
+  // modify dtype attributes (creating 

[GitHub] [tvm] AndrewZhaoLuo commented on a change in pull request #8069: [Relay] [Pass] Add mixed precision (e.g. FP16) model conversion pass

2021-06-15 Thread GitBox


AndrewZhaoLuo commented on a change in pull request #8069:
URL: https://github.com/apache/tvm/pull/8069#discussion_r652192089



##
File path: src/relay/transforms/to_mixed_precision.cc
##
@@ -0,0 +1,409 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ *
+ * \file to_mixed_precision.cc
+ * \brief Automatic mixed floating point precision for relay graphs. i.e. turn 
a graph into fp16.
+ *
+ */
+
+#include 
+#include 
+#include 
+#include 
+
+#include 
+
+#include "pattern_utils.h"
+
+namespace tvm {
+namespace relay {
+
+// A callable which hashes std::pair
+struct pair_hash {
+  template <class T1, class T2>
+  std::size_t operator()(const std::pair<T1, T2>& pair) const {
+    auto h1 = std::hash<T1>()(pair.first);
+    auto h2 = std::hash<T2>()(pair.second);
+
+    // Use boost's combine_hash strategy
+    return h1 ^ (h1 + 0x9e3779b9 + (h2 << 6) + (h2 >> 2));
+  }
+};
+
+// MIXED_PRECISION_ALWAYS ops should always be done in lower precision due to 
the speed and memory
+// savings. MIXED_PRECISION_FOLLOW ops can be done in lower precision but 
don't have speedups to
+// justify a cast. MIXED_PRECISION_NEVER colored ops should not be done in 
lower precision due to
+// numerical reasons.
+enum MixedTypeConversionCategory : int {
+  MIXED_PRECISION_ALWAYS = 0,
+  MIXED_PRECISION_FOLLOW = 1,
+  MIXED_PRECISION_NEVER = 2
+};
+
+// A map of a parent node and a wanted dtype to existing nodes casted to the 
wanted dtype
+using CachedCastNodes = std::unordered_map, Expr, pair_hash>;
+
+// Return array is of type : [MixedTypeConversionCategory (int), String, 
String]
+// The fields are  : [ConversionCategory, accumulation_datatype, 
output_datatype]
+// Call is a call node, DataType is the mixed precision type
+using FTVMMixedPrecisionConversionType = 
runtime::TypedPackedFunc(
+const Call& call_node, const std::string& target_dtype_str)>;
+
+class MixedPrecisionPass : public MixedModeMutator {
+ private:
+  CachedCastNodes cast_nodes_cache;
+
+  // The target datatype we want to convert to e.g. FP16
+  const DataType mixed_precision_type;
+
+  // If false, throws a fatal error if an op which is not registered with a
+  // FTVMMixedPrecisionConversionType is encountered.
+  bool ignore_missing_ops;
+
+  // If true, emits a warning if an op which is not registered with a
+  // FTVMMixedPrecisionConversionType is encountered.
+  bool warn_missing_ops;
+
+  Attrs GetNewAttrs(const CallNode* call, const DataType& accumulation_dtype) 
const {
+/* If the accumulation dtype is in the attributes make a copy and mutate 
the field. */
+Attrs cur_attrs = call->attrs;
+if (cur_attrs.get() != nullptr) {
+  // TODO(AndrewZhaoLuo): Figure out a better way to do this
+  // modify output_dtype attributes (accumulation dtypes for ops)
+  if (auto attrs = cur_attrs.as()) {
+return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = cur_attrs.as()) {
+return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = cur_attrs.as()) {
+return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = cur_attrs.as()) {
+return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = cur_attrs.as()) {
+return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = 
cur_attrs.as()) {
+return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = cur_attrs.as()) {
+return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = cur_attrs.as()) {
+return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = cur_attrs.as()) {
+return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = cur_attrs.as()) {
+return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = cur_attrs.as()) {
+return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = cur_attrs.as()) {
+return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  }
+
+  // modify dtype attributes 

[GitHub] [tvm] AndrewZhaoLuo commented on a change in pull request #8069: [Relay] [Pass] Add mixed precision (e.g. FP16) model conversion pass

2021-06-15 Thread GitBox


AndrewZhaoLuo commented on a change in pull request #8069:
URL: https://github.com/apache/tvm/pull/8069#discussion_r652191690



##
File path: src/relay/transforms/to_mixed_precision.cc
##
@@ -0,0 +1,409 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ *
+ * \file to_mixed_precision.cc
+ * \brief Automatic mixed floating point precision for relay graphs. i.e. turn 
a graph into fp16.
+ *
+ */
+
+#include 
+#include 
+#include 
+#include 
+
+#include 
+
+#include "pattern_utils.h"
+
+namespace tvm {
+namespace relay {
+
+// A callable which hashes std::pair
+struct pair_hash {
+  template <class T1, class T2>
+  std::size_t operator()(const std::pair<T1, T2>& pair) const {
+    auto h1 = std::hash<T1>()(pair.first);
+    auto h2 = std::hash<T2>()(pair.second);
+
+    // Use boost's combine_hash strategy
+    return h1 ^ (h1 + 0x9e3779b9 + (h2 << 6) + (h2 >> 2));
+  }
+};
+
+// MIXED_PRECISION_ALWAYS ops should always be done in lower precision due to 
the speed and memory
+// savings. MIXED_PRECISION_FOLLOW ops can be done in lower precision but 
don't have speedups to
+// justify a cast. MIXED_PRECISION_NEVER colored ops should not be done in 
lower precision due to
+// numerical reasons.
+enum MixedTypeConversionCategory : int {
+  MIXED_PRECISION_ALWAYS = 0,
+  MIXED_PRECISION_FOLLOW = 1,
+  MIXED_PRECISION_NEVER = 2
+};
+
+// A map of a parent node and a wanted dtype to existing nodes casted to the 
wanted dtype
+using CachedCastNodes = std::unordered_map, Expr, pair_hash>;
+
+// Return array is of type : [MixedTypeConversionCategory (int), String, 
String]
+// The fields are  : [ConversionCategory, accumulation_datatype, 
output_datatype]
+// Call is a call node, DataType is the mixed precision type
+using FTVMMixedPrecisionConversionType = 
runtime::TypedPackedFunc(
+const Call& call_node, const std::string& target_dtype_str)>;
+
+class MixedPrecisionPass : public MixedModeMutator {
+ private:
+  CachedCastNodes cast_nodes_cache;
+
+  // The target datatype we want to convert to e.g. FP16
+  const DataType mixed_precision_type;
+
+  // If false, throws a fatal error if an op which is not registered with a
+  // FTVMMixedPrecisionConversionType is encountered.
+  bool ignore_missing_ops;
+
+  // If true, emits a warning if an op which is not registered with a
+  // FTVMMixedPrecisionConversionType is encountered.
+  bool warn_missing_ops;
+
+  Attrs GetNewAttrs(const CallNode* call, const DataType& accumulation_dtype) 
const {
+/* If the accumulation dtype is in the attributes make a copy and mutate 
the field. */
+Attrs cur_attrs = call->attrs;
+if (cur_attrs.get() != nullptr) {
+  // TODO(AndrewZhaoLuo): Figure out a better way to do this
+  // modify output_dtype attributes (accumulation dtypes for ops)
+  if (auto attrs = cur_attrs.as()) {
+return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = cur_attrs.as()) {
+return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = cur_attrs.as()) {
+return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = cur_attrs.as()) {
+return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = cur_attrs.as()) {
+return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = 
cur_attrs.as()) {
+return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = cur_attrs.as()) {
+return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = cur_attrs.as()) {
+return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = cur_attrs.as()) {
+return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = cur_attrs.as()) {
+return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = cur_attrs.as()) {
+return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = cur_attrs.as()) {
+return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  }
+
+  // modify dtype attributes 

[GitHub] [tvm] comaniac commented on a change in pull request #8069: [Relay] [Pass] Add mixed precision (e.g. FP16) model conversion pass

2021-06-15 Thread GitBox


comaniac commented on a change in pull request #8069:
URL: https://github.com/apache/tvm/pull/8069#discussion_r652190924



##
File path: src/relay/transforms/to_mixed_precision.cc
##
@@ -0,0 +1,409 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ *
+ * \file to_mixed_precision.cc
+ * \brief Automatic mixed floating point precision for relay graphs. i.e. turn 
a graph into fp16.
+ *
+ */
+
+#include 
+#include 
+#include 
+#include 
+
+#include 
+
+#include "pattern_utils.h"
+
+namespace tvm {
+namespace relay {
+
+// A callable which hashes std::pair
+struct pair_hash {
+  template <class T1, class T2>
+  std::size_t operator()(const std::pair<T1, T2>& pair) const {
+    auto h1 = std::hash<T1>()(pair.first);
+    auto h2 = std::hash<T2>()(pair.second);
+
+    // Use boost's combine_hash strategy
+    return h1 ^ (h1 + 0x9e3779b9 + (h2 << 6) + (h2 >> 2));
+  }
+};
+
+// MIXED_PRECISION_ALWAYS ops should always be done in lower precision due to 
the speed and memory
+// savings. MIXED_PRECISION_FOLLOW ops can be done in lower precision but 
don't have speedups to
+// justify a cast. MIXED_PRECISION_NEVER colored ops should not be done in 
lower precision due to
+// numerical reasons.
+enum MixedTypeConversionCategory : int {
+  MIXED_PRECISION_ALWAYS = 0,
+  MIXED_PRECISION_FOLLOW = 1,
+  MIXED_PRECISION_NEVER = 2
+};
+
+// A map of a parent node and a wanted dtype to existing nodes casted to the 
wanted dtype
+using CachedCastNodes = std::unordered_map, Expr, pair_hash>;
+
+// Return array is of type : [MixedTypeConversionCategory (int), String, 
String]
+// The fields are  : [ConversionCategory, accumulation_datatype, 
output_datatype]
+// Call is a call node, DataType is the mixed precision type
+using FTVMMixedPrecisionConversionType = 
runtime::TypedPackedFunc(
+const Call& call_node, const std::string& target_dtype_str)>;
+
+class MixedPrecisionPass : public MixedModeMutator {
+ private:
+  CachedCastNodes cast_nodes_cache;
+
+  // The target datatype we want to convert to e.g. FP16
+  const DataType mixed_precision_type;
+
+  // If false, throws a fatal error if an op which is not registered with a
+  // FTVMMixedPrecisionConversionType is encountered.
+  bool ignore_missing_ops;
+
+  // If true, emits a warning if an op which is not registered with a
+  // FTVMMixedPrecisionConversionType is encountered.
+  bool warn_missing_ops;
+
+  Attrs GetNewAttrs(const CallNode* call, const DataType& accumulation_dtype) 
const {
+/* If the accumulation dtype is in the attributes make a copy and mutate 
the field. */
+Attrs cur_attrs = call->attrs;
+if (cur_attrs.get() != nullptr) {
+  // TODO(AndrewZhaoLuo): Figure out a better way to do this
+  // modify output_dtype attributes (accumulation dtypes for ops)
+  if (auto attrs = cur_attrs.as()) {
+return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = cur_attrs.as()) {
+return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = cur_attrs.as()) {
+return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = cur_attrs.as()) {
+return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = cur_attrs.as()) {
+return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = 
cur_attrs.as()) {
+return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = cur_attrs.as()) {
+return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = cur_attrs.as()) {
+return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = cur_attrs.as()) {
+return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = cur_attrs.as()) {
+return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = cur_attrs.as()) {
+return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = cur_attrs.as()) {
+return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  }
+
+  // modify dtype attributes (creating 

[GitHub] [tvm] AndrewZhaoLuo commented on a change in pull request #8069: [Relay] [Pass] Add mixed precision (e.g. FP16) model conversion pass

2021-06-15 Thread GitBox


AndrewZhaoLuo commented on a change in pull request #8069:
URL: https://github.com/apache/tvm/pull/8069#discussion_r652190815



##
File path: src/relay/transforms/to_mixed_precision.cc
##
@@ -0,0 +1,409 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ *
+ * \file to_mixed_precision.cc
+ * \brief Automatic mixed floating point precision for relay graphs. i.e. turn 
a graph into fp16.
+ *
+ */
+
+#include 
+#include 
+#include 
+#include 
+
+#include 
+
+#include "pattern_utils.h"
+
+namespace tvm {
+namespace relay {
+
+// A callable which hashes std::pair
+struct pair_hash {
+  template <class T1, class T2>
+  std::size_t operator()(const std::pair<T1, T2>& pair) const {
+    auto h1 = std::hash<T1>()(pair.first);
+    auto h2 = std::hash<T2>()(pair.second);
+
+    // Use boost's combine_hash strategy
+    return h1 ^ (h1 + 0x9e3779b9 + (h2 << 6) + (h2 >> 2));
+  }
+};
+
+// MIXED_PRECISION_ALWAYS ops should always be done in lower precision due to 
the speed and memory
+// savings. MIXED_PRECISION_FOLLOW ops can be done in lower precision but 
don't have speedups to
+// justify a cast. MIXED_PRECISION_NEVER colored ops should not be done in 
lower precision due to
+// numerical reasons.
+enum MixedTypeConversionCategory : int {
+  MIXED_PRECISION_ALWAYS = 0,
+  MIXED_PRECISION_FOLLOW = 1,
+  MIXED_PRECISION_NEVER = 2
+};
+
+// A map of a parent node and a wanted dtype to existing nodes casted to the 
wanted dtype
+using CachedCastNodes = std::unordered_map, Expr, pair_hash>;
+
+// Return array is of type : [MixedTypeConversionCategory (int), String, 
String]
+// The fields are  : [ConversionCategory, accumulation_datatype, 
output_datatype]
+// Call is a call node, DataType is the mixed precision type
+using FTVMMixedPrecisionConversionType = 
runtime::TypedPackedFunc(
+const Call& call_node, const std::string& target_dtype_str)>;
+
+class MixedPrecisionPass : public MixedModeMutator {
+ private:
+  CachedCastNodes cast_nodes_cache;
+
+  // The target datatype we want to convert to e.g. FP16
+  const DataType mixed_precision_type;
+
+  // If false, throws a fatal error if an op which is not registered with a
+  // FTVMMixedPrecisionConversionType is encountered.
+  bool ignore_missing_ops;
+
+  // If true, emits a warning if an op which is not registered with a
+  // FTVMMixedPrecisionConversionType is encountered.
+  bool warn_missing_ops;
+
+  Attrs GetNewAttrs(const CallNode* call, const DataType& accumulation_dtype) 
const {
+/* If the accumulation dtype is in the attributes make a copy and mutate 
the field. */
+Attrs cur_attrs = call->attrs;
+if (cur_attrs.get() != nullptr) {
+  // TODO(AndrewZhaoLuo): Figure out a better way to do this
+  // modify output_dtype attributes (accumulation dtypes for ops)
+  if (auto attrs = cur_attrs.as()) {
+return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = cur_attrs.as()) {
+return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = cur_attrs.as()) {
+return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = cur_attrs.as()) {
+return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = cur_attrs.as()) {
+return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = 
cur_attrs.as()) {
+return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = cur_attrs.as()) {
+return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = cur_attrs.as()) {
+return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = cur_attrs.as()) {
+return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = cur_attrs.as()) {
+return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = cur_attrs.as()) {
+return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = cur_attrs.as()) {
+return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  }
+
+  // modify dtype attributes 

[GitHub] [tvm] AndrewZhaoLuo commented on a change in pull request #8069: [Relay] [Pass] Add mixed precision (e.g. FP16) model conversion pass

2021-06-15 Thread GitBox


AndrewZhaoLuo commented on a change in pull request #8069:
URL: https://github.com/apache/tvm/pull/8069#discussion_r652190632



##
File path: python/tvm/relay/transform/transform.py
##
@@ -1199,3 +1198,18 @@ def FakeQuantizationToInteger():
         The registered SimplifyExpr pass.
     """
     return _ffi_api.FakeQuantizationToInteger()
+
+
+def ToMixedPrecision(
+    mixed_precision_type="float16", ignore_missing_ops=True, warn_missing_ops=True
+):
+    """
+    Automatic mixed precision rewriter. Rewrite an FP32 relay graph into a version
+    where as many operations as possible are in the target mixed_precision_type.

Review comment:
   Done
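
   For reference, a usage sketch of the pass as proposed in this diff (the API
   is still under review and may change before merge):

   ```python
   import tvm
   from tvm import relay

   def convert_to_fp16(mod: tvm.IRModule) -> tvm.IRModule:
       # Type information must be available before the rewrite runs.
       mod = relay.transform.InferType()(mod)
       return relay.transform.ToMixedPrecision("float16")(mod)
   ```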




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] AndrewZhaoLuo commented on a change in pull request #8069: [Relay] [Pass] Add mixed precision (e.g. FP16) model conversion pass

2021-06-15 Thread GitBox


AndrewZhaoLuo commented on a change in pull request #8069:
URL: https://github.com/apache/tvm/pull/8069#discussion_r652189117



##
File path: src/relay/transforms/to_mixed_precision.cc
##
@@ -0,0 +1,409 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ *
+ * \file to_mixed_precision.cc
+ * \brief Automatic mixed floating point precision for relay graphs. i.e. turn 
a graph into fp16.
+ *
+ */
+
+#include 
+#include 
+#include 
+#include 
+
+#include 
+
+#include "pattern_utils.h"
+
+namespace tvm {
+namespace relay {
+
+// A callable which hashes std::pair
+struct pair_hash {
+  template <class T1, class T2>
+  std::size_t operator()(const std::pair<T1, T2>& pair) const {
+    auto h1 = std::hash<T1>()(pair.first);
+    auto h2 = std::hash<T2>()(pair.second);
+
+    // Use boost's combine_hash strategy
+    return h1 ^ (h1 + 0x9e3779b9 + (h2 << 6) + (h2 >> 2));
+  }
+};
+
+// MIXED_PRECISION_ALWAYS ops should always be done in lower precision due to 
the speed and memory
+// savings. MIXED_PRECISION_FOLLOW ops can be done in lower precision but 
don't have speedups to
+// justify a cast. MIXED_PRECISION_NEVER colored ops should not be done in 
lower precision due to
+// numerical reasons.
+enum MixedTypeConversionCategory : int {
+  MIXED_PRECISION_ALWAYS = 0,
+  MIXED_PRECISION_FOLLOW = 1,
+  MIXED_PRECISION_NEVER = 2
+};
+
+// A map of a parent node and a wanted dtype to existing nodes casted to the 
wanted dtype
+using CachedCastNodes = std::unordered_map, Expr, pair_hash>;
+
+// Return array is of type : [MixedTypeConversionCategory (int), String, 
String]
+// The fields are  : [ConversionCategory, accumulation_datatype, 
output_datatype]
+// Call is a call node, DataType is the mixed precision type
+using FTVMMixedPrecisionConversionType = 
runtime::TypedPackedFunc(
+const Call& call_node, const std::string& target_dtype_str)>;
+
+class MixedPrecisionPass : public MixedModeMutator {
+ private:
+  CachedCastNodes cast_nodes_cache;
+
+  // The target datatype we want to convert to e.g. FP16
+  const DataType mixed_precision_type;
+
+  // If false, throws a fatal error if an op which is not registered with a
+  // FTVMMixedPrecisionConversionType is encountered.
+  bool ignore_missing_ops;
+
+  // If true, emits a warning if an op which is not registered with a
+  // FTVMMixedPrecisionConversionType is encountered.
+  bool warn_missing_ops;
+
+  Attrs GetNewAttrs(const CallNode* call, const DataType& accumulation_dtype) 
const {
+/* If the accumulation dtype is in the attributes make a copy and mutate 
the field. */
+Attrs cur_attrs = call->attrs;
+if (cur_attrs.get() != nullptr) {
+  // TODO(AndrewZhaoLuo): Figure out a better way to do this
+  // modify output_dtype attributes (accumulation dtypes for ops)
+  if (auto attrs = cur_attrs.as()) {
+return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = cur_attrs.as()) {
+return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = cur_attrs.as()) {
+return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = cur_attrs.as()) {
+return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = cur_attrs.as()) {
+return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = 
cur_attrs.as()) {
+return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = cur_attrs.as()) {
+return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = cur_attrs.as()) {
+return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = cur_attrs.as()) {
+return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = cur_attrs.as()) {
+return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = cur_attrs.as()) {
+return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = cur_attrs.as()) {
+return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  }
+
+  // modify dtype attributes 

[GitHub] [tvm] masahi commented on pull request #8244: [Metal] Fix bad stream after interrupted tuning session

2021-06-15 Thread GitBox


masahi commented on pull request #8244:
URL: https://github.com/apache/tvm/pull/8244#issuecomment-861870103


   @echuraev please fix the conflict.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] areusch commented on a change in pull request #7876: [iOS] Add tracker support into ios-rpc application

2021-06-15 Thread GitBox


areusch commented on a change in pull request #7876:
URL: https://github.com/apache/tvm/pull/7876#discussion_r652185799



##
File path: apps/ios_rpc/tvmrpc/RPCArgs.mm
##
@@ -0,0 +1,197 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+#import "RPCArgs.h"
+
+#import <Foundation/Foundation.h>
+
+#import "../../../src/support/socket.h"
+#import "../../../src/support/utils.h"
+
+#import <string>
+
+using std::string;
+
+const char* kUsage =
+"\n"
+"iOS tvmrpc application supported flags:\n"
+"--host_url  The tracker/proxy address, Default=0.0.0.0\n"
+"--host_port The tracker/proxy port, Default=9190\n"
+"--key   The key used to identify the device type in tracker. 
Default=\"\"\n"
+"--custom_addr   Custom IP Address to Report to RPC Tracker. 
Default=\"\"\n"
+"--immediate_connect   No UI interconnection, connect to tracker 
immediately. Default=False\n"
+"--verbose   Allow to print status info to std out. Default=False\n"
+"--server_mode   Server mode. Can be \"pure_server\", \"proxy\" or 
\"tracker\". "
+"Default=pure_server \n"
+"\n";
+
+struct RPCArgs_cpp {
+  string host_url = "0.0.0.0";
+  int host_port = 9190;
+
+  string key;
+  string custom_addr = "";
+
+  bool immediate_connect = false;
+  bool verbose = false;
+  char server_mode = 0;
+
+  operator RPCArgs() const {
+return RPCArgs{.host_url = host_url.c_str(),
+   .host_port = host_port,
+   .key = key.c_str(),
+   .custom_addr = custom_addr.c_str(),
+   .verbose = verbose,
+   .immediate_connect = immediate_connect,
+   .server_mode = server_mode};
+  };
+
+  RPCArgs_cpp& operator=(const RPCArgs& args) {
+host_url = args.host_url;
+host_port = args.host_port;
+key = args.key;
+custom_addr = args.custom_addr;
+verbose = args.verbose;
+immediate_connect = args.immediate_connect;
+server_mode = args.server_mode;
+return *this;
+  }
+};
+
+struct RPCArgs_cpp g_rpc_args;
+
+static void restore_from_cache() {
+  NSUserDefaults* defaults = [NSUserDefaults standardUserDefaults];
+
+  auto get_string_from_cache = [defaults](const char* key) {
+NSString* ns_key = [NSString stringWithUTF8String:key];
+NSString* ns_val = [defaults stringForKey:ns_key];
+return std::string(ns_val != nil ? [ns_val UTF8String] : "");
+  };
+
+  auto get_int_from_cache = [defaults](const char* key) {
+NSString* ns_key = [NSString stringWithUTF8String:key];
+    return static_cast<int>([defaults integerForKey:ns_key]);
+  };
+
+  g_rpc_args.host_url = get_string_from_cache("tmvrpc_url");
+  g_rpc_args.host_port = get_int_from_cache("tmvrpc_port");
+  g_rpc_args.key = get_string_from_cache("tmvrpc_key");
+}
+
+static void update_in_cache() {
+  NSUserDefaults* defaults = [NSUserDefaults standardUserDefaults];
+
+  [defaults setObject:[NSString 
stringWithUTF8String:g_rpc_args.host_url.c_str()]
+   forKey:@"tmvrpc_url"];
+  [defaults setInteger:g_rpc_args.host_port forKey:@"tmvrpc_port"];
+  [defaults setObject:[NSString stringWithUTF8String:g_rpc_args.key.c_str()] 
forKey:@"tmvrpc_key"];

Review comment:
   nit: tvm, here and above

##
File path: apps/ios_rpc/tvmrpc/RPCServer.mm
##
@@ -0,0 +1,809 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file ViewController.mm
+ */
+
+#import "RPCServer.h"
+
+#include 

[GitHub] [tvm] masahi merged pull request #8261: [Metal] Fix run metal model when non first device is selected

2021-06-15 Thread GitBox


masahi merged pull request #8261:
URL: https://github.com/apache/tvm/pull/8261


   






[tvm] branch main updated: [Metal] Fix run metal model when non first device is selected (#8261)

2021-06-15 Thread masahi
This is an automated email from the ASF dual-hosted git repository.

masahi pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git


The following commit(s) were added to refs/heads/main by this push:
 new a8663d2  [Metal] Fix run metal model when non first device is selected 
(#8261)
a8663d2 is described below

commit a8663d223a163a932b6c5ebe7d21108f98cd7b94
Author: Egor Churaev 
AuthorDate: Wed Jun 16 01:10:22 2021 +0300

[Metal] Fix run metal model when non first device is selected (#8261)

In case when we select non first Metal device, we got problem in
stream, due to we used wrong device_id in CopyDataFromTo.
---
 src/runtime/metal/metal_device_api.mm | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/runtime/metal/metal_device_api.mm 
b/src/runtime/metal/metal_device_api.mm
index 1c5666d..43d8ccd 100644
--- a/src/runtime/metal/metal_device_api.mm
+++ b/src/runtime/metal/metal_device_api.mm
@@ -206,11 +206,11 @@ void MetalWorkspace::CopyDataFromTo(const void* from, size_t from_offset, void*
   AUTORELEASEPOOL {
     this->Init();
     Device dev = dev_from;
+    if (dev_from.device_type == kDLCPU) dev = dev_to;
     Stream* s = GetStream(stream, dev.device_id);
     if (s->HasErrorHappened()) {
       LOG(FATAL) << "Error! Some problems on GPU happaned! Cannot copy data to current stream";
     }
-    if (dev_from.device_type == kDLCPU) dev = dev_to;
     id<MTLCommandBuffer> cb = s->GetCommandBuffer();
     int from_dev_type = static_cast<int>(dev_from.device_type);
     int to_dev_type = static_cast<int>(dev_to.device_type);


[GitHub] [tvm] AndrewZhaoLuo commented on a change in pull request #8069: [Relay] [Pass] Add mixed precision (e.g. FP16) model conversion pass

2021-06-15 Thread GitBox


AndrewZhaoLuo commented on a change in pull request #8069:
URL: https://github.com/apache/tvm/pull/8069#discussion_r652184550



##
File path: src/relay/transforms/to_mixed_precision.cc
##
@@ -0,0 +1,409 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ *
+ * \file to_mixed_precision.cc
+ * \brief Automatic mixed floating point precision for relay graphs. i.e. turn 
a graph into fp16.
+ *
+ */
+
+#include 
+#include 
+#include 
+#include 
+
+#include 
+
+#include "pattern_utils.h"
+
+namespace tvm {
+namespace relay {
+
+// A callable which hashes std::pair
+struct pair_hash {
+  template <class T1, class T2>
+  std::size_t operator()(const std::pair<T1, T2>& pair) const {
+    auto h1 = std::hash<T1>()(pair.first);
+    auto h2 = std::hash<T2>()(pair.second);
+
+    // Use boost's combine_hash strategy
+    return h1 ^ (h1 + 0x9e3779b9 + (h2 << 6) + (h2 >> 2));
+  }
+};
+
+// MIXED_PRECISION_ALWAYS ops should always be done in lower precision due to 
the speed and memory
+// savings. MIXED_PRECISION_FOLLOW ops can be done in lower precision but 
don't have speedups to
+// justify a cast. MIXED_PRECISION_NEVER colored ops should not be done in 
lower precision due to
+// numerical reasons.
+enum MixedTypeConversionCategory : int {
+  MIXED_PRECISION_ALWAYS = 0,
+  MIXED_PRECISION_FOLLOW = 1,
+  MIXED_PRECISION_NEVER = 2
+};
+
+// A map of a parent node and a wanted dtype to existing nodes casted to the wanted dtype
+using CachedCastNodes = std::unordered_map<std::pair<const ExprNode*, DataType>, Expr, pair_hash>;
+
+// Return array is of type : [MixedTypeConversionCategory (int), String, String]
+// The fields are  : [ConversionCategory, accumulation_datatype, output_datatype]
+// Call is a call node, DataType is the mixed precision type
+using FTVMMixedPrecisionConversionType = runtime::TypedPackedFunc<Array<ObjectRef>(
+    const Call& call_node, const std::string& target_dtype_str)>;
+
+class MixedPrecisionPass : public MixedModeMutator {
+ private:
+  CachedCastNodes cast_nodes_cache;

Review comment:
   done








[GitHub] [tvm] AndrewZhaoLuo commented on a change in pull request #8069: [Relay] [Pass] Add mixed precision (e.g. FP16) model conversion pass

2021-06-15 Thread GitBox


AndrewZhaoLuo commented on a change in pull request #8069:
URL: https://github.com/apache/tvm/pull/8069#discussion_r652182389



##
File path: src/relay/transforms/to_mixed_precision.cc
##
@@ -0,0 +1,409 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ *
+ * \file to_mixed_precision.cc
+ * \brief Automatic mixed floating point precision for relay graphs. i.e. turn 
a graph into fp16.
+ *
+ */
+
+#include 
+#include 
+#include 
+#include 
+
+#include 
+
+#include "pattern_utils.h"
+
+namespace tvm {
+namespace relay {
+
+// A callable which hashes std::pair
+struct pair_hash {
+  template <class T1, class T2>
+  std::size_t operator()(const std::pair<T1, T2>& pair) const {
+    auto h1 = std::hash<T1>()(pair.first);
+    auto h2 = std::hash<T2>()(pair.second);
+
+    // Use boost's combine_hash strategy
+    return h1 ^ (h1 + 0x9e3779b9 + (h2 << 6) + (h2 >> 2));
+  }
+};
+
+// MIXED_PRECISION_ALWAYS ops should always be done in lower precision due to 
the speed and memory
+// savings. MIXED_PRECISION_FOLLOW ops can be done in lower precision but 
don't have speedups to
+// justify a cast. MIXED_PRECISION_NEVER colored ops should not be done in 
lower precision due to
+// numerical reasons.
+enum MixedTypeConversionCategory : int {
+  MIXED_PRECISION_ALWAYS = 0,
+  MIXED_PRECISION_FOLLOW = 1,
+  MIXED_PRECISION_NEVER = 2
+};
+
+// A map of a parent node and a wanted dtype to existing nodes casted to the wanted dtype
+using CachedCastNodes = std::unordered_map<std::pair<const ExprNode*, DataType>, Expr, pair_hash>;
+
+// Return array is of type : [MixedTypeConversionCategory (int), String, String]
+// The fields are  : [ConversionCategory, accumulation_datatype, output_datatype]
+// Call is a call node, DataType is the mixed precision type
+using FTVMMixedPrecisionConversionType = runtime::TypedPackedFunc<Array<ObjectRef>(
+    const Call& call_node, const std::string& target_dtype_str)>;
+
+class MixedPrecisionPass : public MixedModeMutator {
+ private:
+  CachedCastNodes cast_nodes_cache;
+
+  // The target datatype we want to convert to e.g. FP16
+  const DataType mixed_precision_type;
+
+  // If false, throws a fatal error if an op which is not registered with a
+  // FTVMMixedPrecisionConversionType is encountered.
+  bool ignore_missing_ops;
+
+  // If true, emits a warning if an op which is not registered with a
+  // FTVMMixedPrecisionConversionType is encountered.
+  bool warn_missing_ops;
+
+  Attrs GetNewAttrs(const CallNode* call, const DataType& accumulation_dtype) const {
+    /* If the accumulation dtype is in the attributes make a copy and mutate the field. */
+    Attrs cur_attrs = call->attrs;
+    if (cur_attrs.get() != nullptr) {
+      // TODO(AndrewZhaoLuo): Figure out a better way to do this
+      // modify output_dtype attributes (accumulation dtypes for ops)
+      if (auto attrs = cur_attrs.as<Conv1DAttrs>()) {
+        return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+      } else if (auto attrs = cur_attrs.as<Conv1DTransposeAttrs>()) {
+        return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+      } else if (auto attrs = cur_attrs.as<Conv2DAttrs>()) {
+        return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+      } else if (auto attrs = cur_attrs.as<Conv2DTransposeAttrs>()) {
+        return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+      } else if (auto attrs = cur_attrs.as<Conv2DWinogradAttrs>()) {
+        return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+      } else if (auto attrs = cur_attrs.as<Conv2DWinogradNNPACKWeightTransformAttrs>()) {
+        return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+      } else if (auto attrs = cur_attrs.as<DeformableConv2DAttrs>()) {
+        return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+      } else if (auto attrs = cur_attrs.as<Conv3DAttrs>()) {
+        return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+      } else if (auto attrs = cur_attrs.as<Conv3DTransposeAttrs>()) {
+        return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+      } else if (auto attrs = cur_attrs.as<Conv3DWinogradAttrs>()) {
+        return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+      } else if (auto attrs = cur_attrs.as<DenseAttrs>()) {
+        return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+      } else if (auto attrs = cur_attrs.as<BatchMatmulAttrs>()) {
+        return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+      }
+
+      // modify dtype attributes

[GitHub] [tvm] AndrewZhaoLuo commented on a change in pull request #8069: [Relay] [Pass] Add mixed precision (e.g. FP16) model conversion pass

2021-06-15 Thread GitBox


AndrewZhaoLuo commented on a change in pull request #8069:
URL: https://github.com/apache/tvm/pull/8069#discussion_r652181818



##
File path: tests/python/relay/test_to_mixed_precision.py
##
@@ -0,0 +1,446 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""Unit tests for testing ToMixedPrecision pass"""

Review comment:
   It takes around 3 seconds on my m1 mac.








[GitHub] [tvm] comaniac commented on a change in pull request #8069: [Relay] [Pass] Add mixed precision (e.g. FP16) model conversion pass

2021-06-15 Thread GitBox


comaniac commented on a change in pull request #8069:
URL: https://github.com/apache/tvm/pull/8069#discussion_r652169800



##
File path: python/tvm/relay/transform/transform.py
##
@@ -1199,3 +1198,18 @@ def FakeQuantizationToInteger():
 The registered SimplifyExpr pass.
 """
 return _ffi_api.FakeQuantizationToInteger()
+
+
+def ToMixedPrecision(
+mixed_precision_type="float16", ignore_missing_ops=True, 
warn_missing_ops=True
+):
+"""
+Automatic mixed precision rewriter. Rewrite an FP32 relay graph into a 
version
+where as many operations as possible are in the target 
mixed_precision_type.
+
+Returns
+---
+ret : tvm.transform.Pass
+The registered RewriteFP16 pass.

Review comment:
   ```suggestion
   The registered pass.
   ```
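
   For reference, a minimal usage sketch of the pass as exposed by this PR (a hedged example, not part of the diff; the tiny conv2d module is made up for illustration):
   
   ```python
   import tvm
   from tvm import relay
   
   # Build a small FP32 module to convert (illustrative only).
   x = relay.var("x", shape=(1, 3, 224, 224), dtype="float32")
   w = relay.var("w", shape=(8, 3, 3, 3), dtype="float32")
   mod = tvm.IRModule.from_expr(relay.nn.conv2d(x, w))
   mod = relay.transform.InferType()(mod)
   
   # Rewrite as much of the graph as possible into float16.
   mod = relay.transform.ToMixedPrecision(mixed_precision_type="float16")(mod)
   ```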

##
File path: python/tvm/relay/op/op.py
##
@@ -457,6 +458,29 @@ def register_fake_quantization_to_integer(op_name, 
func=None, level=10):
 return tvm.ir.register_op_attr(op_name, "FTVMFakeQuantizationToInteger", 
func, level)
 
 
+def register_mixed_precision_conversion(op_name, func=None, level=10):
+"""Register mixed precision conversion function for an op
+
+Given an op the function should return information on how the value should 
be
+converted. Specifically the function should take a call node and the target
+mixed precision datatype (e.g. FP16) and return the conversion category
+(see python/tvm/relay/transform/mixed_precision.py) as well as the 
accumulation
+and output datatype of the oepration.

Review comment:
   ```suggestion
   and output datatype of the operation.
   ```
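
   For context, a hedged sketch of what registering a conversion function looks like under this convention (the op name, dtypes, and `level=11` override are illustrative; the integer category codes live in python/tvm/relay/transform/mixed_precision.py):
   
   ```python
   from tvm.relay.op import register_mixed_precision_conversion
   
   MIXED_PRECISION_ALWAYS = 0  # category code mirrored from mixed_precision.py
   
   # level=11 so this illustrative rule overrides any default registration.
   @register_mixed_precision_conversion("nn.conv2d", level=11)
   def conv2d_mixed_precision_rule(call_node, mixed_precision_type):
       # Run conv2d in mixed precision, accumulate in fp32,
       # and emit outputs in the mixed precision type.
       return [MIXED_PRECISION_ALWAYS, "float32", mixed_precision_type]
   ```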

##
File path: src/relay/transforms/to_mixed_precision.cc
##
@@ -0,0 +1,409 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ *
+ * \file to_mixed_precision.cc
+ * \brief Automatic mixed floating point precision for relay graphs. i.e. turn 
a graph into fp16.
+ *
+ */
+
+#include 
+#include 
+#include 
+#include 
+
+#include 
+
+#include "pattern_utils.h"
+
+namespace tvm {
+namespace relay {
+
+// A callable which hashes std::pair
+struct pair_hash {
+  template <class T1, class T2>
+  std::size_t operator()(const std::pair<T1, T2>& pair) const {
+    auto h1 = std::hash<T1>()(pair.first);
+    auto h2 = std::hash<T2>()(pair.second);
+
+    // Use boost's combine_hash strategy
+    return h1 ^ (h1 + 0x9e3779b9 + (h2 << 6) + (h2 >> 2));
+  }
+};
+
+// MIXED_PRECISION_ALWAYS ops should always be done in lower precision due to 
the speed and memory
+// savings. MIXED_PRECISION_FOLLOW ops can be done in lower precision but 
don't have speedups to
+// justify a cast. MIXED_PRECISION_NEVER colored ops should not be done in 
lower precision due to
+// numerical reasons.
+enum MixedTypeConversionCategory : int {
+  MIXED_PRECISION_ALWAYS = 0,
+  MIXED_PRECISION_FOLLOW = 1,
+  MIXED_PRECISION_NEVER = 2
+};
+
+// A map of a parent node and a wanted dtype to existing nodes casted to the wanted dtype
+using CachedCastNodes = std::unordered_map<std::pair<const ExprNode*, DataType>, Expr, pair_hash>;
+
+// Return array is of type : [MixedTypeConversionCategory (int), String, String]
+// The fields are  : [ConversionCategory, accumulation_datatype, output_datatype]
+// Call is a call node, DataType is the mixed precision type
+using FTVMMixedPrecisionConversionType = runtime::TypedPackedFunc<Array<ObjectRef>(
+    const Call& call_node, const std::string& target_dtype_str)>;
+
+class MixedPrecisionPass : public MixedModeMutator {
+ private:
+  CachedCastNodes cast_nodes_cache;

Review comment:
   Add suffix `_` to all private class members.
   ```suggestion
 CachedCastNodes cast_nodes_cache_;
   ```

##
File path: src/relay/transforms/to_mixed_precision.cc
##
@@ -0,0 +1,409 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   

[GitHub] [tvm] comaniac commented on a change in pull request #8253: [tvmc] Add a --config option to `tvmc compile`

2021-06-15 Thread GitBox


comaniac commented on a change in pull request #8253:
URL: https://github.com/apache/tvm/pull/8253#discussion_r652165844



##
File path: python/tvm/driver/tvmc/common.py
##
@@ -415,3 +415,86 @@ def parse_shape_string(inputs_string):
 shape_dict[name] = shape
 
 return shape_dict
+
+
+def set_config_value(name, value, config_type):
+"""Set a PassContext configuration value according to its value"""
+
+if config_type == "IntImm":
+# "Bool" configurations in the PassContext are recognized as
+# IntImm, so deal with this case here
+mapping_values = {
+"false": False,
+"true": True,
+}
+
+if value.isdigit():
+parsed_value = int(value)
+else:
+# if not an int, accept only values on the mapping table, case 
insensitive
+parsed_value = mapping_values.get(value.lower(), None)
+
+if parsed_value is None:
+raise TVMCException(f"Invalid value '{value}' for configuration 
'{name}'. ")
+
+if config_type == "runtime.String":
+parsed_value = value

Review comment:
   Hmm, you already processed ints, so I don't think that would be an issue here. Anyway, I don't have a strong preference for using `json`, so I'll commit.
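
   (For the record, a sketch of the `json.loads` route discussed above — an illustration, not the PR's code:)
   
   ```python
   import json
   
   def parse_config_value(value: str):
       # "true"/"false" (any case) -> bool, digits -> int, else the raw string.
       lowered = value.lower()
       if lowered in ("true", "false"):
           return json.loads(lowered)
       try:
           return json.loads(value)
       except json.JSONDecodeError:
           return value
   ```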








[GitHub] [tvm] comaniac commented on pull request #8069: [Relay] [Pass] Add mixed precision (e.g. FP16) model conversion pass

2021-06-15 Thread GitBox


comaniac commented on pull request #8069:
URL: https://github.com/apache/tvm/pull/8069#issuecomment-861847268


   > LGTM
   
   Sorry, I was reviewing another PR and approved this one by mistake. Please ignore this approval; I'll take another look later.






[GitHub] [tvm] leandron commented on pull request #8253: [tvmc] Add a --config option to `tvmc compile`

2021-06-15 Thread GitBox


leandron commented on pull request #8253:
URL: https://github.com/apache/tvm/pull/8253#issuecomment-861832097


   @comaniac @gromero I updated this incorporating most of your comments. Please
have a look when you have a moment.






[GitHub] [tvm] mbrookhart commented on a change in pull request #8069: [Relay] [Pass] Add mixed precision (e.g. FP16) model conversion pass

2021-06-15 Thread GitBox


mbrookhart commented on a change in pull request #8069:
URL: https://github.com/apache/tvm/pull/8069#discussion_r652142972



##
File path: src/relay/transforms/to_mixed_precision.cc
##
@@ -0,0 +1,409 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ *
+ * \file to_mixed_precision.cc
+ * \brief Automatic mixed floating point precision for relay graphs. i.e. turn 
a graph into fp16.
+ *
+ */
+
+#include 
+#include 
+#include 
+#include 
+
+#include 
+
+#include "pattern_utils.h"
+
+namespace tvm {
+namespace relay {
+
+// A callable which hashes std::pair
+struct pair_hash {
+  template <class T1, class T2>
+  std::size_t operator()(const std::pair<T1, T2>& pair) const {
+    auto h1 = std::hash<T1>()(pair.first);
+    auto h2 = std::hash<T2>()(pair.second);
+
+    // Use boost's combine_hash strategy
+    return h1 ^ (h1 + 0x9e3779b9 + (h2 << 6) + (h2 >> 2));
+  }
+};
+
+// MIXED_PRECISION_ALWAYS ops should always be done in lower precision due to 
the speed and memory
+// savings. MIXED_PRECISION_FOLLOW ops can be done in lower precision but 
don't have speedups to
+// justify a cast. MIXED_PRECISION_NEVER colored ops should not be done in 
lower precision due to
+// numerical reasons.
+enum MixedTypeConversionCategory : int {
+  MIXED_PRECISION_ALWAYS = 0,
+  MIXED_PRECISION_FOLLOW = 1,
+  MIXED_PRECISION_NEVER = 2
+};
+
+// A map of a parent node and a wanted dtype to existing nodes casted to the wanted dtype
+using CachedCastNodes = std::unordered_map<std::pair<const ExprNode*, DataType>, Expr, pair_hash>;
+
+// Return array is of type : [MixedTypeConversionCategory (int), String, String]
+// The fields are  : [ConversionCategory, accumulation_datatype, output_datatype]
+// Call is a call node, DataType is the mixed precision type
+using FTVMMixedPrecisionConversionType = runtime::TypedPackedFunc<Array<ObjectRef>(
+    const Call& call_node, const std::string& target_dtype_str)>;
+
+class MixedPrecisionPass : public MixedModeMutator {
+ private:
+  CachedCastNodes cast_nodes_cache;
+
+  // The target datatype we want to convert to e.g. FP16
+  const DataType mixed_precision_type;
+
+  // If false, throws a fatal error if an op which is not registered with a
+  // FTVMMixedPrecisionConversionType is encountered.
+  bool ignore_missing_ops;
+
+  // If true, emits a warning if an op which is not registered with a
+  // FTVMMixedPrecisionConversionType is encountered.
+  bool warn_missing_ops;
+
+  Attrs GetNewAttrs(const CallNode* call, const DataType& accumulation_dtype) const {
+    /* If the accumulation dtype is in the attributes make a copy and mutate the field. */
+    Attrs cur_attrs = call->attrs;
+    if (cur_attrs.get() != nullptr) {
+      // TODO(AndrewZhaoLuo): Figure out a better way to do this
+      // modify output_dtype attributes (accumulation dtypes for ops)
+      if (auto attrs = cur_attrs.as<Conv1DAttrs>()) {
+        return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+      } else if (auto attrs = cur_attrs.as<Conv1DTransposeAttrs>()) {
+        return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+      } else if (auto attrs = cur_attrs.as<Conv2DAttrs>()) {
+        return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+      } else if (auto attrs = cur_attrs.as<Conv2DTransposeAttrs>()) {
+        return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+      } else if (auto attrs = cur_attrs.as<Conv2DWinogradAttrs>()) {
+        return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+      } else if (auto attrs = cur_attrs.as<Conv2DWinogradNNPACKWeightTransformAttrs>()) {
+        return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+      } else if (auto attrs = cur_attrs.as<DeformableConv2DAttrs>()) {
+        return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+      } else if (auto attrs = cur_attrs.as<Conv3DAttrs>()) {
+        return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+      } else if (auto attrs = cur_attrs.as<Conv3DTransposeAttrs>()) {
+        return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+      } else if (auto attrs = cur_attrs.as<Conv3DWinogradAttrs>()) {
+        return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+      } else if (auto attrs = cur_attrs.as<DenseAttrs>()) {
+        return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+      } else if (auto attrs = cur_attrs.as<BatchMatmulAttrs>()) {
+        return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+      }
+
+      // modify dtype attributes

[GitHub] [tvm] AndrewZhaoLuo commented on a change in pull request #8069: [Relay] [Pass] Add mixed precision (e.g. FP16) model conversion pass

2021-06-15 Thread GitBox


AndrewZhaoLuo commented on a change in pull request #8069:
URL: https://github.com/apache/tvm/pull/8069#discussion_r652142144



##
File path: src/relay/transforms/to_mixed_precision.cc
##
@@ -0,0 +1,409 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ *
+ * \file to_mixed_precision.cc
+ * \brief Automatic mixed floating point precision for relay graphs. i.e. turn 
a graph into fp16.
+ *
+ */
+
+#include 
+#include 
+#include 
+#include 
+
+#include 
+
+#include "pattern_utils.h"
+
+namespace tvm {
+namespace relay {
+
+// A callable which hashes std::pair
+struct pair_hash {
+  template <class T1, class T2>
+  std::size_t operator()(const std::pair<T1, T2>& pair) const {
+    auto h1 = std::hash<T1>()(pair.first);
+    auto h2 = std::hash<T2>()(pair.second);
+
+    // Use boost's combine_hash strategy
+    return h1 ^ (h1 + 0x9e3779b9 + (h2 << 6) + (h2 >> 2));
+  }
+};
+
+// MIXED_PRECISION_ALWAYS ops should always be done in lower precision due to 
the speed and memory
+// savings. MIXED_PRECISION_FOLLOW ops can be done in lower precision but 
don't have speedups to
+// justify a cast. MIXED_PRECISION_NEVER colored ops should not be done in 
lower precision due to
+// numerical reasons.
+enum MixedTypeConversionCategory : int {
+  MIXED_PRECISION_ALWAYS = 0,
+  MIXED_PRECISION_FOLLOW = 1,
+  MIXED_PRECISION_NEVER = 2
+};
+
+// A map of a parent node and a wanted dtype to existing nodes casted to the wanted dtype
+using CachedCastNodes = std::unordered_map<std::pair<const ExprNode*, DataType>, Expr, pair_hash>;
+
+// Return array is of type : [MixedTypeConversionCategory (int), String, String]
+// The fields are  : [ConversionCategory, accumulation_datatype, output_datatype]
+// Call is a call node, DataType is the mixed precision type
+using FTVMMixedPrecisionConversionType = runtime::TypedPackedFunc<Array<ObjectRef>(
+    const Call& call_node, const std::string& target_dtype_str)>;
+
+class MixedPrecisionPass : public MixedModeMutator {
+ private:
+  CachedCastNodes cast_nodes_cache;
+
+  // The target datatype we want to convert to e.g. FP16
+  const DataType mixed_precision_type;
+
+  // If false, throws a fatal error if an op which is not registered with a
+  // FTVMMixedPrecisionConversionType is encountered.
+  bool ignore_missing_ops;
+
+  // If true, emits a warning if an op which is not registered with a
+  // FTVMMixedPrecisionConversionType is encountered.
+  bool warn_missing_ops;
+
+  Attrs GetNewAttrs(const CallNode* call, const DataType& accumulation_dtype) const {
+    /* If the accumulation dtype is in the attributes make a copy and mutate the field. */
+    Attrs cur_attrs = call->attrs;
+    if (cur_attrs.get() != nullptr) {
+      // TODO(AndrewZhaoLuo): Figure out a better way to do this
+      // modify output_dtype attributes (accumulation dtypes for ops)
+      if (auto attrs = cur_attrs.as<Conv1DAttrs>()) {
+        return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+      } else if (auto attrs = cur_attrs.as<Conv1DTransposeAttrs>()) {
+        return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+      } else if (auto attrs = cur_attrs.as<Conv2DAttrs>()) {
+        return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+      } else if (auto attrs = cur_attrs.as<Conv2DTransposeAttrs>()) {
+        return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+      } else if (auto attrs = cur_attrs.as<Conv2DWinogradAttrs>()) {
+        return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+      } else if (auto attrs = cur_attrs.as<Conv2DWinogradNNPACKWeightTransformAttrs>()) {
+        return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+      } else if (auto attrs = cur_attrs.as<DeformableConv2DAttrs>()) {
+        return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+      } else if (auto attrs = cur_attrs.as<Conv3DAttrs>()) {
+        return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+      } else if (auto attrs = cur_attrs.as<Conv3DTransposeAttrs>()) {
+        return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+      } else if (auto attrs = cur_attrs.as<Conv3DWinogradAttrs>()) {
+        return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+      } else if (auto attrs = cur_attrs.as<DenseAttrs>()) {
+        return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+      } else if (auto attrs = cur_attrs.as<BatchMatmulAttrs>()) {
+        return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+      }
+
+      // modify dtype attributes

[GitHub] [tvm] AndrewZhaoLuo commented on a change in pull request #8069: [Relay] [Pass] Add mixed precision (e.g. FP16) model conversion pass

2021-06-15 Thread GitBox


AndrewZhaoLuo commented on a change in pull request #8069:
URL: https://github.com/apache/tvm/pull/8069#discussion_r652134539



##
File path: python/tvm/relay/transform/transform.py
##
@@ -1199,3 +1198,22 @@ def FakeQuantizationToInteger():
 The registered SimplifyExpr pass.
 """
 return _ffi_api.FakeQuantizationToInteger()
+
+
+def ToMixedPrecision(
+mixed_precision_type="float16", ignore_missing_ops=True, 
warn_missing_ops=True
+):
+"""
+Automatic mixed precision rewriter. Rewrite an FP32 relay graph into a 
version
+where as many operations as possible are in the target 
mixed_precision_type.
+
+Note this does mutate the original graph putting it in a bad state 
potentially.
+
+TODO(AndrewZhaoLuo): don't mutate the original graph.

Review comment:
   That problem is pretty old, and the pass doesn't seem to hit it anymore, so I removed the note.








[GitHub] [tvm] electriclilies commented on pull request #8248: [BUG FIX] Add _type_has_method_sequal_reduce to Span and SourceNode

2021-06-15 Thread GitBox


electriclilies commented on pull request #8248:
URL: https://github.com/apache/tvm/pull/8248#issuecomment-861814199


   Thanks @masahi!






[GitHub] [tvm] tkonolige commented on a change in pull request #8259: [Graph Debug Executor] Fix device_type for profile command

2021-06-15 Thread GitBox


tkonolige commented on a change in pull request #8259:
URL: https://github.com/apache/tvm/pull/8259#discussion_r652128115



##
File path: src/runtime/graph_executor/debug/graph_executor_debug.cc
##
@@ -300,7 +306,8 @@ class GraphExecutorDebug : public GraphExecutor {
 }
 
 uint32_t eid = entry_id(i, 0);
-const Device& dev = data_entry_[eid]->device;
+Device dev = data_entry_[eid]->device;
+    dev.device_type = static_cast<DLDeviceType>(dev.device_type % kRPCSessMask);

Review comment:
   I don't think the device type should be changed within the profiling 
call. Instead, you should set the device type when the graph executor is 
created.








[GitHub] [tvm] mbrookhart commented on a change in pull request #8069: [Relay] [Pass] Add mixed precision (e.g. FP16) model conversion pass

2021-06-15 Thread GitBox


mbrookhart commented on a change in pull request #8069:
URL: https://github.com/apache/tvm/pull/8069#discussion_r652090449



##
File path: src/relay/transforms/to_mixed_precision.cc
##
@@ -0,0 +1,409 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ *
+ * \file to_mixed_precision.cc
+ * \brief Automatic mixed floating point precision for relay graphs. i.e. turn 
a graph into fp16.
+ *
+ */
+
+#include 
+#include 
+#include 
+#include 
+
+#include 
+
+#include "pattern_utils.h"
+
+namespace tvm {
+namespace relay {
+
+// A callable which hashes std::pair
+struct pair_hash {
+  template <class T1, class T2>
+  std::size_t operator()(const std::pair<T1, T2>& pair) const {
+    auto h1 = std::hash<T1>()(pair.first);
+    auto h2 = std::hash<T2>()(pair.second);
+
+    // Use boost's combine_hash strategy
+    return h1 ^ (h1 + 0x9e3779b9 + (h2 << 6) + (h2 >> 2));
+  }
+};
+
+// MIXED_PRECISION_ALWAYS ops should always be done in lower precision due to 
the speed and memory
+// savings. MIXED_PRECISION_FOLLOW ops can be done in lower precision but 
don't have speedups to
+// justify a cast. MIXED_PRECISION_NEVER colored ops should not be done in 
lower precision due to
+// numerical reasons.
+enum MixedTypeConversionCategory : int {
+  MIXED_PRECISION_ALWAYS = 0,
+  MIXED_PRECISION_FOLLOW = 1,
+  MIXED_PRECISION_NEVER = 2
+};
+
+// A map of a parent node and a wanted dtype to existing nodes casted to the wanted dtype
+using CachedCastNodes = std::unordered_map<std::pair<const ExprNode*, DataType>, Expr, pair_hash>;
+
+// Return array is of type : [MixedTypeConversionCategory (int), String, String]
+// The fields are  : [ConversionCategory, accumulation_datatype, output_datatype]
+// Call is a call node, DataType is the mixed precision type
+using FTVMMixedPrecisionConversionType = runtime::TypedPackedFunc<Array<ObjectRef>(
+    const Call& call_node, const std::string& target_dtype_str)>;
+
+class MixedPrecisionPass : public MixedModeMutator {
+ private:
+  CachedCastNodes cast_nodes_cache;
+
+  // The target datatype we want to convert to e.g. FP16
+  const DataType mixed_precision_type;
+
+  // If false, throws a fatal error if an op which is not registered with a
+  // FTVMMixedPrecisionConversionType is encountered.
+  bool ignore_missing_ops;
+
+  // If true, emits a warning if an op which is not registered with a
+  // FTVMMixedPrecisionConversionType is encountered.
+  bool warn_missing_ops;
+
+  Attrs GetNewAttrs(const CallNode* call, const DataType& accumulation_dtype) const {
+    /* If the accumulation dtype is in the attributes make a copy and mutate the field. */
+    Attrs cur_attrs = call->attrs;
+    if (cur_attrs.get() != nullptr) {
+      // TODO(AndrewZhaoLuo): Figure out a better way to do this
+      // modify output_dtype attributes (accumulation dtypes for ops)
+      if (auto attrs = cur_attrs.as<Conv1DAttrs>()) {
+        return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+      } else if (auto attrs = cur_attrs.as<Conv1DTransposeAttrs>()) {
+        return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+      } else if (auto attrs = cur_attrs.as<Conv2DAttrs>()) {
+        return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+      } else if (auto attrs = cur_attrs.as<Conv2DTransposeAttrs>()) {
+        return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+      } else if (auto attrs = cur_attrs.as<Conv2DWinogradAttrs>()) {
+        return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+      } else if (auto attrs = cur_attrs.as<Conv2DWinogradNNPACKWeightTransformAttrs>()) {
+        return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+      } else if (auto attrs = cur_attrs.as<DeformableConv2DAttrs>()) {
+        return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+      } else if (auto attrs = cur_attrs.as<Conv3DAttrs>()) {
+        return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+      } else if (auto attrs = cur_attrs.as<Conv3DTransposeAttrs>()) {
+        return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+      } else if (auto attrs = cur_attrs.as<Conv3DWinogradAttrs>()) {
+        return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+      } else if (auto attrs = cur_attrs.as<DenseAttrs>()) {
+        return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+      } else if (auto attrs = cur_attrs.as<BatchMatmulAttrs>()) {
+        return ModifyAttrsOutputDType(attrs, accumulation_dtype);
+      }

Review comment:
   Makes me miss duck typing.

[tvm] 01/01: point jenkins at new docker

2021-06-15 Thread mbrookhart
This is an automated email from the ASF dual-hosted git repository.

mbrookhart pushed a commit to branch ci-docker-staging
in repository https://gitbox.apache.org/repos/asf/tvm.git

commit f824f4a7f07bace257a7d4b649c974577c25e781
Author: Matthew Brookhart 
AuthorDate: Tue Jun 15 12:45:48 2021 -0700

point jenkins at new docker
---
 Jenkinsfile | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/Jenkinsfile b/Jenkinsfile
index 3ea6d22..e304e35 100644
--- a/Jenkinsfile
+++ b/Jenkinsfile
@@ -45,7 +45,7 @@
 
 // NOTE: these lines are scanned by docker/dev_common.sh. Please update the 
regex as needed. -->
 ci_lint = "tlcpack/ci-lint:v0.65"
-ci_gpu = "tlcpack/ci-gpu:v0.75"
+ci_gpu = "mbrookhart/tvm.ci-gpu_onnx_update"
 ci_cpu = "tlcpack/ci-cpu:v0.74"
 ci_wasm = "tlcpack/ci-wasm:v0.71"
 ci_i386 = "tlcpack/ci-i386:v0.73"


[tvm] branch ci-docker-staging updated (a2e0166 -> f824f4a)

2021-06-15 Thread mbrookhart
This is an automated email from the ASF dual-hosted git repository.

mbrookhart pushed a change to branch ci-docker-staging
in repository https://gitbox.apache.org/repos/asf/tvm.git.


 discard a2e0166  add failing onnx tets
 add 1dea3ea  add failing onnx tets
 new f824f4a  point jenkins at new docker

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version.  This situation occurs
when a user --force pushes a change and generates a repository
containing something like this:

 * -- * -- B -- O -- O -- O   (a2e0166)
\
 N -- N -- N   refs/heads/ci-docker-staging (f824f4a)

You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.

Any revisions marked "omit" are not gone; other references still
refer to them.  Any revisions marked "discard" are gone forever.

The 1 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 Jenkinsfile| 2 +-
 tests/python/frontend/onnx/test_forward.py | 4 +++-
 2 files changed, 4 insertions(+), 2 deletions(-)


[tvm] branch ci-docker-staging updated (946bdbe -> a2e0166)

2021-06-15 Thread mbrookhart
This is an automated email from the ASF dual-hosted git repository.

mbrookhart pushed a change to branch ci-docker-staging
in repository https://gitbox.apache.org/repos/asf/tvm.git.


 discard 946bdbe  [CI] Update to the latest version
 add 27e44ee  [Relay] Support dynamic indices size in gather_nd and 
scatter_nd (#8105)
 add e26990f  [AutoTVM][AutoScheduler] Add workaround to alter op layout 
bug in task extraction. (#8143)
 add 8b5d843  Fix tvmc tuner for cases when uTVM is not enabled (#8153)
 add e535ec8  [VM] Avoid round-trip Target->str->Target conversions (#8161)
 add 1fe9f8d  [CMake][Minor] Update CMake warning flags (#8152)
 add 4bbbfe8  [Fix] Fix conv2d HWNC type strategy (#8147)
 add 7316a38  [CI] Fix the CI after image update. (#8164)
 add 713de0c  [CI][DOCKER] Fix cuda11 nvidia-docker support for non-Tesla 
gpus (#8163)
 add eebd5a9  [FastMath] Add cuda & x86 schedules for fast_softmax (#8150)
 add bd4b14d  Update auto_tuning_with_python.py (#8158)
 add 06a466c  allow libbacktrace to be used when cross compiling the 
runtime (#7917)
 add 106c331  [microTVM] make RVM memory and number of cores variable 
(#8154)
 add 6baccc1  [ONNX] [Relay] Update unique operator to match ONNX output 
(1D only) (#8099)
 add bc785de  Add function attribute for shape func for profiling (#8148)
 add bb3e772  [Vulkan][Docs] Minor updates following Vulkan target query. 
(#8151)
 add 0c83fe8  [Vulkan] Remove dependency on Target from -from_device 
functionality. (#8171)
 add b7c98b8  [Strategy] Add group_conv2d_nchw_int8 in cuda strategy (#8167)
 add cbe3dca  [Relay, TOPI] Refactor strided_slice and add axes argument 
(#8165)
 add cc3d60e  [BYOC][TensorRT] Reuse TRT engines based on max_batch_size 
for dynamic batching, improve device buffer allocation (#8172)
 add 155f669  [TVMC] Fix tvmc compile to extract target and target_host 
from --target (#8176)
 add b753772  fix UTF (#8185)
 add dd09bbb  [TensorIR][M2a] ComputeInline,ReverseComputeInline (#8170)
 add 7c99d83  [Vulkan][UnitTests] Compatibility fix for 
test_vulkan_unique(). (#8186)
 add aca48d6  [Vulkan] Corrected typo in Vulkan capability error messages. 
(#8187)
 add ae4a3be  [Vulkan][Refactor] Pull out vulkan initialization into 
VulkanInstance and VulkanDevice (#8188)
 add c7f1b45  Onnx eyelike (#8191)
 add 0429c63  Complete register op from python (#8079)
 add a74d0fe  [Codegen] Use "target.build.$TARGET_KIND" for all codegen 
functions. (#8071)
 add c9db3d0  [METAL] Fix the rest memory leaks in Metal runtime (#8175)
 add 82cf197  Fix prelu bug in pytorch frontend (#8192)
 add aa9974f  [TE/TIR] Fix create_prim_func to properly handle rank 0 
tensors. (#8128)
 add 3e34e11  [CMake] Add compile-time check that libtvm_runtime.so has no 
undefined symbols. (#8178)
 add a769ece  [AOT] Initial implementation of --unpacked-api (#8023)
 add a1cd6d5  fix py files (#8194)
 add e0baf80  Run ONNX Node Tests on available targets (#8189)
 add f4ec5fd  [Relay, TF] Support converting TF combined_nms using Relay 
all_class_nms (#8174)
 add 010d11b  [Texture support][Part 0] Device API and runtime support 
(#7711)
 add 5b37b4a  Fix typo (#8197)
 add 43387d0  fix bug in dense_nopack if dynamic input shape (#8166)
 add 2cca934  [RUNTIME][REFACTOR] Re-organize Containers into SubFolders 
(#8183)
 add cc9d5cf  update python code style to 3.6 (#8199)
 add f4b5e76  [CI][DOCS] Fix the sphinx doc style for sphinx4 (#8198)
 add 072a3d2  Fix incorrect device name in TVMC. (#8181)
 add 3ab4a6b  Add thread_warp_size for Metal device in default target 
attributes (#8202)
 add 51bbd63  Fix conv2d_nchw for opencl intel graphics (#8201)
 add 364bc1b  [QEMU] Add number of cores, target list for build (#8156)
 add 2c67d71  [FIX] Allow tokenizer to parse numbers greater than INT_MAX. 
(#8120)
 add 64a8e81  [Frontend, Tensorflow2] Adding TF2 frontend code with support 
for control flow ops  (#8142)
 add 9be0f4f  [Relay] Convert a fake quantized or QAT graph into QNN ops 
(#8126)
 add d1e2e0d  [Fix][microTVM] QEMU RPC issue (#8021)
 add f1486ef  [Docker] Add external directory mount (#8144)
 add bd0f5bc  Support dequantizing scalar inputs (#8207)
 add f646048  use an empty module for fold_constant (#8208)
 add 5e006e0  [TIR] Fix data dependent indexing when lowering TE to TIR 
(#8217)
 add 685ebda  [VM] Better error messages (#8218)
 add 9899f1e  Auto-tuning a Convolutional Network for ARM CPU (tutorial 
error, bug reports)  (#8103)
 add 55459e7  [TVMSCRIPT] Add tir.min node in tvm script (#8219)
 add 5dc9627  [Metal] Remove matching Metal to OpenCL in tophub (#8211)
 add 8a04efa  Graph executor: remove unnecessary unique_ptr, NFC (#8214)
 add 53e4c60  [DOC] Improve "Getting Started with TVM" tutorials and fix 
warnings (#8221)
 add 1f2ca06  Expose 

[GitHub] [tvm-rfcs] AndrewZhaoLuo commented on pull request #6: [RFC] [Relay] Automatic Mixed Precision Pass

2021-06-15 Thread GitBox


AndrewZhaoLuo commented on pull request #6:
URL: https://github.com/apache/tvm-rfcs/pull/6#issuecomment-861736780


   So the associated PR is getting closer to a mergeable state. Is this RFC 
ready to merge?






[GitHub] [tvm] AndrewZhaoLuo commented on pull request #8069: [Relay] [Pass] Add FP16 model conversion pass

2021-06-15 Thread GitBox


AndrewZhaoLuo commented on pull request #8069:
URL: https://github.com/apache/tvm/pull/8069#issuecomment-861727219


   @anijain2305 
   @masahi 
   @comaniac 
   @mbrookhart 
   @csullivan 
   
   PTAL. I believe I've addressed all the major points.






[GitHub] [tvm] AndrewZhaoLuo commented on a change in pull request #8069: [Relay] [Pass] Add FP16 model conversion pass

2021-06-15 Thread GitBox


AndrewZhaoLuo commented on a change in pull request #8069:
URL: https://github.com/apache/tvm/pull/8069#discussion_r652035337



##
File path: src/relay/transforms/fp32_to_fp16.h
##
@@ -0,0 +1,232 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file fp32_to_fp16.h
+ * \brief Utilities and common types used for FP32->FP16 pass.
+ */
+#ifndef TVM_RELAY_TRANSFORMS_FP32_TO_FP16_H_
+#define TVM_RELAY_TRANSFORMS_FP32_TO_FP16_H_
+
+#include 
+#include 
+#include 
+
+#include 
+#include 
+#include 
+#include 
+#include 
+
+namespace tvm {
+namespace relay {
+
+struct FP16OpDType {
+  DataType accumulation_dtype;
+  DataType output_dtype;
+};
+
+// GREEN colored ops should always be done in FP16 due to the speed and memory 
savings
+// GRAY colored ops can be done in FP16 but don't have speedups to justify a 
dedicated cast.
+// RED colored ops should not be done in FP16 due to numerical reasons.
+enum FP16ConversionCategory { RED, GRAY, GREEN };
+
+using OpStringSet = std::unordered_set<std::string>;
+
+// Default lists inspired from TF's classifications:

Review comment:
   This is now done.

##
File path: src/relay/transforms/to_mixed_precision.cc
##
@@ -0,0 +1,356 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ *
+ * \file to_mixed_precision.cc
+ * \brief Automatic mixed precision for relay graphs. i.e. turn a graph into 
fp16 form.
+ */
+#include "to_mixed_precision.h"
+
+#include 
+#include 
+#include 
+#include 
+
+#include 
+
+#include "pattern_utils.h"
+
+namespace tvm {
+namespace relay {
+
+// A callable which hashes std::pair
+struct pair_hash {
+  template <class T1, class T2>
+  std::size_t operator()(const std::pair<T1, T2>& pair) const {
+    auto h1 = std::hash<T1>()(pair.first);
+    auto h2 = std::hash<T2>()(pair.second);
+
+    // Use boost's combine_hash strategy
+    return h1 ^ (h1 + 0x9e3779b9 + (h2 << 6) + (h2 >> 2));
+  }
+};
+
+// A map of a parent node and a wanted dtype to existing nodes casted to the wanted dtype
+using CachedCastNodes = std::unordered_map<std::pair<const ExprNode*, DataType>, Expr, pair_hash>;
+
+// A function which maps CallNodes to their initial conversion color
+using ColorFunc = std::function;
+
+// A function which maps MIXED_PRECISION_ALWAYS CallNodes to wanted 
accumulation and output dtypes
+using OutputDtypeFunc = std::function;
+
+class MixedPrecisionPass : public MixedModeMutator {
+ private:
+  CachedCastNodes cast_nodes_cache;
+  const ColorFunc colorer;
+  const OutputDtypeFunc output_dtype_func;
+  const DataType mixed_precision_type;
+
+  Attrs GetNewAttrs(const CallNode* call, const DataType& accumulation_dtype) const {
+    /* If the accumulation dtype is in the attributes make a copy and mutate the field. */
+    Attrs new_attrs = Attrs(call->attrs);
+    if (new_attrs.get() != nullptr) {
+      // TODO(AndrewZhaoLuo): Figure out a better way to do this
+      // modify output_dtype attributes (accumulation dtypes for ops)
+      if (auto attrs = new_attrs.as<Conv1DAttrs>()) {
+        ModifyAttrsOutputDType(attrs, accumulation_dtype);
+      } else if (auto attrs = new_attrs.as<Conv1DTransposeAttrs>()) {
+        ModifyAttrsOutputDType(attrs, accumulation_dtype);
+      } else if (auto attrs = new_attrs.as<Conv2DAttrs>()) {
+        ModifyAttrsOutputDType(attrs, accumulation_dtype);
+      } else if (auto attrs = new_attrs.as<Conv2DTransposeAttrs>()) {
+        ModifyAttrsOutputDType(attrs, accumulation_dtype);
+      } else if (auto attrs = new_attrs.as<Conv2DWinogradAttrs>()) {
+        ModifyAttrsOutputDType(attrs,

[GitHub] [tvm] electriclilies commented on issue #6624: [Relay] Module mutated in-place

2021-06-15 Thread GitBox


electriclilies commented on issue #6624:
URL: https://github.com/apache/tvm/issues/6624#issuecomment-861707369


   Hopefully the issue should be resolved soon! The AlterOpLayout pass has been 
causing issues for a while now :)






[tvm] branch main updated (5df25cf -> e4c7623)

2021-06-15 Thread jwfromm
This is an automated email from the ASF dual-hosted git repository.

jwfromm pushed a change to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git.


from 5df25cf  [microTVM] Add wait to QEMU Setup   (#8236)
 add e4c7623  Fix compilation of tvm runtime for iOS (#8242)

No new revisions were added by this update.

Summary of changes:
 CMakeLists.txt | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)


[GitHub] [tvm] jwfromm merged pull request #8242: Fix compilation of tvm runtime for iOS

2021-06-15 Thread GitBox


jwfromm merged pull request #8242:
URL: https://github.com/apache/tvm/pull/8242


   






[GitHub] [tvm] tkonolige commented on a change in pull request #8056: [Relay, TOPI] Add negative log likelihood loss (nll_loss) op

2021-06-15 Thread GitBox


tkonolige commented on a change in pull request #8056:
URL: https://github.com/apache/tvm/pull/8056#discussion_r651950458



##
File path: tests/python/relay/test_op_level10.py
##
@@ -577,6 +577,49 @@ def _verify(input_shape, diagonal_shape, dtype, k=0, 
align="RIGHT_LEFT"):
 _verify((2, 3, 4), (2, 4, 3), "int32", (-1, 2), "RIGHT_RIGHT")
 
 
+@tvm.testing.uses_gpu
+def test_nll_loss():

Review comment:
   Switch this to use `parametrize_targets`.
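   A rough sketch of that pattern, assuming the decorator is `tvm.testing.parametrize_targets` (it injects a `target`/`dev` pair per enabled target):
   
   ```python
   import tvm.testing
   
   @tvm.testing.parametrize_targets
   def test_nll_loss(target, dev):
       # the body runs once per enabled target, e.g. "llvm", "cuda"
       ...
   ```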








[tvm] branch main updated (75d9b78 -> 5df25cf)

2021-06-15 Thread areusch
This is an automated email from the ASF dual-hosted git repository.

areusch pushed a change to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git.


from 75d9b78  Add check to only cast opaque handles to cl::BufferDescriptor 
at runtime. (#8256)
 add 5df25cf  [microTVM] Add wait to QEMU Setup   (#8236)

No new revisions were added by this update.

Summary of changes:
 python/tvm/micro/contrib/zephyr.py| 44 +++
 tests/micro/zephyr/test_zephyr_aot.py | 43 ++
 2 files changed, 87 insertions(+)


[GitHub] [tvm] areusch commented on pull request #8236: [microTVM] Add wait to QEMU Setup

2021-06-15 Thread GitBox


areusch commented on pull request #8236:
URL: https://github.com/apache/tvm/pull/8236#issuecomment-861613556


   thanks @mehrdadh !






[GitHub] [tvm] areusch merged pull request #8236: [microTVM] Add wait to QEMU Setup

2021-06-15 Thread GitBox


areusch merged pull request #8236:
URL: https://github.com/apache/tvm/pull/8236


   






[GitHub] [tvm] senychen opened a new pull request #8262: Fix docstrings in tvm.relay.cast_like

2021-06-15 Thread GitBox


senychen opened a new pull request #8262:
URL: https://github.com/apache/tvm/pull/8262


   * use the correct docstring format so the HTML renders properly
   
   This is just so the generated HTML for `cast_like` shows correct, well-formatted information, as it does for other functions.






[GitHub] [tvm] gromero commented on a change in pull request #8253: [tvmc] Add a --config option to `tvmc compile`

2021-06-15 Thread GitBox


gromero commented on a change in pull request #8253:
URL: https://github.com/apache/tvm/pull/8253#discussion_r651769989



##
File path: python/tvm/driver/tvmc/compiler.py
##
@@ -42,6 +42,13 @@ def add_compile_parser(subparsers):
 
 parser = subparsers.add_parser("compile", help="compile a model.")
 parser.set_defaults(func=drive_compile)
+parser.add_argument(
+"--config",
+action="append",
+metavar=("name=value"),
+help="configurations to be used at compile time. A subset of options 
provided "
+"by TVM are supported. e.g. 'relay.backend.use_auto_scheduler=0'",

Review comment:
   OK. Yeah I was indeed thinking of how we deal with `--target` ;)
   








[GitHub] [tvm] zotanika commented on pull request #8125: [Caffe Frontend] supporting group > 1 cases for Deconv op

2021-06-15 Thread GitBox


zotanika commented on pull request #8125:
URL: https://github.com/apache/tvm/pull/8125#issuecomment-861443441


   reopened #8260 on a clean branch 






[GitHub] [tvm] echuraev opened a new pull request #8261: [Metal] Fix run metal model when non first device is selected

2021-06-15 Thread GitBox


echuraev opened a new pull request #8261:
URL: https://github.com/apache/tvm/pull/8261


   When a non-first Metal device is selected, running a model hits a problem in
   the stream handling, because the wrong device_id was used in CopyDataFromTo.






[GitHub] [tvm] mbaret commented on pull request #7925: Add a 'rolling_buffer' scheduling primitive

2021-06-15 Thread GitBox


mbaret commented on pull request #7925:
URL: https://github.com/apache/tvm/pull/7925#issuecomment-861413701


   ping @junrushao1994, could you take a look?






[GitHub] [tvm] leandron commented on a change in pull request #8253: [tvmc] Add a --config option to `tvmc compile`

2021-06-15 Thread GitBox


leandron commented on a change in pull request #8253:
URL: https://github.com/apache/tvm/pull/8253#discussion_r651680618



##
File path: python/tvm/driver/tvmc/compiler.py
##
@@ -42,6 +42,13 @@ def add_compile_parser(subparsers):
 
 parser = subparsers.add_parser("compile", help="compile a model.")
 parser.set_defaults(func=drive_compile)
+parser.add_argument(
+"--config",
+action="append",
+metavar=("name=value"),
+help="configurations to be used at compile time. A subset of options 
provided "
+"by TVM are supported. e.g. 'relay.backend.use_auto_scheduler=0'",

Review comment:
   > I'm wondering if it would make sense to enhance the help message a bit 
more so users don't try to do something like:
   
   Fixed.
   
   > 
   > I also see duplicated and even conflicting flags don't generate any error 
or warning. Should we treat them too? Like:
   > 
   
   I think most tools won't complain if you provide repeated configs. In this case, similar to most tools (e.g. _docker_, _bash_), we just assume the latest value wins, since parsing is done left to right.
   
   note: In some other cases, like `--target`, I'm validating duplicates because I need to translate that plain string to various internal APIs.
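   To illustrate the last-one-wins behaviour with the parser in this patch (a sketch; `parse_configs` fills a plain dict, so later assignments overwrite earlier ones):
   
   ```python
   from tvm.driver import tvmc
   
   # duplicated config: under left-to-right parsing the right-most value wins
   configs = tvmc.common.parse_configs(
       [
           "relay.backend.use_auto_scheduler=true",
           "relay.backend.use_auto_scheduler=false",
       ]
   )
   assert configs["relay.backend.use_auto_scheduler"] is False
   ```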








[GitHub] [tvm] leandron commented on a change in pull request #8253: [tvmc] Add a --config option to `tvmc compile`

2021-06-15 Thread GitBox


leandron commented on a change in pull request #8253:
URL: https://github.com/apache/tvm/pull/8253#discussion_r651616441



##
File path: python/tvm/driver/tvmc/compiler.py
##
@@ -42,6 +42,13 @@ def add_compile_parser(subparsers):
 
 parser = subparsers.add_parser("compile", help="compile a model.")
 parser.set_defaults(func=drive_compile)
+parser.add_argument(
+"--config",

Review comment:
   I'll move it to be `pass-config` instead.








[GitHub] [tvm] leandron commented on a change in pull request #8253: [tvmc] Add a --config option to `tvmc compile`

2021-06-15 Thread GitBox


leandron commented on a change in pull request #8253:
URL: https://github.com/apache/tvm/pull/8253#discussion_r651615824



##
File path: python/tvm/driver/tvmc/common.py
##
@@ -415,3 +415,86 @@ def parse_shape_string(inputs_string):
 shape_dict[name] = shape
 
 return shape_dict
+
+
+def set_config_value(name, value, config_type):
+"""Set a PassContext configuration value according to its value"""
+
+if config_type == "IntImm":
+# "Bool" configurations in the PassContext are recognized as
+# IntImm, so deal with this case here
+mapping_values = {
+"false": False,
+"true": True,
+}
+
+if value.isdigit():
+parsed_value = int(value)
+else:
+# if not an int, accept only values on the mapping table, case 
insensitive
+parsed_value = mapping_values.get(value.lower(), None)
+
+if parsed_value is None:
+raise TVMCException(f"Invalid value '{value}' for configuration 
'{name}'. ")
+
+if config_type == "runtime.String":
+parsed_value = value

Review comment:
   I did some investigation on this, and wrt `distutils`, I didn't want to add that dependency here because I think it would be a bit misplaced.
   
   Also wrt the `json` approach, I think it would still require more validation, because the allowed values for that option are int numbers, "true", or "false", and opening that up to `json.loads` would accept all sorts of JSON input, then require more validation. That's why I added my own mapping table.
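   For reference, the conversion rules in this version then behave roughly as below (a sketch based on the diff above; assumes `set_config_value` stays in `tvmc.common` and that the `config_type` strings come from `PassContext.list_configs()`):
   
   ```python
   from tvm.driver.tvmc.common import set_config_value  # location per this patch
   
   # "IntImm" values: digits become ints, "true"/"false" map to bools (case insensitive)
   assert set_config_value("tir.detect_global_barrier", "10", "IntImm") == 10
   assert set_config_value("relay.backend.use_auto_scheduler", "True", "IntImm") is True
   # "runtime.String" values pass through untouched
   assert set_config_value("relay.ext.vitis_ai.options.build_dir", "/tmp/b", "runtime.String") == "/tmp/b"
   ```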








[GitHub] [tvm] leandron commented on a change in pull request #8253: [tvmc] Add a --config option to `tvmc compile`

2021-06-15 Thread GitBox


leandron commented on a change in pull request #8253:
URL: https://github.com/apache/tvm/pull/8253#discussion_r651587235



##
File path: tests/python/driver/tvmc/test_tvmc_common.py
##
@@ -306,3 +306,49 @@ def test_parse_quotes_and_separators_on_options():
 
 assert len(targets_double_quote) == 1
 assert "+v1.0x,+value" == targets_double_quote[0]["opts"]["option1"]
+
+
+def test_config_invalid_format():
+with pytest.raises(TVMCException):
+_ = 
tvmc.common.parse_configs(["relay.backend.use_auto_scheduler.missing.value"])
+
+
+def test_config_missing_from_tvm():
+with pytest.raises(TVMCException):
+_ = 
tvmc.common.parse_configs(["relay.backend.use_auto_scheduler.missing.value=1234"])
+
+
+def test_config_unsupported_tvmc_config():
+with pytest.raises(TVMCException):
+_ = tvmc.common.parse_configs(["tir.LoopPartition=value"])
+
+
+def test_config_empty():
+with pytest.raises(TVMCException):
+_ = tvmc.common.parse_configs([""])
+
+
+def test_config_valid_config_bool():
+configs = 
tvmc.common.parse_configs(["relay.backend.use_auto_scheduler=true"])
+
+assert len(configs) == 1
+assert "relay.backend.use_auto_scheduler" in configs.keys()
+assert configs["relay.backend.use_auto_scheduler"] == True
+
+
+def test_config_valid_multiple_configs():
+configs = tvmc.common.parse_configs(
+[
+"relay.backend.use_auto_scheduler=false",
+"tir.detect_global_barrier=10",
+"relay.ext.vitis_ai.options.build_dir=mystring",

Review comment:
   Yeah, it crashed because that specific CI environment doesn't build with 
Vitis support ON. I'll need to use a different config name for my test. 
   
   edit: Actually, it turns out that vitis configs are the only ones using the string type at the moment, so I made this specific test case conditional.
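   A sketch of what that conditional test could look like (hypothetical test body; the guard simply checks whether the Vitis AI option is registered in the current build):
   
   ```python
   import pytest
   import tvm
   from tvm.driver import tvmc
   
   def test_config_valid_config_string():
       # hypothetical guard: skip when TVM was built without Vitis AI support
       if "relay.ext.vitis_ai.options.build_dir" not in tvm.ir.transform.PassContext.list_configs():
           pytest.skip("Vitis AI codegen not enabled in this TVM build")
       configs = tvmc.common.parse_configs(["relay.ext.vitis_ai.options.build_dir=mystring"])
       assert configs["relay.ext.vitis_ai.options.build_dir"] == "mystring"
   ```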








[GitHub] [tvm] leandron commented on a change in pull request #8253: [tvmc] Add a --config option to `tvmc compile`

2021-06-15 Thread GitBox


leandron commented on a change in pull request #8253:
URL: https://github.com/apache/tvm/pull/8253#discussion_r651588641



##
File path: python/tvm/driver/tvmc/common.py
##
@@ -415,3 +415,86 @@ def parse_shape_string(inputs_string):
 shape_dict[name] = shape
 
 return shape_dict
+
+
+def set_config_value(name, value, config_type):

Review comment:
   Makes sense, I updated that now.









[GitHub] [tvm] leandron commented on a change in pull request #8253: [tvmc] Add a --config option to `tvmc compile`

2021-06-15 Thread GitBox


leandron commented on a change in pull request #8253:
URL: https://github.com/apache/tvm/pull/8253#discussion_r651584886



##
File path: python/tvm/driver/tvmc/common.py
##
@@ -415,3 +415,86 @@ def parse_shape_string(inputs_string):
 shape_dict[name] = shape
 
 return shape_dict
+
+
+def set_config_value(name, value, config_type):
+"""Set a PassContext configuration value according to its value"""
+
+if config_type == "IntImm":
+# "Bool" configurations in the PassContext are recognized as
+# IntImm, so deal with this case here
+mapping_values = {
+"false": False,
+"true": True,
+}
+
+if value.isdigit():
+parsed_value = int(value)
+else:
+# if not an int, accept only values on the mapping table, case 
insensitive
+parsed_value = mapping_values.get(value.lower(), None)
+
+if parsed_value is None:
+raise TVMCException(f"Invalid value '{value}' for configuration 
'{name}'. ")
+
+if config_type == "runtime.String":
+parsed_value = value
+
+return parsed_value
+
+
+def parse_configs(input_configs):
+"""Parse configuration values set via command line.
+
+Parameters
+--
+input_configs: list of str
+list of configurations provided via command line.
+
+Returns
+---
+pass_context_configs: dict
+a dict containing key-value configs to be used in the PassContext.
+"""
+all_configs = tvm.ir.transform.PassContext.list_configs()
+supported_config_types = ("IntImm", "runtime.String")
+supported_configs = [
+name for name in all_configs.keys() if all_configs[name]["type"] in 
supported_config_types
+]
+pass_context_configs = {}
+
+if not input_configs:
+return {}
+
+for config in input_configs:
+if len(config) == 0:
+raise TVMCException(
+f"Invalid format for configuration '{config}', use 
="
+)
+
+# Each config is expected to be provided as "name=value"
+try:
+name, value = config.split("=")
+name = name.strip()
+value = value.strip()
+except ValueError:
+raise TVMCException(
+f"Invalid format for configuration '{config}', use 
="
+)
+
+if name not in all_configs:
+raise TVMCException(
+f"Configuration '{name}' is not defined in TVM. "
+f"These are the existing configurations: {', 
'.join(all_configs)}"
+)
+
+if name not in supported_configs:
+raise TVMCException(
+f"Configuration '{name}' is not supported in TVMC. "

Review comment:
   Done








[GitHub] [tvm] leandron commented on a change in pull request #8253: [tvmc] Add a --config option to `tvmc compile`

2021-06-15 Thread GitBox


leandron commented on a change in pull request #8253:
URL: https://github.com/apache/tvm/pull/8253#discussion_r651581634



##
File path: python/tvm/driver/tvmc/common.py
##
@@ -415,3 +415,86 @@ def parse_shape_string(inputs_string):
 shape_dict[name] = shape
 
 return shape_dict
+
+
+def set_config_value(name, value, config_type):
+"""Set a PassContext configuration value according to its value"""
+
+if config_type == "IntImm":
+# "Bool" configurations in the PassContext are recognized as
+# IntImm, so deal with this case here
+mapping_values = {
+"false": False,
+"true": True,
+}
+
+if value.isdigit():
+parsed_value = int(value)
+else:
+# if not an int, accept only values on the mapping table, case 
insensitive
+parsed_value = mapping_values.get(value.lower(), None)
+
+if parsed_value is None:
+raise TVMCException(f"Invalid value '{value}' for configuration 
'{name}'. ")
+
+if config_type == "runtime.String":
+parsed_value = value
+
+return parsed_value
+
+
+def parse_configs(input_configs):
+"""Parse configuration values set via command line.
+
+Parameters
+--
+input_configs: list of str
+list of configurations provided via command line.
+
+Returns
+---
+pass_context_configs: dict
+a dict containing key-value configs to be used in the PassContext.
+"""
+all_configs = tvm.ir.transform.PassContext.list_configs()
+supported_config_types = ("IntImm", "runtime.String")
+supported_configs = [
+name for name in all_configs.keys() if all_configs[name]["type"] in 
supported_config_types
+]
+pass_context_configs = {}
+
+if not input_configs:
+return {}
+
+for config in input_configs:
+if len(config) == 0:

Review comment:
   Done
   








[GitHub] [tvm] leandron commented on a change in pull request #8253: [tvmc] Add a --config option to `tvmc compile`

2021-06-15 Thread GitBox


leandron commented on a change in pull request #8253:
URL: https://github.com/apache/tvm/pull/8253#discussion_r651581339



##
File path: python/tvm/driver/tvmc/common.py
##
@@ -415,3 +415,86 @@ def parse_shape_string(inputs_string):
 shape_dict[name] = shape
 
 return shape_dict
+
+
+def set_config_value(name, value, config_type):
+"""Set a PassContext configuration value according to its value"""
+
+if config_type == "IntImm":
+# "Bool" configurations in the PassContext are recognized as
+# IntImm, so deal with this case here
+mapping_values = {
+"false": False,
+"true": True,
+}
+
+if value.isdigit():
+parsed_value = int(value)
+else:
+# if not an int, accept only values on the mapping table, case 
insensitive
+parsed_value = mapping_values.get(value.lower(), None)
+
+if parsed_value is None:
+raise TVMCException(f"Invalid value '{value}' for configuration 
'{name}'. ")
+
+if config_type == "runtime.String":
+parsed_value = value
+
+return parsed_value
+
+
+def parse_configs(input_configs):
+"""Parse configuration values set via command line.
+
+Parameters
+--
+input_configs: list of str
+list of configurations provided via command line.
+
+Returns
+---
+pass_context_configs: dict
+a dict containing key-value configs to be used in the PassContext.
+"""
+all_configs = tvm.ir.transform.PassContext.list_configs()
+supported_config_types = ("IntImm", "runtime.String")
+supported_configs = [
+name for name in all_configs.keys() if all_configs[name]["type"] in 
supported_config_types
+]
+pass_context_configs = {}
+
+if not input_configs:
+return {}

Review comment:
   Done.








[GitHub] [tvm] chiwwang commented on a change in pull request #8220: [DOCS] Add docs for Pass Instrument

2021-06-15 Thread GitBox


chiwwang commented on a change in pull request #8220:
URL: https://github.com/apache/tvm/pull/8220#discussion_r651525882



##
File path: docs/dev/pass_infra.rst
##
@@ -526,16 +663,93 @@ decorators and then invoke it. For more examples about 
how to customize your own
 optimization pipeline and debug Relay and tir passes, please refer to the
 `use pass infra`_ tutorial.
 
+
+.. _pass_instrument_py_frontend:
+
+Pass Instrument
+^^^
+
+A customizable framework to instrument passes is provided. ``PassInstrument`` 
classes can be registered while constructing ``PassContext``.
+
+.. code:: python
+
+@tvm._ffi.register_object("transform.PassContext")
+class PassContext(tvm.runtime.Object):
+def __init__(
+self,
+opt_level=2,
+required_pass=None,
+disabled_pass=None,
+instruments=None,
+config=None,
+):
+# ...
+
+One can implement a ``PassInstrument`` by using the ``pass_instrument`` 
decorator(`python/tvm/ir/instrument.py`_) on a class implementing following 
methods:

Review comment:
   Done

##
File path: docs/dev/pass_infra.rst
##
@@ -389,6 +396,136 @@ To allow other C++ modules to apply this pass, we declare 
a free function in
 
 TVM_DLL Pass FoldConstant();
 
+.. _pass_instrument_cpp_backend:
+
+Pass Instrument
+^^^
+
+Currently we introduce four instrument point in the life-cycle of 
``PassContext``.

Review comment:
   Done
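   As a concrete illustration of those instrument points, a minimal sketch (method names as listed in the doc above; assumes the `pass_instrument` decorator wraps a plain class):
   
   ```python
   import tvm
   from tvm import relay
   from tvm.ir.instrument import pass_instrument
   
   @pass_instrument
   class PrintBeforeAfter:
       def enter_pass_ctx(self):
           print("entering PassContext")
   
       def exit_pass_ctx(self):
           print("exiting PassContext")
   
       def run_before_pass(self, mod, info):
           print("before pass:", info.name)
   
       def run_after_pass(self, mod, info):
           print("after pass:", info.name)
   
   mod = tvm.IRModule.from_expr(relay.Function([], relay.const(1)))
   with tvm.transform.PassContext(opt_level=3, instruments=[PrintBeforeAfter()]):
       relay.transform.InferType()(mod)
   ```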









[GitHub] [tvm] m3at commented on issue #6624: [Relay] Module mutated in-place

2021-06-15 Thread GitBox


m3at commented on issue #6624:
URL: https://github.com/apache/tvm/issues/6624#issuecomment-861244871


   Understood. Thanks for checking  






[GitHub] [tvm] Beya2019 commented on pull request #8235: [TVMSCRIPT] add more type support in script function parameter

2021-06-15 Thread GitBox


Beya2019 commented on pull request #8235:
URL: https://github.com/apache/tvm/pull/8235#issuecomment-861209760


   > Can you also add `int16`, `int64`, and `float64`?
   
   OK. The supported types are now as follows:
   ```
   int8 = ConcreteType("int8")
   int16 = ConcreteType("int16")
   int32 = ConcreteType("int32")
   int64 = ConcreteType("int64")
   float16 = ConcreteType("float16")
   float32 = ConcreteType("float32")
   float64 = ConcreteType("float64")
   bool = ConcreteType("bool")
   handle = ConcreteType("handle")
   ```
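   For example, a scalar parameter can then be annotated with one of the new types (a sketch; assumes the `ty.`-annotation TVM script syntax of this TVM version):
   
   ```python
   import tvm
   from tvm import tir
   from tvm.script import ty
   
   @tvm.script.tir
   def func(a: ty.handle, n: ty.int64) -> None:
       A = tir.match_buffer(a, (16,), "float64")
       tir.evaluate(n)
   ```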

