(tvm) branch nightly updated (6a3fadc065 -> 268d15c987)

2024-02-07 Thread github-bot
This is an automated email from the ASF dual-hosted git repository.

github-bot pushed a change to branch nightly
in repository https://gitbox.apache.org/repos/asf/tvm.git


from 6a3fadc065 [Unity][Transform] Handle `call_tir_inplace` in `FuseTIR` 
and `FuseOps` (#16487)
 add 2dcf9ec5a6 [Keras] Enable Dense operator for any input dims (#16526)
 add 268d15c987 [CI] Fix CI Script and Broken Tests (#16521)

No new revisions were added by this update.

Summary of changes:
 python/tvm/__init__.py |   6 +
 python/tvm/contrib/debugger/debug_executor.py  |   4 +-
 python/tvm/meta_schedule/utils.py  |  10 +-
 python/tvm/relay/frontend/keras.py |  14 +-
 src/arith/iter_affine_map.cc   |   9 +
 src/tir/ir/tir_visitor_with_path.cc|  10 ++
 src/tir/transforms/lower_tvm_builtin.cc|  19 ++-
 tests/python/codegen/test_target_codegen_cuda.py   |   2 +
 tests/python/frontend/keras/test_forward.py|  10 ++
 ...meta_schedule_mma_m16n8k8_auto_tensorization.py | 158 +
 ...est_meta_schedule_postproc_rewrite_tensorize.py |   2 +-
 .../test_meta_schedule_schedule_rule_mlt_tc.py | 126 +++---
 .../test_meta_schedule_trace_apply.py  | 190 +
 tests/python/runtime/test_runtime_trace.py |   8 +-
 tests/python/te/test_te_create_primfunc.py |   8 +-
 .../test_tir_analysis_verify_well_formed.py|   3 +-
 tests/python/tir-base/test_debug_info.py   |   6 +-
 .../tir-schedule/test_tir_schedule_rfactor.py  |   1 +
 tests/scripts/task_python_unittest.sh  |   2 +-
 19 files changed, 266 insertions(+), 322 deletions(-)



Re: [PR] [Unity][TVMScript] Optionally hide StructInfo that can be inferred [tvm]

2024-02-07 Thread via GitHub


Lunderberg commented on PR #16356:
URL: https://github.com/apache/tvm/pull/16356#issuecomment-1933248740

   Sounds good, and thank you for the review!  Before merging, I'm going to 
rebase onto main, since the CI results look a bit out of date.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [Unity][TVMScript] Optionally hide StructInfo that can be inferred [tvm]

2024-02-07 Thread via GitHub


Lunderberg commented on code in PR #16356:
URL: https://github.com/apache/tvm/pull/16356#discussion_r1482323270


##
src/script/printer/relax/utils.h:
##
@@ -82,10 +84,47 @@ inline Optional<ExprDoc> StructInfoAsAnn(const relax::Var& 
v, const ObjectPath&
   if (!v->struct_info_.defined()) {
 return NullOpt;
   }
+  bool attempt_to_hide_struct_info = !d->cfg->show_all_struct_info;
+
  if (const auto* call = rhs.as<relax::CallNode>()) {
 static const Op& call_tir_op = Op::Get("relax.call_tir");
 static const Op& call_dps_packed_op = Op::Get("relax.call_dps_packed");
 if (call->op.same_as(call_tir_op) || call->op.same_as(call_dps_packed_op)) 
{
+  attempt_to_hide_struct_info = true;
+}
+  }
+  if (attempt_to_hide_struct_info) {

Review Comment:
   Yeah, probably overkill, but I tend to be more okay with overkill for 
non-default behavior.  Good point on the comparison with normalizing on each 
pass.






Re: [PR] [Runtime] Add "TVM_DLL" to NDArray cache load func [tvm]

2024-02-07 Thread via GitHub


MasterJH5574 commented on PR #16541:
URL: https://github.com/apache/tvm/pull/16541#issuecomment-1933223239

   @tvm-bot rerun





Re: [PR] [TIR] Expand debug symbol output for CodeGenLLVM [tvm]

2024-02-07 Thread via GitHub


Lunderberg commented on PR #16544:
URL: https://github.com/apache/tvm/pull/16544#issuecomment-1933217085

   This came about while debugging the implementation of 
https://github.com/apache/tvm/pull/16542, but is otherwise unrelated.





Re: [PR] [TIR] Fix segfaults from ordering of Let/Assert in MakePackedAPI [tvm]

2024-02-07 Thread via GitHub


Lunderberg commented on PR #16543:
URL: https://github.com/apache/tvm/pull/16543#issuecomment-1933217017

   This came about while debugging the implementation of 
https://github.com/apache/tvm/pull/16542, but is otherwise unrelated.





[PR] [TIR] Fix segfaults from ordering of Let/Assert in MakePackedAPI [tvm]

2024-02-07 Thread via GitHub


Lunderberg opened a new pull request, #16543:
URL: https://github.com/apache/tvm/pull/16543

   Prior to this commit, the `MakePackedAPI` pass would output steps in the 
following order:
   
   1. Check the number of arguments.
   2. All `LetStmt` produced by the `ArgBinder`.
   3. `AssertStmt` for the type-code checks of each argument.
   4. Additional `AssertStmt` produced by the `ArgBinder`.
   
   This order can cause segfaults if a function was provided incorrect 
arguments.  For example, an integer argument passed to a function expecting a 
`DLTensor*` would be dereferenced to find the tensor's data pointer (step (2)) 
before checking if it is valid to perform that dereference (step (3)).  The 
same would occur when reading the size of a tensor's axes (step (2)) before 
checking whether the tensor is the correct dimensionality (step (4)).
   
   This commit updates the steps to the following order.
   
   1. Check the number of arguments.
   2. Check the type code of each argument.
   3. All `LetStmt` and `AssertStmt` produced by the `ArgBinder`, in the order 
in which they are generated.
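
   The safe ordering can be illustrated with a plain-Python sketch (all names 
here are illustrative stand-ins, not the actual TIR lowering):

   ```python
   def call_packed(args, expected_type_codes):
       """Sketch of the validation order the updated MakePackedAPI emits.

       `args` is a list of (type_code, value) pairs; `expected_type_codes`
       describes what the function expects.
       """
       # Step 1: check the number of arguments before touching any of them.
       if len(args) != len(expected_type_codes):
           raise TypeError("wrong number of arguments")
       # Step 2: check every type code before any value is dereferenced,
       # so an integer is never treated as a DLTensor* and dereferenced.
       for i, (code, _) in enumerate(args):
           raise TypeError(f"argument {i}: unexpected type code {code}") \
               if code != expected_type_codes[i] else None
       # Step 3: only now run the ArgBinder's LetStmt/AssertStmt equivalents,
       # in generation order, binding values known to be safe to read.
       return [value for _, value in args]
   ```

   With this ordering, a bad type code is reported as an error instead of 
triggering an invalid dereference.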





[PR] [Relax] Support callback as argument [tvm]

2024-02-07 Thread via GitHub


Lunderberg opened a new pull request, #16542:
URL: https://github.com/apache/tvm/pull/16542

   Prior to this commit, calls from Relax to external PackedFuncs could only be 
made through the TVM global registry.  While Relax functions accepting a 
callback could be written as `callback_arg: R.Callable(arg_struct_info, 
ret_struct_info)`, attempting to compile these functions would raise an error 
during the `CodeGenVM` step of `relax.build`.  In addition, the global registry 
is only queried when initializing the `relax.VirtualMachine`, so later changes 
require restarting the VM.
   
   This commit updates both the `CodeGenVM` lowering pass and the relax VM to 
support callbacks.  The motivating use case is with the `LazyTransformParams` 
pass, to improve flexibility by avoiding use of the global registry.
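
   The difference can be sketched in plain Python (a conceptual analogy only, 
not TVM's actual API): with a global registry, the callee is resolved by name 
once at VM initialization, so replacing it later has no effect, while a 
callback argument travels with each call.

   ```python
   # Global-registry style: the function is resolved by name once, at init.
   REGISTRY = {}

   def register_func(name, func):
       REGISTRY[name] = func

   class RegistryVM:
       def __init__(self, func_name):
           # Resolved a single time; later registry updates are invisible.
           self.func = REGISTRY[func_name]

       def run(self, x):
           return self.func(x)

   # Callback-argument style: the function travels with the call itself.
   def run_with_callback(callback, x):
       return callback(x)

   register_func("transform", lambda x: x + 1)
   vm = RegistryVM("transform")
   register_func("transform", lambda x: x * 10)  # a later change
   assert vm.run(3) == 4                         # VM still sees the old one
   assert run_with_callback(lambda x: x * 10, 3) == 30
   ```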





(tvm) branch main updated (2dcf9ec5a6 -> 268d15c987)

2024-02-07 Thread tqchen

tqchen pushed a change to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git


from 2dcf9ec5a6 [Keras] Enable Dense operator for any input dims (#16526)
 add 268d15c987 [CI] Fix CI Script and Broken Tests (#16521)

No new revisions were added by this update.

Summary of changes:
 python/tvm/__init__.py |   6 +
 python/tvm/contrib/debugger/debug_executor.py  |   4 +-
 python/tvm/meta_schedule/utils.py  |  10 +-
 src/arith/iter_affine_map.cc   |   9 +
 src/tir/ir/tir_visitor_with_path.cc|  10 ++
 src/tir/transforms/lower_tvm_builtin.cc|  19 ++-
 tests/python/codegen/test_target_codegen_cuda.py   |   2 +
 ...meta_schedule_mma_m16n8k8_auto_tensorization.py | 158 +
 ...est_meta_schedule_postproc_rewrite_tensorize.py |   2 +-
 .../test_meta_schedule_schedule_rule_mlt_tc.py | 126 +++---
 .../test_meta_schedule_trace_apply.py  | 190 +
 tests/python/runtime/test_runtime_trace.py |   8 +-
 tests/python/te/test_te_create_primfunc.py |   8 +-
 .../test_tir_analysis_verify_well_formed.py|   3 +-
 tests/python/tir-base/test_debug_info.py   |   6 +-
 .../tir-schedule/test_tir_schedule_rfactor.py  |   1 +
 tests/scripts/task_python_unittest.sh  |   2 +-
 17 files changed, 248 insertions(+), 316 deletions(-)



Re: [PR] [CI] Fix CI Script and Broken Tests [tvm]

2024-02-07 Thread via GitHub


tqchen merged PR #16521:
URL: https://github.com/apache/tvm/pull/16521





[PR] [Upd] Enable lld search to include /opt/rocm/llvm/bin for rocm [tvm]

2024-02-07 Thread via GitHub


shreygupta2809 opened a new pull request, #16540:
URL: https://github.com/apache/tvm/pull/16540

   Closes [#1216](https://github.com/mlc-ai/mlc-llm/issues/1216) and 
[#1614](https://github.com/mlc-ai/mlc-llm/issues/1614)
   @tqchen





Re: [PR] Improve error message in NDArray::CopyFromTo [tvm]

2024-02-07 Thread via GitHub


ekalda commented on PR #16539:
URL: https://github.com/apache/tvm/pull/16539#issuecomment-1932248856

   cc @lhutton1 @eirenevp 





[PR] Improve error message in NDArray::CopyFromTo [tvm]

2024-02-07 Thread via GitHub


ekalda opened a new pull request, #16539:
URL: https://github.com/apache/tvm/pull/16539

   Make it explicit that the quoted numbers are bytes.





Re: [PR] [SVE] Change the dtype of Ramp and Broadcast lanes to PrimExpr [tvm]

2024-02-07 Thread via GitHub


tqchen commented on code in PR #16523:
URL: https://github.com/apache/tvm/pull/16523#discussion_r1481512533


##
include/tvm/runtime/data_type.h:
##
@@ -114,17 +118,28 @@ class DataType {
   /*! \return whether type is a handle type. */
   bool is_handle() const { return code() == DataType::kHandle && !is_void(); }
   /*! \return whether type is a vector type. */
-  bool is_vector() const { return lanes() > 1; }
+  bool is_vector() const {
+    int encoded_lanes = static_cast<int>(data_.lanes);
+    return encoded_lanes != 0 && encoded_lanes != 1;
+  }
   /*! \return whether type is a bool vector type. */
   bool is_vector_bool() const { return is_vector() && bits() == 1; }
   /*! \return whether type is a Void type. */
   bool is_void() const { return code() == DataType::kHandle && bits() == 0 && 
lanes() == 0; }
+  /*! \return Whether the type is scalable. */
+  bool is_scalable() const { return static_cast<int>(data_.lanes) < 0; }
   /*!
* \brief Create a new data type by changing lanes to a specified value.
* \param lanes The target number of lanes.
* \return the result type.
*/
   DataType with_lanes(int lanes) const { return DataType(data_.code, 
data_.bits, lanes); }
+  /*!
+   * \brief Create a new scalable data type by changing the lanes to a 
specified value.
+   * \param lanes The target number of lanes.

Review Comment:
   in this case, perhaps we can rename `is_vector` to 
`is_scalable_or_fixed_length_vector()` to be more explicit.
   
   Otherwise LGTM






[I] [BUG] List index out of range compiling `test_tflite_large_irregular` on `arm_cpu` target [tvm]

2024-02-07 Thread via GitHub


lhutton1 opened a new issue, #16538:
URL: https://github.com/apache/tvm/issues/16538

   ### Expected behaviour:
   The test 
`tests/python/relay/test_op_qnn_conv2d.py:test_tflite_large_irregular` runs 
successfully when the target is `arm_cpu`. 
   
   ### Actual behaviour:
   The test fails to run and gives the following error:
   ```
   python/tvm/autotvm/task/space.py:736: in define_split
   return self._add_new_transform(SplitSpace, name, axes, policy, **kwargs)
   python/tvm/autotvm/task/space.py:1132: in _add_new_transform
   self._entity_map[name] = space[0]
   _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
_ _
   
   self = Split(policy=factors, product=1001, num_outputs=2) len=0, index = 0
   
   def __getitem__(self, index):
   """Get an entity of the space by index
   
   Parameters
    ----------
   index: int
   
   Returns
    -------
   transform entity
   """
   >   return self.entities[index]
   E   IndexError: list index out of range
   
   python/tvm/autotvm/task/space.py:93: IndexError
   ```
   
   ### Environment:
   Tested with TVM at 
https://github.com/apache/tvm/commit/6a3fadc0654ecf9557ffe08d24677684c96e80b0. 
The issue was found as a result of the changes in 
https://github.com/apache/tvm/pull/16513, however it can be reproduced without 
as described below.
   
   ### How to reproduce:
   Run `pytest tests/python/relay/test_op_qnn_conv2d.py -k 
test_tflite_large_irregular` with an `arm_cpu` target. Note: Reminder to remove 
any skip condition that exists in the test currently.
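
   The traceback reduces to indexing into an empty candidate list: 
`define_split` built a `Split` space with no entities (`len=0`) for 
`product=1001`, and `space[0]` then fails. A toy reproduction of just the 
failure mode (illustrative class, not the real autotvm code):

   ```python
   class SplitSpace:
       """Toy stand-in for autotvm's split space: a list of candidates."""

       def __init__(self, entities):
           self.entities = entities

       def __getitem__(self, index):
           # Mirrors the failing line in python/tvm/autotvm/task/space.py.
           return self.entities[index]

   empty = SplitSpace([])  # no valid factor splits were generated
   try:
       empty[0]
   except IndexError:
       pass  # this is the "list index out of range" from the report
   ```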





Re: [PR] [SVE] Change the dtype of Ramp and Broadcast lanes to PrimExpr [tvm]

2024-02-07 Thread via GitHub


ekalda commented on code in PR #16523:
URL: https://github.com/apache/tvm/pull/16523#discussion_r1481340370


##
src/arith/scalable_expression.cc:
##
@@ -0,0 +1,62 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file tvm/arith/scalable_expression.cc
+ * \brief Analyze scalable expressions.
+ */
+
+#include "scalable_expression.h"
+
+#include 
+#include 
+
+#include "../tir/transforms/replace_selected_expr.h"
+#include "./pattern_match.h"
+
+namespace tvm {
+namespace arith {
+
+bool IsVScaleCall(const PrimExpr& expr) {
+  if (auto call = expr.as<tir::CallNode>()) {
+return call->op.same_as(tir::builtin::vscale());
+  }
+  return false;
+}
+
+PrimExpr CanonicalizeScalableLanes(const PrimExpr& lanes) {
+  PVar multiplier;
+  PCallExpr vscale;
+
+  PrimExpr new_lanes;
+
+  if ((multiplier * vscale).Match(lanes)) {
+new_lanes = lanes;
+  } else if ((vscale * multiplier).Match(lanes)) {
+new_lanes =

Review Comment:
   > Is this canonicalization necessary? It seems like it would occur by 
default when applying the simplification passes. If it is necessary, we should 
match the behavior of the simplifier, with constants collected to the RHS of 
expressions.
   
   I suppose this canonicalization is not strictly necessary, but it has proven 
useful in things like testing the scalable Ramps in isolation. It shouldn't be 
a lot of computational overhead, so I think it wouldn't hurt to force the 
order. But yes, let's go for `vscale * 4` to match the behaviour of the rest of 
the stack.
   
   > I'd recommend instead having a `std::optional<int> 
ExtractScalableLanes(const PrimExpr& lanes)` method. That way, the calling scope 
could use if(auto scale = ExtractScalableLanes(lanes)) { ... }, to immediately 
have access to the extracted value. As it is, the two uses of this function 
need to first check if the expression is scalable, then manually extract the 
scale factor.
   
   ```
   if (PMatchesOneOf{
 multiplier * vscale,
 vscale * multiplier,
   }.Match(lanes)) {
 return vscale.Eval()->value;
   } else  {
 return std::nullopt;
   }
   ```
   
   Thanks, yes that sounds much better :) 
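
   The agreed helper can be sketched in Python over a tiny expression tree 
(hypothetical classes standing in for TVM's `PrimExpr` nodes and pattern 
matchers):

   ```python
   from dataclasses import dataclass
   from typing import Optional, Union

   @dataclass
   class VScale:
       """Stand-in for a call to the tir.vscale() builtin."""

   @dataclass
   class Mul:
       """Stand-in for a PrimExpr multiplication node."""
       a: Union["Mul", "VScale", int]
       b: Union["Mul", "VScale", int]

   def extract_scalable_lanes(lanes) -> Optional[int]:
       """Return the multiplier k when `lanes` is `k * vscale` or
       `vscale * k`, else None -- mirroring the optional-returning helper."""
       if isinstance(lanes, Mul):
           if isinstance(lanes.a, int) and isinstance(lanes.b, VScale):
               return lanes.a
           if isinstance(lanes.a, VScale) and isinstance(lanes.b, int):
               return lanes.b
       return None
   ```

   The caller can then write `if (scale := extract_scalable_lanes(lanes))` and 
use the value immediately, instead of checking scalability and extracting in 
two steps.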






Re: [PR] [SVE] Change the dtype of Ramp and Broadcast lanes to PrimExpr [tvm]

2024-02-07 Thread via GitHub


ekalda commented on code in PR #16523:
URL: https://github.com/apache/tvm/pull/16523#discussion_r1481333743


##
src/arith/rewrite_simplify.h:
##
@@ -221,6 +221,8 @@ class RewriteSimplifier::Impl : public 
IRMutatorWithAnalyzer {
   bool CanProveGreaterEqual(const PrimExpr& x, int64_t val) {
 return analyzer_->CanProveGreaterEqual(x, val);
   }
+  // Whether the lanes are scalable
+  bool ScalableLanes(const PrimExpr& lanes) { return !lanes.as<IntImmNode>(); }

Review Comment:
   Agreed, I'll change it to explicitly check for `vscale`.






Re: [PR] [SVE] Change the dtype of Ramp and Broadcast lanes to PrimExpr [tvm]

2024-02-07 Thread via GitHub


ekalda commented on code in PR #16523:
URL: https://github.com/apache/tvm/pull/16523#discussion_r1481333106


##
src/arith/int_set.cc:
##
@@ -466,14 +466,21 @@ class IntervalSetEvaluator : public 
ExprFunctor {
 if (stride.Match(op->stride)) {
   DataType t = op->base.dtype();
   int64_t vstride = stride.Eval()->value;
-  if (vstride > 0) {
-return Combine(analyzer_, base,
-IntervalSet(make_zero(t), make_const(t, vstride * 
(op->lanes - 1))),
-op->dtype);
-  } else {
-return Combine(analyzer_, base,
-IntervalSet(make_const(t, vstride * (op->lanes - 
1)), make_zero(t)),
-op->dtype);
+  if (op->lanes->IsInstance<IntImmNode>()) {
+    int lanes = static_cast<int>(Downcast<IntImm>(op->lanes)->value);
+if (vstride > 0) {
+  return Combine(analyzer_, base,
+  IntervalSet(make_zero(t), make_const(t, vstride 
* (lanes - 1))),
+  op->dtype);
+} else {
+  return Combine(analyzer_, base,
+  IntervalSet(make_const(t, vstride * (lanes - 
1)), make_zero(t)),
+  op->dtype);
+}
+  } else { /* Scalable vector */
+if (vstride > 0) {
+  return Combine(analyzer_, base, IntervalSet(make_zero(t), 
pos_inf()), op->dtype);
+}

Review Comment:
   Yeah we probably should :) 






Re: [PR] [SVE] Change the dtype of Ramp and Broadcast lanes to PrimExpr [tvm]

2024-02-07 Thread via GitHub


ekalda commented on code in PR #16523:
URL: https://github.com/apache/tvm/pull/16523#discussion_r1481332421


##
include/tvm/runtime/data_type.h:
##
@@ -114,17 +118,28 @@ class DataType {
   /*! \return whether type is a handle type. */
   bool is_handle() const { return code() == DataType::kHandle && !is_void(); }
   /*! \return whether type is a vector type. */
-  bool is_vector() const { return lanes() > 1; }
+  bool is_vector() const {
+    int encoded_lanes = static_cast<int>(data_.lanes);
+    return encoded_lanes != 0 && encoded_lanes != 1;
+  }
   /*! \return whether type is a bool vector type. */
   bool is_vector_bool() const { return is_vector() && bits() == 1; }
   /*! \return whether type is a Void type. */
   bool is_void() const { return code() == DataType::kHandle && bits() == 0 && 
lanes() == 0; }
+  /*! \return Whether the type is scalable. */
+  bool is_scalable() const { return static_cast<int>(data_.lanes) < 0; }
   /*!
* \brief Create a new data type by changing lanes to a specified value.
* \param lanes The target number of lanes.
* \return the result type.
*/
   DataType with_lanes(int lanes) const { return DataType(data_.code, 
data_.bits, lanes); }
+  /*!
+   * \brief Create a new scalable data type by changing the lanes to a 
specified value.
+   * \param lanes The target number of lanes.

Review Comment:
   Thanks @tqchen, thinking about it, you're right, the "lanes" of the scalable 
vectors in the current implementation is a bit of a misnomer and in general 
causes a lot of issues where the scalability is silently ignored or dropped in 
the passes. So I'm in favour of separating the APIs for fixed length and 
scalable vectors. Here's a proposal for cleaning it up (it should address your 
other comments in this file as well):
   
   * Rename `is_scalable()` -> `is_scalable_vector()` - return `True` if it is 
scalable vector, `False` otherwise
   * Add `is_fixed_length_vector()` method to check if it is a fixed length 
vector
   * `is_vector()` should return `True` if it is a vector (scalable or fixed 
length) and `False` otherwise
   * Reserve `lanes()` for fixed length vectors, i.e. if this function is 
called on a scalable vector, return an error
   * Add `vscale_factor()` that returns the integer multiplier (so it would be 
a scalable vector equivalent of `lanes()`)
   * Rename `with_scalable_lanes(lanes)` -> 
`with_scalable_vscale_factor(vscale_factor)`
   
   
   Throughout the codebase `dtype.lanes() != 1` is used as a shorthand to test 
for "vectorness", it would be good to replace these instances with 
`is_vector()`/`is_fixed_length_vector()`/`is_scalable_vector()` as appropriate.
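
   The proposed split of the API can be sketched as a small Python model of 
the lanes encoding (a negative lane count encodes a scalable vector, as in the 
C++ hunk above; the names follow the proposal, the model itself is 
hypothetical):

   ```python
   class DataType:
       """Toy model of the proposed lanes API for fixed vs scalable vectors."""

       def __init__(self, bits, lanes):
           self.bits = bits
           self._lanes = lanes  # negative value encodes a scalable vector

       def is_scalable_vector(self):
           return self._lanes < 0

       def is_fixed_length_vector(self):
           return self._lanes > 1

       def is_vector(self):
           return self.is_fixed_length_vector() or self.is_scalable_vector()

       def lanes(self):
           # Reserved for fixed-length vectors (and scalars).
           if self.is_scalable_vector():
               raise ValueError("lanes() is undefined for scalable vectors")
           return self._lanes

       def vscale_factor(self):
           # Scalable-vector equivalent of lanes(): the vscale multiplier.
           if not self.is_scalable_vector():
               raise ValueError("vscale_factor() needs a scalable vector")
           return -self._lanes
   ```

   Under this split, `dtype.lanes() != 1` checks become explicit calls to 
`is_vector()` or its fixed/scalable variants, and silently dropping 
scalability raises an error instead.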






[I] [BUG] NCHW4c is an unsupported convolution layout for `arm_cpu` [tvm]

2024-02-07 Thread via GitHub


lhutton1 opened a new issue, #16537:
URL: https://github.com/apache/tvm/issues/16537

### Expected behaviour:
When compiled with target `arm_cpu` the model should compile successfully.
   
   ### Actual behaviour:
   The test fails to run and gives the following error:
   ```
   def conv2d_strategy_arm_cpu(attrs, inputs, out_type, target):
   ...
   else:
   >   raise RuntimeError(f"Unsupported conv2d layout {layout} for 
arm cpu")
   E   RuntimeError: Unsupported conv2d layout NCHW4c for arm cpu
   
   python/tvm/relay/op/strategy/arm_cpu.py:273: RuntimeError
   ```
   
   ### Environment:
   Tested with TVM at 
https://github.com/apache/tvm/commit/6a3fadc0654ecf9557ffe08d24677684c96e80b0. 
The issue was found as a result of the changes in 
https://github.com/apache/tvm/pull/16513, however it can be reproduced without 
as described below.
   
   
   ### How to reproduce:
   ```
   pytest tests/python/relay/test_pass_alter_op_layout.py -k 
test_alter_layout_nonscalar_broadcast
   pytest tests/python/relay/test_pass_alter_op_layout.py -k 
test_alter_layout_blocked_no_broadcast
   pytest tests/python/relay/test_pass_alter_op_layout.py -k 
test_alter_layout_blocked_broadcast
   pytest tests/python/relay/test_pass_alter_op_layout.py -k 
test_alter_layout_re_blocking_broadcast
   pytest tests/python/relay/test_pass_alter_op_layout.py -k 
test_broadcast_non_adaptable
   pytest tests/python/relay/test_pass_alter_op_layout.py -k 
test_broadcast_respect_input_layouts
   ```
   Run any of the above tests with an `arm_cpu` target. Note: Reminder to 
remove any skip condition that exists in the test currently.
   
   
  
   
   
   
   





Re: [PR] [Keras] Enable Dense operator for any input dims [tvm]

2024-02-07 Thread via GitHub


masahi merged PR #16526:
URL: https://github.com/apache/tvm/pull/16526





(tvm) branch main updated: [Keras] Enable Dense operator for any input dims (#16526)

2024-02-07 Thread masahi

masahi pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git


The following commit(s) were added to refs/heads/main by this push:
 new 2dcf9ec5a6 [Keras] Enable Dense operator for any input dims (#16526)
2dcf9ec5a6 is described below

commit 2dcf9ec5a6d6873cee0461d4c3f3e6990916e020
Author: Egor Churaev 
AuthorDate: Wed Feb 7 13:40:21 2024 +0300

[Keras] Enable Dense operator for any input dims (#16526)

    Our dense op expects 2D, but there are no limitations in Keras on the
shape of the input tensor. Reshaping of all "batch" axes into one was
added in this commit. After that, it is possible to import Dense layer
with ND input tensor from Keras to TVM.
---
 python/tvm/relay/frontend/keras.py  | 14 --
 tests/python/frontend/keras/test_forward.py | 10 ++
 2 files changed, 18 insertions(+), 6 deletions(-)

diff --git a/python/tvm/relay/frontend/keras.py 
b/python/tvm/relay/frontend/keras.py
index 2186208994..d53647cc68 100644
--- a/python/tvm/relay/frontend/keras.py
+++ b/python/tvm/relay/frontend/keras.py
@@ -266,11 +266,12 @@ def _convert_dense(
 # In case of RNN dense, input shape will be (1, 1, n)
 if input_dim > 2:
 input_shape = tuple(dim if dim else 1 for dim in 
_as_list(input_shape)[0])
-if input_dim != 3 or input_shape[0] != 1 or input_shape[1] != 1:
-raise tvm.error.OpAttributeInvalid(
-f"Input shape {input_shape} is not valid for operator Dense."
-)
-inexpr = _op.squeeze(inexpr, axis=[0])
+# Keras has no limitations on the shape of the input tensor. But our
+# dense op expects 2D input. For inputs with more than two dimensions,
+# all "batch" axes are collapsed into one.
+# For example: (N, d1, d2, d3) -> (N * d1 * d2, d3)
+new_batch_size = np.prod(input_shape[:-1])
+inexpr = _op.reshape(inexpr, newshape=(new_batch_size, 
input_shape[-1]))
 out = _op.nn.dense(data=inexpr, **params)
 if keras_layer.use_bias:
 bias = etab.new_const(weightList[1])
@@ -283,7 +284,8 @@ def _convert_dense(
 if act_type != "linear":
 out = _convert_activation(out, act_type, etab, data_layout)
 if input_dim > 2:
-out = _op.expand_dims(out, axis=0)
+out_shape = (*input_shape[:-1], units)
+out = _op.reshape(out, newshape=out_shape)
 return out
 
 
diff --git a/tests/python/frontend/keras/test_forward.py 
b/tests/python/frontend/keras/test_forward.py
index aef137e634..0d05e34a15 100644
--- a/tests/python/frontend/keras/test_forward.py
+++ b/tests/python/frontend/keras/test_forward.py
@@ -285,6 +285,16 @@ class TestKeras:
 keras_model = keras_mod.models.Model(data, x)
 verify_keras_frontend(keras_model, need_transpose=False)
 
+data = keras_mod.layers.Input(shape=(120, 2560), name="image_set")
+x = keras_mod.layers.Dense(1, activation="linear", name="e")(data)
+keras_model = keras_mod.models.Model(data, x)
+verify_keras_frontend(keras_model, need_transpose=False)
+
+data = keras_mod.layers.Input(shape=(10, 12, 2560), name="image_set")
+x = keras_mod.layers.Dense(32, activation="linear", name="e")(data)
+keras_model = keras_mod.models.Model(data, x)
+verify_keras_frontend(keras_model, need_transpose=False)
+
 def test_forward_permute(self, keras_mod):
 data = keras_mod.layers.Input(shape=(2, 3, 4))
 x = keras_mod.layers.Permute([2, 3, 1])(data)
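
The reshape trick in the commit can be sketched with NumPy, plain arrays 
standing in for the Relay ops (`dense_nd` is an illustrative name, not part 
of the frontend):

```python
import numpy as np

def dense_nd(x, w):
    """Apply a 2-D dense layer (weights w of shape (units, in_dim)) to an
    N-D input by collapsing the leading "batch" axes, as the commit does."""
    *batch, in_dim = x.shape
    flat = x.reshape(int(np.prod(batch)), in_dim)  # (N*d1*..., in_dim)
    out = flat @ w.T                               # the 2-D dense op
    return out.reshape(*batch, w.shape[0])         # restore the batch axes

x = np.ones((2, 10, 12, 2560))
w = np.ones((32, 2560))
assert dense_nd(x, w).shape == (2, 10, 12, 32)
```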



[I] [BUG] Schedule that doesn't support dynamic height/width is selected when compiling convolution for `arm_cpu` [tvm]

2024-02-07 Thread via GitHub


lhutton1 opened a new issue, #16536:
URL: https://github.com/apache/tvm/issues/16536

   ### Expected behaviour:
   When an `arm_cpu` target is used, the model should compile successfully 
without an error.
   
   ### Actual behaviour:
   When compiled on an `arm_cpu` target, the models results in the following 
error:
   ```
   python/tvm/autotvm/task/topi_integration.py:165: in wrapper
   node = topi_compute(cfg, *args)
   python/tvm/topi/arm_cpu/conv2d.py:50: in conv2d_nchw_spatial_pack
   return conv2d_spatial_pack_nchw(
   
   def conv2d_spatial_pack_nchw(cfg, data, kernel, strides, padding, dilation, 
out_dtype, num_tile):
   """compute define for Conv2d Spatial Pack with NCHW layout"""
   out_dtype = out_dtype or data.dtype
   N, CI, IH, IW = get_const_tuple(data.shape)
   if isinstance(N, tvm.tir.Any):
   N = tvm.te.size_var("n")
   if not isinstance(IH, int) or not isinstance(IW, int):
   >   raise RuntimeError("ARM winograd conv2d doesn't support dynamic 
input height or width.")
   E   RuntimeError: ARM winograd conv2d doesn't support dynamic input 
height or width.
   ```
   
   ### Environment:
   Tested with TVM at 
https://github.com/apache/tvm/commit/6a3fadc0654ecf9557ffe08d24677684c96e80b0. 
The issue was found as a result of the changes in 
https://github.com/apache/tvm/pull/16513, however it can be reproduced without 
as described below.
   
   ### How to reproduce:
   Run the following tests:
   - pytest tests/python/relay/test_any.py -k test_any_conv2d
   
   with an `arm_cpu` target. Note: Reminder to remove any skip condition that 
exists in the test currently.
   
   ---
   Likely the schedule selection in `relay/strategy/arm_cpu.py` needs to be fixed 
so that it does not select a schedule that doesn't support dynamic height/width.

