Lunderberg commented on code in PR #77:
URL: https://github.com/apache/tvm-rfcs/pull/77#discussion_r892859018


##########
rfcs/0077-layout-transform-padding.md:
##########
@@ -0,0 +1,2540 @@
+- Feature Name: Layout Transformation Padding Roadmap
+- Authors: [Eric Lunderberg](https://github.com/Lunderberg/),
+           [Chris Sullivan](https://github.com/csullivan),
+           [Wuwei Lin](https://github.com/vinx13/),
+           [Junru Shao](https://github.com/junrushao1994)
+- Start Date: 2022-06-06
+- RFC PR: [apache/tvm-rfcs#0077](https://github.com/apache/tvm-rfcs/pull/0077)
+- GitHub Issue: TBD
+
+# Table of contents
+- [Table of contents](#table-of-contents)
+- [Summary](#summary)
+- [Motivation](#motivation)
+- [Guide-level explanation](#guide-level-explanation)
+  - [Padded Transformations](#padded-transformations)
+  - [Defining Padded Values](#defining-padded-values)
+  - [Overcompute vs Branching](#overcompute-vs-branching)
+- [Reference-level explanation](#reference-level-explanation)
+  - [TIR Changes](#tir-changes)
+    - [Buffer Annotation of Padding Predicate/Constraint Pairs](#buffer-annotation-of-padding-predicateconstraint-pairs)
+    - [New TIR Op, `tir::builtin::undef`](#new-tir-op-tirbuiltinundef)
+    - [Buffer Annotation of Layout Transforms](#buffer-annotation-of-layout-transforms)
+  - [Transformations/Metaschedule Primitives](#transformationsmetaschedule-primitives)
+    - [Enhancement - transform_layout](#enhancement---transform_layout)
+    - [New Primitive - Add buffer constraint](#new-primitive---add-buffer-constraint)
+    - [New Utility - Reorder Loops According to Buffer](#new-utility---reorder-loops-according-to-buffer)
+    - [Enhancement - Predicate for DomainTouched](#enhancement---predicate-for-domaintouched)
+    - [Enhancement - Remove No Op](#enhancement---remove-no-op)
+    - [Enhancement - Simplify](#enhancement---simplify)
+    - [New Transform - Hoist Expression](#new-transform---hoist-expression)
+    - [New Transform - Reduce Loop Extents](#new-transform---reduce-loop-extents)
+    - [Utility - Merge Adjacent Loops](#utility---merge-adjacent-loops)
+    - [New Primitive - Remove Branching Through Overcompute](#new-primitive---remove-branching-through-overcompute)
+    - [New Primitive - Remove Overcompute Through Branching](#new-primitive---remove-overcompute-through-branching)
+    - [New Lowering Transform - Remove T.Undef](#new-lowering-transform---remove-tundef)
+  - [Implementation options](#implementation-options)
+    - [Never write to transformation padding](#never-write-to-transformation-padding)
+    - [Never read from transformation padding](#never-read-from-transformation-padding)
+    - [Allocate internal buffer containing transformation padding](#allocate-internal-buffer-containing-transformation-padding)
+    - [Explicitly write next operator's desired default at end of function](#explicitly-write-next-operators-desired-default-at-end-of-function)
+    - [Implicitly write default value of next operator](#implicitly-write-default-value-of-next-operator)
+    - [Apply operator element-wise over the transformation padding](#apply-operator-element-wise-over-the-transformation-padding)
+    - [Multiple Buffer Semantics](#multiple-buffer-semantics)
+  - [Points of Communication](#points-of-communication)
+- [Drawbacks](#drawbacks)
+- [Rationale and alternatives](#rationale-and-alternatives)
+- [Prior art](#prior-art)
+- [Unresolved questions](#unresolved-questions)
+- [Future possibilities](#future-possibilities)
+
+# Summary
+[summary]: #summary
+
+Buffer layout transformations can require padding in the transformed
+buffer.  The efficiency of an operator depends on the semantics used
+for loads and stores to values in the required padding.  The choice of
+buffer semantics can reduce branch divergence and avoid repeated
+setting of default values, but also imposes constraints between the
+producer and consumer of a buffer.
+
+This RFC discusses a general plan for specifying the buffer semantics
+to be used and the constraints they impose.  Subsequent RFCs will
+describe the design for supporting each of the semantics proposed in
+this roadmap.
+
+# Motivation
+[motivation]: #motivation
+
+Suppose a buffer of shape `[14]` is transformed such that each index
+`i` is mapped to `[i//4, i%4]`.  The first index can range from 0
+(`0//4`) to 3 (`13//4`), and the second index can range from 0 (`0%4`)
+to 3 (`3%4`).  Therefore, the transformed shape is `[4,4]`.  However,
+this has 16 elements, because the transformed coordinates `(3,2)` and `(3,3)` do
+not have a corresponding index in the workload range `0 <= i < 14`.  The final
+result in these locations is not determined by the compute definition,
+so we have flexibility in what to store in the padding that is
+introduced by the transformation, and what assumptions can be made
+when reading from those locations.
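+
+As a quick check of the index arithmetic above, the padded coordinates
+can be enumerated in plain Python (this snippet is illustrative only
+and is not part of any proposed API):
+
+```python
+# Transformed coordinates produced by the valid indices 0 <= i < 14.
+covered = {(i // 4, i % 4) for i in range(14)}
+
+# Entries of the [4, 4] transformed buffer that no original index maps to.
+padding = [(io, ii) for io in range(4) for ii in range(4)
+           if (io, ii) not in covered]
+print(padding)  # [(3, 2), (3, 3)]
+```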
+
+For example, an element-wise function may be most efficiently written
+using vectorized instructions over all values, regardless of whether
+they exist in the compute definition.  Or a maxpool may be most
+efficiently written if input tensors have `-INF` stored in the
+transformation padding.  Satisfying both of these at the same time may
+not be possible.  While the compute definition doesn't impose
+constraints on the values in the transformation padding, there are
+still constraints imposed by the usage of those values by different
+operators.
+
+
+```
+ ┌─Logical-index-space───────────────────┐
+ │                                       │
+┌▼─┬──┬──┬──┬──┬──┬──┬──┬──┬──┬──┬──┬──┬─▼┌──┬──┐
+│00│01│02│03│04│05│06│07│08│09│10│11│12│13│14│15│
+└▲─┴──┴──┴──┴──┴──┴──┴──┴──┴──┴──┴──┴──┴──┴──┴─▲┘
+ │                                             │
+ └─Physical-index-space────────────────────────┘
+
+ ┌─Transformed-index-space─┐
+ │                         │
+ │      ┌────┬────┬────┬───▼┐
+ │      │ 00 │ 01 │ 02 │ 03 │
+ │      ├────┼────┼────┼────┤
+ │      │ 04 │ 05 │ 06 │ 07 │
+ │      ├────┼────┼────┼────┤
+ │      │ 08 │ 09 │ 10 │ 11 │
+ │      ├────┼────┼────┼────┤
+ └──────► 12 │ 13 │ 14 │ 15 │
+        └────┴────┴────┴────┘
+```
+
+# Guide-level explanation
+[guide-level-explanation]: #guide-level-explanation
+
+## Padded Transformations
+
+In general, a transformation will introduce the minimum amount of
+padding such that all values in the original buffer can be stored in
+the layout specified.  As a result, whether a transformation
+introduces padding depends on the transformation being applied and the
+buffer shape on which it is being applied.  For example, consider a
+schedule that contains tensor `A` with shape `[16]` and tensor `B` with shape
+`[14]`.
+
+```python
+# This transformation does not introduce padding.  The original shape
+# of [16] produces the transformed shape [2,8], which contains the
+# original 16 values with no additional padding.
+sched[A].transform_layout(lambda i: [i//8, i%8])
+
+# This transform introduces padding.  The original shape of [14] also
+# produces the transformed shape [2,8], which contains the original 14
+# values and an additional 2 values of padding.  These are located at
+# transformed indices [1,6] and [1,7].
+sched[B].transform_layout(lambda i: [i//8, i%8])
+```
+
+The above example introduces padding at the end of a buffer.  By
+including an offset in the layout transformation, we can instead place
+the padding at the beginning of a buffer.
+
+```python
+# This transform introduces padding.  For 0 <= i < 14, the transformed
+# index (i+2)//8 can have values of 0 or 1, so the transformed shape
+# is [2,8].  There are no valid values of i that would produce [0,0]
+# or [0,1], so these transformed indices contain padding.
+sched[B].transform_layout(lambda i: [(i+2)//8, (i+2)%8])
+```
+
+In addition to moving the location of the padded indices, use of an
+offset in a layout transformation can introduce additional padding.
+
+```python
+# This transformation introduces padding.  For 0 <= i < 16, the
+# transformed index (i+2)//8 can have values of 0, 1, or 2, so the
+# transformed shape is [3,8].  Padding is introduced from [0,0] to
+# [0,1], and from [2,2] to [2,7].
+sched[A].transform_layout(lambda i: [(i+2)//8, (i+2)%8])
+```
+
+
+## Defining Padded Values
+
+When a buffer is transformed, the majority of values in the
+transformed buffer are constrained to have the corresponding value in
+the original buffer.  However, when a buffer is padded to meet some
+alignment criteria, these additional padded values have no such
+constraint.
+
+To specify the values stored in the padding, the `transform_layout`
+function takes an optional argument `pad_value` that
+specifies the value that should be present in the padding.  This
+should be a function that maps from transformed indices to an
+`Optional[PrimExpr]`.
+
+```python
+# B.shape is [14]
+transform = lambda i: [i//4, i%4]
+
+# Three equivalent calls to perform the same layout transformation.
+# Padding is introduced, but access of the padding is forbidden.
+sched[B].transform_layout(transform)
+sched[B].transform_layout(transform, pad_value=None)
+sched[B].transform_layout(transform, pad_value=lambda io,ii: None)
+
+# Padding is introduced, and contains zeros.
+sched[B].transform_layout(transform, pad_value=0.0)
+sched[B].transform_layout(transform, pad_value=lambda io,ii: 0.0)
+
+# Padding is introduced, and contains undefined values.
+sched[B].transform_layout(transform, pad_value=tir.undef(dtype="float32"))
+sched[B].transform_layout(transform, pad_value=lambda io,ii: tir.undef(dtype="float32"))
+
+# Padding is introduced, and wraps to the beginning of the array.
+sched[B].transform_layout(transform, pad_value=lambda io,ii: B[0, (4*io + ii - 14) % 4])
+```
+
+The `Buffer` object stores a predicate to identify which indices
+contain padding, along with the expression given in `pad_value`.  This
+expression may only contain constants and the transformed buffer
+itself, and may not introduce dependencies on another buffer.
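+
+As a sketch of this restriction, reusing `sched`, `B`, and `transform`
+from the example above (the buffer `C` below is hypothetical and only
+illustrates the disallowed case):
+
+```python
+# Allowed: a constant padding value.
+sched[B].transform_layout(transform, pad_value=0.0)
+
+# Allowed: reads only the transformed buffer B itself.
+sched[B].transform_layout(transform, pad_value=lambda io, ii: B[0, 0])
+
+# Not allowed: would make B's padding depend on another buffer C.
+# sched[B].transform_layout(transform, pad_value=lambda io, ii: C[io, ii])
+```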
+
+For a producer of the transformed buffer, if `pad_value` is defined,
+the padding value must be written to the padding prior to the
+completion of the operator.  Effectively, the producer must have a
+postlude as follows:
+
+```python
+for transformed_indices in T.grid(*transformed_shape):
+    if padding_predicate(*transformed_indices):
+        B[transformed_indices] = pad_value(*transformed_indices)
+```
+
+For a consumer of the transformed buffer, these padding values are
+initially unused, but may be used in later simplifications.
+
+## Overcompute vs Branching
+
+Depending on the computation being performed and the value stored in
+the padding, there can be trade-offs between branching and
+overcompute.  For example, consider the following `PrimFunc`, which
+computes the sum over each row of the input data.
+
+```python
+@T.prim_func
+def row_summation(a: T.handle, b: T.handle):
+    A = T.match_buffer(a, shape=(16, 14), dtype="float32")
+    B = T.match_buffer(b, shape=(16,), dtype="float32")
+    for i in T.serial(16):
+        B[i] = 0.0
+        for j in T.serial(14):
+            B[i] = B[i] + A[i, j]
+```
+
+We'd like to transform the layout of buffer `A` from `[i, j]` to `[i,
+j//4, j%4]`, along with the loop iteration.  By default, after using
+the `transform_layout` and `split` metaschedule primitives, we have
+the following function.
+
+```python
+@T.prim_func
+def row_summation(a: T.handle, b: T.handle):
+    A = T.match_buffer(a, shape=(16, 4, 4), dtype="float32")
+    B = T.match_buffer(b, shape=(16,), dtype="float32")
+    for i in T.serial(16):
+        B[i] = 0.0
+        for j_outer, j_inner in T.grid(4, 4):
+            if 4*j_outer + j_inner < 14:
+                B[i] = B[i] + A[i, j_outer, j_inner]
+```
+
+If the conditional can be removed, this function would be much more
+amenable for later vectorization, or to reduce branch divergence when
+bound to a thread index.  If the padding in `A` is pre-filled with
+zero, then `B[i] = B[i] + 0.0` is a no-op, and can be performed
+without changing the final computation.
+
+```python
+@T.prim_func
+def row_summation(a: T.handle, b: T.handle):
+    A = T.match_buffer(a, shape=(16, 4, 4), dtype="float32")
+    B = T.match_buffer(b, shape=(16,), dtype="float32")
+    for i in T.serial(16):
+        B[i] = 0.0
+        for j_outer, j_inner in T.grid(4, 4):
+            B[i] = B[i] + A[i, j_outer, j_inner]
+```
+
+By annotating the layout transformation with the value stored in the
+padding, this condition can be proven, allowing this conditional to
+automatically be removed.  Since the tradeoff between branching and
+overcompute may or may not be beneficial depending on the schedule,
+these options are exposed as two additional transformations,
+`tir.transform.RemoveBranchingThroughOvercompute` and
+`tir.transform.RemoveOvercomputeThroughBranching`.
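+
+As a hedged sketch of how these passes would be applied (the pass names
+are proposed by this RFC and do not exist in current TVM; `mod` is
+assumed to be an `IRModule` containing the scheduled `row_summation`):
+
+```python
+import tvm
+
+# Rewrite branching into overcompute, relying on the zero-filled padding.
+mod = tvm.tir.transform.RemoveBranchingThroughOvercompute()(mod)
+
+# Or, when branching is preferable for the target, go the other direction:
+# mod = tvm.tir.transform.RemoveOvercomputeThroughBranching()(mod)
+```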
+
+
+# Reference-level explanation
+[reference-level-explanation]: #reference-level-explanation
+
+## TIR Changes
+
+### Buffer Annotation of Padding Predicate/Constraint Pairs
+
+`BufferNode` has a new member `std::vector<BufferConstraint>
+constraints` that describes known properties of this buffer.  Any
+transformation that introduces padding will also add a buffer
+constraint.
+
+```c++
+struct BufferConstraintNode {
+  Array<Var> indices;
+  PrimExpr predicate;
+  Optional<PrimExpr> value;
+};
+```
+
+The `indices` field holds variables that represent the indices used to
+access the buffer.  Both `predicate` and `value` are in terms of the
+variables stored in `indices`.  If `predicate` is true for a given
+value of the indices, then the buffer has contents of `value` at those
+indices.  If `value` is empty, then any indices that match the
+predicate may not be accessed.
+
+The `indices` field is automatically populated based on the
+post-transformation indices.  The `predicate` field is automatically
+determined based on the transformation, and is true for any index
+corresponding to the transformation padding.  The `value` field is
+defined by the user-provided `pad_value`.
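+
+As a concrete illustration for the running `[14] -> [4,4]` example with
+`pad_value=0.0` (the Python dict below is only a stand-in for the
+proposed C++ structure, not an existing API):
+
+```python
+from tvm import tir
+
+io = tir.Var("io", "int32")
+ii = tir.Var("ii", "int32")
+
+# Conceptual contents of the BufferConstraint attached to the transformed buffer.
+constraint = {
+    "indices": [io, ii],
+    "predicate": tir.all(io == 3, ii >= 2),  # true only for the two padded elements
+    "value": tir.FloatImm("float32", 0.0),   # taken from pad_value
+}
+```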
+
+### New TIR Op, `tir::builtin::undef`
+
+A placeholder that represents a valid, but arbitrary value.  This is
+intended for use as `BufferConstraintNode::value`, to indicate that it
+is legal to access the address, but that no further constraints are
+placed on the value present in the buffer.  This is primarily used to
+allow simplifications in a producer, as any partial computations
+written to this space (e.g. by vectorized operations) may be left
+as-is.
+
+
+* Multiplication of `0 * undef` may be simplified to zero, for both
+  integer and floating-point types.
+
+* A pure expression that uses `undef` can be simplified to `undef`.
+
+* `undef` may not occur in the indices used to access a buffer.
+
+* Two separate invocations of `undef` may not be assumed to be
+  identical.  For example, the expression `undef - undef` may not
+  be simplified to zero.  If this behavior is desired, the `undef` may
+  be assigned in a `tir::LetStmt`.
+
+* Storing a value of `undef` to a buffer is a no-op, and is removed
+  during lowering.  (See [section on
+  `tir.transform.RemoveUndefStore`](#new-lowering-transform-remove-tundef).)
+
+See [section on element-wise
+transformations](#apply-operator-element-wise-over-the-transformation-padding)
+for example usage.
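+
+A summary sketch of the rules above (`T.undef` is the op proposed by
+this RFC and does not exist in current TVM):
+
+```python
+#   0.0 * T.undef()          # may be simplified to 0.0
+#   T.undef() + 1.0          # pure expression using undef, may simplify to T.undef()
+#   A[T.undef()]             # not allowed: undef may not appear in buffer indices
+#   T.undef() - T.undef()    # may NOT be simplified to zero
+#   A[i] = T.undef()         # a no-op store, removed during lowering
+```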
+
+
+### Buffer Annotation of Layout Transforms
+
+TODO: Should a buffer remember which layout transforms have been
+applied to it?  It would be useful for generating converters between
+logical/transformed/physical layout.  As it is, users must provide
+inputs that have the transformed layout.
+
+## Transformations/Metaschedule Primitives
+
+### Enhancement - transform_layout
+
+The `te.Stage.transform_layout` and `tir.Schedule.transform_layout`
+methods will be updated to take an additional argument `pad_value:
+Optional[Union[int, float, Callable]]`.  This provides the `value`
+field of the `BufferConstraintNode`.
+
+For buffer consumers, the buffer constraint is updated, and no further
+changes are required based on the padding value.  For buffer
+producers, the buffer constraint is updated, and an additional loop is
+added to write `pad_value` to the padding that has been introduced.
+
+```python
+# Before transforming A
+@T.prim_func
+def func(A: T.Buffer[(14,), "int32"]):
+    for i in T.serial(14):
+        A[i] = i
+
+# After applying transform_layout(lambda i: [i//4, i%4], pad_value=-1)
+@T.prim_func
+def func(A: T.Buffer[(4,4), "int32"]):
+    # This loop writes the same values, but to the new locations in
+    # `A`.
+    for i in T.serial(14):
+        A[i//4, i%4] = i
+
+    # This loop writes the padding values.  In this case, `io==3 and
+    # ii>=2` is the predicate, and `-1` is the value.
+    for io,ii in T.grid(4,4):
+        if io==3 and ii>=2:
+            A[io, ii] = -1
+```
+
+It is expected that the loop that writes padding may be simplified
+later.  In this case, the loop over `io` can be removed, and the range
+of the loop over `ii` can be reduced to `2 <= ii < 4`.  However, the
+default implementation should not perform these simplifications yet, as
+this form is useful for [merging
+loopnests](#utility-merge-adjacent-loops) after [rewriting for
+sequential buffer
+access](#new-utility-reorder-loops-according-to-buffer).
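+
+For illustration, a sketch of the eventual simplified form of the
+producer above, after the padding loop has been reduced as described
+(this is not what `transform_layout` emits directly):
+
+```python
+@T.prim_func
+def func(A: T.Buffer[(4,4), "int32"]):
+    for i in T.serial(14):
+        A[i//4, i%4] = i
+
+    # Only the two padded elements remain to be written.
+    for ii in T.serial(2, 4):
+        A[3, ii] = -1
+```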
+
+In TE, the producer is the stage that outputs the transformed tensor.
+In TIR, the producer is the block that writes to all values of the
+pre-transformation tensor.
+
+
+
+### New Primitive - Add buffer constraint
+
+Similar to `Schedule.set_axis_separators`, this adds an annotation to
+an existing buffer, and can be used independently of
+`transform_layout`.  This can be useful for hardware that provides a
+default value for out-of-bounds reads (e.g. texture memory clamping on
+a GPU).
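+
+A hypothetical usage sketch (this RFC does not fix the primitive's name
+or signature; `add_buffer_constraint` and its arguments below are
+illustrative only, with `sched` being a `tir.Schedule` as in earlier
+examples):
+
+```python
+from tvm import tir
+
+io = tir.Var("io", "int32")
+ii = tir.Var("ii", "int32")
+
+# Hypothetical: record that out-of-bounds reads of "A" return 0.0 (e.g. a GPU
+# texture that clamps reads), without applying any layout transform.
+sched.add_buffer_constraint(
+    block="compute",
+    buffer="A",
+    indices=[io, ii],
+    predicate=tir.any(io >= 4, ii >= 4),
+    value=tir.FloatImm("float32", 0.0),
+)
+```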
+
+### New Utility - Reorder Loops According to Buffer
+
+By default in S-TIR, `transform_layout` modifies the underlying layout
+of a buffer, but does not re-order loops that iterate over the buffer.
+The loop iterators can be re-written using split/fuse/reorder, but
+doing so requires the user to manually translate the layout
+transformation into the appropriate sequence of schedule primitives.
+
+A new utility method `Schedule.sequential_buffer_access` should be
+introduced, which generates and applies the sequence of
+split/fuse/reorder schedule primitives such that the loop iterators are
+rewritten for sequential access of a specific buffer.
+
+```python
+# Original function
+@T.prim_func
+def func(A: T.Buffer[(16,), "int32"]):
+    with T.block('compute'):
+        for i in T.serial(16):
+            A[i] = i
+
+
+# sched.transform_layout(block='compute', buffer='A', lambda i: [i//4, i%4])
+@T.prim_func
+def func(A: T.Buffer[(4, 4), "int32"]):
+    with T.block('compute'):
+        for i in T.serial(16):
+            A[i // 4, i % 4] = i
+
+
+# sched.sequential_buffer_access(block='compute', buffer='A')
+@T.prim_func
+def func(A: T.Buffer[(4, 4), "int32"]):
+    with T.block('compute'):
+        for io, ii in T.grid(4, 4):
+            A[io, ii] = 4 * io + ii
+```
+
+This transformation is similar to what can be done using
+split/fuse/reorder, but has two key differences.  First, it presents a
+simpler user experience, as a transformed buffer can be accessed
+sequentially without needing to duplicate the information in the
+transformation.
+
+Similar to `Schedule.split`, if the loop extents do not evenly divide
+the transformation being applied, this primitive must introduce
+conditionals to avoid accessing elements that were not previously
+accessed.
+
+```python
+# Original function
+@T.prim_func
+def func(A: T.Buffer[(14,), "int32"]):
+    with T.block('compute'):
+        for i in T.serial(14):
+            A[i] = i
+
+
+# sched.transform_layout(block='compute', buffer='A', lambda i: [i//4, i%4])
+@T.prim_func
+def func(A: T.Buffer[(4, 4), "int32"]):
+    with T.block('compute'):
+        for i in T.serial(14):
+            A[i // 4, i % 4] = i
+
+
+# sched.sequential_buffer_access(block='compute', buffer='A')
+@T.prim_func
+def func(A: T.Buffer[(4, 4), "int32"]):
+    with T.block('compute'):
+        for io, ii in T.grid(4, 4):
+            if 4 * io + ii < 14:
+                A[io, ii] = 4 * io + ii
+```
+
+`Schedule.sequential_buffer_access` can operate on input buffers as
+well as output buffers.
+
+```python
+# Original function
+@T.prim_func
+def func(
+    A: T.Buffer[(16,), "int32"],
+    F: T.Buffer[(3,), "int32"],
+    B: T.Buffer[(14,), "int32"],
+):
+    with T.block('compute'):
+        for i in T.serial(14):
+            B[i] = 0
+            for f in T.serial(3):
+                B[i] = B[i] + F[f] * A[i + f]
+
+
+# After transforming A's layout and B's layout, before rewriting loops
+#
+# sched.transform_layout(block='compute', buffer='A', lambda i: [i//4, i%4])
+# sched.transform_layout(block='compute', buffer='B', lambda i: [i//4, i%4])
+@T.prim_func
+def func(
+    A: T.Buffer[(4, 4), "int32"],
+    F: T.Buffer[(3,), "int32"],
+    B: T.Buffer[(4, 4), "int32"],
+):
+    with T.block('compute'):
+        for i in T.serial(14):
+            B[i // 4, i % 4] = 0
+            for f in T.serial(3):
+                B[i // 4, i % 4] = B[i // 4, i % 4] + F[f] * A[(i + f) // 4, (i + f) % 4]
+
+
+# Option 1: Rewriting loops to match B's layout
+# sched.sequential_buffer_access(block='compute', buffer='B')
+#
+# New iterators defined by B's access indices
+# io = i//4
+# ii = i%4
+#
+# Invert to find non-reduction axes to be replaced.
+# i = 4*io + ii
+@T.prim_func
+def func(
+    A: T.Buffer[(4, 4), "int32"],
+    F: T.Buffer[(3,), "int32"],
+    B: T.Buffer[(4, 4), "int32"],
+):
+    with T.block('compute'):
+        for io, ii in T.grid(4, 4):
+            if 4 * io + ii < 14:
+                B[io, ii] = 0
+                for f in T.serial(3):
+                    # A's indices simplify from
+                    #      [(i + f) // 4, (i + f) % 4]
+                    #   => [(4*io + ii + f) // 4, (4*io + ii + f) % 4]
+                    #   => [io + (ii + f) // 4, (ii + f) % 4]
+                    B[io, ii] = B[io, ii] + F[f] * A[io + (ii + f) // 4, (ii + f) % 4]
+
+
+# Option 2: Rewriting loops to match A's layout
+# sched.sequential_buffer_access(block='compute', buffer='A')
+#
+# New iterators defined by A's access indices
+# io = (i+f)//4
+# ii = (i+f)%4
+#
+# Invert to find non-reduction axes to be replaced.
+# i = 4*io + ii - f
+@T.prim_func
+def func(
+    A: T.Buffer[(4, 4), "int32"],
+    F: T.Buffer[(3,), "int32"],
+    B: T.Buffer[(4, 4), "int32"],
+):
+    # Because the initialization of B[i//4, i%4] does not depend on f,
+    # it cannot be expressed solely in terms of io and ii.  Therefore,
+    # the initialization must be split into a separate loopnest.
+    with T.block('init_compute'):
+        for i in T.serial(14):
+            B[i // 4, i % 4] = 0
+
+    with T.block('compute'):
+        for io,ii in T.grid(4,4):
+            for f in T.serial(3):
+                if 0 <= 4*io + ii - f < 14:
+                    # B's indices simplify from
+                    #      [i // 4, i%4]
+                    #   => [(4*io + ii - f) // 4, (4*io + ii - f)%4]
+                    #   => [io + (ii - f) // 4, (ii - f)%4]
+                    B[io + (ii - f) // 4, (ii - f) % 4] = (
+                        B[io + (ii - f) // 4, (ii - f) % 4] + F[f] * A[io, ii]
+                    )
+```
+
+In some cases, it may not be possible to separate out the
+initialization and computation in order to rewrite the loops for
+sequential buffer access.  In this case,
+`Schedule.sequential_buffer_access` will raise an error.
+
+```python
+# Original function
+@T.prim_func
+def conv1d_cumsum(
+    A: T.Buffer[(16,), "int32"],
+    F: T.Buffer[(3,), "int32"],
+    B: T.Buffer[(14,), "int32"],
+):
+    with T.block('compute'):
+        for i in T.serial(14):
+            if i == 0:
+                B[i] = 0
+            else:
+                B[i] = B[i - 1]
+
+            for f in T.serial(3):
+                B[i] = B[i] + F[f] * A[i + f]
+
+
+# After transforming A's layout and B's layout, before rewriting loops
+#
+# sched.transform_layout(block='compute', buffer='A', lambda i: [i//4, i%4])
+# sched.transform_layout(block='compute', buffer='B', lambda i: [i//4, i%4])
+@T.prim_func
+def conv1d_cumsum(
+    A: T.Buffer[(4, 4), "int32"],
+    F: T.Buffer[(3,), "int32"],
+    B: T.Buffer[(4, 4), "int32"],
+):
+    with T.block('compute'):
+        for i in T.serial(14):
+            if i == 0:
+                B[i // 4, i % 4] = 0
+            else:
+                B[i // 4, i % 4] = B[(i - 1) // 4, (i - 1) % 4]
+
+            for f in T.serial(3):
+                B[i // 4, i % 4] = B[i // 4, i % 4] + F[f] * A[(i + f) // 4, (i + f) % 4]
+
+
+# Intermediate formed when attempting to re-order access to be
+# sequential along A's layout.  This is not a legal transformation,
+# because the initialization step requires the previous result of the
+# computation loop.  Therefore, Schedule.sequential_buffer_access will
+# raise an error.
+#
+# sched.sequential_buffer_access(block='compute', buffer='A')
+@T.prim_func
+def conv1d_cumsum(
+    A: T.Buffer[(4, 4), "int32"],
+    F: T.Buffer[(3,), "int32"],
+    B: T.Buffer[(4, 4), "int32"],
+):
+    with T.block('init_compute'):
+        for i in T.serial(14):
+            if i == 0:
+                B[i // 4, i % 4] = 0
+            else:
+                B[i // 4, i % 4] = B[(i - 1) // 4, (i - 1) % 4]
+
+    with T.block('compute'):
+        for i in T.serial(14):
+            for f in T.serial(3):
+                B[i // 4, i % 4] = B[i // 4, i % 4] + F[f] * A[(i + f) // 4, (i + f) % 4]
+```
+
+This utility is not required for the TE interface, as the loopnest of
+an output tensor is automatically rewritten to a row-major traversal.
+
+
+### Enhancement - Predicate for DomainTouched
+
+In `tvm::arith::DomainTouched`, track the condition for which a buffer
+is touched, in addition to the indices that are touched.
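+
+An illustration of the proposed tracking (the output format shown is
+hypothetical):
+
+```python
+# For the guarded store below, DomainTouched currently reports only the
+# touched index ranges; with this enhancement it would also record the guard.
+#
+#   for io, ii in T.grid(4, 4):
+#       if 4 * io + ii < 14:
+#           B[io, ii] = 0.0
+#
+#   touched region    : B[0:4, 0:4]
+#   touched condition : 4 * io + ii < 14   (newly tracked)
+```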
+
+### Enhancement - Remove No Op
+
+Changes to be made to `tvm::tir::NoOpRemover`, which implements the
+`tir.transform.RemoveNoOp` transform.
+
+* If two sequential `BufferStore` statements occur, both of which write
+  to the same buffer/index, and the second value stored does not read
+  out the first value, then the first store is a no-op.  (See the
+  sketch after this list.)
+
+* If there exist two sequential blocks, the buffers/indices written by
+  the second block are a superset of the buffers/indices written by
+  the first block, and the second block does not read the
+  buffer/indices written by the first block, then the first block is a
+  no-op.
+
+* Reading a value then immediately writing it back is a no-op.  A
+  `BufferLoad` that is immediately used as a value to a `BufferStore`,
+  with the same buffer and indices, can be removed.
+
+  This functionality is currently part of
+  `tvm::arith::StmtSimplifier`, but is needed here to recognize
+  strings of no-ops.  (Thought: Merge the Simplify and RemoveNoOp
+  passes?)
+
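+A small sketch of the first rule, using `T.undef` as proposed by this
+RFC (this mirrors a producer postlude whose store is overwritten before
+ever being read):
+
+```python
+# Before RemoveNoOp: the first store is never read before being overwritten.
+@T.prim_func
+def before(A: T.Buffer[(4, 4), "float32"]):
+    for io, ii in T.grid(4, 4):
+        A[io, ii] = T.undef(dtype="float32")  # no-op: overwritten below
+        A[io, ii] = 0.0
+
+# After RemoveNoOp: only the surviving store remains.
+@T.prim_func
+def after(A: T.Buffer[(4, 4), "float32"]):
+    for io, ii in T.grid(4, 4):
+        A[io, ii] = 0.0
+```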
+
+### Enhancement - Simplify
+
+Changes to be made to `tvm::arith::StmtSimplifier` mutator, used in
+the `tir.transform.Simplify` transform.
+
+* When visiting an `IfThenElseStmt`, if the `then_case` and
+  `else_case` are identical, replace with
+  `SeqStmt({Evaluate(condition), then_case})`.
+
+  Currently, the `tvm::arith::StmtSimplifier` mutator checks if a
+  condition can be proven, but doesn't do any checks on the body.
+
+  TODO: Double-check that functionality doesn't already exist.
+
+* If two sequential `IfThenElseStmt` have identical conditions, they
+  should be merged.  Conditions are identical if each condition can be
+  used to prove the other is true, even if they do not have the same
+  functional form.
+
+  ```python
+  # Before merging identical conditionals
+  @T.prim_func
+  def func(A: T.Buffer[16, "float32"], B: T.Buffer[16, "float32"]):
+      for i in T.serial(16):
+          if i < 8:
+              A[i] = 0.0
+          else:
+              A[i] = 1.0
+
+          if i//8 == 1:
+              B[i] = 2.0
+          else:
+              B[i] = 3.0
+
+  # After merging identical conditionals
+  @T.prim_func
+  def func(A: T.Buffer[16, "float32"], B: T.Buffer[16, "float32"]):
+      for i in T.serial(16):
+          if i < 8:
+              A[i] = 0.0
+              B[i] = 2.0
+          else:
+              A[i] = 1.0
+              B[i] = 3.0
+  ```
+
+  Similarly, if two sequential `IfThenElseStmt` have complementary
+  conditions, they should be merged, with the `else_case` of the
+  second conditional appended to the `then_case` of the first, and
+  vice versa.  Conditions are complementary if assuming either
+  condition allows the other to be proven false.
+
+  (Example usage in [later producer/consumer
+  section](#explicitly-write-next-operators-desired-default-at-end-of-function).)
+
+  ```python
+  # Before merging complementary conditionals
+  @T.prim_func
+  def func(A: T.Buffer[(4,4), "float32"], B: T.Buffer[(4,4), "float32"]):
+      for i,j in T.grid(4,4):
+          if 4*i + j < 14:
+              A[i, j] = 0.0
+          else:
+              A[i, j] = 1.0
+
+          if i==3 and j>=2:
+              B[i, j] = 2.0
+          else:
+              B[i, j] = 3.0
+
+
+  # After merging complementary conditionals
+  @T.prim_func
+  def func(A: T.Buffer[(4,4), "float32"], B: T.Buffer[(4,4), "float32"]):
+      for i,j in T.grid(4,4):
+          if 4*i + j < 14:
+              A[i, j] = 0.0
+              B[i, j] = 3.0
+          else:
+              A[i, j] = 1.0
+              B[i, j] = 2.0
+  ```
+
+  Because the body of one conditional may alter the result of the next
+  conditional's predicate, data-dependent conditionals (those whose
+  conditions read buffer values) should not be merged.  Only
+  conditionals that do not depend on mutable values should be merged.
+
+  ```python
+  # Data-dependent conditional, may not be merged
+  @T.prim_func
+  def func(A: T.Buffer[16, "float32"], B: T.Buffer[16, "float32"]):
+      for i in T.serial(16):
+          if A[i] < 0.0:
+              A[i] = A[i] + 1.0
+
+          if A[i] < 0.0:
+              A[i] = 0.0
+
+
+  # INCORRECT result of illegal merging of conditionals
+  @T.prim_func
+  def func(A: T.Buffer[16, "float32"], B: T.Buffer[16, "float32"]):
+      for i in T.serial(16):
+          if A[i] < 0.0:
+              A[i] = A[i] + 1.0
+              A[i] = 0.0
+  ```
+
+### New Transform - Hoist Expression

Review Comment:
   > (IIUC, we may also insert some "arbitrary" value filling code on edges and
   > optimize them out then?)

   Yup, the loop that writes `T.undef()` into the padding values would be
   present as an intermediate.  This allows `RemoveNoOp` to be much more
   general, since it only needs to look for two sequential writes to the same
   indices to conclude that the first is a no-op.  As a result, a matching
   `else_case` would be a no-op, and therefore safe to insert without impacting
   the final result.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
