Beya2019 commented on a change in pull request #6443:
URL: https://github.com/apache/incubator-tvm/pull/6443#discussion_r486734730



##########
File path: python/tvm/relay/op/vision/rcnn.py
##########
@@ -24,7 +24,7 @@ def roi_align(data, rois, pooled_size, spatial_scale, sample_ratio=-1, layout='N
     Parameters
     ----------
     data : relay.Expr
-        4-D tensor with shape [batch, channel, height, width]
+        4-D tensor with shape [batch, channel, height, width] or [batch, height, width, channel]

Review comment:
       Hi @kevinthesun and @anijain2305,
   
    The NHWC layout TOPI implementation for roi_align is indeed not currently available for the traditional TVM targets. However, in relay/frontend/mxnet.py the _mx_roi_align converter pins the layout to NCHW (`new_attrs["layout"] = "NCHW"`), so normally only the NCHW TOPI implementation is invoked. For a special target that only supports the NHWC layout, we can add extra code in mxnet.py (or another frontend) as follows (see the ConvertLayout RFC: https://tvm.apache.org/docs/dev/convert_layout.html#usage):
   ```
   desired_layouts = {'vision.roi_align': ['NHWC', 'default']}
   
   # Convert the data layout of vision.roi_align to NHWC.
   # RemoveUnusedFunctions is used to clean up the graph.
   seq = tvm.transform.Sequential([relay.transform.RemoveUnusedFunctions(),
                                   relay.transform.ConvertLayout(desired_layouts)])
   with tvm.transform.PassContext(opt_level=3):
       mod = seq(mod)
   ```
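   Here 'NHWC' is the desired data layout and 'default' leaves the second entry (the kernel layout, in the `[data_layout, kernel_layout]` format described in the ConvertLayout RFC) untouched; roi_align has no weights, so only the data layout matters here.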
   So this change does not cause compilation failures on the traditional TVM targets, because the NHWC TOPI path is simply never reached there; at the same time it exposes the NHWC Relay IR to special targets that only support the NHWC layout and implement the op in their own backend code.
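   
   As an illustrative sanity check (just a sketch, assuming `mod` and `params` come from relay.frontend.from_mxnet on a model containing roi_align), compilation for a traditional target still goes through the existing NCHW TOPI path:
   ```
   # The frontend pinned layout="NCHW", so the NCHW implementation is selected.
   with tvm.transform.PassContext(opt_level=3):
       lib = relay.build(mod, target="llvm", params=params)
   ```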
   
   
   Certainly, the NHWC layout TOPI implementation will have to be realized sooner or later; I think it can be improved gradually in the future.

##########
File path: python/tvm/relay/op/vision/rcnn.py
##########
@@ -24,7 +24,7 @@ def roi_align(data, rois, pooled_size, spatial_scale, sample_ratio=-1, layout='N
     Parameters
     ----------
     data : relay.Expr
-        4-D tensor with shape [batch, channel, height, width]
+        4-D tensor with shape [batch, channel, height, width] or [batch, height, width, channel]

Review comment:
       @kevinthesun and @anijain2305, in addition I would like to add one more point.
   
   
   On our own target, this implementation approach really hurts our performance because it adds many layout_transform operators (our convolution only supports NHWC, so if roi_align only supports NCHW, many additional layout_transform operators have to be inserted into the network). I therefore also think an NHWC layout implementation of this op is really needed for performance. And this submission is needed for layout-conversion support whether or not the NHWC layout is realized; the two functions mentioned are not conflicting but complementary. I also hope the NHWC layout TOPI implementation lands as soon as possible.
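   
   For example (a minimal sketch following the usage in the ConvertLayout RFC, not code from this PR), once roi_align accepts NHWC the whole graph can be kept in a single layout, so no layout_transform operators are inserted between the convolutions and roi_align:
   ```
   # Request NHWC for both conv2d and roi_align so the graph stays in one
   # layout end to end; no layout_transform ops are needed between them.
   desired_layouts = {
       'nn.conv2d': ['NHWC', 'default'],
       'vision.roi_align': ['NHWC', 'default'],
   }
   seq = tvm.transform.Sequential([relay.transform.RemoveUnusedFunctions(),
                                   relay.transform.ConvertLayout(desired_layouts)])
   with tvm.transform.PassContext(opt_level=3):
       mod = seq(mod)
   ```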

##########
File path: python/tvm/relay/op/vision/rcnn.py
##########
@@ -24,7 +24,7 @@ def roi_align(data, rois, pooled_size, spatial_scale, sample_ratio=-1, layout='N
     Parameters
     ----------
     data : relay.Expr
-        4-D tensor with shape [batch, channel, height, width]
+        4-D tensor with shape [batch, channel, height, width] or [batch, height, width, channel]

Review comment:
       @kevinthesun 
   
   "we need remove this constraint for relay type inference and move it to op 
strategy for corresponding targets"
   
   Regarding this issue, such a per-target constraint already exists in the op strategies:
   
   `python/tvm/relay/op/strategy/cuda.py`
   ```
   @roi_align_strategy.register(["cuda", "gpu"])
   def roi_align_strategy_cuda(attrs, inputs, out_type, target):
       """roi_align cuda strategy"""
       strategy = _op.OpStrategy()
       strategy.add_implementation(wrap_compute_roi_align(topi.vision.rcnn.roi_align_nchw),
                                   wrap_topi_schedule(topi.cuda.schedule_roi_align),
                                   name="roi_align_nchw.cuda")
       return strategy
   ```
   
   and `python/tvm/relay/op/strategy/x86.py`
   
   ```
   @roi_align_strategy.register("cpu")
   def roi_align_strategy_cpu(attrs, inputs, out_type, target):
       """roi_align x86 strategy"""
       strategy = _op.OpStrategy()
       strategy.add_implementation(wrap_compute_roi_align(topi.x86.roi_align_nchw),
                                   wrap_topi_schedule(topi.generic.schedule_roi_align),
                                   name="roi_align.x86")
       return strategy
   ```
   
   In the future, we just need to implement topi.vision.rcnn.roi_align_nhwc and add the corresponding constraint (dispatching to topi.vision.rcnn.roi_align_nhwc) in the strategy of the special target. I will take on this work as soon as possible.
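   
   A minimal sketch of what such a registration might look like, assuming a hypothetical target name "mytarget" and the future topi.vision.rcnn.roi_align_nhwc compute (neither exists yet; the NCHW branch mirrors the existing strategies above):
   ```
   # Hypothetical: dispatch on the op's layout attribute so an NHWC-only
   # target picks the (future) NHWC compute; otherwise fall back to NCHW.
   @roi_align_strategy.register("mytarget")
   def roi_align_strategy_mytarget(attrs, inputs, out_type, target):
       """roi_align strategy for a hypothetical NHWC-only target"""
       strategy = _op.OpStrategy()
       if attrs.layout == "NHWC":
           strategy.add_implementation(
               wrap_compute_roi_align(topi.vision.rcnn.roi_align_nhwc),  # future compute
               wrap_topi_schedule(topi.generic.schedule_roi_align),
               name="roi_align_nhwc.mytarget")
       else:
           strategy.add_implementation(
               wrap_compute_roi_align(topi.vision.rcnn.roi_align_nchw),
               wrap_topi_schedule(topi.generic.schedule_roi_align),
               name="roi_align_nchw.mytarget")
       return strategy
   ```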



