Beya2019 commented on a change in pull request #6443:
URL: https://github.com/apache/incubator-tvm/pull/6443#discussion_r486813081



##########
File path: python/tvm/relay/op/vision/rcnn.py
##########
@@ -24,7 +24,7 @@ def roi_align(data, rois, pooled_size, spatial_scale, sample_ratio=-1, layout='N
     Parameters
     ----------
     data : relay.Expr
-        4-D tensor with shape [batch, channel, height, width]
+        4-D tensor with shape [batch, channel, height, width] or [batch, height, width, channel]

Review comment:
       @kevinthesun 
   
   "we need remove this constraint for relay type inference and move it to op strategy for corresponding targets"
   
   Regarding this, the constraint already exists in the per-target op strategies:
   
   `python/tvm/relay/op/strategy/cuda.py`
   ```
   @roi_align_strategy.register(["cuda", "gpu"])
   def roi_align_strategy_cuda(attrs, inputs, out_type, target):
       """roi_align cuda strategy"""
       strategy = _op.OpStrategy()
       strategy.add_implementation(wrap_compute_roi_align(topi.vision.rcnn.roi_align_nchw),
                                   wrap_topi_schedule(topi.cuda.schedule_roi_align),
                                   name="roi_align_nchw.cuda")
       return strategy
   ```
   
   and `python/tvm/relay/op/strategy/x86.py`
   
   ```
   @roi_align_strategy.register("cpu")
   def roi_align_strategy_cpu(attrs, inputs, out_type, target):
       """roi_align x86 strategy"""
       strategy = _op.OpStrategy()
       strategy.add_implementation(wrap_compute_roi_align(topi.x86.roi_align_nchw),
                                   wrap_topi_schedule(topi.generic.schedule_roi_align),
                                   name="roi_align.x86")
       return strategy
   ```
   
   In the future, we just need to implement topi.vision.rcnn.roi_align_nhwc and add the corresponding dispatch (i.e., call topi.vision.rcnn.roi_align_nhwc) in the strategy of the specific targets; a rough sketch is below. I will work on this as soon as possible.
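   
   A minimal sketch of what such a layout-aware x86 strategy could look like, assuming topi.vision.rcnn.roi_align_nhwc exists (it is hypothetical here and still needs to be implemented); dispatching on attrs.layout is only an illustration, not the final design:
   
   ```
   # assumes the same context/imports as python/tvm/relay/op/strategy/x86.py above
   @roi_align_strategy.register("cpu")
   def roi_align_strategy_cpu(attrs, inputs, out_type, target):
       """roi_align x86 strategy, dispatching on the layout attribute (sketch)"""
       strategy = _op.OpStrategy()
       if attrs.layout == "NCHW":
           # existing NCHW path
           strategy.add_implementation(wrap_compute_roi_align(topi.x86.roi_align_nchw),
                                       wrap_topi_schedule(topi.generic.schedule_roi_align),
                                       name="roi_align_nchw.x86")
       else:
           assert attrs.layout == "NHWC", "layout must be NCHW or NHWC"
           # hypothetical NHWC compute, not implemented yet
           strategy.add_implementation(wrap_compute_roi_align(topi.vision.rcnn.roi_align_nhwc),
                                       wrap_topi_schedule(topi.generic.schedule_roi_align),
                                       name="roi_align_nhwc.generic")
       return strategy
   ```
   
   With something like that in place, the NCHW-only check can be relaxed in relay type inference, since each target strategy only registers implementations for the layouts it actually supports.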




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

