jwfromm commented on a change in pull request #6748:
URL: https://github.com/apache/incubator-tvm/pull/6748#discussion_r512228109



##########
File path: python/tvm/relay/transform/transform.py
##########
@@ -386,6 +386,33 @@ def AlterOpLayout():
     return _ffi_api.AlterOpLayout()
 
 
+class LayoutConfig(object):
+    """A structure for customizing the ConvertLayout pass."""
+
+    current = None
+
+    def __init__(self, skip_layers=None):
+        self.skip_counter = 0
+        self.skip_layers = skip_layers if skip_layers is not None else []

Review comment:
       In this case, the first layer of most networks does not have 
enough channels for our tensorcore schedules to apply. That alone would not 
be a problem, but there are no HWNC schedules for GPU, so blindly applying 
ConvertLayout to all layers leaves a first layer that can't be executed. 
Skipping it during conversion is an elegant way to avoid the issue, and I 
imagine a similar pathology could arise in other situations.
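
To illustrate the idea, here is a minimal, self-contained sketch of how a skip list could be consulted once per convolution during conversion. The `check_skip` helper and the usage below are illustrative assumptions; only the `__init__` body matches the diff above.

```python
class LayoutConfig(object):
    """A structure for customizing the ConvertLayout pass."""

    current = None

    def __init__(self, skip_layers=None):
        self.skip_counter = 0
        self.skip_layers = skip_layers if skip_layers is not None else []

    def check_skip(self):
        # Hypothetical helper: each conv encountered during conversion
        # bumps the counter; a layer is skipped if its index is listed.
        skip = self.skip_counter in self.skip_layers
        self.skip_counter += 1
        return skip


# Hypothetical usage: skip only the first conv layer (index 0), so it keeps
# a layout with a GPU schedule while later layers are converted to HWNC.
config = LayoutConfig(skip_layers=[0])
decisions = [config.check_skip() for _ in range(3)]
# decisions -> [True, False, False]
```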




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]
