slyubomirsky commented on a change in pull request #7006: URL: https://github.com/apache/tvm/pull/7006#discussion_r533867812
########## File path: python/tvm/relay/frontend/keras.py ##########
@@ -927,6 +1031,66 @@ def _convert_repeat_vector(inexpr, keras_layer, _):
     return out


+def _convert_time_distributed(inexpr, keras_layer, etab, input_shape=None, data_layout=None):
+    # TimeDistributed: split the input tensor along the second dimension (assumed to be time),
+    # apply the inner layer to each split individually,
+    # and then combine the results
+    if input_shape is None:
+        input_shape = keras_layer.input_shape
+    if data_layout is None:
+        data_layout = etab.data_layout
+
+    assert len(input_shape) >= 2, "Input to TimeDistributed must have at least two dimensions"
+
+    inner_layer = keras_layer.layer
+    inner_input_shape = [d for (i, d) in enumerate(input_shape) if i != 1]
+
+    # for NDHWC, the inner data layout will drop the D
+    inner_data_layout = None
+    if data_layout == "NDHWC":
+        inner_data_layout = "NHWC"
+
+    # some code duplication from keras_op_to_relay,
+    # but it's useful to avoid cluttering the etab
+    inner_layer_op_name = type(keras_layer.layer).__name__
+    if inner_layer_op_name not in _convert_map:
+        raise tvm.error.OpNotImplemented(
+            "The inner layer for TimeDistributed {} is not supported for frontend Keras.".format(
+                inner_layer_op_name
+            )
+        )
+
+    conversion_func = lambda expr: _convert_map[inner_layer_op_name](
+        expr, inner_layer, etab, input_shape=inner_input_shape, data_layout=inner_data_layout

Review comment:

After reading over the `ExprTable` implementation, I do not think it was a good idea to attach the data layout (a property that has nothing to do with the `ExprTable` specifically) to the `etab` in the first place -- it seems like unnecessary coupling. Maybe there should be a different way of handling the shared state across these conversions. FWIW, I could just overwrite the `data_layout` field in the `ExprTable` when handling the `TimeDistributed` case and avoid having to modify the other conversion functions.
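For intuition, the split/apply/combine behavior that the patch's comments describe can be sketched outside of TVM in plain Python. The helper `time_distributed_apply` and the nested-list "tensors" below are illustrative stand-ins, not code from the PR.

```python
def time_distributed_apply(x, fn):
    # x is a nested list shaped (batch, time, ...); apply fn to each time step
    # of each batch element independently, then reassemble along the time axis.
    return [[fn(step) for step in sample] for sample in x]

# A (2, 3, 2)-shaped input: 2 batch elements, 3 time steps, 2 features.
x = [[[1, 2], [3, 4], [5, 6]],
     [[7, 8], [9, 10], [11, 12]]]

# The "inner layer" here is just elementwise doubling; the time axis of the
# output matches the time axis of the input.
y = time_distributed_apply(x, lambda step: [v * 2 for v in step])
```

The real conversion does the same thing with Relay ops (split, per-step conversion, concatenate), but the time-axis bookkeeping is identical.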
I am not sure whether that would be a good idea from the standpoint of maintainability, though.
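One way to make the "overwrite the `data_layout` field" alternative safer would be to scope the override so the original value is always restored. This is only a sketch of that idea, not TVM code: `ExprTableStub` and `override_data_layout` are hypothetical stand-ins.

```python
from contextlib import contextmanager

class ExprTableStub:
    """Hypothetical stand-in for a shared conversion table carrying a data_layout field."""
    def __init__(self, data_layout):
        self.data_layout = data_layout

@contextmanager
def override_data_layout(etab, layout):
    # Save the current layout, install the override, and restore it on exit,
    # even if the inner-layer conversion raises.
    saved = etab.data_layout
    etab.data_layout = layout
    try:
        yield etab
    finally:
        etab.data_layout = saved

etab = ExprTableStub("NDHWC")
with override_data_layout(etab, "NHWC"):
    # the inner layer's conversion function would run here and observe "NHWC"
    pass
# outside the with-block, etab.data_layout is back to "NDHWC"
```

This keeps the other conversion functions untouched, at the cost of mutating shared state, which is exactly the maintainability trade-off raised above.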