[GitHub] reminisce commented on issue #14040: Reformat of TensorRT to use subgraph API

2019-02-07 · GitBox
URL: https://github.com/apache/incubator-mxnet/pull/14040#issuecomment-461542427
> Also, an argument for onnx-tensorrt is that more ops are supported, with plugins implemented (slice, some activations, resize …

[GitHub] reminisce commented on issue #14040: Reformat of TensorRT to use subgraph API

2019-02-07 · GitBox
URL: https://github.com/apache/incubator-mxnet/pull/14040#issuecomment-461539504
> weights for the TensorRT node are on CPU while the rest of the graph is on GPU.
This is not true. When binding completes, you …
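A minimal sketch of the point about binding, assuming a plain symbolic graph bound with simple_bind on gpu(0); the layer names and shapes below are made up for illustration and are not taken from the PR:

    import mxnet as mx

    data = mx.sym.Variable('data')
    net = mx.sym.FullyConnected(data, num_hidden=10, name='fc1')

    # Binding the whole graph on gpu(0) allocates the argument arrays there,
    # so the weights end up on the GPU rather than staying on the CPU.
    exe = net.simple_bind(ctx=mx.gpu(0), data=(1, 32))

    for name, arr in zip(net.list_arguments(), exe.arg_arrays):
        print(name, arr.context)  # expected: gpu(0) for data, fc1_weight, fc1_bias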

[GitHub] reminisce commented on issue #14040: Reformat of TensorRT to use subgraph API

2019-02-06 · GitBox
URL: https://github.com/apache/incubator-mxnet/pull/14040#issuecomment-461291971
> @reminisce
> you can't mix contexts in a single graph
What do you mean by "mix contexts"? We only have one context, which is GPU, in …
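For reference, "mixing contexts" inside a single MXNet graph is usually expressed with context groups; a hedged sketch, with the group names ('dev1'/'dev2') and shapes invented for illustration:

    import mxnet as mx

    # Assign parts of the graph to named context groups.
    with mx.AttrScope(ctx_group='dev1'):
        data = mx.sym.Variable('data')
        fc1 = mx.sym.FullyConnected(data, num_hidden=64, name='fc1')
    with mx.AttrScope(ctx_group='dev2'):
        fc2 = mx.sym.FullyConnected(fc1, num_hidden=10, name='fc2')

    # Map each group to a device at bind time; ungrouped nodes use ctx.
    exe = fc2.simple_bind(ctx=mx.gpu(0),
                          group2ctx={'dev1': mx.cpu(), 'dev2': mx.gpu(0)},
                          data=(1, 32))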

[GitHub] reminisce commented on issue #14040: Reformat of TensorRT to use subgraph API

2019-02-06 · GitBox
URL: https://github.com/apache/incubator-mxnet/pull/14040#issuecomment-461280671
@Caenorst Just want to clarify: I'm not blocking this PR. We can think through these comments and make incremental …

[GitHub] reminisce commented on issue #14040: Reformat of TensorRT to use subgraph API

2019-02-05 · GitBox
URL: https://github.com/apache/incubator-mxnet/pull/14040#issuecomment-460792134
Great to see this is happening. I have two high-level comments:
1. If you use the subgraph API, there should be no need to add …
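For context, the user-facing flow that a subgraph-API based TensorRT integration aims at looks roughly like the sketch below. The API names (get_backend_symbol, mx.contrib.tensorrt.init_tensorrt_params) follow the TensorRT workflow documented for later MXNet releases, and the checkpoint name and shapes are hypothetical:

    import mxnet as mx

    # Load a trained model (hypothetical checkpoint name).
    sym, arg_params, aux_params = mx.model.load_checkpoint('resnet18', 0)

    # Partition the graph: supported nodes are grouped into TensorRT subgraphs.
    trt_sym = sym.get_backend_symbol('TensorRT')

    # Hand the relevant weights over to the TensorRT subgraph nodes.
    arg_params, aux_params = mx.contrib.tensorrt.init_tensorrt_params(
        trt_sym, arg_params, aux_params)

    # Bind and run as usual; the non-TensorRT remainder executes in MXNet.
    exe = trt_sym.simple_bind(ctx=mx.gpu(0), data=(1, 3, 224, 224),
                              grad_req='null')
    exe.copy_params_from(arg_params, aux_params)
    out = exe.forward(is_train=False,
                      data=mx.nd.zeros((1, 3, 224, 224), ctx=mx.gpu(0)))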