cjolivier01 commented on a change in pull request #10078: Support float16 in L2Normalization operator URL: https://github.com/apache/incubator-mxnet/pull/10078#discussion_r173944782
##########
File path: src/operator/l2_normalization.cc
##########

@@ -26,13 +26,18 @@ namespace mxnet {
 namespace op {
 template<>
-Operator* CreateOp<cpu>(L2NormalizationParam param) {
-  return new L2NormalizationOp<cpu>(param);
+Operator* CreateOp<cpu>(L2NormalizationParam param, int dtype) {
+  Operator* op = NULL;
+  MSHADOW_REAL_TYPE_SWITCH(dtype, DType, {
+    op = new L2NormalizationOp<cpu, DType>(param);
+  });
+  return op;
 }

 // DO_BIND_DISPATCH comes from static_operator_common.h
-Operator* L2NormalizationProp::CreateOperator(Context ctx) const {
-  DO_BIND_DISPATCH(CreateOp, param_);
+Operator* L2NormalizationProp::CreateOperatorEx(Context ctx, std::vector<TShape> *in_shape,
+                                                std::vector<int> *in_type) const {
+  DO_BIND_DISPATCH(CreateOp, param_, in_type->at(0));

Review comment: Just FYI: usually, DType is determined inside the Forward() and Backward() functions, using the type switch on the actual input blob's dtype at runtime.