[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2019-02-02 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new 1c2245b  Bump the publish timestamp.
1c2245b is described below

commit 1c2245b10b2a324d57c5399290888921d46aa851
Author: mxnet-ci 
AuthorDate: Sun Feb 3 07:08:57 2019 +

Bump the publish timestamp.
---
 date.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/date.txt b/date.txt
new file mode 100644
index 000..4f4d489
--- /dev/null
+++ b/date.txt
@@ -0,0 +1 @@
+Sun Feb  3 07:08:57 UTC 2019



[GitHub] zhreshold commented on a change in pull request #14031: Fix transposed convolution in CPU w/o MKLDNN.

2019-02-02 Thread GitBox
zhreshold commented on a change in pull request #14031: Fix transposed 
convolution in CPU w/o MKLDNN.
URL: https://github.com/apache/incubator-mxnet/pull/14031#discussion_r253289470
 
 

 ##
 File path: tests/python/unittest/test_gluon.py
 ##
 @@ -503,6 +503,40 @@ def test_deconv():
 # layer = nn.Conv3DTranspose(16, (3, 3, 3), layout='NDHWC', in_channels=4)
 # # check_layer_forward(layer, (1, 10, 10, 10, 4))
 
+@with_seed()
+def test_deconv_dilation():
 
 Review comment:
   Since deconv is a really important op, I suggest visiting the original 
deconv test cases and adding dilation > 1 cases alongside the old tests. This 
ensures better coverage than this single test case. 
   Feel free to keep this unittest, which LGTM as well.
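   For context, a dilation > 1 case mainly changes the expected output shape of 
the transposed convolution. A minimal sketch of that relationship, assuming the 
standard transposed-convolution shape formula (this helper is illustrative, not 
part of the PR):

   ```python
   def deconv_out_size(in_size, kernel, stride=1, pad=0, dilation=1, output_padding=0):
       """Expected spatial output size of a transposed convolution.

       Standard formula:
       out = (in - 1)*stride - 2*pad + dilation*(kernel - 1) + 1 + output_padding
       """
       return (in_size - 1) * stride - 2 * pad + dilation * (kernel - 1) + 1 + output_padding

   # A dilated kernel spans dilation*(kernel-1)+1 positions, so for a 10x10
   # input and a 3x3 kernel, dilation=2 grows the output from 12 to 14.
   assert deconv_out_size(10, 3, dilation=1) == 12
   assert deconv_out_size(10, 3, dilation=2) == 14
   ```

   A dilation-aware test could compare the layer's actual output shape against 
this formula for several dilation values.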


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] KellenSunderland commented on issue #14040: Reformat of TensorRT to use subgraph API

2019-02-02 Thread GitBox
KellenSunderland commented on issue #14040: Reformat of TensorRT to use 
subgraph API
URL: https://github.com/apache/incubator-mxnet/pull/14040#issuecomment-460027375
 
 
   ONNX-Tensorrt might fail to build because it adds a new default target that 
requires a header outside the standard search paths.  There are a few ways to 
work around the problem: we could build the library but not that tool in CI, or 
we could include the header folder location in the search path like so: 
https://github.com/apache/incubator-mxnet/pull/13906/files#diff-56133c25b5a238b76f54c0928f05a8e6
   
   Making that change in this PR should allow the TensorRT build to pass CI.




[GitHub] KellenSunderland removed a comment on issue #13906: [MXNET-703] Update onnx-tensorrt for int8/fp16 support

2019-02-02 Thread GitBox
KellenSunderland removed a comment on issue #13906: [MXNET-703] Update 
onnx-tensorrt for int8/fp16 support
URL: https://github.com/apache/incubator-mxnet/pull/13906#issuecomment-460020295
 
 
   18 days, no review, closing.




[GitHub] KellenSunderland opened a new pull request #13906: [MXNET-703] Update onnx-tensorrt for int8/fp16 support

2019-02-02 Thread GitBox
KellenSunderland opened a new pull request #13906: [MXNET-703] Update 
onnx-tensorrt for int8/fp16 support
URL: https://github.com/apache/incubator-mxnet/pull/13906
 
 
   ## Description ##
   Update onnx-tensorrt with new support for different data types, including 
(importantly for V100) fp16.  This will not fully enable int8/fp16 inference 
yet, but it lays the groundwork for further updates.
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [x] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [x] Changes are complete (i.e. I finished coding on this PR)
   - [x] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [x] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, 
the source of the dataset, expected performance on the test set, and a reference 
to the original paper if applicable
   - Check the API doc at 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [x] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   




[GitHub] KellenSunderland commented on a change in pull request #14040: Reformat of TensorRT to use subgraph API

2019-02-02 Thread GitBox
KellenSunderland commented on a change in pull request #14040: Reformat of 
TensorRT to use subgraph API
URL: https://github.com/apache/incubator-mxnet/pull/14040#discussion_r253288751
 
 

 ##
 File path: src/operator/subgraph/tensorrt/tensorrt-inl.h
 ##
 @@ -0,0 +1,217 @@
+#ifndef MXNET_OPERATOR_SUBGRAPH_TENSORRT_TENSORRT_INL_H_
+#define MXNET_OPERATOR_SUBGRAPH_TENSORRT_TENSORRT_INL_H_
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * Copyright (c) 2018 by Contributors
+ * \file tensorrt-inl.h
+ * \brief TensorRT operation registration
+ * \author Marek Kolodziej, Clement Fuji Tsang
+*/
+
+#if MXNET_USE_TENSORRT
+
+
+#include "../common.h"
+#include "../subgraph_property.h"
+#include "nnvm_to_onnx-inl.h"
+#include "./onnx_to_tensorrt.h"
+
+namespace mxnet {
+namespace op {
+
+using int64 = ::google::protobuf::int64;
+
+struct TRTParam {
+  std::unordered_map<std::string, uint32_t> inputs_to_idx;
+  std::unordered_map<std::string, uint32_t> outputs_to_idx;
+  std::unordered_map<std::string, NDArray> params_map;
+};
+
+struct TRTEngineParam {
+  nvinfer1::IExecutionContext* trt_executor = nullptr;
+  std::vector > binding_vec;
+};
+
+
+class TensorrtSelector : public SubgraphSelector {
+ public:
+  const std::unordered_set<std::string> unconditionalTRTops = {
+"Convolution",
+"BatchNorm",
+"elemwise_add",
+"elemwise_sub",
+"elemwise_mul",
+"rsqrt",
+"pad",
+"Pad",
+"mean",
+"FullyConnected",
+"Flatten",
+"SoftmaxOutput",
+  };
+
+  const std::unordered_set<std::string> withWeightsOps = {
+"Convolution",
+"BatchNorm",
+"FullyConnected"
+  };
+
+  bool isTRTCompatible(const nnvm::Node &n) {
+const std::string op_name = n.op()->name;
+if (op_name == "Pooling") {
+  return (n.attrs.dict.at("pool_type") == "avg" ||
+  n.attrs.dict.at("pool_type") == "max");
+}
+
+if (unconditionalTRTops.count(op_name)) {
+  return true;
+}
+
+if (op_name == "Activation") {
+  return n.attrs.dict.at("act_type") == "relu" ||
+n.attrs.dict.at("act_type") == "tanh" ||
+n.attrs.dict.at("act_type") == "sigmoid";
+}
+
+return false;
+  }
+
+  bool Select(const nnvm::Node &n) override {
+return !n.is_variable() && isTRTCompatible(n);
+  }
+
+  bool SelectInput(const nnvm::Node &n, const nnvm::Node &new_node) override {
+if (new_node.is_variable()) {
+  if (withWeightsOps.count(n.op()->name)) {
+return n.inputs[0].node->attrs.name != new_node.attrs.name;
+  } else {
+return false;
+  }
+}
+if (isTRTCompatible(new_node))
+  return true;
+return false;
+  }
+
+  bool SelectOutput(const nnvm::Node &n, const nnvm::Node &new_node) override {
+   return isTRTCompatible(new_node);
+  }
+
+  std::vector<nnvm::Node*> Filter(const std::vector<nnvm::Node*>& candidates) override {
+bool found_one = false;
+// TensorRT is interesting with at least 2 operations
+for (auto& n : candidates) {
+  if (!n->is_variable()) {
+if (found_one) {
+  return candidates;
+} else {
+  found_one = true;
+}
+  }
+}
+return std::vector<nnvm::Node*>();
+  }
+};
+
+class TensorrtProperty : public SubgraphProperty {
+ public:
+  static SubgraphPropertyPtr Create() {
+return std::make_shared<TensorrtProperty>();
+  }
+
+  nnvm::NodePtr CreateSubgraphNode(const nnvm::Symbol &sym,
+   const int subgraph_id = 0) const override {
 
 Review comment:
   If possible, can we avoid default values in overriding functions?




[GitHub] KellenSunderland commented on a change in pull request #14040: Reformat of TensorRT to use subgraph API

2019-02-02 Thread GitBox
KellenSunderland commented on a change in pull request #14040: Reformat of 
TensorRT to use subgraph API
URL: https://github.com/apache/incubator-mxnet/pull/14040#discussion_r253288427
 
 

 ##
 File path: src/operator/subgraph/tensorrt/tensorrt-inl.h
 ##
 @@ -0,0 +1,217 @@
+#ifndef MXNET_OPERATOR_SUBGRAPH_TENSORRT_TENSORRT_INL_H_
+#define MXNET_OPERATOR_SUBGRAPH_TENSORRT_TENSORRT_INL_H_
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * Copyright (c) 2018 by Contributors
+ * \file tensorrt-inl.h
+ * \brief TensorRT operation registration
+ * \author Marek Kolodziej, Clement Fuji Tsang
+*/
+
+#if MXNET_USE_TENSORRT
+
+
+#include "../common.h"
+#include "../subgraph_property.h"
+#include "nnvm_to_onnx-inl.h"
+#include "./onnx_to_tensorrt.h"
+
+namespace mxnet {
+namespace op {
+
+using int64 = ::google::protobuf::int64;
+
+struct TRTParam {
+  std::unordered_map<std::string, uint32_t> inputs_to_idx;
+  std::unordered_map<std::string, uint32_t> outputs_to_idx;
+  std::unordered_map<std::string, NDArray> params_map;
+};
+
+struct TRTEngineParam {
+  nvinfer1::IExecutionContext* trt_executor = nullptr;
+  std::vector > binding_vec;
+};
+
+
+class TensorrtSelector : public SubgraphSelector {
+ public:
+  const std::unordered_set<std::string> unconditionalTRTops = {
+"Convolution",
+"BatchNorm",
+"elemwise_add",
+"elemwise_sub",
+"elemwise_mul",
+"rsqrt",
+"pad",
+"Pad",
+"mean",
+"FullyConnected",
+"Flatten",
+"SoftmaxOutput",
+  };
+
+  const std::unordered_set<std::string> withWeightsOps = {
+"Convolution",
+"BatchNorm",
+"FullyConnected"
+  };
+
+  bool isTRTCompatible(const nnvm::Node &n) {
+const std::string op_name = n.op()->name;
+if (op_name == "Pooling") {
+  return (n.attrs.dict.at("pool_type") == "avg" ||
+  n.attrs.dict.at("pool_type") == "max");
+}
+
+if (unconditionalTRTops.count(op_name)) {
+  return true;
+}
+
+if (op_name == "Activation") {
+  return n.attrs.dict.at("act_type") == "relu" ||
+n.attrs.dict.at("act_type") == "tanh" ||
+n.attrs.dict.at("act_type") == "sigmoid";
+}
+
+return false;
+  }
+
+  bool Select(const nnvm::Node &n) override {
+return !n.is_variable() && isTRTCompatible(n);
+  }
+
+  bool SelectInput(const nnvm::Node &n, const nnvm::Node &new_node) override {
+if (new_node.is_variable()) {
+  if (withWeightsOps.count(n.op()->name)) {
+return n.inputs[0].node->attrs.name != new_node.attrs.name;
+  } else {
+return false;
+  }
+}
+if (isTRTCompatible(new_node))
+  return true;
+return false;
+  }
+
+  bool SelectOutput(const nnvm::Node &n, const nnvm::Node &new_node) override {
+   return isTRTCompatible(new_node);
+  }
+
+  std::vector<nnvm::Node*> Filter(const std::vector<nnvm::Node*>& candidates) override {
+bool found_one = false;
+// TensorRT is interesting with at least 2 operations
+for (auto& n : candidates) {
+  if (!n->is_variable()) {
+if (found_one) {
+  return candidates;
+} else {
+  found_one = true;
+}
+  }
+}
+return std::vector<nnvm::Node*>();
+  }
+};
+
+class TensorrtProperty : public SubgraphProperty {
+ public:
+  static SubgraphPropertyPtr Create() {
+return std::make_shared<TensorrtProperty>();
+  }
+
+  nnvm::NodePtr CreateSubgraphNode(const nnvm::Symbol &sym,
+   const int subgraph_id = 0) const override {
+nnvm::NodePtr n = nnvm::Node::Create();
+nnvm::Symbol new_sym;
+std::unique_copy(sym.outputs.begin(), sym.outputs.end(),
+std::back_inserter(new_sym.outputs), [](
+nnvm::NodeEntry lhs, nnvm::NodeEntry rhs) {
+  return lhs.index == rhs.index && lhs.node.get() == rhs.node.get();
+});
+n->attrs.name = "TensorRT" + std::to_string(subgraph_id);
+n->attrs.op = Op::Get("_TensorRT");
+CHECK(n->attrs.op);
+n->attrs.subgraphs.emplace_back(std::make_shared<nnvm::Symbol>(new_sym));
+std::ostringstream params_oss;
+for (auto &e : new_sym.ListInputNames(nnvm::Symbol::kAll)) {
+  params_oss << e << ";";
+}
+auto tensorrt_params_names = params_oss.str();
+tensorrt_params_names.pop_back();
+n->attrs.dict["subgraph_params_names"] = tensorrt_params_names;
+TRTParam param;

[GitHub] KellenSunderland commented on a change in pull request #14040: Reformat of TensorRT to use subgraph API

2019-02-02 Thread GitBox
KellenSunderland commented on a change in pull request #14040: Reformat of 
TensorRT to use subgraph API
URL: https://github.com/apache/incubator-mxnet/pull/14040#discussion_r253288413
 
 

 ##
 File path: src/operator/subgraph/tensorrt/tensorrt-inl.h
 ##
 @@ -0,0 +1,217 @@
+#ifndef MXNET_OPERATOR_SUBGRAPH_TENSORRT_TENSORRT_INL_H_
+#define MXNET_OPERATOR_SUBGRAPH_TENSORRT_TENSORRT_INL_H_
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * Copyright (c) 2018 by Contributors
+ * \file tensorrt-inl.h
+ * \brief TensorRT operation registration
+ * \author Marek Kolodziej, Clement Fuji Tsang
+*/
+
+#if MXNET_USE_TENSORRT
+
+
+#include "../common.h"
+#include "../subgraph_property.h"
+#include "nnvm_to_onnx-inl.h"
+#include "./onnx_to_tensorrt.h"
+
+namespace mxnet {
+namespace op {
+
+using int64 = ::google::protobuf::int64;
+
+struct TRTParam {
+  std::unordered_map<std::string, uint32_t> inputs_to_idx;
+  std::unordered_map<std::string, uint32_t> outputs_to_idx;
+  std::unordered_map<std::string, NDArray> params_map;
+};
+
+struct TRTEngineParam {
+  nvinfer1::IExecutionContext* trt_executor = nullptr;
+  std::vector > binding_vec;
+};
+
+
+class TensorrtSelector : public SubgraphSelector {
+ public:
+  const std::unordered_set<std::string> unconditionalTRTops = {
+"Convolution",
+"BatchNorm",
+"elemwise_add",
+"elemwise_sub",
+"elemwise_mul",
+"rsqrt",
+"pad",
+"Pad",
+"mean",
+"FullyConnected",
+"Flatten",
+"SoftmaxOutput",
+  };
+
+  const std::unordered_set<std::string> withWeightsOps = {
+"Convolution",
+"BatchNorm",
+"FullyConnected"
+  };
+
+  bool isTRTCompatible(const nnvm::Node &n) {
+const std::string op_name = n.op()->name;
+if (op_name == "Pooling") {
+  return (n.attrs.dict.at("pool_type") == "avg" ||
+  n.attrs.dict.at("pool_type") == "max");
+}
+
+if (unconditionalTRTops.count(op_name)) {
+  return true;
+}
+
+if (op_name == "Activation") {
+  return n.attrs.dict.at("act_type") == "relu" ||
+n.attrs.dict.at("act_type") == "tanh" ||
+n.attrs.dict.at("act_type") == "sigmoid";
+}
+
+return false;
+  }
+
+  bool Select(const nnvm::Node &n) override {
+return !n.is_variable() && isTRTCompatible(n);
+  }
+
+  bool SelectInput(const nnvm::Node &n, const nnvm::Node &new_node) override {
+if (new_node.is_variable()) {
+  if (withWeightsOps.count(n.op()->name)) {
+return n.inputs[0].node->attrs.name != new_node.attrs.name;
+  } else {
+return false;
+  }
+}
+if (isTRTCompatible(new_node))
 
 Review comment:
   Can be simplified to 
   ```C++
   return isTRTCompatible(new_node);
   ```




[GitHub] KellenSunderland commented on a change in pull request #14040: Reformat of TensorRT to use subgraph API

2019-02-02 Thread GitBox
KellenSunderland commented on a change in pull request #14040: Reformat of 
TensorRT to use subgraph API
URL: https://github.com/apache/incubator-mxnet/pull/14040#discussion_r253288287
 
 

 ##
 File path: src/operator/subgraph/tensorrt/onnx_to_tensorrt.cc
 ##
 @@ -127,16 +128,15 @@ nvinfer1::ICudaEngine* onnxToTrtCtx(
   }
   throw dmlc::Error("Cannot parse ONNX into TensorRT Engine");
   }
-
-  bool fp16 = trt_builder->platformHasFastFp16();
-
+  if (dmlc::GetEnv("MXNET_TENSORRT_USE_FP16", true)) {
+if (trt_builder->platformHasFastFp16()) {
+  trt_builder->setFp16Mode(true);
+} else {
+  LOG(INFO) << "WARNING: TensorRT can't use fp16 on this plateform";
 
 Review comment:
   Also, we're logging at INFO level but have WARNING in the message.  I'd 
remove the warning from the message and set the log level to warning.  (This is 
a common issue in our codebase.)
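   The general point here is that severity should live in the log level, not be 
duplicated in the message text, so that level-based filtering works. A 
language-agnostic sketch using Python's stdlib logging (the MXNet code itself 
uses dmlc-core's LOG macros; this is just an illustration):

   ```python
   import io
   import logging

   # Route log output into a buffer so the formatted severity can be inspected.
   buf = io.StringIO()
   handler = logging.StreamHandler(buf)
   handler.setFormatter(logging.Formatter("%(levelname)s: %(message)s"))
   log = logging.getLogger("trt_example")
   log.addHandler(handler)
   log.setLevel(logging.INFO)

   # Anti-pattern: severity duplicated in the message while the level says INFO.
   log.info("WARNING: TensorRT can't use fp16 on this platform")

   # Preferred: let the level itself carry the severity.
   log.warning("TensorRT can't use fp16 on this platform")

   lines = buf.getvalue().splitlines()
   assert lines[0].startswith("INFO: WARNING:")   # level and text disagree
   assert lines[1].startswith("WARNING: ")        # level and text agree
   ```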




[GitHub] KellenSunderland commented on a change in pull request #14040: Reformat of TensorRT to use subgraph API

2019-02-02 Thread GitBox
KellenSunderland commented on a change in pull request #14040: Reformat of 
TensorRT to use subgraph API
URL: https://github.com/apache/incubator-mxnet/pull/14040#discussion_r253288274
 
 

 ##
 File path: src/operator/subgraph/tensorrt/onnx_to_tensorrt.cc
 ##
 @@ -127,16 +128,15 @@ nvinfer1::ICudaEngine* onnxToTrtCtx(
   }
   throw dmlc::Error("Cannot parse ONNX into TensorRT Engine");
   }
-
-  bool fp16 = trt_builder->platformHasFastFp16();
-
+  if (dmlc::GetEnv("MXNET_TENSORRT_USE_FP16", true)) {
+if (trt_builder->platformHasFastFp16()) {
+  trt_builder->setFp16Mode(true);
+} else {
+  LOG(INFO) << "WARNING: TensorRT can't use fp16 on this plateform";
 
 Review comment:
   plateform -> platform




[GitHub] KellenSunderland commented on a change in pull request #14040: Reformat of TensorRT to use subgraph API

2019-02-02 Thread GitBox
KellenSunderland commented on a change in pull request #14040: Reformat of 
TensorRT to use subgraph API
URL: https://github.com/apache/incubator-mxnet/pull/14040#discussion_r253288158
 
 

 ##
 File path: src/operator/subgraph/tensorrt/nnvm_to_onnx.cc
 ##
 @@ -411,6 +383,49 @@ void ConvertElementwiseAdd(NodeProto* node_proto, const 
NodeAttrs& /*attrs*/,
   node_proto->set_op_type("Add");
 }
 
+inline TensorProto_DataType ConvertDType(int dtype) {
+  switch (dtype) {
+case mshadow::kFloat64:
+  return TensorProto_DataType_DOUBLE;
+case mshadow::kFloat32:
+  return TensorProto_DataType_FLOAT;
+case mshadow::kFloat16:
+  return TensorProto_DataType_FLOAT16;
+case mshadow::kUint8:
+  return TensorProto_DataType_UINT8;
+case mshadow::kInt32:
+  return TensorProto_DataType_INT32;
+case mshadow::kInt8:
+  return TensorProto_DataType_INT8;
+case mshadow::kInt64:
+  return TensorProto_DataType_INT64;
+default:
+  return TensorProto_DataType_UNDEFINED;
+  }
+}
+
+inline std::string StringDType(int dtype) {
 
 Review comment:
   Is this still needed? I don't see any references to it.




[GitHub] KellenSunderland commented on a change in pull request #14040: Reformat of TensorRT to use subgraph API

2019-02-02 Thread GitBox
KellenSunderland commented on a change in pull request #14040: Reformat of 
TensorRT to use subgraph API
URL: https://github.com/apache/incubator-mxnet/pull/14040#discussion_r253288021
 
 

 ##
 File path: include/mxnet/c_api.h
 ##
 @@ -1835,12 +1835,6 @@ MXNET_DLL int MXExecutorReshape(int partial_shaping,
 ExecutorHandle shared_exec,
 ExecutorHandle *out);
 
-/*!
- * \brief get optimized graph from graph executor
- */
-MXNET_DLL int MXExecutorGetOptimizedSymbol(ExecutorHandle handle,
-   SymbolHandle *out);
-
 
 Review comment:
   It affects semantic versioning if we remove it (it can break compilation in 
downstream projects), so if there's not a strong reason to remove it, we should 
leave it in.  Can it still perform a useful function (for example, showing the 
graph after optimization)?




[GitHub] KellenSunderland commented on a change in pull request #14040: Reformat of TensorRT to use subgraph API

2019-02-02 Thread GitBox
KellenSunderland commented on a change in pull request #14040: Reformat of 
TensorRT to use subgraph API
URL: https://github.com/apache/incubator-mxnet/pull/14040#discussion_r253288021
 
 

 ##
 File path: include/mxnet/c_api.h
 ##
 @@ -1835,12 +1835,6 @@ MXNET_DLL int MXExecutorReshape(int partial_shaping,
 ExecutorHandle shared_exec,
 ExecutorHandle *out);
 
-/*!
- * \brief get optimized graph from graph executor
- */
-MXNET_DLL int MXExecutorGetOptimizedSymbol(ExecutorHandle handle,
-   SymbolHandle *out);
-
 
 Review comment:
   It affects semantic versioning if we remove it (it can break compilation in 
downstream projects), so if there's not a strong reason to remove it, we should 
leave it in.  Can it still perform a useful function (for example, showing the 
graph after optimization)?




[GitHub] Caenorst commented on a change in pull request #14040: Reformat of TensorRT to use subgraph API

2019-02-02 Thread GitBox
Caenorst commented on a change in pull request #14040: Reformat of TensorRT to 
use subgraph API
URL: https://github.com/apache/incubator-mxnet/pull/14040#discussion_r253287088
 
 

 ##
 File path: include/mxnet/c_api.h
 ##
 @@ -1835,12 +1835,6 @@ MXNET_DLL int MXExecutorReshape(int partial_shaping,
 ExecutorHandle shared_exec,
 ExecutorHandle *out);
 
-/*!
- * \brief get optimized graph from graph executor
- */
-MXNET_DLL int MXExecutorGetOptimizedSymbol(ExecutorHandle handle,
-   SymbolHandle *out);
-
 
 Review comment:
   I actually created this API just for the previous TensorRT implementation; 
I'm not sure it is used anywhere else.  It could still be useful when calling a 
subgraph backend with a variable environment.  Do you want to keep it?




[GitHub] KellenSunderland closed pull request #13906: [MXNET-703] Update onnx-tensorrt for int8/fp16 support

2019-02-02 Thread GitBox
KellenSunderland closed pull request #13906: [MXNET-703] Update onnx-tensorrt 
for int8/fp16 support
URL: https://github.com/apache/incubator-mxnet/pull/13906
 
 
   




[GitHub] KellenSunderland commented on issue #13906: [MXNET-703] Update onnx-tensorrt for int8/fp16 support

2019-02-02 Thread GitBox
KellenSunderland commented on issue #13906: [MXNET-703] Update onnx-tensorrt 
for int8/fp16 support
URL: https://github.com/apache/incubator-mxnet/pull/13906#issuecomment-460020295
 
 
   18 days, no review, closing.




[GitHub] rajeshii opened a new pull request #14060: Exclude concat layer for gpu quantization

2019-02-02 Thread GitBox
rajeshii opened a new pull request #14060: Exclude concat layer  for gpu 
quantization
URL: https://github.com/apache/incubator-mxnet/pull/14060
 
 
   ## Description ##
   Exclude the concat layer for GPU quantization, since #13297 enabled the 
quantized_concat op for CPU only.
   Below is the error log before this fix:
   ```
   python imagenet_inference.py 
--symbol-file=./model/imagenet1k-inception-bn-quantized-5batches-naive-symbol.json
 --param-file=./model/imagenet1k-inception-bn-quantized-.params 
--rgb-mean=123.68,116.779,103.939 --num-skipped-batches=50 
--num-inference-batches=500 --dataset=./data/val_256_q90.rec
   INFO:logger:batch size = 32 for inference
   INFO:logger:rgb_mean = 123.68,116.779,103.939
   INFO:logger:rgb_std = 1,1,1
   INFO:logger:label_name = softmax_label
   INFO:logger:Input data shape = (3, 224, 224)
   INFO:logger:Dataset for inference: ./data/val_256_q90.rec
   [10:15:59] src/io/iter_image_recordio_2.cc:172: ImageRecordIOParser2: 
./data/val_256_q90.rec, use 39 threads for decoding..
   INFO:logger:Loading symbol from file 
/home/chenxiny/s8_conv/example/quantization/./model/imagenet1k-inception-bn-quantized-5batches-naive-symbol.json
   INFO:logger:Loading params from file 
/home/chenxiny/s8_conv/example/quantization/./model/imagenet1k-inception-bn-quantized-.params
   INFO:logger:Skipping the first 50 batches
   INFO:logger:Running model 
./model/imagenet1k-inception-bn-quantized-5batches-naive-symbol.json for 
inference
   [10:16:04] src/executor/attach_op_execs_pass.cc:351: Neither FCompute nor 
FComputeEx registered _contrib_quantized_concat
   ```
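   The fix amounts to filtering concat out of the set of layers considered for 
quantization when the target is GPU. A rough, hypothetical sketch of that kind 
of device-dependent exclusion logic (names are illustrative, not the actual 
MXNet API):

   ```python
   # Hypothetical sketch: drop ops that lack a quantized kernel on the target
   # device before quantization. Names are illustrative only.
   GPU_UNSUPPORTED_QUANT_OPS = {"Concat"}  # quantized_concat exists for CPU only

   def quantizable_ops(op_names, device="gpu"):
       """Return the ops eligible for quantization on the given device."""
       if device == "gpu":
           return [name for name in op_names if name not in GPU_UNSUPPORTED_QUANT_OPS]
       return list(op_names)

   ops = ["Convolution", "Concat", "FullyConnected"]
   assert quantizable_ops(ops, device="gpu") == ["Convolution", "FullyConnected"]
   assert quantizable_ops(ops, device="cpu") == ops
   ```

   Keeping concat in the excluded set on GPU avoids building a graph that 
references the unregistered _contrib_quantized_concat kernel at runtime.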
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [ ] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [x] Changes are complete (i.e. I finished coding on this PR)
   - [x] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [x] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, 
the source of the dataset, expected performance on the test set, and a reference 
to the original paper if applicable
   - Check the API doc at 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [x] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [ ] Feature1, tests, (and when applicable, API doc)
   - [ ] Feature2, tests, (and when applicable, API doc)
   
   ## Comments ##
   - If this change is a backward incompatible change, why must this change be 
made.
   - Interesting edge cases to note here
   




[GitHub] pengzhao-intel commented on issue #13668: Update MKL-DNN to fix the Dense layer issue

2019-02-02 Thread GitBox
pengzhao-intel commented on issue #13668: Update MKL-DNN to fix the Dense layer 
issue
URL: https://github.com/apache/incubator-mxnet/pull/13668#issuecomment-460016722
 
 
   @zheng-da please help verify the fix.




[GitHub] pengzhao-intel commented on issue #13668: Update MKL-DNN to fix the Dense layer issue

2019-02-02 Thread GitBox
pengzhao-intel commented on issue #13668: Update MKL-DNN to fix the Dense layer 
issue
URL: https://github.com/apache/incubator-mxnet/pull/13668#issuecomment-460016667
 
 
   We will update CI to the latest MKL-DNN before the merge. 




[GitHub] ZhennanQin commented on issue #14052: flaky test: test_operator.test_depthwise_convolution

2019-02-02 Thread GitBox
ZhennanQin commented on issue #14052: flaky test: 
test_operator.test_depthwise_convolution
URL: 
https://github.com/apache/incubator-mxnet/issues/14052#issuecomment-460016527
 
 
   @mseth10, thanks for reporting this. Confirmed it's caused by an mkldnn bug. 
The fix is already in mkldnn master, but not yet available in its latest 
release. We will upgrade mkldnn when its next release comes out.




[GitHub] zheng-da commented on a change in pull request #14040: Reformat of TensorRT to use subgraph API

2019-02-02 Thread GitBox
zheng-da commented on a change in pull request #14040: Reformat of TensorRT to 
use subgraph API
URL: https://github.com/apache/incubator-mxnet/pull/14040#discussion_r253284118
 
 

 ##
 File path: include/mxnet/c_api.h
 ##
 @@ -1835,12 +1835,6 @@ MXNET_DLL int MXExecutorReshape(int partial_shaping,
 ExecutorHandle shared_exec,
 ExecutorHandle *out);
 
-/*!
- * \brief get optimized graph from graph executor
- */
-MXNET_DLL int MXExecutorGetOptimizedSymbol(ExecutorHandle handle,
-   SymbolHandle *out);
-
 
 Review comment:
   why do you need to remove an API?




[GitHub] huangzhiyuan commented on issue #14056: A better split-2D(SliceChannel) op forward kernel for CPU

2019-02-02 Thread GitBox
huangzhiyuan commented on issue #14056: A better split-2D(SliceChannel) op 
forward kernel for CPU
URL: https://github.com/apache/incubator-mxnet/pull/14056#issuecomment-460014063
 
 
   It seems the recently submitted PRs are blocked by the same test case; I 
will rebase my code after CNY, thanks! 




[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2019-02-02 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new c4de8b2  Bump the publish timestamp.
c4de8b2 is described below

commit c4de8b225f331ead9fc92f13b854eaa74f8caebc
Author: mxnet-ci 
AuthorDate: Sun Feb 3 01:08:22 2019 +0000

Bump the publish timestamp.
---
 date.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/date.txt b/date.txt
new file mode 100644
index 0000000..ab916b7
--- /dev/null
+++ b/date.txt
@@ -0,0 +1 @@
+Sun Feb  3 01:08:22 UTC 2019



[GitHub] pengzhao-intel commented on issue #13697: [MKLDNN] Enable signed int8 support for convolution.

2019-02-02 Thread GitBox
pengzhao-intel commented on issue #13697: [MKLDNN] Enable signed int8 support 
for convolution.
URL: https://github.com/apache/incubator-mxnet/pull/13697#issuecomment-460013352
 
 
   @xinyu-intel could you help verify the GPU accuracy with this PR?




[GitHub] stephenrawls commented on issue #14053: in-place reshape ops

2019-02-02 Thread GitBox
stephenrawls commented on issue #14053: in-place reshape ops
URL: https://github.com/apache/incubator-mxnet/pull/14053#issuecomment-460005020
 
 
   @szha -- Thanks for putting in this patch!
   
   I have a couple questions about the operator `expand_dims`, which I 
understand you are actually changing the python code to *not* use that 
operator, but I think still has relevance.
   
   (1) I originally discovered this because I was using the C API to call 
MXImperativeInvoke() using the expand_dims operator, and I noticed it was 
causing a copy. This fix only effects the Python version when operating on 
ndarrays, not users of the expand_dims operator.
   
   I see that there is a path in the operator that checks if it is an in-place 
operation. Presumably it uses that path if I pass an output array that is the 
same NDArrayHandle as the input array? But what if I still need the original 
input array handle, and I want to create a new output array handle with the 
expanded dim but still not make a copy?
   
   (2) In the issue I created you commented that: "For symbol (and thus the 
hybridized version), since in-place identity is possible it should not matter".
   
   Can you talk a little more about that? I assume you mean that in this case:
   ```
   x_expanded = x.expand_dims(1)
   y = x_expanded + foo
   ```
   The engine can figure out that x is not needed again, and can thus turn the 
expand_dims(1) into an in-place operation that doesn't make a copy?
   
   I'm not very familiar with how this part of the code works, so what happens 
if you had code that looked like this?
   ```
   x_expanded = x.expand_dims(1)
   y = x_expanded + foo
   z = 2 * x
   ```
   i.e. the code still makes a reference to the original x, and thus presumably 
the engine can't decide to use the in-place version of expand_dims in that 
case, right? So I guess my question is -- Does the ability for the Symbolic / 
hybridized engine to elide the copy depend on the code not referencing the 
un-expanded version of the array after calling expand_dims()? If so, it seems 
like there will still be some use cases where an unexpected copy is happening.
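
   For contrast, the no-copy behavior being asked about can be illustrated 
with NumPy (an analogy only; whether MXNet's `expand_dims` elides the copy in 
each of these cases is exactly the open question above). `np.expand_dims` is 
implemented as a reshape and returns a view, so the original array stays 
usable afterwards without forcing a copy:

   ```python
   import numpy as np

   x = np.arange(6, dtype=np.float32)

   # expand_dims returns a view onto x's buffer (a reshape), not a copy
   x_expanded = np.expand_dims(x, 0)

   # the original, un-expanded x is still referenced afterwards,
   # yet no copy of the underlying data was made
   y = x_expanded + 1.0
   z = 2 * x

   print(np.shares_memory(x, x_expanded))  # True: view, not copy
   print(x_expanded.shape)                 # (1, 6)
   ```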




[GitHub] stephenrawls commented on issue #13680: [MXNET-1121] Example to demonstrate the inference workflow using RNN

2019-02-02 Thread GitBox
stephenrawls commented on issue #13680: [MXNET-1121] Example to demonstrate the 
inference workflow using RNN
URL: https://github.com/apache/incubator-mxnet/pull/13680#issuecomment-46730
 
 
   Looks good to me, thanks for the example. Made a few small comments in the 
code about sharing executor memory, and extracting model output.
   
   It would probably be nice to showcase batching too, but I know that 
complicates the current example code & bucketing executor, so not really a 
requirement! Thanks again for getting an example of doing inference in the c++ 
api.




[GitHub] stephenrawls commented on a change in pull request #13680: [MXNET-1121] Example to demonstrate the inference workflow using RNN

2019-02-02 Thread GitBox
stephenrawls commented on a change in pull request #13680: [MXNET-1121] Example 
to demonstrate the inference workflow using RNN
URL: https://github.com/apache/incubator-mxnet/pull/13680#discussion_r253278924
 
 

 ##
 File path: cpp-package/example/inference/sentiment_analysis_rnn.cpp
 ##
 @@ -0,0 +1,397 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*
+ * This example demonstrates sentiment prediction workflow with pre-trained 
RNN model using MXNet C++ API.
+ * The example performs the following tasks.
+ * 1. Load the pre-trained RNN model,
+ * 2. Load the dictionary file that contains word to index mapping.
+ * 3. Convert the input string to a vector of indices and pad it to match the 
input data length.
+ * 4. Run the forward pass and predict the output string.
+ * The example uses a pre-trained RNN model that is trained with the IMDB 
dataset.
+ */
+
+#include <sys/stat.h>
+#include <fstream>
+#include <iomanip>
+#include <iostream>
+#include <map>
+#include <sstream>
+#include <string>
+#include <vector>
+#include "mxnet-cpp/MxNetCpp.h"
+
+using namespace mxnet::cpp;
+
+static const int DEFAULT_NUM_WORDS = 5;
+static const char DEFAULT_S3_URL[] = 
"https://s3.amazonaws.com/mxnet-cpp/RNN_model/";
+
+/*
+ * class Predictor
+ *
+ * This class encapsulates the functionality to load the model, process the 
input text and run the forward pass.
+ */
+
+class Predictor {
+ public:
+Predictor() {}
+Predictor(const std::string& model_json,
+  const std::string& model_params,
+  const std::string& input_dictionary,
+  bool use_gpu = false,
+  int num_words = DEFAULT_NUM_WORDS);
+float PredictSentiment(const std::string &input_sequence);
+~Predictor();
+
+ private:
+void LoadModel(const std::string& model_json_file);
+void LoadParameters(const std::string& model_parameters_file);
+void LoadDictionary(const std::string &input_dictionary);
+inline bool FileExists(const std::string& name) {
+struct stat buffer;
+return (stat(name.c_str(), &buffer) == 0);
+}
+int ConverToIndexVector(const std::string& input,
+  std::vector<float> *input_vector);
+int GetIndexForOutputSymbolName(const std::string& output_symbol_name);
+float GetIndexForWord(const std::string& word);
+std::map<std::string, NDArray> args_map;
+std::map<std::string, NDArray> aux_map;
+std::map<std::string, int> wordToIndex;
+Symbol net;
+Executor *executor;
+Context global_ctx = Context::cpu();
+int num_words;
+};
+
+
+/*
+ * The constructor takes the following parameters as input:
+ * 1. model_json:  The RNN model in json formatted file.
+ * 2. model_params: File containing model parameters
+ * 3. input_dictionary: File containing the word and associated index.
+ * 4. num_words: Number of words which will be used to predict the sentiment.
+ *
+ * The constructor:
+ *  1. Loads the model and parameter files.
+ *  2. Loads the dictionary file to create index to word and word to index 
maps.
+ *  3. Invokes the SimpleBind to bind the input argument to the model and 
create an executor.
+ *
+ *  The SimpleBind is expected to be invoked only once.
+ */
+Predictor::Predictor(const std::string& model_json,
+ const std::string& model_params,
+ const std::string& input_dictionary,
+ bool use_gpu,
+ int num_words):num_words(num_words) {
+  if (use_gpu) {
+global_ctx = Context::gpu();
+  }
+
+  /*
+   * Load the dictionary file that contains the word and its index.
+   * The function creates word to index and index to word map. The maps are 
used to create index
+   * vector for the input sentence.
+   */
+  LoadDictionary(input_dictionary);
+
+  // Load the model
+  LoadModel(model_json);
+
+  // Load the model parameters.
+  LoadParameters(model_params);
+
+  args_map["data0"] = NDArray(Shape(num_words, 1), global_ctx, false);
+  args_map["data1"] = NDArray(Shape(1), global_ctx, false);
+
+  executor = net.SimpleBind(global_ctx, args_map, std::map<std::string, NDArray>(),
+  std::map<std::string, OpReqType>(), aux_map);
+}
+
+
+/*
+ * The following function loads the model from json file.
+ */
+void Predictor::LoadModel(const std::string& model_json_file) {
+  if (!FileExists(model_json_file)) {
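
The dictionary lookup and padding described in the example's header comment 
(steps 2 and 3) can be sketched in Python. The function name, the pad value of 
0, lowercasing, and truncation to `num_words` are illustrative assumptions, 
not the example's exact behavior:

```python
def to_index_vector(line, word_to_index, num_words, pad_index=0.0):
    """Convert a sentence to a fixed-length vector of word indices.

    Unknown words map to pad_index; the result is clipped or padded
    to exactly num_words entries (assumption for illustration).
    """
    tokens = line.lower().split()
    indices = [word_to_index.get(w, pad_index) for w in tokens[:num_words]]
    indices += [pad_index] * (num_words - len(indices))
    return indices

word_to_index = {"good": 12.0, "movie": 7.0}      # toy dictionary
print(to_index_vector("Good movie", word_to_index, 5))
# [12.0, 7.0, 0.0, 0.0, 0.0]
```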

[GitHub] stephenrawls commented on a change in pull request #13680: [MXNET-1121] Example to demonstrate the inference workflow using RNN

2019-02-02 Thread GitBox
stephenrawls commented on a change in pull request #13680: [MXNET-1121] Example 
to demonstrate the inference workflow using RNN
URL: https://github.com/apache/incubator-mxnet/pull/13680#discussion_r253278383
 
 

 ##
 File path: cpp-package/example/inference/sentiment_analysis_rnn.cpp
 ##
 @@ -0,0 +1,464 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*
+ * This example demonstrates sentiment prediction workflow with pre-trained 
RNN model using MXNet C++ API.
+ * The example performs the following tasks.
+ * 1. Load the pre-trained RNN model,
+ * 2. Load the dictionary file that contains word to index mapping.
+ * 3. Create executors for pre-determined input lengths.
+ * 4. Convert each line in the input to the vector of indices.
+ * 5. Predictor finds the right executor for each line.
+ * 6. Run the forward pass for each line and predict the sentiment scores.
+ * The example uses a pre-trained RNN model that is trained with the IMDB 
dataset.
+ */
+
+#include <sys/stat.h>
+#include <fstream>
+#include <iomanip>
+#include <iostream>
+#include <map>
+#include <sstream>
+#include <string>
+#include <vector>
+#include "mxnet-cpp/MxNetCpp.h"
+
+using namespace mxnet::cpp;
+
+static const int DEFAULT_BUCKET_KEYS[] = {5, 10, 15, 20, 25, 30};
+static const char DEFAULT_S3_URL[] = 
"https://s3.amazonaws.com/mxnet-cpp/RNN_model/";
+
+/*
+ * class Predictor
+ *
+ * This class encapsulates the functionality to load the model, process the 
input text and run the forward pass.
+ */
+
+class Predictor {
+ public:
+Predictor() {}
+Predictor(const std::string& model_json,
+  const std::string& model_params,
+  const std::string& input_dictionary,
+  const std::vector<int>& bucket_keys,
+  bool use_gpu = false);
+float PredictSentiment(const std::string &input_review);
+~Predictor();
+
+ private:
+void LoadModel(const std::string& model_json_file);
+void LoadParameters(const std::string& model_parameters_file);
+void LoadDictionary(const std::string &input_dictionary);
+inline bool FileExists(const std::string& name) {
+struct stat buffer;
+return (stat(name.c_str(), &buffer) == 0);
+}
+float PredictSentimentForOneLine(const std::string &input_line);
+int ConvertToIndexVector(const std::string& input,
+  std::vector<float> *input_vector);
+int GetIndexForOutputSymbolName(const std::string& output_symbol_name);
+float GetIndexForWord(const std::string& word);
+int GetClosestBucketKey(int num_words);
+std::map<std::string, NDArray> args_map;
+std::map<std::string, NDArray> aux_map;
+std::map<std::string, int> wordToIndex;
+Symbol net;
+std::map<int, Executor*> executor_buckets;
+Context global_ctx = Context::cpu();
+};
+
+
+/*
+ * The constructor takes the following parameters as input:
+ * 1. model_json:  The RNN model in json formatted file.
+ * 2. model_params: File containing model parameters
+ * 3. input_dictionary: File containing the word and associated index.
+ * 4. bucket_keys: A vector of bucket keys (input lengths) for which executors 
are created.
+ *
+ * The constructor:
+ *  1. Loads the model and parameter files.
+ *  2. Loads the dictionary file to create index to word and word to index 
maps.
+ *  3. For each bucket key in the input vector of bucket keys, it invokes the 
SimpleBind to
+ * create the executor. The bucket key determines the length of input data 
required
+ * for that executor.
+ *  4. Creates a map of bucket key to corresponding executor.
+ *  5. The model is loaded only once. The executors share the memory for the 
parameters.
+ */
+Predictor::Predictor(const std::string& model_json,
+ const std::string& model_params,
+ const std::string& input_dictionary,
+ const std::vector<int>& bucket_keys,
+ bool use_gpu) {
+  if (use_gpu) {
+global_ctx = Context::gpu();
+  }
+
+  /*
+   * Load the dictionary file that contains the word and its index.
+   * The function creates word to index and index to word map. The maps are 
used to create index
+   * vector for the input sentence.
+   */
+  LoadDictionary(input_dictionary);
+
+  // Load the model
+  LoadModel(model_json);
+
+  // Load the model parameters.
+  LoadParameters(model_params);
+
+
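
The bucket-selection step ("Predictor finds the right executor for each line") 
presumably picks the smallest bucket that can hold the sentence. A minimal 
Python sketch of that rule; the fallback to the largest bucket for 
longer-than-largest-bucket inputs is an assumption, not necessarily what the 
example's `GetClosestBucketKey` does:

```python
def closest_bucket_key(bucket_keys, num_words):
    """Return the smallest bucket key that can hold num_words;
    fall back to the largest bucket if none fits (assumed policy)."""
    for key in sorted(bucket_keys):
        if key >= num_words:
            return key
    return max(bucket_keys)

DEFAULT_BUCKET_KEYS = [5, 10, 15, 20, 25, 30]
print(closest_bucket_key(DEFAULT_BUCKET_KEYS, 7))   # 10
print(closest_bucket_key(DEFAULT_BUCKET_KEYS, 33))  # 30
```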

[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2019-02-02 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new 67dc46d  Bump the publish timestamp.
67dc46d is described below

commit 67dc46d713b1486507aaff61fc991e10a3aa8843
Author: mxnet-ci 
AuthorDate: Sat Feb 2 20:40:43 2019 +0000

Bump the publish timestamp.
---
 date.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/date.txt b/date.txt
new file mode 100644
index 0000000..1789ea1
--- /dev/null
+++ b/date.txt
@@ -0,0 +1 @@
+Sat Feb  2 20:40:43 UTC 2019



[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2019-02-02 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new ba27f54  Bump the publish timestamp.
ba27f54 is described below

commit ba27f54dd3072f02ce834e53dc108053272dea97
Author: mxnet-ci 
AuthorDate: Sat Feb 2 19:07:14 2019 +0000

Bump the publish timestamp.
---
 date.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/date.txt b/date.txt
new file mode 100644
index 0000000..118832a
--- /dev/null
+++ b/date.txt
@@ -0,0 +1 @@
+Sat Feb  2 19:07:14 UTC 2019



[GitHub] szha commented on issue #14053: in-place reshape ops

2019-02-02 Thread GitBox
szha commented on issue #14053: in-place reshape ops
URL: https://github.com/apache/incubator-mxnet/pull/14053#issuecomment-459990251
 
 
   @ZhennanQin I didn't test that specifically based on the assumption that the 
backend implementation should not cause existing APIs such as `NDArray.reshape` 
or `NDArray.shape` to break.




[GitHub] renganxu edited a comment on issue #14047: mxnet.base.MXNetError: Cannot find argument 'cudnn_algo_verbose'

2019-02-02 Thread GitBox
renganxu edited a comment on issue #14047: mxnet.base.MXNetError: Cannot find 
argument 'cudnn_algo_verbose'
URL: 
https://github.com/apache/incubator-mxnet/issues/14047#issuecomment-459983879
 
 
   @ptrendx I know NGC container works, but I just want to run without 
container. Now I figured out NGC MXNet changed the Convolution operator by 
adding more parameters: cudnn_algo_verbose, cudnn_algo_fwd, 
cudnn_algo_bwd_data, cudnn_algo_bwd_filter, and cudnn_tensor_core_only. But 
they are not available in MXNet repo.
   
   Do you know whether the MXNet repo has any plan to integrate these changes 
made by Nvidia? Thanks.




[GitHub] renganxu commented on issue #14047: mxnet.base.MXNetError: Cannot find argument 'cudnn_algo_verbose'

2019-02-02 Thread GitBox
renganxu commented on issue #14047: mxnet.base.MXNetError: Cannot find argument 
'cudnn_algo_verbose'
URL: 
https://github.com/apache/incubator-mxnet/issues/14047#issuecomment-459983879
 
 
   @ptrendx I know NGC container works, but I just want to run without 
container. Now I figured out NGC MXNet changed the Convolution operator by 
adding more parameters: cudnn_algo_verbose, cudnn_algo_fwd, 
cudnn_algo_bwd_data, cudnn_algo_bwd_filter, and cudnn_tensor_core_only. 
   
   Do you know whether the MXNet repo has any plan to integrate these changes 
made by Nvidia? Thanks.




[GitHub] larroy closed pull request #13834: Reduce maven verbosity which is filling up the logs of the builds

2019-02-02 Thread GitBox
larroy closed pull request #13834: Reduce maven verbosity which is filling up 
the logs of the builds
URL: https://github.com/apache/incubator-mxnet/pull/13834
 
 
   




[GitHub] mxnet-label-bot commented on issue #14059: ModuleNotFoundError: No module named 'mxnet'

2019-02-02 Thread GitBox
mxnet-label-bot commented on issue #14059: ModuleNotFoundError: No module named 
'mxnet'
URL: 
https://github.com/apache/incubator-mxnet/issues/14059#issuecomment-459979035
 
 
   Hey, this is the MXNet Label Bot. 
Thank you for submitting the issue! I will try and suggest some labels so 
that the appropriate MXNet community members can help resolve it. 
Here are my recommended labels: Installation




[GitHub] mahmoodn opened a new issue #14059: ModuleNotFoundError: No module named 'mxnet'

2019-02-02 Thread GitBox
mahmoodn opened a new issue #14059: ModuleNotFoundError: No module named 'mxnet'
URL: https://github.com/apache/incubator-mxnet/issues/14059
 
 
   Hi,
   Although I have built mxnet with `make -j8 USE_OPENCV=1 USE_BLAS=openblas 
USE_CUDA=1 USE_CUDA_PATH=/usr/local/cuda USE_CUDNN=1`, I am not able to run an 
example
   
   $ python main.py --configfile default.cfg
   Traceback (most recent call last):
  File "main.py", line 23, in <module>
   from config_util import parse_args, parse_contexts, generate_file_path
 File "/home/mahmood/mx/mxnet/example/speech_recognition/config_util.py", 
line 23, in <module>
   import mxnet as mx
   ModuleNotFoundError: No module named 'mxnet'
   
   I think I missed some steps. As stated 
[here](https://mxnet.incubator.apache.org/versions/master/install/build_from_source.html),
 there is a Ninja thing! Is that mandatory?
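
   A common cause of this error after building from source is that the 
`python/` directory of the clone is not on `PYTHONPATH` (and the bindings were 
never installed with pip). A sketch of the usual fix; the `~/mx/mxnet` path is 
taken from the traceback above and may differ on your machine:

   ```shell
   # Point MXNET_HOME at your clone (path assumed from the traceback above).
   MXNET_HOME="${MXNET_HOME:-$HOME/mx/mxnet}"

   # Put the Python bindings on the import path for this shell...
   export PYTHONPATH="$MXNET_HOME/python${PYTHONPATH:+:$PYTHONPATH}"
   echo "$PYTHONPATH"

   # ...or install them instead:
   #   cd "$MXNET_HOME/python" && pip install -e .
   # Ninja is only an alternative generator for the CMake build; it is not
   # required for the classic 'make' build used here.
   ```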
   




[GitHub] gigasquid commented on a change in pull request #13993: [Clojure] Add resource scope to clojure package

2019-02-02 Thread GitBox
gigasquid commented on a change in pull request #13993: [Clojure] Add resource 
scope to clojure package
URL: https://github.com/apache/incubator-mxnet/pull/13993#discussion_r253270361
 
 

 ##
 File path: 
contrib/clojure-package/test/org/apache/clojure_mxnet/resource_scope_test.clj
 ##
 @@ -0,0 +1,149 @@
+;;
+;; Licensed to the Apache Software Foundation (ASF) under one or more
+;; contributor license agreements.  See the NOTICE file distributed with
+;; this work for additional information regarding copyright ownership.
+;; The ASF licenses this file to You under the Apache License, Version 2.0
+;; (the "License"); you may not use this file except in compliance with
+;; the License.  You may obtain a copy of the License at
+;;
+;;http://www.apache.org/licenses/LICENSE-2.0
+;;
+;; Unless required by applicable law or agreed to in writing, software
+;; distributed under the License is distributed on an "AS IS" BASIS,
+;; WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+;; See the License for the specific language governing permissions and
+;; limitations under the License.
+;;
+
+(ns org.apache.clojure-mxnet.resource-scope-test
+  (:require [org.apache.clojure-mxnet.ndarray :as ndarray]
+[org.apache.clojure-mxnet.symbol :as sym]
+[org.apache.clojure-mxnet.resource-scope :as resource-scope]
+[clojure.test :refer :all]))
+
+
+(deftest test-resource-scope-with-ndarray
+  (let [native-resources (atom {})
+x (ndarray/ones [2 2])
+return-val (resource-scope/using
+(let [temp-x (ndarray/ones [3 1])
+  temp-y (ndarray/ones [3 1])]
+  (swap! native-resources assoc :temp-x temp-x)
+  (swap! native-resources assoc :temp-y temp-y)
+  (ndarray/+ temp-x 1)))]
+(is (true? (ndarray/is-disposed (:temp-x @native-resources))))
+(is (true? (ndarray/is-disposed (:temp-y @native-resources))))
+(is (false? (ndarray/is-disposed return-val)))
+(is (false? (ndarray/is-disposed x)))
+(is (= [2.0 2.0 2.0] (ndarray/->vec return-val)))))
+
+(deftest test-nested-resource-scope-with-ndarray
+  (let [native-resources (atom {})
+x (ndarray/ones [2 2])
+return-val (resource-scope/using
+(let [temp-x (ndarray/ones [3 1])]
+  (swap! native-resources assoc :temp-x temp-x)
+ (resource-scope/using
+  (let [temp-y (ndarray/ones [3 1])]
+(swap! native-resources assoc :temp-y temp-y)))))]
+(is (true? (ndarray/is-disposed (:temp-y @native-resources))))
+(is (true? (ndarray/is-disposed (:temp-x @native-resources))))
+(is (false? (ndarray/is-disposed x)))))
+
+(deftest test-resource-scope-with-sym
+  (let [native-resources (atom {})
+x (sym/ones [2 2])
+return-val (resource-scope/using
+(let [temp-x (sym/ones [3 1])
+  temp-y (sym/ones [3 1])]
+  (swap! native-resources assoc :temp-x temp-x)
+  (swap! native-resources assoc :temp-y temp-y)
+  (sym/+ temp-x 1)))]
+(is (true? (sym/is-disposed (:temp-x @native-resources))))
+(is (true? (sym/is-disposed (:temp-y @native-resources))))
+(is (false? (sym/is-disposed return-val)))
+(is (false? (sym/is-disposed x)))))
+
+(deftest test-nested-resource-scope-with-ndarray
+  (let [native-resources (atom {})
+x (ndarray/ones [2 2])
+return-val (resource-scope/using
+(let [temp-x (ndarray/ones [3 1])]
+  (swap! native-resources assoc :temp-x temp-x)
+ (resource-scope/using
+  (let [temp-y (ndarray/ones [3 1])]
+(swap! native-resources assoc :temp-y temp-y)))))]
+(is (true? (ndarray/is-disposed (:temp-y @native-resources))))
+(is (true? (ndarray/is-disposed (:temp-x @native-resources))))
+(is (false? (ndarray/is-disposed x)))))
+
+(deftest test-nested-resource-scope-with-sym
+  (let [native-resources (atom {})
+x (sym/ones [2 2])
+return-val (resource-scope/using
+(let [temp-x (sym/ones [3 1])]
+  (swap! native-resources assoc :temp-x temp-x)
+ (resource-scope/using
+  (let [temp-y (sym/ones [3 1])]
+(swap! native-resources assoc :temp-y temp-y)))))]
+(is (true? (sym/is-disposed (:temp-y @native-resources))))
+(is (true? (sym/is-disposed (:temp-x @native-resources))))
+(is (false? (sym/is-disposed x)))))
+
+;;; Note that if first is returned the rest of the collection ndarrays will
+;;; NOT be disposed
 
 Review comment:
   oh yes - the curse of the leftover misleading comments :)
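
   Independent of language, the contract these tests pin down is: every 
resource allocated inside the scope is disposed when the scope exits, except 
the value the scope returns (and anything created outside it). A toy Python 
sketch of that contract; this is not the Clojure implementation, and 
`Resource`, `using`, and `dispose` are illustrative names:

   ```python
   class Resource:
       """Stand-in for a native handle such as an NDArray."""
       def __init__(self, scope=None):
           self.is_disposed = False
           if scope is not None:
               scope.append(self)   # register with the enclosing scope

       def dispose(self):
           self.is_disposed = True


   def using(body):
       """Run body(scope); dispose every registered resource except the return value."""
       scope = []
       result = body(scope)
       for res in scope:
           if res is not result:
               res.dispose()
       return result


   made = {}

   def body(scope):
       made["temp-x"] = Resource(scope)
       made["temp-y"] = Resource(scope)
       made["ret"] = Resource(scope)
       return made["ret"]

   return_val = using(body)
   print(made["temp-x"].is_disposed, made["temp-y"].is_disposed)  # True True
   print(return_val.is_disposed)                                  # False
   ```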



[GitHub] kedarbellare commented on issue #13993: [Clojure] Add resource scope to clojure package

2019-02-02 Thread GitBox
kedarbellare commented on issue #13993: [Clojure] Add resource scope to clojure 
package
URL: https://github.com/apache/incubator-mxnet/pull/13993#issuecomment-459977844
 
 
   can't wait to use this 😃 




[GitHub] kedarbellare commented on a change in pull request #13993: [Clojure] Add resource scope to clojure package

2019-02-02 Thread GitBox
kedarbellare commented on a change in pull request #13993: [Clojure] Add 
resource scope to clojure package
URL: https://github.com/apache/incubator-mxnet/pull/13993#discussion_r253270291
 
 

 ##
 File path: 
contrib/clojure-package/test/org/apache/clojure_mxnet/resource_scope_test.clj
 ##
 @@ -0,0 +1,149 @@
+;;
+;; Licensed to the Apache Software Foundation (ASF) under one or more
+;; contributor license agreements.  See the NOTICE file distributed with
+;; this work for additional information regarding copyright ownership.
+;; The ASF licenses this file to You under the Apache License, Version 2.0
+;; (the "License"); you may not use this file except in compliance with
+;; the License.  You may obtain a copy of the License at
+;;
+;;http://www.apache.org/licenses/LICENSE-2.0
+;;
+;; Unless required by applicable law or agreed to in writing, software
+;; distributed under the License is distributed on an "AS IS" BASIS,
+;; WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+;; See the License for the specific language governing permissions and
+;; limitations under the License.
+;;
+
+(ns org.apache.clojure-mxnet.resource-scope-test
+  (:require [org.apache.clojure-mxnet.ndarray :as ndarray]
+[org.apache.clojure-mxnet.symbol :as sym]
+[org.apache.clojure-mxnet.resource-scope :as resource-scope]
+[clojure.test :refer :all]))
+
+
+(deftest test-resource-scope-with-ndarray
+  (let [native-resources (atom {})
+x (ndarray/ones [2 2])
+return-val (resource-scope/using
+(let [temp-x (ndarray/ones [3 1])
+  temp-y (ndarray/ones [3 1])]
+  (swap! native-resources assoc :temp-x temp-x)
+  (swap! native-resources assoc :temp-y temp-y)
+  (ndarray/+ temp-x 1)))]
+(is (true? (ndarray/is-disposed (:temp-x @native-resources))))
+(is (true? (ndarray/is-disposed (:temp-y @native-resources))))
+(is (false? (ndarray/is-disposed return-val)))
+(is (false? (ndarray/is-disposed x)))
+(is (= [2.0 2.0 2.0] (ndarray/->vec return-val)))))
+
+(deftest test-nested-resource-scope-with-ndarray
+  (let [native-resources (atom {})
+x (ndarray/ones [2 2])
+return-val (resource-scope/using
+(let [temp-x (ndarray/ones [3 1])]
+  (swap! native-resources assoc :temp-x temp-x)
+ (resource-scope/using
+  (let [temp-y (ndarray/ones [3 1])]
+(swap! native-resources assoc :temp-y temp-y)))))]
+(is (true? (ndarray/is-disposed (:temp-y @native-resources))))
+(is (true? (ndarray/is-disposed (:temp-x @native-resources))))
+(is (false? (ndarray/is-disposed x)))))
+
+(deftest test-resource-scope-with-sym
+  (let [native-resources (atom {})
+x (sym/ones [2 2])
+return-val (resource-scope/using
+(let [temp-x (sym/ones [3 1])
+  temp-y (sym/ones [3 1])]
+  (swap! native-resources assoc :temp-x temp-x)
+  (swap! native-resources assoc :temp-y temp-y)
+  (sym/+ temp-x 1)))]
+(is (true? (sym/is-disposed (:temp-x @native-resources))))
+(is (true? (sym/is-disposed (:temp-y @native-resources))))
+(is (false? (sym/is-disposed return-val)))
+(is (false? (sym/is-disposed x)))))
+
+(deftest test-nested-resource-scope-with-ndarray
+  (let [native-resources (atom {})
+x (ndarray/ones [2 2])
+return-val (resource-scope/using
+(let [temp-x (ndarray/ones [3 1])]
+  (swap! native-resources assoc :temp-x temp-x)
+ (resource-scope/using
+  (let [temp-y (ndarray/ones [3 1])]
+(swap! native-resources assoc :temp-y temp-y)))))]
+(is (true? (ndarray/is-disposed (:temp-y @native-resources))))
+(is (true? (ndarray/is-disposed (:temp-x @native-resources))))
+(is (false? (ndarray/is-disposed x)))))
+
+(deftest test-nested-resource-scope-with-sym
+  (let [native-resources (atom {})
+x (sym/ones [2 2])
+return-val (resource-scope/using
+(let [temp-x (sym/ones [3 1])]
+  (swap! native-resources assoc :temp-x temp-x)
+ (resource-scope/using
+  (let [temp-y (sym/ones [3 1])]
+(swap! native-resources assoc :temp-y temp-y)))))]
+(is (true? (sym/is-disposed (:temp-y @native-resources))))
+(is (true? (sym/is-disposed (:temp-x @native-resources))))
+(is (false? (sym/is-disposed x)))))
+
+;;; Note that if first is returned the rest of the collection ndarrays will
+;;; NOT be disposed
 
 Review comment:
   update comment?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

[GitHub] gigasquid commented on a change in pull request #13993: [Clojure] Add resource scope to clojure package

2019-02-02 Thread GitBox
gigasquid commented on a change in pull request #13993: [Clojure] Add resource 
scope to clojure package
URL: https://github.com/apache/incubator-mxnet/pull/13993#discussion_r253269849
 
 

 ##
 File path: 
contrib/clojure-package/examples/imclassification/src/imclassification/train_mnist.clj
 ##
 @@ -96,18 +75,33 @@
 (do
   (println "Starting Training of MNIST ")
   (println "Running with context devices of" devs)
-  (let [_mod (m/module (get-symbol) {:contexts devs})]
-(m/fit _mod {:train-data train-data
-:eval-data test-data
+  (resource-scope/with-let [_mod (m/module (get-symbol) {:contexts devs})]
+(-> _mod
+(m/fit {:train-data (mx-io/mnist-iter {:image (str data-dir 
"train-images-idx3-ubyte")
 
 Review comment:
   It will work if they are `defs` outside, but you will still get the 
undisposed-ndarray warning. I refactored it to move them out into functions, 
which does work and looks better too :)




[GitHub] gigasquid commented on a change in pull request #13993: [Clojure] Add resource scope to clojure package

2019-02-02 Thread GitBox
gigasquid commented on a change in pull request #13993: [Clojure] Add resource 
scope to clojure package
URL: https://github.com/apache/incubator-mxnet/pull/13993#discussion_r253269610
 
 

 ##
 File path: 
contrib/clojure-package/test/org/apache/clojure_mxnet/resource_scope_test.clj
 ##
 @@ -0,0 +1,151 @@
+;;; Note that if first is returned the rest of the collection ndarrays will
+;;; NOT be disposed
+(deftest test-list-creation-with-returning-first
+  (let [native-resources (atom {})
+return-val (resource-scope/using
+(let [temp-ndarrays (mapv (c

[GitHub] mahmoodn closed issue #14002: error: ‘__cpuid’ was not declared in this scope

2019-02-02 Thread GitBox
mahmoodn closed issue #14002: error: ‘__cpuid’ was not declared in this scope
URL: https://github.com/apache/incubator-mxnet/issues/14002
 
 
   




[GitHub] mahmoodn commented on issue #14002: error: ‘__cpuid’ was not declared in this scope

2019-02-02 Thread GitBox
mahmoodn commented on issue #14002: error: ‘__cpuid’ was not declared in this 
scope
URL: 
https://github.com/apache/incubator-mxnet/issues/14002#issuecomment-459975490
 
 
   It seems that I was able to compile the latest git version. I will close the 
issue. The last thing in the build output is
   
   ```
   g++ -DMSHADOW_FORCE_STREAM -Wall -Wsign-compare -O3 -DNDEBUG=1 
-I/home/mahmood/mx/mxnet/3rdparty/mshadow/ 
-I/home/mahmood/mx/mxnet/3rdparty/dmlc-core/include -fPIC 
-I/home/mahmood/mx/mxnet/3rdparty/tvm/nnvm/include 
-I/home/mahmood/mx/mxnet/3rdparty/dlpack/include 
-I/home/mahmood/mx/mxnet/3rdparty/tvm/include -Iinclude -funroll-loops 
-Wno-unused-parameter -Wno-unknown-pragmas -Wno-unused-local-typedefs -msse3 
-mf16c -I/usr/local/cuda/include -DMSHADOW_USE_CBLAS=1 -DMSHADOW_USE_MKL=0 
-I/home/mahmood/mx/mxnet/3rdparty/mkldnn/build/install/include 
-DMSHADOW_RABIT_PS=0 -DMSHADOW_DIST_PS=0 -DMSHADOW_USE_PASCAL=0 
-DMXNET_USE_MKLDNN=1 -DUSE_MKL=1 
-I/home/mahmood/mx/mxnet/src/operator/nn/mkldnn/ 
-I/home/mahmood/mx/mxnet/3rdparty/mkldnn/build/install/include 
-DMXNET_USE_OPENCV=1 -I/usr/include/opencv -fopenmp 
-DMXNET_USE_OPERATOR_TUNING=1 -DMSHADOW_USE_CUDNN=1  
-I/home/mahmood/mx/mxnet/3rdparty/cub -DMXNET_ENABLE_CUDA_RTC=1 
-DMXNET_USE_NCCL=0 -DMXNET_USE_LIBJPEG_TURBO=0 -shared -o lib/libmxnet.so 
build/src/operator/quantization/mkldnn/mkldnn_quantized_conv.o 
build/src/operator/quantization/mkldnn/mkldnn_quantized_pooling.o 
build/src/operator/quantization/mkldnn/mkldnn_quantized_concat.o 
build/src/operator/subgraph/mkldnn/mkldnn_conv_property.o 
build/src/operator/subgraph/mkldnn/mkldnn_conv_post_quantize_property.o 
build/src/operator/subgraph/mkldnn/mkldnn_conv.o 
build/src/operator/nn/mkldnn/mkldnn_convolution.o 
build/src/operator/nn/mkldnn/mkldnn_concat.o 
build/src/operator/nn/mkldnn/mkldnn_base.o 
build/src/operator/nn/mkldnn/mkldnn_slice.o 
build/src/operator/nn/mkldnn/mkldnn_act.o 
build/src/operator/nn/mkldnn/mkldnn_softmax.o 
build/src/operator/nn/mkldnn/mkldnn_deconvolution.o 
build/src/operator/nn/mkldnn/mkldnn_copy.o 
build/src/operator/nn/mkldnn/mkldnn_fully_connected.o 
build/src/operator/nn/mkldnn/mkldnn_pooling.o 
build/src/operator/nn/mkldnn/mkldnn_sum.o 
build/src/operator/nn/cudnn/cudnn_algoreg.o 
build/src/operator/nn/cudnn/cudnn_batch_norm.o 
build/src/operator/tensor/elemwise_binary_broadcast_op_basic.o 
build/src/operator/tensor/elemwise_binary_op_logic.o 
build/src/operator/tensor/square_sum.o build/src/operator/tensor/matrix_op.o 
build/src/operator/tensor/init_op.o build/src/operator/tensor/elemwise_sum.o 
build/src/operator/tensor/la_op.o build/src/operator/tensor/histogram.o 
build/src/operator/tensor/broadcast_reduce_op_index.o 
build/src/operator/tensor/dot.o build/src/operator/tensor/elemwise_scatter_op.o 
build/src/operator/tensor/elemwise_unary_op_basic.o 
build/src/operator/tensor/elemwise_binary_broadcast_op_extended.o 
build/src/operator/tensor/ravel.o 
build/src/operator/tensor/broadcast_reduce_op_value.o 
build/src/operator/tensor/control_flow_op.o 
build/src/operator/tensor/elemwise_binary_op_basic.o 
build/src/operator/tensor/elemwise_binary_op_extended.o 
build/src/operator/tensor/indexing_op.o 
build/src/operator/tensor/elemwise_binary_broadcast_op_logic.o 
build/src/operator/tensor/diag_op.o build/src/operator/tensor/ordering_op.o 
build/src/operator/tensor/sparse_retain.o 
build/src/operator/tensor/elemwise_binary_scalar_op_extended.o 
build/src/operator/tensor/elemwise_binary_scalar_op_basic.o 
build/src/operator/tensor/elemwise_binary_scalar_op_logic.o 
build/src/operator/tensor/cast_storage.o 
build/src/operator/tensor/elemwise_binary_op.o 
build/src/operator/tensor/elemwise_unary_op_trig.o 
build/src/operator/contrib/tensorrt.o 
build/src/operator/contrib/multibox_target.o 
build/src/operator/contrib/sync_batch_norm.o 
build/src/operator/contrib/count_sketch.o 
build/src/operator/contrib/roi_align.o 
build/src/operator/contrib/bilinear_resize.o build/src/operator/contrib/nnz.o 
build/src/operator/contrib/multibox_detection.o 
build/src/operator/contrib/nnvm_to_onnx.o 
build/src/operator/contrib/deformable_psroi_pooling.o 
build/src/operator/contrib/dgl_graph.o build/src/operator/contrib/fft.o 
build/src/operator/contrib/multibox_prior.o 
build/src/operator/contrib/gradient_multiplier_op.o 
build/src/operator/contrib/adamw.o build/src/operator/contrib/transformer.o 
build/src/operator/contrib/krprod.o build/src/operator/contrib/multi_proposal.o 
build/src/operator/contrib/index_copy.o 
build/src/operator/contrib/optimizer_op.o 
build/src/operator/contrib/bounding_box.o build/src/operator/contrib/proposal.o 
build/src/operator/contrib/boolean_mask.o 
build/src/operator/contrib/psroi_pooling.o 
build/src/operator/contrib/quadratic_op.o 
build/src/operator/contrib/deformable_convolution.o 
build/src/operator/contrib/ifft.o 
build/src/operator/contrib/adaptive_avg_pooling.o 
build/src/operator/random/sample_multinomial_op.o 
build/src/operator/ra

[GitHub] kedarbellare commented on a change in pull request #13993: [Clojure] Add resource scope to clojure package

2019-02-02 Thread GitBox
kedarbellare commented on a change in pull request #13993: [Clojure] Add 
resource scope to clojure package
URL: https://github.com/apache/incubator-mxnet/pull/13993#discussion_r253268961
 
 

 ##
 File path: 
contrib/clojure-package/examples/imclassification/src/imclassification/train_mnist.clj
 ##
 @@ -96,18 +75,33 @@
 (do
   (println "Starting Training of MNIST ")
   (println "Running with context devices of" devs)
-  (let [_mod (m/module (get-symbol) {:contexts devs})]
-(m/fit _mod {:train-data train-data
-:eval-data test-data
+  (resource-scope/with-let [_mod (m/module (get-symbol) {:contexts devs})]
+(-> _mod
+(m/fit {:train-data (mx-io/mnist-iter {:image (str data-dir 
"train-images-idx3-ubyte")
 
 Review comment:
   just for my understanding, does `resource-scope` not work when the 
`train-data` and `eval-data` are defined outside the `with-let` block?




[GitHub] kedarbellare commented on a change in pull request #13993: [Clojure] Add resource scope to clojure package

2019-02-02 Thread GitBox
kedarbellare commented on a change in pull request #13993: [Clojure] Add 
resource scope to clojure package
URL: https://github.com/apache/incubator-mxnet/pull/13993#discussion_r253268889
 
 

 ##
 File path: 
contrib/clojure-package/test/org/apache/clojure_mxnet/resource_scope_test.clj
 ##
 @@ -0,0 +1,151 @@
+;;; Note that if first is returned the rest of the collection ndarrays will
+;;; NOT be disposed
+(deftest test-list-creation-with-returning-first
+  (let [native-resources (atom {})
+return-val (resource-scope/using
+(let [temp-ndarrays (mapv

[GitHub] TaoLv commented on issue #13749: Add NHWC layout support to Pooling (cpu, gpu cuda, gpu cuDNN)

2019-02-02 Thread GitBox
TaoLv commented on issue #13749: Add NHWC layout support to Pooling (cpu, gpu 
cuda, gpu cuDNN)
URL: https://github.com/apache/incubator-mxnet/pull/13749#issuecomment-459970152
 
 
   Maybe it's out of the scope of this PR, but I would suggest removing layout 
parameters from all operator APIs in the next major release. It's the framework 
that should take care of model performance. We cannot ask users to complicate 
their models to get a performance benefit from one specific hardware or 
backend. The change may be a nightmare if they want to switch back to other 
hardware or backends.




[GitHub] TaoLv commented on a change in pull request #13749: Add NHWC layout support to Pooling (cpu, gpu cuda, gpu cuDNN)

2019-02-02 Thread GitBox
TaoLv commented on a change in pull request #13749: Add NHWC layout support to 
Pooling (cpu, gpu cuda, gpu cuDNN)
URL: https://github.com/apache/incubator-mxnet/pull/13749#discussion_r253267148
 
 

 ##
 File path: src/operator/nn/mkldnn/mkldnn_pooling-inl.h
 ##
 @@ -104,7 +104,8 @@ class MKLDNNPoolingBwd {
 inline bool SupportMKLDNNPooling(const PoolingParam ¶m) {
   return param.kernel.ndim() == 2 &&
  (param.pool_type == pool_enum::kMaxPooling ||
-  param.pool_type == pool_enum::kAvgPooling);
+  param.pool_type == pool_enum::kAvgPooling) &&
+ (!param.layout.has_value() || param.layout.value() == mshadow::kNCHW);
 
 Review comment:
   Thank you for changing this. Just to clarify, the MKL-DNN pooling primitive 
does support the NHWC input format. But when we integrated the MKL-DNN backend 
into MXNet, we assumed that user input data would always be NCHW for 4D 
tensors. We need to re-evaluate the workflow and integration to see whether we 
need to support NHWC here.




[GitHub] TaoLv commented on a change in pull request #13749: Add NHWC layout support to Pooling (cpu, gpu cuda, gpu cuDNN)

2019-02-02 Thread GitBox
TaoLv commented on a change in pull request #13749: Add NHWC layout support to 
Pooling (cpu, gpu cuda, gpu cuDNN)
URL: https://github.com/apache/incubator-mxnet/pull/13749#discussion_r253266876
 
 

 ##
 File path: tests/python/gpu/test_operator_gpu.py
 ##
 @@ -608,6 +608,72 @@ def test_convolution_versions():
 
 
 @with_seed()
+def test_pooling_with_convention():
 
 Review comment:
   Only max pooling is tested?




[GitHub] TaoLv commented on a change in pull request #13749: Add NHWC layout support to Pooling (cpu, gpu cuda, gpu cuDNN)

2019-02-02 Thread GitBox
TaoLv commented on a change in pull request #13749: Add NHWC layout support to 
Pooling (cpu, gpu cuda, gpu cuDNN)
URL: https://github.com/apache/incubator-mxnet/pull/13749#discussion_r253266821
 
 

 ##
 File path: src/operator/nn/pooling.cc
 ##
 @@ -421,11 +463,16 @@ NNVM_REGISTER_OP(_backward_Pooling)
 .set_attr(
 "FInplaceOption",
 [](const NodeAttrs &attrs) {
-#if MXNET_USE_CUDNN == 1
-  return std::vector >();
-#else
-  return std::vector >{{1, 0}};
+#if MXNET_USE_MKLDNN == 1 && MXNET_USE_CUDA == 0 && MXNET_USE_CUDNN == 0
 
 Review comment:
   Ummm, do you have any case to validate this scenario? If MXNet is built with 
both MKL-DNN and cuDNN enabled (eg. mxnet-cu90mkl package) and user gives 
`ctx=cpu`. Is there any functionality or performance regression?




[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2019-02-02 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new 0755472  Bump the publish timestamp.
0755472 is described below

commit 075547271591247ac99512b83e60cfc52be41da3
Author: mxnet-ci 
AuthorDate: Sat Feb 2 13:08:55 2019 +

Bump the publish timestamp.
---
 date.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/date.txt b/date.txt
new file mode 100644
index 000..5bc02f7
--- /dev/null
+++ b/date.txt
@@ -0,0 +1 @@
+Sat Feb  2 13:08:55 UTC 2019



[GitHub] rongzha1 commented on a change in pull request #14056: A better split-2D(SliceChannel) op forward kernel for CPU

2019-02-02 Thread GitBox
rongzha1 commented on a change in pull request #14056: A better 
split-2D(SliceChannel) op forward kernel for CPU
URL: https://github.com/apache/incubator-mxnet/pull/14056#discussion_r253264609
 
 

 ##
 File path: src/operator/channel_op_common.h
 ##
 @@ -101,6 +101,42 @@ void Split(const mshadow::Tensor &input,
 split_helper(input, output, dimension, req);
   }
 }
+
+template<typename xpu, int dim, typename DType>
+void Split_2D(const mshadow::Tensor<xpu, dim, DType> &input,
+   std::vector<mshadow::Tensor<xpu, dim, DType> > *output,
+   const int dimension, const std::vector<OpReqType> &req) {
+  if (dimension != 1) {
+LOG(FATAL) << "dimension (" << dimension << ") must == 1";
+  }
+  if (dim != 3) {
 
 Review comment:
   same above




[GitHub] rongzha1 commented on a change in pull request #14056: A better split-2D(SliceChannel) op forward kernel for CPU

2019-02-02 Thread GitBox
rongzha1 commented on a change in pull request #14056: A better 
split-2D(SliceChannel) op forward kernel for CPU
URL: https://github.com/apache/incubator-mxnet/pull/14056#discussion_r253264597
 
 

 ##
 File path: src/operator/channel_op_common.h
 ##
 @@ -101,6 +101,42 @@ void Split(const mshadow::Tensor &input,
 split_helper(input, output, dimension, req);
   }
 }
+
+template<typename xpu, int dim, typename DType>
+void Split_2D(const mshadow::Tensor<xpu, dim, DType> &input,
+   std::vector<mshadow::Tensor<xpu, dim, DType> > *output,
+   const int dimension, const std::vector<OpReqType> &req) {
+  if (dimension != 1) {
 
 Review comment:
   Same coding style as  functions in this file did such as Split(), 
Concatenate()




[GitHub] ZhennanQin commented on a change in pull request #14056: A better split-2D(SliceChannel) op forward kernel for CPU

2019-02-02 Thread GitBox
ZhennanQin commented on a change in pull request #14056: A better 
split-2D(SliceChannel) op forward kernel for CPU
URL: https://github.com/apache/incubator-mxnet/pull/14056#discussion_r253263944
 
 

 ##
 File path: src/operator/channel_op_common.h
 ##
 @@ -101,6 +101,42 @@ void Split(const mshadow::Tensor &input,
 split_helper(input, output, dimension, req);
   }
 }
+
+template<typename xpu, int dim, typename DType>
+void Split_2D(const mshadow::Tensor<xpu, dim, DType> &input,
+   std::vector<mshadow::Tensor<xpu, dim, DType> > *output,
+   const int dimension, const std::vector<OpReqType> &req) {
+  if (dimension != 1) {
+LOG(FATAL) << "dimension (" << dimension << ") must == 1";
+  }
+  if (dim != 3) {
+LOG(FATAL) << "dimension (" << dim << ") must == 3";
+  } else {
+std::vector<mshadow::Tensor<xpu, dim, DType> > out = *output;
+size_t size = out.size();
+std::vector<size_t> slice_len;
+std::vector<size_t> begin_pos;
+begin_pos.push_back(0);
+
+for (index_t i = 0; i < size; ++i) {
+  slice_len.push_back(out[i].size(dimension));
+  begin_pos.push_back(begin_pos[i] + out[i].size(dimension));
+}
+#pragma omp parallel for num_threads(engine::OpenMP::Get()->GetRecommendedOMPThreadCount())
+for (int i = 0; i < input.shape_[0]; i++) {
+  int iRow = i*input.shape_[1];
 
 Review comment:
   Add blank before and after operator. 




[GitHub] ZhennanQin commented on a change in pull request #14056: A better split-2D(SliceChannel) op forward kernel for CPU

2019-02-02 Thread GitBox
ZhennanQin commented on a change in pull request #14056: A better 
split-2D(SliceChannel) op forward kernel for CPU
URL: https://github.com/apache/incubator-mxnet/pull/14056#discussion_r253262104
 
 

 ##
 File path: src/operator/channel_op_common.h
 ##
 @@ -101,6 +101,42 @@ void Split(const mshadow::Tensor &input,
 split_helper(input, output, dimension, req);
   }
 }
+
+template<typename xpu, int dim, typename DType>
+void Split_2D(const mshadow::Tensor<xpu, dim, DType> &input,
+   std::vector<mshadow::Tensor<xpu, dim, DType> > *output,
+   const int dimension, const std::vector<OpReqType> &req) {
+  if (dimension != 1) {
+LOG(FATAL) << "dimension (" << dimension << ") must == 1";
+  }
+  if (dim != 3) {
 
 Review comment:
   Same above.




[GitHub] ZhennanQin commented on a change in pull request #14056: A better split-2D(SliceChannel) op forward kernel for CPU

2019-02-02 Thread GitBox
ZhennanQin commented on a change in pull request #14056: A better 
split-2D(SliceChannel) op forward kernel for CPU
URL: https://github.com/apache/incubator-mxnet/pull/14056#discussion_r253262122
 
 

 ##
 File path: src/operator/channel_op_common.h
 ##
 @@ -101,6 +101,42 @@ void Split(const mshadow::Tensor &input,
 split_helper(input, output, dimension, req);
   }
 }
+
+template<typename xpu, int dim, typename DType>
+void Split_2D(const mshadow::Tensor<xpu, dim, DType> &input,
+   std::vector<mshadow::Tensor<xpu, dim, DType> > *output,
+   const int dimension, const std::vector<OpReqType> &req) {
+  if (dimension != 1) {
+LOG(FATAL) << "dimension (" << dimension << ") must == 1";
+  }
+  if (dim != 3) {
+LOG(FATAL) << "dimension (" << dim << ") must == 3";
+  } else {
+std::vector<mshadow::Tensor<xpu, dim, DType> > out = *output;
+size_t size = out.size();
+std::vector<size_t> slice_len;
 
 Review comment:
   Add blank between type and variable.




[GitHub] ZhennanQin commented on a change in pull request #14056: A better split-2D(SliceChannel) op forward kernel for CPU

2019-02-02 Thread GitBox
ZhennanQin commented on a change in pull request #14056: A better 
split-2D(SliceChannel) op forward kernel for CPU
URL: https://github.com/apache/incubator-mxnet/pull/14056#discussion_r253262059
 
 

 ##
 File path: src/operator/channel_op_common.h
 ##
 @@ -101,6 +101,42 @@ void Split(const mshadow::Tensor &input,
 split_helper(input, output, dimension, req);
   }
 }
+
+template<typename xpu, int dim, typename DType>
+void Split_2D(const mshadow::Tensor<xpu, dim, DType> &input,
+   std::vector<mshadow::Tensor<xpu, dim, DType> > *output,
 
 Review comment:
   indent?




[GitHub] ZhennanQin commented on a change in pull request #14056: A better split-2D(SliceChannel) op forward kernel for CPU

2019-02-02 Thread GitBox
ZhennanQin commented on a change in pull request #14056: A better 
split-2D(SliceChannel) op forward kernel for CPU
URL: https://github.com/apache/incubator-mxnet/pull/14056#discussion_r253262094
 
 

 ##
 File path: src/operator/channel_op_common.h
 ##
 @@ -101,6 +101,42 @@ void Split(const mshadow::Tensor &input,
 split_helper(input, output, dimension, req);
   }
 }
+
+template<typename xpu, int dim, typename DType>
+void Split_2D(const mshadow::Tensor<xpu, dim, DType> &input,
+   std::vector<mshadow::Tensor<xpu, dim, DType> > *output,
+   const int dimension, const std::vector<OpReqType> &req) {
+  if (dimension != 1) {
 
 Review comment:
   Strange code style. Change to CHECK instead?




[GitHub] ZhennanQin commented on a change in pull request #14056: A better split-2D(SliceChannel) op forward kernel for CPU

2019-02-02 Thread GitBox
ZhennanQin commented on a change in pull request #14056: A better 
split-2D(SliceChannel) op forward kernel for CPU
URL: https://github.com/apache/incubator-mxnet/pull/14056#discussion_r253262164
 
 

 ##
 File path: src/operator/channel_op_common.h
 ##
 @@ -101,6 +101,42 @@ void Split(const mshadow::Tensor &input,
 split_helper(input, output, dimension, req);
   }
 }
+
+template
+void Split_2D(const mshadow::Tensor &input,
+   std::vector > *output,
+   const int dimension, const std::vector &req) {
+  if (dimension != 1) {
+LOG(FATAL) << "dimension (" << dimension << ") must == 1";
+  }
+  if (dim != 3) {
+LOG(FATAL) << "dimension (" << dim << ") must == 3";
+  } else {
+std::vector > out = *output;
 
 Review comment:
   This assignment seems redundant. Why not use `output` directly?




[GitHub] ZhennanQin commented on a change in pull request #14056: A better split-2D(SliceChannel) op forward kernel for CPU

2019-02-02 Thread GitBox
ZhennanQin commented on a change in pull request #14056: A better 
split-2D(SliceChannel) op forward kernel for CPU
URL: https://github.com/apache/incubator-mxnet/pull/14056#discussion_r253262215
 
 

 ##
 File path: src/operator/channel_op_common.h
 ##
 @@ -101,6 +101,42 @@ void Split(const mshadow::Tensor &input,
 split_helper(input, output, dimension, req);
   }
 }
+
+template
+void Split_2D(const mshadow::Tensor &input,
+   std::vector > *output,
+   const int dimension, const std::vector &req) {
+  if (dimension != 1) {
+LOG(FATAL) << "dimension (" << dimension << ") must == 1";
+  }
+  if (dim != 3) {
+LOG(FATAL) << "dimension (" << dim << ") must == 3";
+  } else {
+std::vector > out = *output;
+size_t size = out.size();
+std::vectorslice_len;
+std::vectorbegin_pos;
+begin_pos.push_back(0);
+
+for (index_t i = 0; i < size; ++i) {
+  slice_len.push_back(out[i].size(dimension));
+  begin_pos.push_back(begin_pos[i] + out[i].size(dimension));
+}
+#pragma omp parallel for 
num_threads(engine::OpenMP::Get()->GetRecommendedOMPThreadCount())
+for (int i = 0; i < input.shape_[0]; i++) {
+  int iRow = i*input.shape_[1];
+  for (int j = 0; j < size; j++) {
+int jRow = i*slice_len[j];
 
 Review comment:
   Add blanks before and after the operator. Please clean up all code style issues.




[GitHub] ZhennanQin commented on a change in pull request #14056: A better split-2D(SliceChannel) op forward kernel for CPU

2019-02-02 Thread GitBox
ZhennanQin commented on a change in pull request #14056: A better 
split-2D(SliceChannel) op forward kernel for CPU
URL: https://github.com/apache/incubator-mxnet/pull/14056#discussion_r253262126
 
 

 ##
 File path: src/operator/channel_op_common.h
 ##
 @@ -101,6 +101,42 @@ void Split(const mshadow::Tensor &input,
 split_helper(input, output, dimension, req);
   }
 }
+
+template
+void Split_2D(const mshadow::Tensor &input,
+   std::vector > *output,
+   const int dimension, const std::vector &req) {
+  if (dimension != 1) {
+LOG(FATAL) << "dimension (" << dimension << ") must == 1";
+  }
+  if (dim != 3) {
+LOG(FATAL) << "dimension (" << dim << ") must == 3";
+  } else {
+std::vector > out = *output;
+size_t size = out.size();
+std::vectorslice_len;
+std::vectorbegin_pos;
 
 Review comment:
   Add a blank between the type and the variable.




[GitHub] ZhennanQin commented on a change in pull request #14056: A better split-2D(SliceChannel) op forward kernel for CPU

2019-02-02 Thread GitBox
ZhennanQin commented on a change in pull request #14056: A better 
split-2D(SliceChannel) op forward kernel for CPU
URL: https://github.com/apache/incubator-mxnet/pull/14056#discussion_r253263942
 
 

 ##
 File path: src/operator/channel_op_common.h
 ##
 @@ -101,6 +101,42 @@ void Split(const mshadow::Tensor &input,
 split_helper(input, output, dimension, req);
   }
 }
+
+template
+void Split_2D(const mshadow::Tensor &input,
+   std::vector > *output,
+   const int dimension, const std::vector &req) {
+  if (dimension != 1) {
+LOG(FATAL) << "dimension (" << dimension << ") must == 1";
+  }
+  if (dim != 3) {
+LOG(FATAL) << "dimension (" << dim << ") must == 3";
+  } else {
+std::vector > out = *output;
+size_t size = out.size();
+std::vectorslice_len;
+std::vectorbegin_pos;
+begin_pos.push_back(0);
+
+for (index_t i = 0; i < size; ++i) {
+  slice_len.push_back(out[i].size(dimension));
+  begin_pos.push_back(begin_pos[i] + out[i].size(dimension));
+}
+#pragma omp parallel for 
num_threads(engine::OpenMP::Get()->GetRecommendedOMPThreadCount())
 
 Review comment:
   Is it possible to parallelize both of the two outer loops? Try the 
collapse keyword. And beware that it only works on the Linux platform. You 
can find example code in 
https://github.com/apache/incubator-mxnet/blob/master/src/operator/l2_normalization.cc
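
   As a rough standalone sketch of the collapse idiom being suggested here
(the function name is hypothetical; if the compiler is not invoked with
OpenMP support the pragma is simply ignored, so the result is unchanged
either way):

```cpp
#include <cstddef>
#include <vector>

// Fuse the two outer loops into a single parallel iteration space with
// collapse(2), instead of parallelizing only the outermost loop. Each
// (i, j) pair writes to a distinct output element, so no locking is needed.
std::vector<int> scaled_copies(const std::vector<int>& input, int num_copies) {
  const int n = static_cast<int>(input.size());
  std::vector<int> out(static_cast<std::size_t>(num_copies) * n);
#pragma omp parallel for collapse(2)
  for (int i = 0; i < num_copies; ++i) {
    for (int j = 0; j < n; ++j) {
      // copy i holds the input scaled by (i + 1)
      out[static_cast<std::size_t>(i) * n + j] = input[j] * (i + 1);
    }
  }
  return out;
}
```

   With collapse(2) the runtime distributes all `num_copies * n` iterations
across threads, which helps when the outer trip count alone is too small to
keep every core busy.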




[GitHub] bputrycz commented on issue #10715: Wrong params file is not reported as error in C API

2019-02-02 Thread GitBox
bputrycz commented on issue #10715: Wrong params file is not reported as error 
in C API
URL: 
https://github.com/apache/incubator-mxnet/issues/10715#issuecomment-459955392
 
 
   I wasn't blocked by this issue; I just noticed it and reported it.
   I haven't invested more time into it.
   I haven't tested it with the newest MXNet, and I don't want to do that now.
   It is your call now to decide whether it is worth checking again.
   
   I think I put easily replicable code above: Python for generating the 
sample model, and C code for running prediction with it. It is missing only 
main() and the includes.
   To see the problem, you then need to modify the params file by changing 
the names of the keys in the dictionary.




[GitHub] arcadiaphy opened a new pull request #14058: add backgroud class in box_nms

2019-02-02 Thread GitBox
arcadiaphy opened a new pull request #14058: add backgroud class in box_nms
URL: https://github.com/apache/incubator-mxnet/pull/14058
 
 
   ## Description ##
   This PR is mentioned in #14057 
   
   What I have done in the box_nms operator:
   1. add the background_id argument
   2. filter out background boxes before the sorting operation
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [ ] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, 
the source of the dataset, expected performance on test set and reference to 
the original paper if applicable
   - Check the API doc at 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [ ] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [ ] Feature1, tests, (and when applicable, API doc)
   - [ ] Feature2, tests, (and when applicable, API doc)
   
   ## Comments ##
   - If this change is a backward incompatible change, why must this change be 
made.
   - Interesting edge cases to note here
   




[GitHub] arcadiaphy opened a new issue #14057: validation stucks when training gluoncv ssd model

2019-02-02 Thread GitBox
arcadiaphy opened a new issue #14057: validation stucks when training gluoncv 
ssd model
URL: https://github.com/apache/incubator-mxnet/issues/14057
 
 
   ## Description
   When training a gluoncv ssd model, validation sometimes takes far longer 
than the training epoch itself. After debugging, the problem comes down to 
the `box_nms` operator, which accounts for most of the time.
   
   ## Environment info (Required)
   
   ```
   Centos 7
   CUDA: 9.0
   cudnn: 7 
   mxnet: 1.4.0.rc2
   gluon-cv: latest
   
   ```
   
   ## Minimum reproducible example
   The following snippet shows that `box_nms` takes a very long time when 
processing a large number of prior boxes:
   ```
   import mxnet as mx
   import numpy as np
   
   np.random.seed(0)
   
   batch_size = 32
   prior_number = 10
   data = np.zeros((batch_size, prior_number, 6))
   data[:, :, 0] = np.random.randint(-1, 1, (batch_size, prior_number))
   data[:, :, 1] = np.random.random((batch_size, prior_number))
   
   xmin = np.random.random((batch_size, prior_number))
   ymin = np.random.random((batch_size, prior_number))
   width = np.random.random((batch_size, prior_number))
   height = np.random.random((batch_size, prior_number))
   data[:, :, 2] = xmin
   data[:, :, 3] = ymin
   data[:, :, 4] = xmin + width
   data[:, :, 5] = ymin + height
   
   mx_data = mx.nd.array(data, ctx=mx.gpu(0))
   rv = mx.nd.contrib.box_nms(mx_data, overlap_thresh=0.5, valid_thresh=0.01, 
topk=400, score_index=1, id_index=0)
   mx.nd.waitall()
   
   ```
   
   ## What I have found out
   1. The GPU version of the stable sort in the `SortByKey` function degrades 
badly as the sorting length grows
   2. The `box_nms` operator doesn't remove background boxes during valid-box 
filtering, which leads to a large sorting length
   
   ## What I have done
   1. Added the SORT_WITH_THRUST compile definition in the Makefile: the 
validation process is still very slow
   2. Added background-box filtering in `box_nms`: the validation process 
speeds up dramatically, since most boxes are classified as background.
   
   I will post a PR on the second solution.
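
   The idea behind the second fix can be sketched independently of the 
operator code: drop background-class boxes before the score sort, so the 
expensive sort (and the NMS that follows) only sees foreground candidates. 
A simplified standalone sketch, not the actual box_nms implementation (the 
struct and function names are hypothetical):

```cpp
#include <algorithm>
#include <vector>

struct Box {
  int class_id;  // background_id marks a background box
  float score;
};

// Remove background boxes first, then sort the survivors by descending
// score. Filtering before the sort is what shrinks the sorting length when
// most candidates are classified as background.
std::vector<Box> filter_and_sort(std::vector<Box> boxes, int background_id) {
  boxes.erase(std::remove_if(boxes.begin(), boxes.end(),
                             [background_id](const Box& b) {
                               return b.class_id == background_id;
                             }),
              boxes.end());
  std::sort(boxes.begin(), boxes.end(),
            [](const Box& a, const Box& b) { return a.score > b.score; });
  return boxes;
}
```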




[GitHub] ZhennanQin commented on issue #14053: in-place reshape ops

2019-02-02 Thread GitBox
ZhennanQin commented on issue #14053: in-place reshape ops
URL: https://github.com/apache/incubator-mxnet/pull/14053#issuecomment-459949587
 
 
   Have you tried this with mkldnn enabled? E.g. reshaping the output of an 
mkldnn convolution?

