[GitHub] szha commented on issue #7913: what are the dependencies between each modules

2017-09-15 Thread git
szha commented on issue #7913: what are the dependencies between each modules URL: https://github.com/apache/incubator-mxnet/issues/7913#issuecomment-329947781 The best starting point that gives the larger picture can be found in this doc:

[GitHub] mxmxlwlw opened a new issue #7913: what are the dependencies between each modules

2017-09-15 Thread git
mxmxlwlw opened a new issue #7913: what are the dependencies between each modules URL: https://github.com/apache/incubator-mxnet/issues/7913 Hi, I like your design of mxnet very much, but I still can't get the idea of the dependencies between the modules. I would really appreciate it if

[GitHub] szha commented on issue #7910: add advanced indexing

2017-09-15 Thread git
szha commented on issue #7910: add advanced indexing URL: https://github.com/apache/incubator-mxnet/pull/7910#issuecomment-329947462 If we introduce a non-grouped symbol class which can only be obtained from a method of non-grouped symbols, we can introduce advanced indexing on that

[GitHub] tqchen commented on a change in pull request #7698: Second order gradient and Subgraph execution

2017-09-15 Thread git
tqchen commented on a change in pull request #7698: Second order gradient and Subgraph execution URL: https://github.com/apache/incubator-mxnet/pull/7698#discussion_r139279115 ## File path: include/mxnet/imperative.h ## @@ -0,0 +1,214 @@ +/* + * Licensed to the Apache

[GitHub] szha commented on issue #7910: add advanced indexing

2017-09-15 Thread git
szha commented on issue #7910: add advanced indexing URL: https://github.com/apache/incubator-mxnet/pull/7910#issuecomment-329947462 If we introduce a non-grouped symbol class which can only be obtained from a method of non-grouped symbols, we can introduce slicing on that non-grouped

[GitHub] yajiedesign commented on issue #7910: add advanced indexing

2017-09-15 Thread git
yajiedesign commented on issue #7910: add advanced indexing URL: https://github.com/apache/incubator-mxnet/pull/7910#issuecomment-329945607 Does this only work with NDArray?
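
A minimal sketch of what NumPy-style "advanced" indexing means on an NDArray, next to the basic slicing that already exists; whether #7910 enables exactly these forms is an assumption based on NumPy semantics, not a statement of the PR's scope.

```python
import mxnet as mx

x = mx.nd.arange(12).reshape((3, 4))

# Basic slicing has been supported on NDArray for a while.
print(x[1:3].shape)     # (2, 4)

# NumPy-style advanced indexing picks arbitrary rows by an index list.
# (Assumed semantics mirroring NumPy; check the PR for the exact behavior.)
print(x[[0, 2]].shape)  # expected (2, 4)
```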

[GitHub] liuzhi136 closed issue #7894: The Meaning of parameters of mxnet.gluon.Embedding function

2017-09-15 Thread git
liuzhi136 closed issue #7894: The Meaning of parameters of mxnet.gluon.Embedding function URL: https://github.com/apache/incubator-mxnet/issues/7894

[GitHub] liuzhi136 commented on issue #7894: The Meaning of parameters of mxnet.gluon.Embedding function

2017-09-15 Thread git
liuzhi136 commented on issue #7894: The Meaning of parameters of mxnet.gluon.Embedding function URL: https://github.com/apache/incubator-mxnet/issues/7894#issuecomment-329943591 OK, thanks.

[GitHub] liuzhi136 commented on issue #7894: The Meaning of parameters of mxnet.gluon.Embedding function

2017-09-15 Thread git
liuzhi136 commented on issue #7894: The Meaning of parameters of mxnet.gluon.Embedding function URL: https://github.com/apache/incubator-mxnet/issues/7894#issuecomment-329943100 But for these two parameters below the Input shape: I'm wondering whether they stand for (num_word, vocab_size), that

[GitHub] eldercrow commented on issue #7717: Subpixel convolution(state of art) implementation rather than using Deconvolution.

2017-09-15 Thread git
eldercrow commented on issue #7717: Subpixel convolution(state of art) implementation rather than using Deconvolution. URL: https://github.com/apache/incubator-mxnet/issues/7717#issuecomment-329941126 From your error message, it seems that cropping is unnecessary since data_shape[3] -

[GitHub] szha commented on issue #7894: The Meaning of parameters of mxnet.gluon.Embedding function

2017-09-15 Thread git
szha commented on issue #7894: The Meaning of parameters of mxnet.gluon.Embedding function URL: https://github.com/apache/incubator-mxnet/issues/7894#issuecomment-329940221 Sorry for the delay. @liuzhi136 they don't carry any particular meaning and can be arbitrary numbers
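
To make the answer concrete, a minimal sketch of `mxnet.gluon.nn.Embedding`: `input_dim` is the vocabulary size, `output_dim` is the embedding width, and the concrete numbers shown in the docstring's shape example are arbitrary placeholders. The sizes below are made up for illustration.

```python
import mxnet as mx
from mxnet.gluon import nn

vocab_size, embed_dim = 10000, 128           # arbitrary example numbers
embedding = nn.Embedding(input_dim=vocab_size, output_dim=embed_dim)
embedding.initialize()

# A batch of 2 sequences of 5 token indices each (indices just need to be < vocab_size).
tokens = mx.nd.array([[1, 2, 3, 4, 5],
                      [6, 7, 8, 9, 0]])
vectors = embedding(tokens)
print(vectors.shape)                          # (2, 5, 128)
```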

[GitHub] piiswrong commented on issue #7912: Do I need to change grad_req when sharing weights?

2017-09-15 Thread git
piiswrong commented on issue #7912: Do I need to change grad_req when sharing weights? URL: https://github.com/apache/incubator-mxnet/issues/7912#issuecomment-329939400 No need to use `add`; gradient accumulation is automatic within one executor.
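
A minimal sketch of what "automatic within one executor" means, assuming the Symbol-API sharing pattern from #557: reuse one weight Variable in both convolutions and keep the default `grad_req='write'`; the executor sums the gradient contributions from both uses on its own. Shapes and names below are illustrative.

```python
import mxnet as mx

data = mx.sym.Variable('data')
shared_w = mx.sym.Variable('shared_weight')            # one weight used by two layers

conv1 = mx.sym.Convolution(data=data, weight=shared_w, num_filter=8,
                           kernel=(3, 3), pad=(1, 1), no_bias=True, name='conv1')
conv2 = mx.sym.Convolution(data=conv1, weight=shared_w, num_filter=8,
                           kernel=(3, 3), pad=(1, 1), no_bias=True, name='conv2')
net = mx.sym.sum(conv2)

# Default grad_req='write' is fine: within this single executor the gradients
# flowing into 'shared_weight' from conv1 and conv2 are accumulated automatically.
exe = net.simple_bind(ctx=mx.cpu(), data=(1, 8, 16, 16))
exe.forward(is_train=True, data=mx.nd.ones((1, 8, 16, 16)))
exe.backward(mx.nd.ones((1,)))
print(exe.grad_dict['shared_weight'].shape)            # (8, 8, 3, 3)
```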

[GitHub] liuzhi136 commented on issue #7894: The Meaning of parameters of mxnet.gluon.Embedding function

2017-09-15 Thread git
liuzhi136 commented on issue #7894: The Meaning of parameters of mxnet.gluon.Embedding function URL: https://github.com/apache/incubator-mxnet/issues/7894#issuecomment-329938199 @szha Do you know these parameters' meaning?

[GitHub] liyi14 opened a new issue #7912: Do I need to change grad_req when sharing weights?

2017-09-15 Thread git
liyi14 opened a new issue #7912: Do I need to change grad_req when sharing weights? URL: https://github.com/apache/incubator-mxnet/issues/7912 Hi, I was building a conv layer that shares weights with another one, following #557. Since the default grad_req is 'write' and I need to update the

[GitHub] eric-haibin-lin opened a new pull request #7911: more sparse related docs

2017-09-15 Thread git
eric-haibin-lin opened a new pull request #7911: more sparse related docs URL: https://github.com/apache/incubator-mxnet/pull/7911 Preview at http://ec2-54-187-32-207.us-west-2.compute.amazonaws.com/api/python/ndarray/sparse.html

[GitHub] vuvko commented on issue #7909: Dividing input changes BatchNorm output

2017-09-15 Thread git
vuvko commented on issue #7909: Dividing input changes BatchNorm output URL: https://github.com/apache/incubator-mxnet/issues/7909#issuecomment-329932411 OK, so the problem is that I am loading the model for testing, and in that case the parameter `use_global_stats` is just ignored (treated as if set to `True`).
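
A minimal sketch of the behavior being described, as I read it: in inference mode (`is_train=False`) BatchNorm normalizes with the stored moving mean/var regardless of how `use_global_stats` is set, so dividing the input changes the normalized output; only training-mode batch statistics make the output invariant to a global rescaling. The toy network below is illustrative only.

```python
import mxnet as mx

data = mx.sym.Variable('data')
net = mx.sym.BatchNorm(data=data, use_global_stats=False, name='bn')
exe = net.simple_bind(ctx=mx.cpu(), data=(2, 3, 4, 4))

# Deterministic parameters so the sketch is reproducible.
exe.arg_dict['bn_gamma'][:] = 1
exe.arg_dict['bn_beta'][:] = 0
exe.aux_dict['bn_moving_mean'][:] = 0
exe.aux_dict['bn_moving_var'][:] = 1

x = mx.nd.arange(2 * 3 * 4 * 4).reshape((2, 3, 4, 4))

# Inference mode uses the stored moving statistics (as if use_global_stats were True),
# so dividing the input by 10 clearly changes the output.
a = exe.forward(is_train=False, data=x)[0].copy()
b = exe.forward(is_train=False, data=x / 10)[0].copy()
print(mx.nd.max(mx.nd.abs(a - b)).asscalar())   # far from zero

# Training mode normalizes with batch statistics, which are themselves rescaled,
# so the output is (almost) unchanged by the division.
c = exe.forward(is_train=True, data=x)[0].copy()
d = exe.forward(is_train=True, data=x / 10)[0].copy()
print(mx.nd.max(mx.nd.abs(c - d)).asscalar())   # close to zero
```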

[GitHub] cjolivier01 commented on a change in pull request #7903: Refactor AdaGrad optimizer to support sparse tensors

2017-09-15 Thread git
cjolivier01 commented on a change in pull request #7903: Refactor AdaGrad optimizer to support sparse tensors URL: https://github.com/apache/incubator-mxnet/pull/7903#discussion_r139268501 ## File path: python/mxnet/optimizer.py ## @@ -665,26 +667,46 @@ class

[GitHub] eric-haibin-lin commented on issue #7847: Bug on Multi GPU : probably Invalid initialization of optimizer

2017-09-15 Thread git
eric-haibin-lin commented on issue #7847: Bug on Multi GPU : probably Invalid initialization of optimizer URL: https://github.com/apache/incubator-mxnet/issues/7847#issuecomment-329926044 I'll update module to throw proper warning/error messages for such cases.

[GitHub] cjolivier01 commented on a change in pull request #7903: Refactor AdaGrad optimizer to support sparse tensors

2017-09-15 Thread git
cjolivier01 commented on a change in pull request #7903: Refactor AdaGrad optimizer to support sparse tensors URL: https://github.com/apache/incubator-mxnet/pull/7903#discussion_r139268189 ## File path: python/mxnet/optimizer.py ## @@ -665,26 +667,46 @@ class

[GitHub] vuvko commented on issue #7909: Dividing input changes BatchNorm output

2017-09-15 Thread git
vuvko commented on issue #7909: Dividing input changes BatchNorm output URL: https://github.com/apache/incubator-mxnet/issues/7909#issuecomment-329925746 Also, if this helps, I used [this](https://i.imgur.com/XJOBiiO.jpg) image for testing.

[GitHub] cjolivier01 commented on a change in pull request #7903: Refactor AdaGrad optimizer to support sparse tensors

2017-09-15 Thread git
cjolivier01 commented on a change in pull request #7903: Refactor AdaGrad optimizer to support sparse tensors URL: https://github.com/apache/incubator-mxnet/pull/7903#discussion_r139267799 ## File path: src/operator/tensor/elemwise_binary_op_basic.cu ## @@ -36,21 +36,21

[GitHub] eric-haibin-lin commented on a change in pull request #7903: Refactor AdaGrad optimizer to support sparse tensors

2017-09-15 Thread git
eric-haibin-lin commented on a change in pull request #7903: Refactor AdaGrad optimizer to support sparse tensors URL: https://github.com/apache/incubator-mxnet/pull/7903#discussion_r139218368 ## File path: python/mxnet/optimizer.py ## @@ -665,26 +667,46 @@ class

[GitHub] eric-haibin-lin commented on a change in pull request #7903: Refactor AdaGrad optimizer to support sparse tensors

2017-09-15 Thread git
eric-haibin-lin commented on a change in pull request #7903: Refactor AdaGrad optimizer to support sparse tensors URL: https://github.com/apache/incubator-mxnet/pull/7903#discussion_r139217684 ## File path: python/mxnet/optimizer.py ## @@ -665,26 +667,46 @@ class

[GitHub] eric-haibin-lin commented on a change in pull request #7903: Refactor AdaGrad optimizer to support sparse tensors

2017-09-15 Thread git
eric-haibin-lin commented on a change in pull request #7903: Refactor AdaGrad optimizer to support sparse tensors URL: https://github.com/apache/incubator-mxnet/pull/7903#discussion_r139217161 ## File path: python/mxnet/optimizer.py ## @@ -665,26 +667,46 @@ class

[GitHub] eric-haibin-lin commented on a change in pull request #7772: Use memcopy instead of assigning each individual element

2017-09-15 Thread git
eric-haibin-lin commented on a change in pull request #7772: Use memcopy instead of assigning each individual element URL: https://github.com/apache/incubator-mxnet/pull/7772#discussion_r139259687 ## File path: src/operator/tensor/cast_storage-inl.h ## @@ -120,9 +119,13

[GitHub] piiswrong commented on a change in pull request #7698: Second order gradient and Subgraph execution

2017-09-15 Thread git
piiswrong commented on a change in pull request #7698: Second order gradient and Subgraph execution URL: https://github.com/apache/incubator-mxnet/pull/7698#discussion_r139259449 ## File path: include/mxnet/imperative.h ## @@ -0,0 +1,214 @@ +/* + * Licensed to the Apache

[GitHub] piiswrong opened a new pull request #7910: add advanced indexing

2017-09-15 Thread git
piiswrong opened a new pull request #7910: add advanced indexing URL: https://github.com/apache/incubator-mxnet/pull/7910

[GitHub] eric-haibin-lin commented on a change in pull request #7893: Add barriers in kvstore init

2017-09-15 Thread git
eric-haibin-lin commented on a change in pull request #7893: Add barriers in kvstore init URL: https://github.com/apache/incubator-mxnet/pull/7893#discussion_r139257114 ## File path: src/kvstore/kvstore_dist.h ## @@ -147,8 +147,11 @@ class KVStoreDist : public

[GitHub] asmushetzel commented on issue #7883: cuda support for new linear algebra operators

2017-09-15 Thread git
asmushetzel commented on issue #7883: cuda support for new linear algebra operators URL: https://github.com/apache/incubator-mxnet/pull/7883#issuecomment-329896956 So this is done now. From my point of view, it can be merged.

[GitHub] eric-haibin-lin commented on issue #7146: How to compile Amalgamation for android?

2017-09-15 Thread git
eric-haibin-lin commented on issue #7146: How to compile Amalgamation for android? URL: https://github.com/apache/incubator-mxnet/issues/7146#issuecomment-329883455 @arank On 2017-09-13, at 02:50, IceBo wrote: where should I copy the

[GitHub] eric-haibin-lin commented on issue #7888: Distributed multi-GPU training -- hangs for certain batch_size, epoch_number combinations.

2017-09-15 Thread git
eric-haibin-lin commented on issue #7888: Distributed multi-GPU training -- hangs for certain batch_size, epoch_number combinations. URL: https://github.com/apache/incubator-mxnet/issues/7888#issuecomment-329877057 Many iterators, such as ImageRecordIter, inherit from PrefetcherIter

[GitHub] piiswrong commented on a change in pull request #7698: Second order gradient and Subgraph execution

2017-09-15 Thread git
piiswrong commented on a change in pull request #7698: Second order gradient and Subgraph execution URL: https://github.com/apache/incubator-mxnet/pull/7698#discussion_r139220420 ## File path: src/c_api/c_api_function.cc ## @@ -162,38 +162,35 @@ int

[GitHub] szha commented on a change in pull request #7698: Second order gradient and Subgraph execution

2017-09-15 Thread git
szha commented on a change in pull request #7698: Second order gradient and Subgraph execution URL: https://github.com/apache/incubator-mxnet/pull/7698#discussion_r139209590 ## File path: python/mxnet/autograd.py ## @@ -236,39 +256,96 @@ def backward(heads,

[GitHub] szha commented on a change in pull request #7698: Second order gradient and Subgraph execution

2017-09-15 Thread git
szha commented on a change in pull request #7698: Second order gradient and Subgraph execution URL: https://github.com/apache/incubator-mxnet/pull/7698#discussion_r139211509 ## File path: python/mxnet/ndarray/sparse.py ## @@ -871,7 +871,7 @@ def _ndarray_cls(handle,

[incubator-mxnet] branch master updated: [sparse] add ftrl optimizer for sparse (#7720)

2017-09-15 Thread jxie
This is an automated email from the ASF dual-hosted git repository. jxie pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git The following commit(s) were added to refs/heads/master by this push: new e81a3a8 [sparse] add ftrl optimizer for
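
For anyone who wants to try the newly merged optimizer, a minimal sketch of selecting it by name through the optimizer registry; the learning rate is an arbitrary example value, and FTRL-specific hyper-parameters can be passed as extra keyword arguments in the same way.

```python
import mxnet as mx

# 'ftrl' is resolved through mxnet's optimizer registry, the same mechanism
# used when passing an optimizer name to Module.fit or gluon.Trainer.
opt = mx.optimizer.create('ftrl', learning_rate=0.1)
print(type(opt).__name__)   # Ftrl
```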

[GitHub] piiswrong closed pull request #7720: [sparse] add ftrl optimizer for sparse

2017-09-15 Thread git
piiswrong closed pull request #7720: [sparse] add ftrl optimizer for sparse URL: https://github.com/apache/incubator-mxnet/pull/7720

[GitHub] piiswrong closed pull request #7832: Elementwise Sum (add_n) for rowsparse on GPU

2017-09-15 Thread git
piiswrong closed pull request #7832: Elementwise Sum (add_n) for rowsparse on GPU URL: https://github.com/apache/incubator-mxnet/pull/7832

[GitHub] piiswrong commented on a change in pull request #7875: add mobilenet to gluon model zoo

2017-09-15 Thread git
piiswrong commented on a change in pull request #7875: add mobilenet to gluon model zoo URL: https://github.com/apache/incubator-mxnet/pull/7875#discussion_r139216969 ## File path: python/mxnet/gluon/model_zoo/vision/mobilenet.py ## @@ -0,0 +1,158 @@ +# Licensed to the

[GitHub] madjam commented on issue #7899: Need help: numpy array to mxnet ndarray is too slow.

2017-09-15 Thread git
madjam commented on issue #7899: Need help: numpy array to mxnet ndarray is too slow. URL: https://github.com/apache/incubator-mxnet/issues/7899#issuecomment-329849612 How big are `img_data` and `label_data`? FYI: this involves allocating memory and copying the contents of the numpy array into
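
A minimal way to see the cost being described, assuming illustrative array sizes (substitute the real `img_data` / `label_data`): `mx.nd.array` allocates a new NDArray and copies the NumPy buffer into it, and `wait_to_read` makes the timing include the asynchronous copy.

```python
import time
import numpy as np
import mxnet as mx

img_data = np.zeros((256, 3, 224, 224), dtype='float32')   # ~154 MB, illustrative size

start = time.time()
nd_img = mx.nd.array(img_data)   # allocate NDArray memory and copy the numpy buffer
nd_img.wait_to_read()            # block until the asynchronous copy has finished
print('copied %.1f MB in %.3f s' % (img_data.nbytes / 1e6, time.time() - start))
```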

[GitHub] piiswrong commented on a change in pull request #7698: Second order gradient and Subgraph execution

2017-09-15 Thread git
piiswrong commented on a change in pull request #7698: Second order gradient and Subgraph execution URL: https://github.com/apache/incubator-mxnet/pull/7698#discussion_r139207816 ## File path: src/operator/tensor/elemwise_binary_op_basic.cu ## @@ -36,21 +36,21 @@

[GitHub] zheng-da commented on issue #7888: Distributed multi-GPU training -- hangs for certain batch_size, epoch_number combinations.

2017-09-15 Thread git
zheng-da commented on issue #7888: Distributed multi-GPU training -- hangs for certain batch_size, epoch_number combinations. URL: https://github.com/apache/incubator-mxnet/issues/7888#issuecomment-329847276 Does MXNet prefetch data by default in the distributed setting? I see there is a

[GitHub] FrancisTse8 commented on issue #7852: Trouble installing MXNet on Raspberry Pi 3

2017-09-15 Thread git
FrancisTse8 commented on issue #7852: Trouble installing MXNet on Raspberry Pi 3 URL: https://github.com/apache/incubator-mxnet/issues/7852#issuecomment-329847054 OK, I am on the Raspberry Pi 3 running Jessie, trying to install MXNet. I followed the instructions on

[GitHub] eric-haibin-lin commented on a change in pull request #7903: Refactor AdaGrad optimizer to support sparse tensors

2017-09-15 Thread git
eric-haibin-lin commented on a change in pull request #7903: Refactor AdaGrad optimizer to support sparse tensors URL: https://github.com/apache/incubator-mxnet/pull/7903#discussion_r139201606 ## File path: src/operator/tensor/elemwise_binary_op_basic.cu ## @@ -36,21

[GitHub] eric-haibin-lin commented on issue #7319: [RoadMap] Legacy issue resolution before 1.0 release

2017-09-15 Thread git
eric-haibin-lin commented on issue #7319: [RoadMap] Legacy issue resolution before 1.0 release URL: https://github.com/apache/incubator-mxnet/issues/7319#issuecomment-329841537 @formath Yes, I'll work on the sparse embedding operator to support at least millions of features after I am

[GitHub] vuvko opened a new issue #7909: Dividing input changes BatchNorm output

2017-09-15 Thread git
vuvko opened a new issue #7909: Dividing input changes BatchNorm output URL: https://github.com/apache/incubator-mxnet/issues/7909 I was trying to experiment with the ResNet-152 model downloaded from [Model Zoo](https://mxnet.incubator.apache.org/model_zoo/). For some reason the first batch

[GitHub] piiswrong commented on a change in pull request #7904: add warning to global norm clip

2017-09-15 Thread git
piiswrong commented on a change in pull request #7904: add warning to global norm clip URL: https://github.com/apache/incubator-mxnet/pull/7904#discussion_r139200143 ## File path: python/mxnet/gluon/utils.py ## @@ -113,11 +114,11 @@ def clip_global_norm(arrays, max_norm):
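
For context on the function being reviewed, a minimal sketch of how `mxnet.gluon.utils.clip_global_norm` is typically used in a training loop; the tiny network, data, and `max_norm` value are illustrative, and the PR under review adds a warning to this helper.

```python
import mxnet as mx
from mxnet import autograd, gluon

ctx = mx.cpu()
net = gluon.nn.Dense(1)
net.initialize(ctx=ctx)
trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': 0.1})
loss_fn = gluon.loss.L2Loss()

x = mx.nd.ones((4, 8), ctx=ctx)
y = mx.nd.ones((4, 1), ctx=ctx)

with autograd.record():
    loss = loss_fn(net(x), y)
loss.backward()

# Gather the gradient arrays and rescale them in place so that their joint
# L2 norm does not exceed max_norm, then take the optimizer step.
grads = [p.grad(ctx) for p in net.collect_params().values() if p.grad_req != 'null']
gluon.utils.clip_global_norm(grads, max_norm=1.0)
trainer.step(batch_size=4)
```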

[GitHub] eric-haibin-lin commented on a change in pull request #7698: Second order gradient and Subgraph execution

2017-09-15 Thread git
eric-haibin-lin commented on a change in pull request #7698: Second order gradient and Subgraph execution URL: https://github.com/apache/incubator-mxnet/pull/7698#discussion_r139200011 ## File path: src/operator/tensor/elemwise_binary_op_basic.cu ## @@ -36,21 +36,21 @@

[GitHub] eric-haibin-lin commented on a change in pull request #7698: Second order gradient and Subgraph execution

2017-09-15 Thread git
eric-haibin-lin commented on a change in pull request #7698: Second order gradient and Subgraph execution URL: https://github.com/apache/incubator-mxnet/pull/7698#discussion_r139199432 ## File path: tests/python/unittest/test_autograd.py ## @@ -117,7 +117,7 @@ def

[GitHub] eric-haibin-lin commented on issue #7888: Distributed multi-GPU training -- hangs for certain batch_size, epoch_number combinations.

2017-09-15 Thread git
eric-haibin-lin commented on issue #7888: Distributed multi-GPU training -- hangs for certain batch_size, epoch_number combinations. URL: https://github.com/apache/incubator-mxnet/issues/7888#issuecomment-329835104 I've seen a similar issue before - during distributed training, each
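
The comment is cut off here, but one common cause of this kind of hang is workers running different numbers of batches per epoch, so that one worker blocks on a kvstore call the others never issue. A heavily assumed mitigation sketch: pin every worker to the same number of batches, for example with `mx.io.ResizeIter`.

```python
import mxnet as mx

# Hypothetical iterator; replace with the real training iterator.
train_iter = mx.io.NDArrayIter(data=mx.nd.ones((1000, 3, 32, 32)),
                               label=mx.nd.zeros((1000,)),
                               batch_size=32)

# Fix the number of batches per epoch so every worker makes the same number
# of kvstore push/pull calls and none of them is left waiting on the others.
batches_per_epoch = 30
train_iter = mx.io.ResizeIter(train_iter, batches_per_epoch)
```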

[GitHub] chowkamlee81 commented on issue #7717: Subpixel convolution(state of art) implementation rather than using Deconvolution.

2017-09-15 Thread git
chowkamlee81 commented on issue #7717: Subpixel convolution(state of art) implementation rather than using Deconvolution. URL: https://github.com/apache/incubator-mxnet/issues/7717#issuecomment-329814975 Thanks for your reply .. featmap_score =

[GitHub] futurely commented on issue #658: Checkpoint every some iterations

2017-09-15 Thread git
futurely commented on issue #658: Checkpoint every some iterations URL: https://github.com/apache/incubator-mxnet/issues/658#issuecomment-329803731 The Gluon API does not need callbacks anymore.

[GitHub] jonbakerfish commented on issue #658: Checkpoint every some iterations

2017-09-15 Thread git
jonbakerfish commented on issue #658: Checkpoint every some iterations URL: https://github.com/apache/incubator-mxnet/issues/658#issuecomment-329798467 FYI: a checkpoint class for `batch_end_callback`: class BatchCheckpoint(object): def __init__(self, mod, prefix,
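
The class in the comment is cut off by the digest; below is a minimal sketch of what such a `batch_end_callback` checkpoint could look like, assuming the Module API's `save_checkpoint` and the `epoch`/`nbatch` fields of the callback's `BatchEndParam` argument. It is an illustration, not the original author's code.

```python
import mxnet as mx

class BatchCheckpoint(object):
    """Save a Module checkpoint every `period` batches (illustrative sketch)."""

    def __init__(self, mod, prefix, period=1000):
        self.mod = mod
        self.prefix = prefix
        self.period = period

    def __call__(self, param):
        # `param` is a BatchEndParam namedtuple with fields epoch, nbatch, eval_metric, locals.
        if (param.nbatch + 1) % self.period == 0:
            name = '%s-e%d-b%d' % (self.prefix, param.epoch, param.nbatch + 1)
            self.mod.save_checkpoint(name, epoch=param.epoch)

# Usage, assuming `mod` is a bound and initialized mx.mod.Module:
# mod.fit(train_iter, num_epoch=10,
#         batch_end_callback=BatchCheckpoint(mod, prefix='model', period=1000))
```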

[GitHub] eldercrow commented on issue #7717: Subpixel convolution(state of art) implementation rather than using Deconvolution.

2017-09-15 Thread git
eldercrow commented on issue #7717: Subpixel convolution(state of art) implementation rather than using Deconvolution. URL: https://github.com/apache/incubator-mxnet/issues/7717#issuecomment-329785144 Sorry, I meant crop, not concat. I suggest you check the feature dim after the crop

[GitHub] larroy commented on issue #7852: Trouble installing MXNet on Raspberry Pi 3

2017-09-15 Thread git
larroy commented on issue #7852: Trouble installing MXNet on Raspberry Pi 3 URL: https://github.com/apache/incubator-mxnet/issues/7852#issuecomment-329782023 1. Yes, you should install Docker either way, from the OS or with Docker CE. 2. I would make sure the mxnet repository is clean

[GitHub] dping1 closed issue #6780: Problems with CNN model training and prediction for text classification

2017-09-15 Thread git
dping1 closed issue #6780: Problems with CNN model training and prediction for text classification URL: https://github.com/apache/incubator-mxnet/issues/6780

[GitHub] chowkamlee81 commented on issue #7717: Subpixel convolution(state of art) implementation rather than using Deconvolution.

2017-09-15 Thread git
chowkamlee81 commented on issue #7717: Subpixel convolution(state of art) implementation rather than using Deconvolution. URL: https://github.com/apache/incubator-mxnet/issues/7717#issuecomment-329748836 I am not using any concat layer; below is the code along with the Jupyter notebook

[GitHub] novioleo opened a new issue #7908: mxnet_predict.so read the symbol json error

2017-09-15 Thread git
novioleo opened a new issue #7908: mxnet_predict.so read the symbol json error URL: https://github.com/apache/incubator-mxnet/issues/7908 ## Environment info Operating System: Ubuntu 16.04; Compiler: arm-linux-androideabi-clang++ (arch = arm, api = 21). I met a problem when I

[GitHub] eldercrow commented on issue #7717: Subpixel convolution(state of art) implementation rather than using Deconvolution.

2017-09-15 Thread git
eldercrow commented on issue #7717: Subpixel convolution(state of art) implementation rather than using Deconvolution. URL: https://github.com/apache/incubator-mxnet/issues/7717#issuecomment-329717925 Seems that the error is from a concat layer, but the function above has no concat

[GitHub] chowkamlee81 opened a new issue #7907: Dense upsamling operation rather than deconvolution implementation

2017-09-15 Thread git
chowkamlee81 opened a new issue #7907: Dense upsamling operation rather than deconvolution implementation URL: https://github.com/apache/incubator-mxnet/issues/7907 I developed a small snippet to replace deconvolution with a dense upsampling operation according to the mxnet example

[GitHub] chowkamlee81 commented on issue #7717: Subpixel convolution(state of art) implementation rather than using Deconvolution.

2017-09-15 Thread git
chowkamlee81 commented on issue #7717: Subpixel convolution(state of art) implementation rather than using Deconvolution. URL: https://github.com/apache/incubator-mxnet/issues/7717#issuecomment-329711401 @eldercrow, kindly advise on this dense upsampling operation. My input is

[GitHub] asmushetzel commented on issue #7883: cuda support for new linear algebra operators

2017-09-15 Thread git
asmushetzel commented on issue #7883: cuda support for new linear algebra operators URL: https://github.com/apache/incubator-mxnet/pull/7883#issuecomment-329710766 Generally, everything is there. It's just that this code uses a cuSOLVER function that was added in CUDA 8.0, and MxNet does builds

[GitHub] chowkamlee81 commented on issue #7717: Subpixel convolution(state of art) implementation rather than using Deconvolution.

2017-09-15 Thread git
chowkamlee81 commented on issue #7717: Subpixel convolution(state of art) implementation rather than using Deconvolution. URL: https://github.com/apache/incubator-mxnet/issues/7717#issuecomment-329710361 Hi eldercrow, I have incorporated your suggestions for sub-pixel dense upsampling

[GitHub] chowkamlee81 opened a new issue #7717: Subpixel convolution(state of art) implementation rather than using Deconvolution.

2017-09-15 Thread git
chowkamlee81 opened a new issue #7717: Subpixel convolution(state of art) implementation rather than using Deconvolution. URL: https://github.com/apache/incubator-mxnet/issues/7717 Is there any mxnet implementation of subpixel CNN rather than using Deconvolution, which is the state of
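
There is no dedicated subpixel operator referenced in this thread, but the standard pixel-shuffle trick can be written with existing reshape/transpose ops. The sketch below assumes an input laid out as (N, C*r*r, H, W) with upscale factor r, and is only an illustration of the technique, not code from the issue.

```python
import mxnet as mx

def pixel_shuffle(x, r):
    """Rearrange (N, C*r*r, H, W) -> (N, C, H*r, W*r), the subpixel upsampling step."""
    n, crr, h, w = x.shape
    c = crr // (r * r)
    x = x.reshape((n, c, r, r, h, w))
    x = mx.nd.transpose(x, axes=(0, 1, 4, 2, 5, 3))     # (N, C, H, r, W, r)
    return x.reshape((n, c, h * r, w * r))

x = mx.nd.arange(1 * 4 * 2 * 2).reshape((1, 4, 2, 2))   # C=1, r=2
print(pixel_shuffle(x, 2).shape)                        # (1, 1, 4, 4)
```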

[GitHub] chowkamlee81 commented on issue #1363: Error in FCN

2017-09-15 Thread git
chowkamlee81 commented on issue #1363: Error in FCN URL: https://github.com/apache/incubator-mxnet/issues/1363#issuecomment-329698953 Yeah, I too met with the same problem... Kindly let me know how you solved it, please.

[incubator-mxnet] branch master updated: Fix Symbol Index (#7902)

2017-09-15 Thread jxie
This is an automated email from the ASF dual-hosted git repository. jxie pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git The following commit(s) were added to refs/heads/master by this push: new c560902 Fix Symbol Index (#7902)