szha commented on issue #7913: what are the dependencies between each modules
URL:
https://github.com/apache/incubator-mxnet/issues/7913#issuecomment-329947781
The best starting point that gives the larger picture can be found in this doc:
mxmxlwlw opened a new issue #7913: what are the dependencies between each
modules
URL: https://github.com/apache/incubator-mxnet/issues/7913
Hi,
I like the design of MXNet very much, but I still can't get the idea of the
dependencies between the modules. I'd really appreciate it if
szha commented on issue #7910: add advanced indexing
URL: https://github.com/apache/incubator-mxnet/pull/7910#issuecomment-329947462
If we introduce a non-grouped symbol class which can only be obtained from a
method of non-grouped symbols, we can introduce advanced indexing on that
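The "advanced indexing" being discussed is the numpy-style indexing semantics. A minimal numpy sketch of those semantics (an editor's illustration, not code from the PR):

```python
import numpy as np

x = np.arange(12).reshape(3, 4)

# Integer-array indexing: pick rows 0 and 2.
rows = x[np.array([0, 2])]          # shape (2, 4)

# Boolean-mask indexing: select all elements greater than 5.
big = x[x > 5]                      # 1-D array of the matching elements

# Fancy indexing on both axes: picks elements (0, 1) and (2, 3).
picked = x[np.array([0, 2]), np.array([1, 3])]
```

These are the index forms (integer arrays, boolean masks, per-axis fancy indexing) that go beyond the plain slicing NDArray already supports.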
tqchen commented on a change in pull request #7698: Second order gradient and
Subgraph execution
URL: https://github.com/apache/incubator-mxnet/pull/7698#discussion_r139279115
##
File path: include/mxnet/imperative.h
##
@@ -0,0 +1,214 @@
+/*
+ * Licensed to the Apache
szha commented on issue #7910: add advanced indexing
URL: https://github.com/apache/incubator-mxnet/pull/7910#issuecomment-329947462
If we introduce a non-grouped symbol class which can only be obtained from a
method of non-grouped symbols, we can introduce slicing on that non-grouped
yajiedesign commented on issue #7910: add advanced indexing
URL: https://github.com/apache/incubator-mxnet/pull/7910#issuecomment-329945607
only work with ndarray?
This is an automated message from the Apache Git Service.
To
liuzhi136 closed issue #7894: The Meaning of parameters of
mxnet.gluon.Embedding function
URL: https://github.com/apache/incubator-mxnet/issues/7894
liuzhi136 commented on issue #7894: The Meaning of parameters of
mxnet.gluon.Embedding function
URL:
https://github.com/apache/incubator-mxnet/issues/7894#issuecomment-329943591
OK, thanks.
liuzhi136 commented on issue #7894: The Meaning of parameters of
mxnet.gluon.Embedding function
URL:
https://github.com/apache/incubator-mxnet/issues/7894#issuecomment-329943100
But for these two parameters below the Input shape: I'm wondering if it stands
for the (num_word, vocab_size), that
eldercrow commented on issue #7717: Subpixel convolution(state of art)
implementation rather than using Deconvolution.
URL:
https://github.com/apache/incubator-mxnet/issues/7717#issuecomment-329941126
From your error message, it seems that cropping is unnecessary since
data_shape[3] -
szha commented on issue #7894: The Meaning of parameters of
mxnet.gluon.Embedding function
URL:
https://github.com/apache/incubator-mxnet/issues/7894#issuecomment-329940221
sorry for the delay. @liuzhi136
they don't have meaning and can be arbitrary numbers
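For context, a plain-numpy sketch of what an embedding layer does (an editor's illustration, not the gluon API itself); all the concrete numbers below are arbitrary examples, just like the shape numbers in the docs:

```python
import numpy as np

# input_dim is the vocabulary size, output_dim the embedding vector size.
input_dim, output_dim = 10, 4            # e.g. vocab of 10 words, 4-d vectors
rng = np.random.default_rng(0)
weight = rng.standard_normal((input_dim, output_dim))

# The input is an integer index array of any shape, e.g. (batch, seq_len);
# the output appends the embedding axis: (batch, seq_len, output_dim).
tokens = np.array([[1, 5, 3], [2, 2, 7]])
vectors = weight[tokens]
```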
piiswrong commented on issue #7912: Do I need to change grad_req when sharing
weights?
URL:
https://github.com/apache/incubator-mxnet/issues/7912#issuecomment-329939400
No need to use 'add'; it's automatic within one executor.
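Why accumulation is automatic: when one weight feeds two layers in the same executor, the chain rule sums the two contributions during a single backward pass, so grad_req='write' suffices. A plain-Python sketch of the arithmetic (editor's illustration):

```python
# For y = W*x1 + W*x2 (the same W used twice), dy/dW = x1 + x2:
# the backward pass produces one gradient per use and sums them.
W = 3.0
x1, x2 = 2.0, 5.0

g1 = x1            # gradient contribution from the first use of W
g2 = x2            # gradient contribution from the second use of W
grad_W = g1 + g2   # accumulated inside the single backward pass
```

grad_req='add' is only needed when accumulation must survive *across* separate backward calls.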
liuzhi136 commented on issue #7894: The Meaning of parameters of
mxnet.gluon.Embedding function
URL:
https://github.com/apache/incubator-mxnet/issues/7894#issuecomment-329938199
@szha Do you know these parameters' meaning?
liyi14 opened a new issue #7912: Do I need to change grad_req when sharing
weights?
URL: https://github.com/apache/incubator-mxnet/issues/7912
Hi, I was building a conv layer that shares weights with another one, following
#557. Since the default grad_req is 'write' and I need to update the
eric-haibin-lin opened a new pull request #7911: more sparse related docs
URL: https://github.com/apache/incubator-mxnet/pull/7911
Preview at
http://ec2-54-187-32-207.us-west-2.compute.amazonaws.com/api/python/ndarray/sparse.html
vuvko commented on issue #7909: Dividing input changes BatchNorm output
URL:
https://github.com/apache/incubator-mxnet/issues/7909#issuecomment-329932411
OK, so the problem is that I'm loading the model for testing, and in that case
the parameter `use_global_stats` is just ignored (even though it is set to `True`).
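A numpy sketch of why this explains the symptom (editor's illustration): batch norm computed from *batch* statistics is essentially invariant to scaling the input, while batch norm using fixed *global* (moving-average) statistics is not. So if `use_global_stats=True` is silently ignored at test time, dividing the input changes the output.

```python
import numpy as np

def bn_batch_stats(x, eps=1e-12):
    # Normalize with statistics computed from the batch itself.
    return (x - x.mean()) / np.sqrt(x.var() + eps)

def bn_global_stats(x, mean, var, eps=1e-12):
    # Normalize with fixed, pre-computed global statistics.
    return (x - mean) / np.sqrt(var + eps)

x = np.array([1.0, 2.0, 3.0, 4.0])
# bn_batch_stats(x) ~= bn_batch_stats(x / 255), but
# bn_global_stats(x, m, v) != bn_global_stats(x / 255, m, v).
```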
cjolivier01 commented on a change in pull request #7903: Refactor AdaGrad
optimizer to support sparse tensors
URL: https://github.com/apache/incubator-mxnet/pull/7903#discussion_r139268501
##
File path: python/mxnet/optimizer.py
##
@@ -665,26 +667,46 @@ class
eric-haibin-lin commented on issue #7847: Bug on Multi GPU : probably Invalid
initialization of optimizer
URL:
https://github.com/apache/incubator-mxnet/issues/7847#issuecomment-329926044
I'll update module to throw proper warning/error messages for such cases
cjolivier01 commented on a change in pull request #7903: Refactor AdaGrad
optimizer to support sparse tensors
URL: https://github.com/apache/incubator-mxnet/pull/7903#discussion_r139268189
##
File path: python/mxnet/optimizer.py
##
@@ -665,26 +667,46 @@ class
vuvko commented on issue #7909: Dividing input changes BatchNorm output
URL:
https://github.com/apache/incubator-mxnet/issues/7909#issuecomment-329925746
Also if this can help, I used [this](https://i.imgur.com/XJOBiiO.jpg) image
for testing.
cjolivier01 commented on a change in pull request #7903: Refactor AdaGrad
optimizer to support sparse tensors
URL: https://github.com/apache/incubator-mxnet/pull/7903#discussion_r139267799
##
File path: src/operator/tensor/elemwise_binary_op_basic.cu
##
@@ -36,21 +36,21
eric-haibin-lin commented on a change in pull request #7903: Refactor AdaGrad
optimizer to support sparse tensors
URL: https://github.com/apache/incubator-mxnet/pull/7903#discussion_r139218368
##
File path: python/mxnet/optimizer.py
##
@@ -665,26 +667,46 @@ class
eric-haibin-lin commented on a change in pull request #7903: Refactor AdaGrad
optimizer to support sparse tensors
URL: https://github.com/apache/incubator-mxnet/pull/7903#discussion_r139217684
##
File path: python/mxnet/optimizer.py
##
@@ -665,26 +667,46 @@ class
eric-haibin-lin commented on a change in pull request #7903: Refactor AdaGrad
optimizer to support sparse tensors
URL: https://github.com/apache/incubator-mxnet/pull/7903#discussion_r139217161
##
File path: python/mxnet/optimizer.py
##
@@ -665,26 +667,46 @@ class
eric-haibin-lin commented on a change in pull request #7772: Use memcopy
instead of assigning each individual element
URL: https://github.com/apache/incubator-mxnet/pull/7772#discussion_r139259687
##
File path: src/operator/tensor/cast_storage-inl.h
##
@@ -120,9 +119,13
piiswrong commented on a change in pull request #7698: Second order gradient
and Subgraph execution
URL: https://github.com/apache/incubator-mxnet/pull/7698#discussion_r139259449
##
File path: include/mxnet/imperative.h
##
@@ -0,0 +1,214 @@
+/*
+ * Licensed to the Apache
piiswrong opened a new pull request #7910: add advanced indexing
URL: https://github.com/apache/incubator-mxnet/pull/7910
eric-haibin-lin commented on a change in pull request #7893: Add barriers in
kvstore init
URL: https://github.com/apache/incubator-mxnet/pull/7893#discussion_r139257114
##
File path: src/kvstore/kvstore_dist.h
##
@@ -147,8 +147,11 @@ class KVStoreDist : public
asmushetzel commented on issue #7883: cuda support for new linear algebra
operators
URL: https://github.com/apache/incubator-mxnet/pull/7883#issuecomment-329896956
So this is done now. From my point of view, it can be merged.
eric-haibin-lin commented on issue #7146: How to compile Amalgamation for
android?
URL:
https://github.com/apache/incubator-mxnet/issues/7146#issuecomment-329883455
@arank
On 2017-09-13, at 02:50, IceBo wrote:
where should copy the
eric-haibin-lin commented on issue #7888: Distributed multi-GPU training --
hangs for certain batch_size, epoch_number combinations.
URL:
https://github.com/apache/incubator-mxnet/issues/7888#issuecomment-329877057
Many iterators, such as ImageRecordIter, inherit from PrefetcherIter
piiswrong commented on a change in pull request #7698: Second order gradient
and Subgraph execution
URL: https://github.com/apache/incubator-mxnet/pull/7698#discussion_r139220420
##
File path: src/c_api/c_api_function.cc
##
@@ -162,38 +162,35 @@ int
szha commented on a change in pull request #7698: Second order gradient and
Subgraph execution
URL: https://github.com/apache/incubator-mxnet/pull/7698#discussion_r139209590
##
File path: python/mxnet/autograd.py
##
@@ -236,39 +256,96 @@ def backward(heads,
szha commented on a change in pull request #7698: Second order gradient and
Subgraph execution
URL: https://github.com/apache/incubator-mxnet/pull/7698#discussion_r139211509
##
File path: python/mxnet/ndarray/sparse.py
##
@@ -871,7 +871,7 @@ def _ndarray_cls(handle,
This is an automated email from the ASF dual-hosted git repository.
jxie pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git
The following commit(s) were added to refs/heads/master by this push:
new e81a3a8 [sparse] add ftrl optimizer for
piiswrong closed pull request #7720: [sparse] add ftrl optimizer for sparse
URL: https://github.com/apache/incubator-mxnet/pull/7720
piiswrong closed pull request #7832: Elementwise Sum (add_n) for rowsparse on
GPU
URL: https://github.com/apache/incubator-mxnet/pull/7832
piiswrong commented on a change in pull request #7875: add mobilenet to gluon
model zoo
URL: https://github.com/apache/incubator-mxnet/pull/7875#discussion_r139216969
##
File path: python/mxnet/gluon/model_zoo/vision/mobilenet.py
##
@@ -0,0 +1,158 @@
+# Licensed to the
madjam commented on issue #7899: Need help: numpy array to mxnet ndarray is too
slow.
URL:
https://github.com/apache/incubator-mxnet/issues/7899#issuecomment-329849612
How big is `img_data` and `label_data`?
FYI: this involves allocating memory and copying the contents of the numpy array
into
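An illustration of that point (editor's sketch, using a plain numpy copy to stand in for the mx.nd.array conversion): converting allocates fresh memory and copies every element, so the cost grows linearly with array size.

```python
import numpy as np

src = np.ones((1000, 1000), dtype=np.float32)   # ~4 MB of data
dst = np.array(src, copy=True)                  # fresh allocation + full copy

# The copy shares no memory with the source:
dst[0, 0] = 0.0
```

Converting the arrays once up front and reusing the result, rather than converting per batch, avoids paying this cost repeatedly.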
piiswrong commented on a change in pull request #7698: Second order gradient
and Subgraph execution
URL: https://github.com/apache/incubator-mxnet/pull/7698#discussion_r139207816
##
File path: src/operator/tensor/elemwise_binary_op_basic.cu
##
@@ -36,21 +36,21 @@
zheng-da commented on issue #7888: Distributed multi-GPU training -- hangs for
certain batch_size, epoch_number combinations.
URL:
https://github.com/apache/incubator-mxnet/issues/7888#issuecomment-329847276
Does MXNet prefetch data by default in the distributed setting? I see there
is a
FrancisTse8 commented on issue #7852: Trouble installing MXNet on Raspberry Pi 3
URL:
https://github.com/apache/incubator-mxnet/issues/7852#issuecomment-329847054
OK, I am on the Raspberry Pi 3 running Jessie, trying to install MXNet. I
followed the instructions on
eric-haibin-lin commented on a change in pull request #7903: Refactor AdaGrad
optimizer to support sparse tensors
URL: https://github.com/apache/incubator-mxnet/pull/7903#discussion_r139201606
##
File path: src/operator/tensor/elemwise_binary_op_basic.cu
##
@@ -36,21
eric-haibin-lin commented on issue #7319: [RoadMap] Legacy issue resolution
before 1.0 release
URL:
https://github.com/apache/incubator-mxnet/issues/7319#issuecomment-329841537
@formath Yes, I'll work on the sparse embedding operator to support at least
millions of features after I am
vuvko opened a new issue #7909: Dividing input changes BatchNorm output
URL: https://github.com/apache/incubator-mxnet/issues/7909
I was trying to experiment with ResNet-152 model downloaded from [Model
Zoo](https://mxnet.incubator.apache.org/model_zoo/). For some reason the first
batch
piiswrong commented on a change in pull request #7904: add warning to global
norm clip
URL: https://github.com/apache/incubator-mxnet/pull/7904#discussion_r139200143
##
File path: python/mxnet/gluon/utils.py
##
@@ -113,11 +114,11 @@ def clip_global_norm(arrays, max_norm):
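For reference, a minimal stdlib sketch of the global-norm clipping operation that `clip_global_norm` performs (editor's illustration; plain lists of floats stand in for gradient arrays, and the nan/inf check is the situation the warning in this PR targets):

```python
import math

def clip_global_norm(arrays, max_norm):
    # Combined L2 norm over all arrays.
    total_norm = math.sqrt(sum(x * x for arr in arrays for x in arr))
    if not math.isfinite(total_norm):
        # nan/inf gradients: the case worth warning about.
        raise ValueError("total norm is nan or inf")
    scale = max_norm / (total_norm + 1e-8)
    if scale < 1.0:
        # Rescale every array by the same factor.
        arrays = [[x * scale for x in arr] for arr in arrays]
    return arrays, total_norm
```

For example, arrays [[3.0], [4.0]] have total norm 5; with max_norm=1 every element is scaled by roughly 0.2.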
eric-haibin-lin commented on a change in pull request #7698: Second order
gradient and Subgraph execution
URL: https://github.com/apache/incubator-mxnet/pull/7698#discussion_r139200011
##
File path: src/operator/tensor/elemwise_binary_op_basic.cu
##
@@ -36,21 +36,21 @@
eric-haibin-lin commented on a change in pull request #7698: Second order
gradient and Subgraph execution
URL: https://github.com/apache/incubator-mxnet/pull/7698#discussion_r139199432
##
File path: tests/python/unittest/test_autograd.py
##
@@ -117,7 +117,7 @@ def
eric-haibin-lin commented on issue #7888: Distributed multi-GPU training --
hangs for certain batch_size, epoch_number combinations.
URL:
https://github.com/apache/incubator-mxnet/issues/7888#issuecomment-329835104
I've seen a similar issue before - during distributed training, each
chowkamlee81 commented on issue #7717: Subpixel convolution(state of art)
implementation rather than using Deconvolution.
URL:
https://github.com/apache/incubator-mxnet/issues/7717#issuecomment-329814975
Thanks for your reply.
featmap_score =
futurely commented on issue #658: Checkpoint every some iterations
URL: https://github.com/apache/incubator-mxnet/issues/658#issuecomment-329803731
The Gluon API does not need callbacks anymore.
jonbakerfish commented on issue #658: Checkpoint every some iterations
URL: https://github.com/apache/incubator-mxnet/issues/658#issuecomment-329798467
FYI: a checkpoint class for `batch_end_callback`:
class BatchCheckpoint(object):
def __init__(self, mod, prefix,
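The snippet above is truncated; a hedged sketch of what such a batch_end_callback class might look like (editor's reconstruction — `period` is a hypothetical parameter name, and `mod` is assumed to be a Module-like object exposing `save_checkpoint(prefix, epoch)`):

```python
class BatchCheckpoint(object):
    """Save a checkpoint every `period` batches when used as a
    batch_end_callback."""

    def __init__(self, mod, prefix, period=1000):
        self.mod = mod
        self.prefix = prefix
        self.period = max(1, int(period))

    def __call__(self, param):
        # param is the batch-end callback argument; param.nbatch counts
        # batches seen in the current epoch.
        if param.nbatch % self.period == 0:
            self.mod.save_checkpoint(self.prefix, param.epoch)
```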
eldercrow commented on issue #7717: Subpixel convolution(state of art)
implementation rather than using Deconvolution.
URL:
https://github.com/apache/incubator-mxnet/issues/7717#issuecomment-329785144
Sorry, I meant crop, not concat. I suggest you check the feature dim after the
crop
larroy commented on issue #7852: Trouble installing MXNet on Raspberry Pi 3
URL:
https://github.com/apache/incubator-mxnet/issues/7852#issuecomment-329782023
1. Yes, you should install Docker either way, from the OS or with Docker CE.
2. I would make sure the mxnet repository is clean.
dping1 closed issue #6780: Problems with CNN model training and prediction for
text classification
URL: https://github.com/apache/incubator-mxnet/issues/6780
This is an automated message from the Apache Git Service.
To
chowkamlee81 commented on issue #7717: Subpixel convolution(state of art)
implementation rather than using Deconvolution.
URL:
https://github.com/apache/incubator-mxnet/issues/7717#issuecomment-329748836
I am not using any concat layer; below is the code along with the Jupyter
notebook
novioleo opened a new issue #7908: mxnet_predict.so read the symbol json error
URL: https://github.com/apache/incubator-mxnet/issues/7908
## Environment info
Operating System:
Ubuntu 16.04
Compiler:
arm-linux-androideabi-clang++ (arch = arm,api=21)
I met a problem when I
eldercrow commented on issue #7717: Subpixel convolution(state of art)
implementation rather than using Deconvolution.
URL:
https://github.com/apache/incubator-mxnet/issues/7717#issuecomment-329717925
Seems that the error is from a concat layer, but the function above has no
concat
chowkamlee81 opened a new issue #7907: Dense upsampling operation rather than
deconvolution implementation
URL: https://github.com/apache/incubator-mxnet/issues/7907
I developed a small snippet to replace deconvolution with a dense upsampling
operation, according to the mxnet example
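The core of the subpixel / dense-upsampling idea under discussion can be sketched in numpy (editor's illustration, not the snippet from the issue): a convolution produces C*r*r channels, which are then rearranged into a C-channel map upsampled by factor r, instead of using a Deconvolution layer.

```python
import numpy as np

def pixel_shuffle(x, r):
    """Rearrange (N, C*r*r, H, W) -> (N, C, H*r, W*r)."""
    n, crr, h, w = x.shape
    c = crr // (r * r)
    # Split the channel axis into (C, r, r)...
    x = x.reshape(n, c, r, r, h, w)
    # ...interleave the r-factors with the spatial axes...
    x = x.transpose(0, 1, 4, 2, 5, 3)       # (N, C, H, r, W, r)
    # ...and merge them into the upsampled spatial dims.
    return x.reshape(n, c, h * r, w * r)
```

Because the rearrangement is pure reshape/transpose, it avoids the checkerboard artifacts and the cropping issues that Deconvolution-based upsampling can introduce.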
chowkamlee81 commented on issue #7717: Subpixel convolution(state of art)
implementation rather than using Deconvolution.
URL:
https://github.com/apache/incubator-mxnet/issues/7717#issuecomment-329711401
@eldercrow, kindly advise on this dense upsampling operation.
My input is
asmushetzel commented on issue #7883: cuda support for new linear algebra
operators
URL: https://github.com/apache/incubator-mxnet/pull/7883#issuecomment-329710766
Generally, everything is there. It's just that this code uses a cuSOLVER
function that was added in CUDA 8.0, and MXNet builds
chowkamlee81 commented on issue #7717: Subpixel convolution(state of art)
implementation rather than using Deconvolution.
URL:
https://github.com/apache/incubator-mxnet/issues/7717#issuecomment-329710361
Hi @eldercrow, I have incorporated your suggestions for sub-pixel dense
upsampling
chowkamlee81 opened a new issue #7717: Subpixel convolution(state of art)
implementation rather than using Deconvolution.
URL: https://github.com/apache/incubator-mxnet/issues/7717
Is there any MXNet implementation of subpixel CNN rather than using
Deconvolution, which is the state of
chowkamlee81 commented on issue #1363: Error in FCN
URL:
https://github.com/apache/incubator-mxnet/issues/1363#issuecomment-329698953
Yeah, I met the same problem too. Kindly let me know how you solved it, please.
jxie pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git
The following commit(s) were added to refs/heads/master by this push:
new c560902 Fix Symbol Index (#7902)