[GitHub] szha commented on a change in pull request #9986: gluon language modeling dataset and text token reader

2018-03-07 Thread GitBox
szha commented on a change in pull request #9986: gluon language modeling 
dataset and text token reader
URL: https://github.com/apache/incubator-mxnet/pull/9986#discussion_r173085151
 
 

 ##
 File path: python/mxnet/gluon/data/text/base.py
 ##
 @@ -0,0 +1,99 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+# coding: utf-8
+# pylint: disable=
+
+"""Base classes for text datasets and readers."""
+
+__all__ = ['WordLanguageReader']
+
+import io
+import os
+
+from ..dataset import SimpleDataset
+from ..datareader import DataReader
+from .utils import flatten_samples, collate, pair
+
+class WordLanguageReader(DataReader):
+    """Text reader that reads a whole corpus and produces samples based on the
+    provided sample splitter and word tokenizer.
+
+    Parameters
+    ----------
+    filename : str
+        Path to the input text file.
+    encoding : str, default 'utf8'
+        File encoding format.
+    sample_splitter : function, default str.splitlines
+        A function that splits the dataset string into samples.
+    tokenizer : function, default str.split
+        A function that splits each sample string into a list of tokens.
+    seq_len : int or None
+        The length of each of the samples. If None, samples are divided
+        according to `sample_splitter` only, and may have variable lengths.
+    bos : str or None, default None
+        The token to add at the beginning of each sentence. If None, nothing
+        is added.
+    eos : str or None, default None
+        The token to add at the end of each sentence. If None, nothing is
+        added.
+    pad : str or None, default None
+        The padding token to add at the end of the dataset if `seq_len` is
+        specified and the total number of tokens in the corpus doesn't evenly
+        divide `seq_len`. If `pad` is None or `seq_len` is None, no padding is
+        added. Otherwise, the padding token is added to the last sample if its
+        length is less than `seq_len`. If `pad` is None and `seq_len` is
+        specified, the last sample is discarded if it's shorter than `seq_len`.
 
 Review comment:
  To illustrate, I'd need to add an equal number of examples. I'm not sure whether further verbosity really helps.
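To illustrate, the seq_len/bos/eos/pad combinations under discussion can be sketched in a few lines of plain Python. The helper `make_samples` below is hypothetical: it only mimics the documented behaviour on a toy corpus, and is not the PR's implementation.

```python
def make_samples(corpus, seq_len=None, bos=None, eos=None, pad=None):
    """Mimic the documented WordLanguageReader behaviour on a tiny corpus."""
    samples = []
    for sentence in corpus.splitlines():          # sample_splitter
        tokens = sentence.split()                 # tokenizer
        if bos is not None:
            tokens = [bos] + tokens
        if eos is not None:
            tokens = tokens + [eos]
        samples.append(tokens)
    if seq_len is None:
        return samples                            # variable-length samples
    # Flatten the corpus and re-collate into fixed-length samples.
    flat = [tok for sample in samples for tok in sample]
    chunks = [flat[i:i + seq_len] for i in range(0, len(flat), seq_len)]
    if chunks and len(chunks[-1]) < seq_len:
        if pad is not None:                       # pad the short last sample
            chunks[-1] += [pad] * (seq_len - len(chunks[-1]))
        else:                                     # or discard it
            chunks.pop()
    return chunks

print(make_samples("a b c\nd e", seq_len=4, eos='<eos>', pad='<pad>'))
# [['a', 'b', 'c', '<eos>'], ['d', 'e', '<eos>', '<pad>']]
print(make_samples("a b c\nd e", seq_len=4, eos='<eos>'))
# [['a', 'b', 'c', '<eos>']]
```

With `pad` given, the short trailing sample survives; without it, the same corpus yields one fewer sample.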


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] szha commented on a change in pull request #9986: gluon language modeling dataset and text token reader

2018-03-07 Thread GitBox
szha commented on a change in pull request #9986: gluon language modeling 
dataset and text token reader
URL: https://github.com/apache/incubator-mxnet/pull/9986#discussion_r173084963
 
 

 ##
 File path: python/mxnet/gluon/data/text/base.py
 ##
 @@ -0,0 +1,99 @@
+    pad : str or None, default None
+        The padding token to add at the end of the dataset if `seq_len` is
+        specified and the total number of tokens in the corpus doesn't evenly
+        divide `seq_len`. If `pad` is None or `seq_len`
 
 Review comment:
  Yes, if specified.




[GitHub] eric-haibin-lin commented on issue #10025: Language model with Google's billion words dataset

2018-03-07 Thread GitBox
eric-haibin-lin commented on issue #10025: Language model with Google's billion 
words dataset
URL: https://github.com/apache/incubator-mxnet/pull/10025#issuecomment-371403506
 
 
   @zihaolucky 




[GitHub] eric-haibin-lin commented on a change in pull request #9988: [Perl] Sparse feature.

2018-03-07 Thread GitBox
eric-haibin-lin commented on a change in pull request #9988: [Perl] Sparse 
feature.
URL: https://github.com/apache/incubator-mxnet/pull/9988#discussion_r173080890
 
 

 ##
 File path: perl-package/AI-MXNet/lib/AI/MXNet/NDArray/Sparse.pm
 ##
 @@ -0,0 +1,1342 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+package AI::MXNet::NDArray::Sparse;
+use strict;
+use warnings;
+use AI::MXNet::Base;
+use AI::MXNet::Function::Parameters;
+use Mouse;
+extends 'AI::MXNet::NDArray';
+
+=head1 NAME
+
+AI::MXNet::NDArray::Sparse - Sparse NDArray API of MXNet
+=cut
+
+=head1 DESCRIPTION
+
+The base class of an NDArray stored in a sparse storage format.
+See AI::MXNet::NDArray::CSR and AI::MXNet::NDArray::RowSparse for more 
details.
+=cut
+
+method _new_alloc_handle(
+    Stype                  $stype,
+    Shape                  $shape,
+    AI::MXNet::Context     $ctx,
+    Bool                   $delay_alloc,
+    Dtype                  $dtype,
+    AuxTypes               $aux_types,
+    Maybe[ArrayRef[Shape]] $aux_shapes=
+)
+{
+    confess("only int64 is supported for aux types")
+        if (grep { $_ ne 'int64' } @$aux_types);
+    my $aux_type_ids = [map { DTYPE_STR_TO_MX->{$_} } @$aux_types];
+    $aux_shapes //= [map { [0] } @$aux_types];
+    my $aux_shape_lens = [map { scalar(@$_) } @$aux_shapes];
+    @$aux_shapes = map { @$_ } @$aux_shapes;
+    my $num_aux = @{ $aux_types };
+    my $handle = check_call(
+        AI::MXNetCAPI::NDArrayCreateSparseEx(
+            STORAGE_TYPE_STR_TO_ID->{$stype},
+            $shape,
+            scalar(@$shape),
+            $ctx->device_type_id,
+            $ctx->device_id,
+            $delay_alloc,
+            DTYPE_STR_TO_MX->{$dtype},
+            scalar(@$aux_types),
+            $aux_type_ids,
+            $aux_shape_lens,
+            $aux_shapes
+        )
+    );
+}
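The aux-shape marshalling in the method above (per-aux-array shapes flattened into one list of dims, plus a list of per-array lengths, before the C API call) can be sketched in Python. The name `flatten_aux_shapes` is hypothetical and for illustration only; the C API call itself is omitted.

```python
def flatten_aux_shapes(aux_shapes):
    """Turn e.g. [[25], [25, 2]] into ([1, 2], [25, 25, 2]):
    per-aux-array shape lengths plus one flat list of dimensions,
    mirroring how $aux_shape_lens and @$aux_shapes are prepared."""
    aux_shape_lens = [len(shape) for shape in aux_shapes]
    flat = [dim for shape in aux_shapes for dim in shape]
    return aux_shape_lens, flat

lens, flat = flatten_aux_shapes([[25], [25, 2]])
print(lens, flat)
# [1, 2] [25, 25, 2]
```

The two lists travel together: the lengths list tells the callee how to slice the flat list back into per-array shapes.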
+
+method _class_name()
+{
+    my $class = ref $self || $self;
+    $class;
+}
+
+sub not_implemented { confess "Not implemented" }
+use overload '""' => sub {
+                        my $self = shift;
+                        my $shape_info = join('x', @{ $self->shape });
+                        sprintf("\n<%s, %s @%s>", $self->_class_name, $shape_info, $self->context);
+                     },
+             '+=' => \&not_implemented,
+             '-=' => \&not_implemented,
+             '*=' => \&not_implemented,
+             '/=' => \&not_implemented;
+{
+    no warnings 'redefine';
+    *_sync_copyfrom = *_at = *_slice = *reshape = *size = \&not_implemented;
+}
+
+method _aux_type(Int $i)
+{
+    return DTYPE_MX_TO_STR->{
+        check_call(
+            AI::MXNetCAPI::NDArrayGetAuxType(
+                $self->handle, $i
+            )
+        )
+    }
+}
+
+method _num_aux()
+{
+    return scalar(@{ STORAGE_AUX_TYPES->{ $self->stype } });
+}
+
+method _aux_types()
+{
+    [map { $self->_aux_type($_) } 0..$self->_num_aux-1];
+}
+
+=head2 aspdl
+
+Returns a dense PDL object with values copied from this array.
+=cut
+
+method aspdl()
+{
+    return $self->tostype('default')->aspdl;
+}
+
+=head2 astype
+
+Returns a copy of the array after casting to a specified type.
+
+Parameters
+----------
+dtype : Dtype
+    The type of the returned array.
+
+Examples
+--------
+>>> $x = mx->nd->sparse->zeros('row_sparse', [2,3], dtype=>'float32')
+>>> $y = $x->astype('int32')
+>>> $y->dtype
+
+=cut
+
+method astype(Dtype $dtype)
+{
+    my $res = $self->zeros(
+        $self->stype, $self->shape, ctx => $self->context,
+        dtype => $dtype
+    );
+    $self->copyto($res);
+    return $res;
+}
+
+=head2 copyto
+
+Copies the value of this array to another array.
+
+Parameters
+----------
+other : NDArray or NDArray::CSR or NDArray::RowSparse or Context
+    The destination array or context.
+
+Returns
+-------
+NDArray or NDArray::CSR or NDArray::RowSparse
+    The copied array.
+=cut
+
+method copyto(AI::MXNet::NDArray|AI::MXNet::Context $other)
+{
+    if($other->isa('AI::MXNet::NDArray'))
+    {
+        if($self->handle eq $other->handle)
+        {
+

[GitHub] eric-haibin-lin opened a new pull request #10034: Update an incorrect description in the API doc.

2018-03-07 Thread GitBox
eric-haibin-lin opened a new pull request #10034: Update an incorrect 
description in the API doc. 
URL: https://github.com/apache/incubator-mxnet/pull/10034
 
 
   Update an incorrect description in the API doc. 




[GitHub] eric-haibin-lin commented on a change in pull request #9988: [Perl] Sparse feature.

2018-03-07 Thread GitBox
eric-haibin-lin commented on a change in pull request #9988: [Perl] Sparse 
feature.
URL: https://github.com/apache/incubator-mxnet/pull/9988#discussion_r173079068
 
 

 ##
 File path: perl-package/AI-MXNet/t/test_kvstore.t
 ##
 @@ -73,6 +75,93 @@ sub test_list_kv_pair
 }
 }
 
+sub test_row_sparse_pull
 
 Review comment:
  Does it include the test case added a few days ago in https://github.com/apache/incubator-mxnet/pull/9887/files?




[GitHub] eric-haibin-lin commented on a change in pull request #9988: [Perl] Sparse feature.

2018-03-07 Thread GitBox
eric-haibin-lin commented on a change in pull request #9988: [Perl] Sparse 
feature.
URL: https://github.com/apache/incubator-mxnet/pull/9988#discussion_r173079707
 
 

 ##
 File path: perl-package/AI-MXNet/lib/AI/MXNet/NDArray/Sparse.pm
 ##
 @@ -0,0 +1,1342 @@
+method copyto(AI::MXNet::NDArray|AI::MXNet::Context $other)
+{
+    if($other->isa('AI::MXNet::NDArray'))
+    {
+        if($self->handle eq $other->handle)
+        {

[GitHub] eric-haibin-lin commented on a change in pull request #9988: [Perl] Sparse feature.

2018-03-07 Thread GitBox
eric-haibin-lin commented on a change in pull request #9988: [Perl] Sparse 
feature.
URL: https://github.com/apache/incubator-mxnet/pull/9988#discussion_r173078142
 
 

 ##
 File path: perl-package/AI-MXNetCAPI/mxnet.i
 ##
 @@ -338,6 +351,37 @@ int MXNDArrayCreateEx(const mx_uint *in,
   int delay_alloc,
   int dtype,
   NDArrayHandle *out);
+/*!
 
 Review comment:
  Does this file have to be updated each time the c_api is changed? Is this not automated?




[GitHub] fhieber commented on issue #10029: [MXNET-58]Layer Normalization in C++

2018-03-07 Thread GitBox
fhieber commented on issue #10029: [MXNET-58]Layer Normalization in C++
URL: https://github.com/apache/incubator-mxnet/pull/10029#issuecomment-371400507
 
 
  @sxjscience fantastic, thank you! We will definitely try this as soon as it's available!




[GitHub] eric-haibin-lin commented on a change in pull request #9986: gluon language modeling dataset and text token reader

2018-03-07 Thread GitBox
eric-haibin-lin commented on a change in pull request #9986: gluon language 
modeling dataset and text token reader
URL: https://github.com/apache/incubator-mxnet/pull/9986#discussion_r173077113
 
 

 ##
 File path: python/mxnet/gluon/data/text/base.py
 ##
 @@ -0,0 +1,99 @@
+    pad : str or None, default None
+        The padding token to add at the end of the dataset if `seq_len` is
+        specified and the total number of tokens in the corpus doesn't evenly
+        divide `seq_len`. If `pad` is None or `seq_len`
 
 Review comment:
   "total number of tokens" -> does this include bos/eos? 




[GitHub] eric-haibin-lin commented on a change in pull request #9986: gluon language modeling dataset and text token reader

2018-03-07 Thread GitBox
eric-haibin-lin commented on a change in pull request #9986: gluon language 
modeling dataset and text token reader
URL: https://github.com/apache/incubator-mxnet/pull/9986#discussion_r173077682
 
 

 ##
 File path: python/mxnet/gluon/data/text/base.py
 ##
 @@ -0,0 +1,99 @@
+    pad : str or None, default None
+        The padding token to add at the end of the dataset if `seq_len` is
+        specified and the total number of tokens in the corpus doesn't evenly
+        divide `seq_len`. If `pad` is None or `seq_len` is None, no padding is
+        added. Otherwise, the padding token is added to the last sample if its
+        length is less than `seq_len`. If `pad` is None and `seq_len` is
+        specified, the last sample is discarded if it's shorter than `seq_len`.
 
 Review comment:
  For example, I found this example very useful: http://torch.ch/blog/2016/07/25/nce.html#loading-the-google-billion-words-dataset, where they explain what each batch looks like.
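The batch layout being asked about can be sketched with a few lines of pure Python. The helper `batchify` is a hypothetical name; the PR imports `flatten_samples`, `collate`, and `pair` utilities that perform roughly these steps.

```python
def batchify(tokens, seq_len):
    """Collate a flat token stream into (data, target) pairs, where the
    target is the data shifted by one token, the usual LM layout."""
    # Keep only full seq_len+1 windows so data and target stay aligned.
    n = (len(tokens) - 1) // seq_len
    batches = []
    for i in range(n):
        chunk = tokens[i * seq_len: i * seq_len + seq_len + 1]
        batches.append((chunk[:-1], chunk[1:]))   # (data, target)
    return batches

tokens = ['a', 'b', 'c', 'd', 'e', 'f', 'g']
for data, target in batchify(tokens, seq_len=3):
    print(data, '->', target)
# ['a', 'b', 'c'] -> ['b', 'c', 'd']
# ['d', 'e', 'f'] -> ['e', 'f', 'g']
```

Each sample's target overlaps its data by all but one token, which is what makes the flattened-corpus layout efficient for language modeling.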




[GitHub] eric-haibin-lin commented on a change in pull request #9986: gluon language modeling dataset and text token reader

2018-03-07 Thread GitBox
eric-haibin-lin commented on a change in pull request #9986: gluon language 
modeling dataset and text token reader
URL: https://github.com/apache/incubator-mxnet/pull/9986#discussion_r173077529
 
 

 ##
 File path: python/mxnet/gluon/data/text/base.py
 ##
 @@ -0,0 +1,99 @@
+    pad : str or None, default None
+        The padding token to add at the end of the dataset if `seq_len` is
+        specified and the total number of tokens in the corpus doesn't evenly
+        divide `seq_len`. If `pad` is None or `seq_len` is None, no padding is
+        added. Otherwise, the padding token is added to the last sample if its
+        length is less than `seq_len`. If `pad` is None and `seq_len` is
+        specified, the last sample is discarded if it's shorter than `seq_len`.
 
 Review comment:
  There seem to be many combinations. Is it possible to add a few examples?




[GitHub] szha opened a new pull request #10033: correct resnet link in model zoo page

2018-03-07 Thread GitBox
szha opened a new pull request #10033: correct resnet link in model zoo page
URL: https://github.com/apache/incubator-mxnet/pull/10033
 
 
   ## Description ##
   correct resnet paper link in model zoo page
   
   ## Checklist ##
   ### Essentials ###
   - [x] Code is well-documented: 
   
   ### Changes ###
   - [x] correct resnet v2 paper link in model zoo page




[incubator-mxnet] branch master updated: Fix ndarray assignment issue with basic indexing (#10022)

2018-03-07 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 39c0fd8  Fix ndarray assignment issue with basic indexing (#10022)
39c0fd8 is described below

commit 39c0fd82312e138ef6b7f6531adb1f2fe423cb07
Author: reminisce 
AuthorDate: Wed Mar 7 22:40:04 2018 -0800

Fix ndarray assignment issue with basic indexing (#10022)

* Fix ndarray assignment issue with basic index

* Uncomment useful code
---
 python/mxnet/ndarray/ndarray.py   | 2 ++
 tests/python/unittest/test_ndarray.py | 5 +
 2 files changed, 7 insertions(+)

diff --git a/python/mxnet/ndarray/ndarray.py b/python/mxnet/ndarray/ndarray.py
index 5ac2796..5367845 100644
--- a/python/mxnet/ndarray/ndarray.py
+++ b/python/mxnet/ndarray/ndarray.py
@@ -695,6 +695,8 @@ fixed-size items.
         # may need to broadcast first
         if isinstance(value, NDArray):
             if value.handle is not self.handle:
+                if value.shape != shape:
+                    value = value.broadcast_to(shape)
                 value.copyto(self)
         elif isinstance(value, numeric_types):
             _internal._full(shape=shape, ctx=self.context,
diff --git a/tests/python/unittest/test_ndarray.py 
b/tests/python/unittest/test_ndarray.py
index e96fb2f..16f08b0 100644
--- a/tests/python/unittest/test_ndarray.py
+++ b/tests/python/unittest/test_ndarray.py
@@ -992,6 +992,8 @@ def test_ndarray_indexing():
     def assert_same(np_array, np_index, mx_array, mx_index, mx_value, np_value=None):
         if np_value is not None:
             np_array[np_index] = np_value
+        elif isinstance(mx_value, mx.nd.NDArray):
+            np_array[np_index] = mx_value.asnumpy()
         else:
             np_array[np_index] = mx_value
         mx_array[mx_index] = mx_value
@@ -1024,6 +1026,9 @@ def test_ndarray_indexing():
         # test value is an numeric_type
         assert_same(np_array, np_index, mx_array, index, np.random.randint(low=-1, high=0))
         if len(indexed_array_shape) > 1:
+            # test NDArray with broadcast
+            assert_same(np_array, np_index, mx_array, index,
+                        mx.nd.random.uniform(low=-1, high=0, shape=(indexed_array_shape[-1],)))
             # test numpy array with broadcast
             assert_same(np_array, np_index, mx_array, index,
                         np.random.randint(low=-1, high=0, size=(indexed_array_shape[-1],)))
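The rule the patch adds (broadcast the right-hand side to the indexed shape before copying) can be sketched for the simple 1-D-row-into-2-D case with stdlib Python only. `assign_with_broadcast` is a hypothetical illustration; MXNet's `broadcast_to` handles the general case.

```python
def assign_with_broadcast(target, value):
    """Assign a 1-D `value` into a 2-D nested-list `target`, broadcasting
    the row across all target rows first, as the patch does via broadcast_to."""
    rows, cols = len(target), len(target[0])
    if len(value) != cols:
        raise ValueError("cannot broadcast length %d into %d columns"
                         % (len(value), cols))
    for i in range(rows):
        target[i] = list(value)   # independent copy per row
    return target

t = assign_with_broadcast([[0, 0, 0], [0, 0, 0]], [1, 2, 3])
print(t)
# [[1, 2, 3], [1, 2, 3]]
```

Before the fix, a shape-mismatched NDArray value reached `copyto` directly and failed; broadcasting first makes `a[:] = row` behave like the equivalent numpy assignment.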

-- 
To stop receiving notification emails like this one, please contact
zhash...@apache.org.


[GitHub] szha closed pull request #10022: Fix ndarray assignment issue with basic indexing

2018-03-07 Thread GitBox
szha closed pull request #10022: Fix ndarray assignment issue with basic 
indexing
URL: https://github.com/apache/incubator-mxnet/pull/10022
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/python/mxnet/ndarray/ndarray.py b/python/mxnet/ndarray/ndarray.py
index 5ac279635a1..536784586e3 100644
--- a/python/mxnet/ndarray/ndarray.py
+++ b/python/mxnet/ndarray/ndarray.py
@@ -695,6 +695,8 @@ def _set_nd_basic_indexing(self, key, value):
         # may need to broadcast first
         if isinstance(value, NDArray):
             if value.handle is not self.handle:
+                if value.shape != shape:
+                    value = value.broadcast_to(shape)
                 value.copyto(self)
         elif isinstance(value, numeric_types):
             _internal._full(shape=shape, ctx=self.context,
diff --git a/tests/python/unittest/test_ndarray.py 
b/tests/python/unittest/test_ndarray.py
index 0daf74a8879..9bf563a14cb 100644
--- a/tests/python/unittest/test_ndarray.py
+++ b/tests/python/unittest/test_ndarray.py
@@ -992,6 +992,8 @@ def test_setitem(np_array, index, is_scalar):
     def assert_same(np_array, np_index, mx_array, mx_index, mx_value, np_value=None):
         if np_value is not None:
             np_array[np_index] = np_value
+        elif isinstance(mx_value, mx.nd.NDArray):
+            np_array[np_index] = mx_value.asnumpy()
         else:
             np_array[np_index] = mx_value
         mx_array[mx_index] = mx_value
@@ -1024,6 +1026,9 @@ def assert_same(np_array, np_index, mx_array, mx_index, mx_value, np_value=None)
         # test value is an numeric_type
         assert_same(np_array, np_index, mx_array, index, np.random.randint(low=-1, high=0))
         if len(indexed_array_shape) > 1:
+            # test NDArray with broadcast
+            assert_same(np_array, np_index, mx_array, index,
+                        mx.nd.random.uniform(low=-1, high=0, shape=(indexed_array_shape[-1],)))
             # test numpy array with broadcast
             assert_same(np_array, np_index, mx_array, index,
                         np.random.randint(low=-1, high=0, size=(indexed_array_shape[-1],)))
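For readers skimming the archive, the one-line fix above can be mirrored in plain NumPy. This is a hedged sketch, not MXNet code: `set_with_broadcast` is an illustrative helper, and `np.broadcast_to` stands in for `NDArray.broadcast_to`.

```python
import numpy as np

# Sketch of the fix in PR #10022: if the assigned value's shape does not
# match the indexed region, broadcast it explicitly before copying.
def set_with_broadcast(arr, index, value):
    target_shape = arr[index].shape
    if value.shape != target_shape:
        # analogous to NDArray.broadcast_to(shape) in the patch above
        value = np.broadcast_to(value, target_shape)
    arr[index] = value
    return arr

a = np.zeros((2, 3))
set_with_broadcast(a, 0, np.array([7.0]))  # shape (1,) broadcast across row 0
```

Assigning a read-only broadcast view is fine here because `arr[index] = value` copies element-wise into the target.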


 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] eric-haibin-lin commented on a change in pull request #10025: Language model with Google's billion words dataset

2018-03-07 Thread GitBox
eric-haibin-lin commented on a change in pull request #10025: Language model 
with Google's billion words dataset
URL: https://github.com/apache/incubator-mxnet/pull/10025#discussion_r173073715
 
 

 ##
 File path: src/operator/nn/fully_connected.cc
 ##
 @@ -87,8 +90,16 @@ void FullyConnectedComputeExCPU(const nnvm::NodeAttrs& 
attrs,
 return;
   }
  FallBackCompute(FullyConnectedCompute<cpu>, attrs, ctx, inputs, req, outputs);
+#else
+  std::vector<TBlob> in_blobs(inputs.size());
+  for (size_t i = 0; i < in_blobs.size(); i++) in_blobs[i] = inputs[i].data();
+  std::vector<TBlob> out_blobs(outputs.size());
+  for (size_t i = 0; i < out_blobs.size(); i++) out_blobs[i] = outputs[i].data();
+  FullyConnectedCompute<cpu>(attrs, ctx, in_blobs, req, out_blobs);
+#endif
 
 Review comment:
   When USE_MKL=1,
   I wanted to simply use the non-MKL FCForward with 
   ```
   data.data(),
   weight.data(),
   bias.data(),
   ```
assuming `data.data()` returns a TBlob with normal cpu layout even if data 
is in MKL layout. `weight.data()` and `bias.data()` should always return normal 
cpu layout if weight and bias are `row_sparse`. Please advise. 
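As a dense reference point for the discussion above, the non-MKL FullyConnected forward computes `out = data @ weight.T + bias` on plain TBlobs. A minimal NumPy sketch (illustrative only; the dense `weight` array stands in for a densified `row_sparse` weight):

```python
import numpy as np

# Illustrative fallback path (not MXNet source): densify every argument,
# then run the plain CPU FullyConnected, i.e. out = data @ weight.T + bias.
def fc_fallback(data, weight, bias):
    return data @ weight.T + bias

data = np.ones((2, 3))
weight = np.zeros((4, 3))
weight[1] = 1.0              # stands in for a row_sparse weight with one non-zero row
bias = np.zeros(4)
out = fc_fallback(data, weight, bias)   # shape (2, 4)
```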




[GitHub] eric-haibin-lin commented on a change in pull request #10025: Language model with Google's billion words dataset

2018-03-07 Thread GitBox
eric-haibin-lin commented on a change in pull request #10025: Language model 
with Google's billion words dataset
URL: https://github.com/apache/incubator-mxnet/pull/10025#discussion_r173073054
 
 

 ##
 File path: src/operator/nn/fully_connected.cc
 ##
 @@ -87,8 +90,16 @@ void FullyConnectedComputeExCPU(const nnvm::NodeAttrs& 
attrs,
 return;
   }
  FallBackCompute(FullyConnectedCompute<cpu>, attrs, ctx, inputs, req, outputs);
+#else
+  std::vector<TBlob> in_blobs(inputs.size());
+  for (size_t i = 0; i < in_blobs.size(); i++) in_blobs[i] = inputs[i].data();
+  std::vector<TBlob> out_blobs(outputs.size());
+  for (size_t i = 0; i < out_blobs.size(); i++) out_blobs[i] = outputs[i].data();
+  FullyConnectedCompute<cpu>(attrs, ctx, in_blobs, req, out_blobs);
+#endif
 
 Review comment:
   Does MKL support kFComputeFallback dispatch mode? 
   Are you both referring to line 93 - line 99? `FallBackCompute` is only 
defined when `USE_MKL=1`. Can I still use it?
   What I need to address is the following case for inference:
   - data = dense
   - weight = rowsparse
   - bias = rowsparse
   - output = dense
   But I don't know how to deal with this efficiently with `USE_MKL=1`. 
   




[GitHub] sxjscience commented on issue #9934: [MXNET-31] Support variable sequence length in gluon.RecurrentCell

2018-03-07 Thread GitBox
sxjscience commented on issue #9934: [MXNET-31] Support variable sequence 
length in gluon.RecurrentCell 
URL: https://github.com/apache/incubator-mxnet/pull/9934#issuecomment-371383734
 
 
   @szha @piiswrong I've added the test of VariationalDropoutCell.




[GitHub] sxjscience commented on a change in pull request #10000: fix average pooling kernel size assignment error

2018-03-07 Thread GitBox
sxjscience commented on a change in pull request #10000: fix average pooling 
kernel size assignment error
URL: https://github.com/apache/incubator-mxnet/pull/10000#discussion_r173062611
 
 

 ##
 File path: tests/python/gpu/test_operator_gpu.py
 ##
 @@ -904,86 +904,87 @@ def test_1d_pooling(pool_type):
 kernel = (4,)
 pad = (2,)
 stride = (2,)
-
+
 ctx_list = []
 sym_list = []
-
+
 pooling_convention = 'valid'
-
+
 ctx_list.append({'ctx': mx.cpu(0), 'pool_data': data, 'type_dict': 
{'pool_data': np.float32}})
-sym_list.append(mx.sym.Pooling(kernel=kernel, pad=pad, stride=stride, 
pool_type=pool_type,
+sym_list.append(mx.sym.Pooling(pad=pad, stride=stride, 
pool_type=pool_type,
pooling_convention=pooling_convention, 
global_pool=True, name='pool'))
-
+
 ctx_list.append({'ctx': mx.cpu(0), 'pool_data': data, 'type_dict': 
{'pool_data': np.float32}})
-sym_list.append(mx.sym.Pooling(kernel=kernel, pool_type=pool_type,
+sym_list.append(mx.sym.Pooling(pool_type=pool_type,
pooling_convention=pooling_convention, 
global_pool=True, name='pool'))
-
+
 ctx_list.append({'ctx': mx.gpu(0), 'pool_data': data, 'type_dict': 
{'pool_data': np.float32}})
-sym_list.append(mx.sym.Pooling(kernel=kernel, pad=pad, stride=stride, 
pool_type=pool_type,
+sym_list.append(mx.sym.Pooling(pad=pad, stride=stride, 
pool_type=pool_type,
pooling_convention=pooling_convention, 
global_pool=True, cudnn_off=False, name='pool'))
-
+
 ctx_list.append({'ctx': mx.gpu(0), 'pool_data': data, 'type_dict': 
{'pool_data': np.float32}})
-sym_list.append(mx.sym.Pooling(kernel=kernel, pool_type=pool_type,
+sym_list.append(mx.sym.Pooling(pool_type=pool_type,
pooling_convention=pooling_convention, 
global_pool=True, cudnn_off=False, name='pool'))
-
+
 ctx_list.append({'ctx': mx.gpu(0), 'pool_data': data, 'type_dict': 
{'pool_data': np.float32}})
-sym_list.append(mx.sym.Pooling(kernel=kernel, pad=pad, stride=stride, 
pool_type=pool_type,
+sym_list.append(mx.sym.Pooling(pad=pad, stride=stride, 
pool_type=pool_type,
pooling_convention=pooling_convention, 
global_pool=True, cudnn_off=True, name='pool'))
-
+
 ctx_list.append({'ctx': mx.gpu(0), 'pool_data': data, 'type_dict': 
{'pool_data': np.float32}})
-sym_list.append(mx.sym.Pooling(kernel=kernel, pool_type=pool_type,
+sym_list.append(mx.sym.Pooling(pool_type=pool_type,
pooling_convention=pooling_convention, 
global_pool=True, cudnn_off=True, name='pool'))
-
+
 check_consistency(sym_list, ctx_list)
-
+
 def test_2d_pooling(pool_type):
 data = (2, 3, 20, 20)
 kernel = (4, 4)
 pad = (2, 2)
 stride = (2, 2)
-
+
 ctx_list = []
 sym_list = []
-
+
 pooling_convention = 'valid'
-
+
 ctx_list.append({'ctx': mx.cpu(0), 'pool_data': data, 'type_dict': 
{'pool_data': np.float32}})
 sym_list.append(mx.sym.Pooling_v1(kernel=kernel, pad=pad, 
stride=stride, pool_type=pool_type,
 
 Review comment:
   Sounds good.




[GitHub] CoinCheung commented on a change in pull request #10000: fix average pooling kernel size assignment error

2018-03-07 Thread GitBox
CoinCheung commented on a change in pull request #10000: fix average pooling 
kernel size assignment error
URL: https://github.com/apache/incubator-mxnet/pull/10000#discussion_r173060956
 
 

 ##
 File path: tests/python/gpu/test_operator_gpu.py
 ##
 @@ -904,86 +904,87 @@ def test_1d_pooling(pool_type):
 kernel = (4,)
 pad = (2,)
 stride = (2,)
-
+
 ctx_list = []
 sym_list = []
-
+
 pooling_convention = 'valid'
-
+
 ctx_list.append({'ctx': mx.cpu(0), 'pool_data': data, 'type_dict': 
{'pool_data': np.float32}})
-sym_list.append(mx.sym.Pooling(kernel=kernel, pad=pad, stride=stride, 
pool_type=pool_type,
+sym_list.append(mx.sym.Pooling(pad=pad, stride=stride, 
pool_type=pool_type,
pooling_convention=pooling_convention, 
global_pool=True, name='pool'))
-
+
 ctx_list.append({'ctx': mx.cpu(0), 'pool_data': data, 'type_dict': 
{'pool_data': np.float32}})
-sym_list.append(mx.sym.Pooling(kernel=kernel, pool_type=pool_type,
+sym_list.append(mx.sym.Pooling(pool_type=pool_type,
pooling_convention=pooling_convention, 
global_pool=True, name='pool'))
-
+
 ctx_list.append({'ctx': mx.gpu(0), 'pool_data': data, 'type_dict': 
{'pool_data': np.float32}})
-sym_list.append(mx.sym.Pooling(kernel=kernel, pad=pad, stride=stride, 
pool_type=pool_type,
+sym_list.append(mx.sym.Pooling(pad=pad, stride=stride, 
pool_type=pool_type,
pooling_convention=pooling_convention, 
global_pool=True, cudnn_off=False, name='pool'))
-
+
 ctx_list.append({'ctx': mx.gpu(0), 'pool_data': data, 'type_dict': 
{'pool_data': np.float32}})
-sym_list.append(mx.sym.Pooling(kernel=kernel, pool_type=pool_type,
+sym_list.append(mx.sym.Pooling(pool_type=pool_type,
pooling_convention=pooling_convention, 
global_pool=True, cudnn_off=False, name='pool'))
-
+
 ctx_list.append({'ctx': mx.gpu(0), 'pool_data': data, 'type_dict': 
{'pool_data': np.float32}})
-sym_list.append(mx.sym.Pooling(kernel=kernel, pad=pad, stride=stride, 
pool_type=pool_type,
+sym_list.append(mx.sym.Pooling(pad=pad, stride=stride, 
pool_type=pool_type,
pooling_convention=pooling_convention, 
global_pool=True, cudnn_off=True, name='pool'))
-
+
 ctx_list.append({'ctx': mx.gpu(0), 'pool_data': data, 'type_dict': 
{'pool_data': np.float32}})
-sym_list.append(mx.sym.Pooling(kernel=kernel, pool_type=pool_type,
+sym_list.append(mx.sym.Pooling(pool_type=pool_type,
pooling_convention=pooling_convention, 
global_pool=True, cudnn_off=True, name='pool'))
-
+
 check_consistency(sym_list, ctx_list)
-
+
 def test_2d_pooling(pool_type):
 data = (2, 3, 20, 20)
 kernel = (4, 4)
 pad = (2, 2)
 stride = (2, 2)
-
+
 ctx_list = []
 sym_list = []
-
+
 pooling_convention = 'valid'
-
+
 ctx_list.append({'ctx': mx.cpu(0), 'pool_data': data, 'type_dict': 
{'pool_data': np.float32}})
 sym_list.append(mx.sym.Pooling_v1(kernel=kernel, pad=pad, 
stride=stride, pool_type=pool_type,
 
 Review comment:
   So shall I remove the kernel only in the "even number" test cases and leave the 
odd test cases with their kernel? Such as:
   ```
   ctx_list.append({'ctx': mx.cpu(0), 'pool_data': data, 'type_dict': 
{'pool_data': np.float32}})
   sym_list.append(mx.sym.Pooling(kernel=kernel, pad=pad, 
stride=stride, pool_type=pool_type,  # keep the kernel for checking
  
pooling_convention=pooling_convention, global_pool=True, name='pool'))
   
   ctx_list.append({'ctx': mx.cpu(0), 'pool_data': data, 'type_dict': 
{'pool_data': np.float32}})
   sym_list.append(mx.sym.Pooling(pool_type=pool_type,  # remove kernel 
along with the missing pad and stride
  
pooling_convention=pooling_convention, global_pool=True, name='pool'))
   
   ```
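Side note on why the `kernel` argument is removable here: with `global_pool=True` the pooling window spans the entire spatial extent, so any kernel size is ignored. A NumPy sketch of 1-D global average pooling (illustrative only, not the operator's implementation):

```python
import numpy as np

# Global pooling reduces over the whole spatial axis, so no kernel/pad/stride
# parameters are needed -- the window is implicitly the full width.
def global_avg_pool_1d(x):          # x: (batch, channel, width)
    return x.mean(axis=-1, keepdims=True)

x = np.arange(12.0).reshape(1, 2, 6)
y = global_avg_pool_1d(x)           # shape (1, 2, 1)
```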




[GitHub] sxjscience commented on issue #10029: [MXNET-58]Layer Normalization in C++

2018-03-07 Thread GitBox
sxjscience commented on issue #10029: [MXNET-58]Layer Normalization in C++
URL: https://github.com/apache/incubator-mxnet/pull/10029#issuecomment-371374211
 
 
   Here's the new doc of InstanceNorm 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-10029/4/api/python/gluon/nn.html#mxnet.gluon.nn.InstanceNorm
 @zhanghang1989 






[GitHub] sxjscience commented on a change in pull request #10000: fix average pooling kernel size assignment error

2018-03-07 Thread GitBox
sxjscience commented on a change in pull request #10000: fix average pooling 
kernel size assignment error
URL: https://github.com/apache/incubator-mxnet/pull/10000#discussion_r173059994
 
 

 ##
 File path: tests/python/gpu/test_operator_gpu.py
 ##
 @@ -904,86 +904,87 @@ def test_1d_pooling(pool_type):
 kernel = (4,)
 pad = (2,)
 stride = (2,)
-
+
 ctx_list = []
 sym_list = []
-
+
 pooling_convention = 'valid'
-
+
 ctx_list.append({'ctx': mx.cpu(0), 'pool_data': data, 'type_dict': 
{'pool_data': np.float32}})
-sym_list.append(mx.sym.Pooling(kernel=kernel, pad=pad, stride=stride, 
pool_type=pool_type,
+sym_list.append(mx.sym.Pooling(pad=pad, stride=stride, 
pool_type=pool_type,
pooling_convention=pooling_convention, 
global_pool=True, name='pool'))
-
+
 ctx_list.append({'ctx': mx.cpu(0), 'pool_data': data, 'type_dict': 
{'pool_data': np.float32}})
-sym_list.append(mx.sym.Pooling(kernel=kernel, pool_type=pool_type,
+sym_list.append(mx.sym.Pooling(pool_type=pool_type,
pooling_convention=pooling_convention, 
global_pool=True, name='pool'))
-
+
 ctx_list.append({'ctx': mx.gpu(0), 'pool_data': data, 'type_dict': 
{'pool_data': np.float32}})
-sym_list.append(mx.sym.Pooling(kernel=kernel, pad=pad, stride=stride, 
pool_type=pool_type,
+sym_list.append(mx.sym.Pooling(pad=pad, stride=stride, 
pool_type=pool_type,
pooling_convention=pooling_convention, 
global_pool=True, cudnn_off=False, name='pool'))
-
+
 ctx_list.append({'ctx': mx.gpu(0), 'pool_data': data, 'type_dict': 
{'pool_data': np.float32}})
-sym_list.append(mx.sym.Pooling(kernel=kernel, pool_type=pool_type,
+sym_list.append(mx.sym.Pooling(pool_type=pool_type,
pooling_convention=pooling_convention, 
global_pool=True, cudnn_off=False, name='pool'))
-
+
 ctx_list.append({'ctx': mx.gpu(0), 'pool_data': data, 'type_dict': 
{'pool_data': np.float32}})
-sym_list.append(mx.sym.Pooling(kernel=kernel, pad=pad, stride=stride, 
pool_type=pool_type,
+sym_list.append(mx.sym.Pooling(pad=pad, stride=stride, 
pool_type=pool_type,
pooling_convention=pooling_convention, 
global_pool=True, cudnn_off=True, name='pool'))
-
+
 ctx_list.append({'ctx': mx.gpu(0), 'pool_data': data, 'type_dict': 
{'pool_data': np.float32}})
-sym_list.append(mx.sym.Pooling(kernel=kernel, pool_type=pool_type,
+sym_list.append(mx.sym.Pooling(pool_type=pool_type,
pooling_convention=pooling_convention, 
global_pool=True, cudnn_off=True, name='pool'))
-
+
 check_consistency(sym_list, ctx_list)
-
+
 def test_2d_pooling(pool_type):
 data = (2, 3, 20, 20)
 kernel = (4, 4)
 pad = (2, 2)
 stride = (2, 2)
-
+
 ctx_list = []
 sym_list = []
-
+
 pooling_convention = 'valid'
-
+
 ctx_list.append({'ctx': mx.cpu(0), 'pool_data': data, 'type_dict': 
{'pool_data': np.float32}})
 sym_list.append(mx.sym.Pooling_v1(kernel=kernel, pad=pad, 
stride=stride, pool_type=pool_type,
pooling_convention=pooling_convention, 
global_pool=True, name='pool'))
-
+
 ctx_list.append({'ctx': mx.cpu(0), 'pool_data': data, 'type_dict': 
{'pool_data': np.float32}})
 sym_list.append(mx.sym.Pooling_v1(kernel=kernel, pool_type=pool_type,
pooling_convention=pooling_convention, 
global_pool=True, name='pool'))
 
 Review comment:
   You can do it if you have time. It should be due to the difference between 
Windows and Unix.




[GitHub] CoinCheung commented on a change in pull request #10000: fix average pooling kernel size assignment error

2018-03-07 Thread GitBox
CoinCheung commented on a change in pull request #10000: fix average pooling 
kernel size assignment error
URL: https://github.com/apache/incubator-mxnet/pull/10000#discussion_r173058901
 
 

 ##
 File path: tests/python/gpu/test_operator_gpu.py
 ##
 @@ -904,86 +904,87 @@ def test_1d_pooling(pool_type):
 kernel = (4,)
 pad = (2,)
 stride = (2,)
-
+
 ctx_list = []
 sym_list = []
-
+
 pooling_convention = 'valid'
-
+
 ctx_list.append({'ctx': mx.cpu(0), 'pool_data': data, 'type_dict': 
{'pool_data': np.float32}})
-sym_list.append(mx.sym.Pooling(kernel=kernel, pad=pad, stride=stride, 
pool_type=pool_type,
+sym_list.append(mx.sym.Pooling(pad=pad, stride=stride, 
pool_type=pool_type,
pooling_convention=pooling_convention, 
global_pool=True, name='pool'))
-
+
 ctx_list.append({'ctx': mx.cpu(0), 'pool_data': data, 'type_dict': 
{'pool_data': np.float32}})
-sym_list.append(mx.sym.Pooling(kernel=kernel, pool_type=pool_type,
+sym_list.append(mx.sym.Pooling(pool_type=pool_type,
pooling_convention=pooling_convention, 
global_pool=True, name='pool'))
-
+
 ctx_list.append({'ctx': mx.gpu(0), 'pool_data': data, 'type_dict': 
{'pool_data': np.float32}})
-sym_list.append(mx.sym.Pooling(kernel=kernel, pad=pad, stride=stride, 
pool_type=pool_type,
+sym_list.append(mx.sym.Pooling(pad=pad, stride=stride, 
pool_type=pool_type,
pooling_convention=pooling_convention, 
global_pool=True, cudnn_off=False, name='pool'))
-
+
 ctx_list.append({'ctx': mx.gpu(0), 'pool_data': data, 'type_dict': 
{'pool_data': np.float32}})
-sym_list.append(mx.sym.Pooling(kernel=kernel, pool_type=pool_type,
+sym_list.append(mx.sym.Pooling(pool_type=pool_type,
pooling_convention=pooling_convention, 
global_pool=True, cudnn_off=False, name='pool'))
-
+
 ctx_list.append({'ctx': mx.gpu(0), 'pool_data': data, 'type_dict': 
{'pool_data': np.float32}})
-sym_list.append(mx.sym.Pooling(kernel=kernel, pad=pad, stride=stride, 
pool_type=pool_type,
+sym_list.append(mx.sym.Pooling(pad=pad, stride=stride, 
pool_type=pool_type,
pooling_convention=pooling_convention, 
global_pool=True, cudnn_off=True, name='pool'))
-
+
 ctx_list.append({'ctx': mx.gpu(0), 'pool_data': data, 'type_dict': 
{'pool_data': np.float32}})
-sym_list.append(mx.sym.Pooling(kernel=kernel, pool_type=pool_type,
+sym_list.append(mx.sym.Pooling(pool_type=pool_type,
pooling_convention=pooling_convention, 
global_pool=True, cudnn_off=True, name='pool'))
-
+
 check_consistency(sym_list, ctx_list)
-
+
 def test_2d_pooling(pool_type):
 data = (2, 3, 20, 20)
 kernel = (4, 4)
 pad = (2, 2)
 stride = (2, 2)
-
+
 ctx_list = []
 sym_list = []
-
+
 pooling_convention = 'valid'
-
+
 ctx_list.append({'ctx': mx.cpu(0), 'pool_data': data, 'type_dict': 
{'pool_data': np.float32}})
 sym_list.append(mx.sym.Pooling_v1(kernel=kernel, pad=pad, 
stride=stride, pool_type=pool_type,
pooling_convention=pooling_convention, 
global_pool=True, name='pool'))
-
+
 ctx_list.append({'ctx': mx.cpu(0), 'pool_data': data, 'type_dict': 
{'pool_data': np.float32}})
 sym_list.append(mx.sym.Pooling_v1(kernel=kernel, pool_type=pool_type,
pooling_convention=pooling_convention, 
global_pool=True, name='pool'))
 
 Review comment:
   Do I need to remove the blank lines? I only removed the kernel parameter 
assignment and did not touch these blank lines.




[GitHub] zhanghang1989 commented on issue #9688: Adding BilinearResize2D and AdaptiveAvgPool2d operators

2018-03-07 Thread GitBox
zhanghang1989 commented on issue #9688: Adding BilinearResize2D and 
AdaptiveAvgPool2d operators
URL: https://github.com/apache/incubator-mxnet/pull/9688#issuecomment-371360712
 
 
   @cjolivier01 I guess opencv won't support batch dimension. We have to 
support 4D input (Batch x Channel x Height x Width) :)
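The point above can be sketched in NumPy (hypothetical helper, not the PR's kernel): a routine that only understands single 2-D planes can still serve NCHW input by looping over the batch and channel dimensions.

```python
import numpy as np

# Apply a 2D-only operation to every (H, W) plane of an NCHW tensor.
def apply_nchw(x, op2d):
    n, c = x.shape[:2]
    planes = [op2d(x[i, j]) for i in range(n) for j in range(c)]
    oh, ow = planes[0].shape
    return np.stack(planes).reshape(n, c, oh, ow)

# toy stand-in for a resize: 2x nearest-neighbour upsampling of one plane
up2 = lambda img: img.repeat(2, axis=0).repeat(2, axis=1)
y = apply_nchw(np.ones((2, 3, 4, 4)), up2)   # shape (2, 3, 8, 8)
```

A batched operator avoids this Python-level loop, which is the efficiency argument for supporting 4-D input natively.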




[GitHub] cjolivier01 commented on issue #9688: Adding BilinearResize2D and AdaptiveAvgPool2d operators

2018-03-07 Thread GitBox
cjolivier01 commented on issue #9688: Adding BilinearResize2D and 
AdaptiveAvgPool2d operators
URL: https://github.com/apache/incubator-mxnet/pull/9688#issuecomment-371360311
 
 
   just curious, does opencv not have an optimized bilinear interpolation 
resize call?
   Intel makes a really fast one with their IPP, although that's probably not 
open-source...




[GitHub] zheng-da commented on a change in pull request #10025: Language model with Google's billion words dataset

2018-03-07 Thread GitBox
zheng-da commented on a change in pull request #10025: Language model with 
Google's billion words dataset
URL: https://github.com/apache/incubator-mxnet/pull/10025#discussion_r173047522
 
 

 ##
 File path: src/operator/nn/fully_connected.cc
 ##
 @@ -87,8 +90,16 @@ void FullyConnectedComputeExCPU(const nnvm::NodeAttrs& 
attrs,
 return;
   }
  FallBackCompute(FullyConnectedCompute<cpu>, attrs, ctx, inputs, req, outputs);
+#else
+  std::vector<TBlob> in_blobs(inputs.size());
+  for (size_t i = 0; i < in_blobs.size(); i++) in_blobs[i] = inputs[i].data();
+  std::vector<TBlob> out_blobs(outputs.size());
+  for (size_t i = 0; i < out_blobs.size(); i++) out_blobs[i] = outputs[i].data();
+  FullyConnectedCompute<cpu>(attrs, ctx, in_blobs, req, out_blobs);
+#endif
 
 Review comment:
   Why do you not use FallBackCompute for fallback?
   If an input is a sparse matrix, does data() return a dense ndarray? It 
doesn't seem SetTBlob is doing that.




[GitHub] szha opened a new pull request #10032: add axes support for dropouts in gluon

2018-03-07 Thread GitBox
szha opened a new pull request #10032: add axes support for dropouts in gluon
URL: https://github.com/apache/incubator-mxnet/pull/10032
 
 
   ## Description ##
   add axes support for dropouts in gluon
   
   ## Checklist ##
   ### Essentials ###
   - [x] Passed code style checking (`make lint`)
   - [x] Changes are complete (i.e. I finished coding on this PR)
   - [x] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [x] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, 
the source of the dataset, expected performance on the test set, and a reference 
to the original paper if applicable
   - [x] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [x] add dropout axes support in Dropout, VariationalDropoutCell




[GitHub] zihaolucky commented on issue #9990: Update tensorboard.py

2018-03-07 Thread GitBox
zihaolucky commented on issue #9990: Update tensorboard.py
URL: https://github.com/apache/incubator-mxnet/pull/9990#issuecomment-371354509
 
 
   Thanks for your contribution! LGTM.




[GitHub] TaoLv commented on issue #9152: [MXNET-37] tutorial for distributed training

2018-03-07 Thread GitBox
TaoLv commented on issue #9152: [MXNET-37] tutorial for distributed training
URL: https://github.com/apache/incubator-mxnet/pull/9152#issuecomment-371353840
 
 
   @rahul003 Thanks for your reply. I will try that. It seems some links in this 
tutorial are broken:
   [Training with multiple GPUs using model 
parallelism](https://mxnet.incubator.apache.org/versions/master/how_to/model_parallel_lstm.html)
   [Using data from S3 for 
training](https://mxnet.incubator.apache.org/versions/master/how_to/s3_integration.html)
   
   Another minor suggestion: I think it would be better if you could add some 
pictures showing the architecture and scalability of the parameter server.




svn commit: r25579 - /release/incubator/mxnet/1.0.0/

2018-03-07 Thread liuyizhi
Author: liuyizhi
Date: Thu Mar  8 02:04:22 2018
New Revision: 25579

Log:
remove the previous release folder (1.0.0)

Removed:
release/incubator/mxnet/1.0.0/



svn commit: r25578 - in /dev/incubator/mxnet: 1.1.0.rc0/ 1.1.0.rc1/

2018-03-07 Thread liuyizhi
Author: liuyizhi
Date: Thu Mar  8 02:02:50 2018
New Revision: 25578

Log:
remove the previous release candidate folders (1.1.0.rc*)

Removed:
dev/incubator/mxnet/1.1.0.rc0/
dev/incubator/mxnet/1.1.0.rc1/



[GitHub] eric-haibin-lin commented on issue #10031: Sync master with v1.1.0 branch

2018-03-07 Thread GitBox
eric-haibin-lin commented on issue #10031: Sync master with v1.1.0 branch
URL: https://github.com/apache/incubator-mxnet/pull/10031#issuecomment-371350293
 
 
   Looks good if README is also updated. 




[GitHub] szha closed pull request #10030: Fix bug for Dropout with axes, also adding unit test

2018-03-07 Thread GitBox
szha closed pull request #10030: Fix bug for Dropout with axes, also adding 
unit test
URL: https://github.com/apache/incubator-mxnet/pull/10030
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/src/operator/nn/dropout-inl.h b/src/operator/nn/dropout-inl.h
index b57ab45891e..1af4798d1ce 100644
--- a/src/operator/nn/dropout-inl.h
+++ b/src/operator/nn/dropout-inl.h
@@ -259,7 +259,7 @@ class DropoutOp {
         return;
       }
       // initialize the mask
-      LaunchRNG<DropoutKernel, xpu>(s, pgen, out.Size(),
+      LaunchRNG<DropoutKernel, xpu>(s, pgen, mask.Size(),
                                     mask.dptr<DType>(),
                                     this->pkeep_);
       // broadcast mul
diff --git a/tests/python/unittest/test_operator.py 
b/tests/python/unittest/test_operator.py
index 1ee14b6e5a4..91b8faa49c1 100644
--- a/tests/python/unittest/test_operator.py
+++ b/tests/python/unittest/test_operator.py
@@ -4645,6 +4645,27 @@ def check_dropout_ratio(ratio, shape):
 exe.backward([mx.nd.ones(shape)], is_train=False)
 assert (exe.grad_arrays[0].asnumpy() == 
exe.outputs[0].asnumpy()).all()
 
+    def get_slice(x, axis, idx):
+        ix = ()
+        for i in range(x.ndim):
+            if i == axis:
+                ix += (idx,)
+            else:
+                ix += (slice(None, None, None),)
+        return x[ix]
+
+    def check_dropout_axes(ratio, shape, axes):
+        compactshape = list(shape)
+        for axis in axes:
+            compactshape[axis] = 1
+        compactx = mx.random.uniform(shape=tuple(compactshape))
+        broadcastx = compactx.broadcast_to(shape)
+        dropouty = mx.nd.Dropout(broadcastx, p=ratio, axes=axes)
+        for axis in axes:
+            target = get_slice(dropouty, axis, 0).asnumpy()
+            for i in range(1, shape[axis]):
+                assert(get_slice(dropouty, axis, i).asnumpy() == target).all()
+
     shape = (100, 100)
     check_dropout_ratio(0.5, shape)
     check_dropout_ratio(0.0, shape)
@@ -4652,6 +4673,21 @@ def check_dropout_ratio(ratio, shape):
     check_dropout_ratio(0.75, shape)
     check_dropout_ratio(0.25, shape)
 
+    nshape = (10, 10, 10, 10)
+    check_dropout_axes(0.25, nshape, axes = (0,))
+    check_dropout_axes(0.25, nshape, axes = (1,))
+    check_dropout_axes(0.25, nshape, axes = (2,))
+    check_dropout_axes(0.25, nshape, axes = (3,))
+    check_dropout_axes(0.25, nshape, axes = (0, 1))
+    check_dropout_axes(0.25, nshape, axes = (0, 2))
+    check_dropout_axes(0.25, nshape, axes = (0, 3))
+    check_dropout_axes(0.25, nshape, axes = (1, 2))
+    check_dropout_axes(0.25, nshape, axes = (1, 3))
+    check_dropout_axes(0.25, nshape, axes = (2, 3))
+    check_dropout_axes(0.25, nshape, axes = (0, 1, 2))
+    check_dropout_axes(0.25, nshape, axes = (0, 2, 3))
+    check_dropout_axes(0.25, nshape, axes = (1, 2, 3))
+
 
 @with_seed()
 def test_scatter_gather_nd():
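The mask-size fix and the axes test above can be modeled in NumPy: the dropout mask is drawn with the compact shape (size 1 along each axis in `axes`) and broadcast over the input, which is exactly why the RNG must fill `mask.Size()` elements rather than `out.Size()`. A hedged sketch (not the operator's code):

```python
import numpy as np

# Illustrative model of Dropout with shared axes: the mask has the compact
# shape and broadcasts, so slices along a dropped axis all see the same mask.
def dropout_axes(x, p_keep, axes, rng):
    mask_shape = tuple(1 if i in axes else s for i, s in enumerate(x.shape))
    mask = (rng.random(mask_shape) < p_keep) / p_keep   # inverted-dropout scaling
    return x * mask                                     # broadcasts along `axes`

rng = np.random.default_rng(0)
y = dropout_axes(np.ones((4, 5)), 0.5, axes=(0,), rng=rng)
# all rows are identical because axis 0 shares a single mask row
```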


 




[GitHub] szha commented on issue #10030: Fix bug for Dropout with axes, also adding unit test

2018-03-07 Thread GitBox
szha commented on issue #10030: Fix bug for Dropout with axes, also adding unit 
test
URL: https://github.com/apache/incubator-mxnet/pull/10030#issuecomment-371349152
 
 
   Thanks. Let's add a JIRA retroactively when you figure out how.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-mxnet] branch master updated: Fix bug for Dropout with axes, also adding unit test (#10030)

2018-03-07 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 117c509  Fix bug for Dropout with axes, also adding unit test (#10030)
117c509 is described below

commit 117c5095fb57b5e9bae36209e133626311d2b815
Author: Hang Zhang <8041160+zhanghang1...@users.noreply.github.com>
AuthorDate: Wed Mar 7 17:44:24 2018 -0800

Fix bug for Dropout with axes, also adding unit test (#10030)

* fix bug

* add test for dropout with axes
---
 src/operator/nn/dropout-inl.h  |  2 +-
 tests/python/unittest/test_operator.py | 36 ++
 2 files changed, 37 insertions(+), 1 deletion(-)

diff --git a/src/operator/nn/dropout-inl.h b/src/operator/nn/dropout-inl.h
index b57ab45..1af4798 100644
--- a/src/operator/nn/dropout-inl.h
+++ b/src/operator/nn/dropout-inl.h
@@ -259,7 +259,7 @@ class DropoutOp {
 return;
   }
   // initialize the mask
-  LaunchRNG(s, pgen, out.Size(),
+  LaunchRNG(s, pgen, mask.Size(),
   mask.dptr(),
   this->pkeep_);
   // broadcast mul
diff --git a/tests/python/unittest/test_operator.py 
b/tests/python/unittest/test_operator.py
index 1ee14b6..91b8faa 100644
--- a/tests/python/unittest/test_operator.py
+++ b/tests/python/unittest/test_operator.py
@@ -4645,6 +4645,27 @@ def test_dropout():
 exe.backward([mx.nd.ones(shape)], is_train=False)
 assert (exe.grad_arrays[0].asnumpy() == exe.outputs[0].asnumpy()).all()
 
+def get_slice(x, axis, idx):
+ix = ()
+for i in range(x.ndim):
+if i == axis:
+ix += (idx,)
+else:
+ix += (slice(None, None, None),)
+return x[ix]
+
+def check_dropout_axes(ratio, shape, axes):
+compactshape = list(shape)
+for axis in axes:
+compactshape[axis] = 1
+compactx = mx.random.uniform(shape=tuple(compactshape))
+broadcastx = compactx.broadcast_to(shape)
+dropouty = mx.nd.Dropout(broadcastx, p=ratio, axes=axes)
+for axis in axes:
+target = get_slice(dropouty, axis, 0).asnumpy()
+for i in range(1, shape[axis]):
+assert(get_slice(dropouty, axis, i).asnumpy() == target).all()
+
 shape = (100, 100)
 check_dropout_ratio(0.5, shape)
 check_dropout_ratio(0.0, shape)
@@ -4652,6 +4673,21 @@ def test_dropout():
 check_dropout_ratio(0.75, shape)
 check_dropout_ratio(0.25, shape)
 
+nshape = (10, 10, 10, 10)
+check_dropout_axes(0.25, nshape, axes = (0,))
+check_dropout_axes(0.25, nshape, axes = (1,))
+check_dropout_axes(0.25, nshape, axes = (2,))
+check_dropout_axes(0.25, nshape, axes = (3,))
+check_dropout_axes(0.25, nshape, axes = (0, 1))
+check_dropout_axes(0.25, nshape, axes = (0, 2))
+check_dropout_axes(0.25, nshape, axes = (0, 3))
+check_dropout_axes(0.25, nshape, axes = (1, 2))
+check_dropout_axes(0.25, nshape, axes = (1, 3))
+check_dropout_axes(0.25, nshape, axes = (2, 3))
+check_dropout_axes(0.25, nshape, axes = (0, 1, 2))
+check_dropout_axes(0.25, nshape, axes = (0, 2, 3))
+check_dropout_axes(0.25, nshape, axes = (1, 2, 3))
+
 
 @with_seed()
 def test_scatter_gather_nd():

-- 
To stop receiving notification emails like this one, please contact
zhash...@apache.org.
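The one-line change above is the whole fix: with `axes` set, the dropout mask has the compact shape (the dropped axes collapsed to 1) while the output has the full input shape, so the RNG must fill `mask.Size()` elements rather than `out.Size()`. A minimal NumPy sketch of these semantics (illustrative only, not MXNet's implementation; `dropout_with_axes` is a hypothetical helper):

```python
import numpy as np

def dropout_with_axes(x, p, axes, rng):
    """Inverted dropout that shares one mask entry along each axis in `axes`.

    The mask is sampled at the *compact* shape (dropped axes collapsed to 1)
    and broadcast over x -- so the RNG only needs mask.size samples.
    """
    mask_shape = tuple(1 if i in axes else d for i, d in enumerate(x.shape))
    keep = 1.0 - p
    # entries are 0 (dropped) or 1/keep (kept, rescaled)
    mask = (rng.uniform(size=mask_shape) < keep) / keep
    return x * mask  # the "broadcast mul" step from dropout-inl.h

rng = np.random.RandomState(0)
y = dropout_with_axes(np.ones((4, 3, 2)), p=0.5, axes=(1, 2), rng=rng)
# With axes=(1, 2) the mask varies only along axis 0, so all slices
# along the dropped axes are identical.
assert np.array_equal(y[:, 0, :], y[:, 1, :])
```

Sampling at the compact shape and broadcasting is exactly the behavior `check_dropout_axes` verifies: every slice along a dropped axis equals the slice at index 0.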


[GitHub] TaoLv commented on a change in pull request #9918: Update mkldnn to the newest & Add clang build test with mkldnn.

2018-03-07 Thread GitBox
TaoLv commented on a change in pull request #9918: Update mkldnn to the newest 
& Add clang build test with mkldnn.
URL: https://github.com/apache/incubator-mxnet/pull/9918#discussion_r173041509
 
 

 ##
 File path: src/operator/nn/mkldnn/mkldnn_base.cc
 ##
  @@ -237,13 +237,14 @@ mkldnn_memory_format_t GetDefaultFormat(mkldnn::memory::desc desc) {
   case mkldnn_gOIhw16o16i:
   case mkldnn_gIOhw16o16i:
   case mkldnn_gOihw8o:
+  case mkldnn_Goihw8g:
   case mkldnn_gOihw16o:
   case mkldnn_gOhwi8o:
   case mkldnn_gOhwi16o:
   case mkldnn_gOhIw16o4i:
 return mkldnn_goihw;
   default:
-LOG(FATAL) << "Unknown MKLDNN format for 4 dimensions: " << desc.data.format;
+LOG(FATAL) << "Unknown MKLDNN format for 5 dimensions: " << desc.data.format;
 
 Review comment:
   @marcoabreu Sorry for the late response. It seems a little difficult to monitor changes to an `enum` type in the mkldnn package. I have asked the mkldnn team for support and am waiting for their reply. Since the mkldnn version updated in this PR works well with this part of the code, I would like this PR to be merged first, as it addresses the compilation issue on OSX. I will add a unit test for it later if I get support from the mkldnn team. If needed, I can help create a JIRA ticket to track this issue.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] yzhliu opened a new pull request #10031: Sync master with v1.1.0 branch

2018-03-07 Thread GitBox
yzhliu opened a new pull request #10031: Sync master with v1.1.0 branch
URL: https://github.com/apache/incubator-mxnet/pull/10031
 
 
   I have gone through the commits and checked that all the changes on the v1.1.0 branch have been merged into master, except the LICENSE and NEWS below.
   
   But I did see some commits that were pushed into v1.1.0 directly (without a PR against either v1.1.0 or master), thus pinging @eric-haibin-lin for a double check.
   
   The commits:
   ```
   commit 31104c9d4b050883467f45f8bf9a164acb93976f
   Author: Haibin Lin 
   Date:   Tue Feb 6 15:22:53 2018 -0800
   
   Update NEWS.md
   
   
   commit 8b3c9ebb7bb4a9e8ee88e7222a718f7fa1c9a6be (tag: 1.1.0.rc0)
   Author: Haibin Lin 
   Date:   Sat Jan 27 23:23:13 2018 -0800
   
   Update NEWS.md
   
   commit 3ba84d83105bbc8825ac858f4c9cf81f9ca03d18
   Author: Haibin Lin 
   Date:   Sat Jan 27 23:18:11 2018 -0800
   
   Update KEYS
   
   
   commit 9a5819687f16ea7cd611bca7b4bcb809d4186d9d
   Author: Haibin Lin 
   Date:   Sat Jan 27 18:18:35 2018 -0800
   
   update news.md (#191)
   
   * Update NEWS.md
   
   * Update README.md
   
   
   commit 4c17c030208c67b2f68808d12d5a79996cfaf4ba
   Author: Haibin Lin 
   Date:   Sat Jan 27 18:22:50 2018 -0800
   
   Bump 1.1 (#192)
   
   * bump
   
   * also update base.h
   
   * revert website changes
   
   * Update index.html
   ```


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] zheng-da commented on issue #9977: Cpu lstm inference

2018-03-07 Thread GitBox
zheng-da commented on issue #9977: Cpu lstm inference
URL: https://github.com/apache/incubator-mxnet/pull/9977#issuecomment-371345496
 
 
   @Jerryzcn Can you provide the workload for the speed evaluation, so that other people can reproduce the result later if they want?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] zheng-da commented on a change in pull request #9977: Cpu lstm inference

2018-03-07 Thread GitBox
zheng-da commented on a change in pull request #9977: Cpu lstm inference
URL: https://github.com/apache/incubator-mxnet/pull/9977#discussion_r173038522
 
 

 ##
 File path: src/operator/rnn-inl.h
 ##
 @@ -287,8 +507,9 @@ class RNNProp : public OperatorProperty {
 const std::vector _grad,
 const std::vector _data,
 const std::vector _data) const override {
-std::vector dep = {in_data[rnn_enum::kData], in_data[rnn_enum::kParams],
-in_data[rnn_enum::kState], out_data[rnn_enum::kOut], out_grad[rnn_enum::kOut]};
+std::vector dep = {in_data[rnn_enum::kData],
+  in_data[rnn_enum::kParams], in_data[rnn_enum::kState],
+  out_data[rnn_enum::kOut], out_grad[rnn_enum::kOut]};
 
 Review comment:
   The coding style in MXNet allows up to 100 chars per line,
   so the original code is fine.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] aaronmarkham opened a new pull request #58: Redirect via mod_rewrite

2018-03-07 Thread GitBox
aaronmarkham opened a new pull request #58: Redirect via mod_rewrite
URL: https://github.com/apache/incubator-mxnet-site/pull/58
 
 
   ## Description ##
   Even though #57 should have kicked in, I'm still seeing a 404 for:
   http://mxnet.incubator.apache.org/get_started/install.html
   
   This approach does away with meta refresh tags and keeping dead files around. It'll do a proper 301 redirect... assuming mod_rewrite is enabled on the server.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] pengzhao-intel commented on issue #9918: Update mkldnn to the newest & Add clang build test with mkldnn.

2018-03-07 Thread GitBox
pengzhao-intel commented on issue #9918: Update mkldnn to the newest & Add 
clang build test with mkldnn.
URL: https://github.com/apache/incubator-mxnet/pull/9918#issuecomment-371342064
 
 
   @marcoabreu @KellenSunderland @cjolivier01 please help confirm the final changes.
   This issue leads to a build failure on OSX, so I think we need to merge it.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] zhanghang1989 opened a new pull request #10030: Fix bug for Dropout with axes, also adding unit test

2018-03-07 Thread GitBox
zhanghang1989 opened a new pull request #10030: Fix bug for Dropout with axes, 
also adding unit test
URL: https://github.com/apache/incubator-mxnet/pull/10030
 
 
   ## Description ##
   (Brief description on what this PR is about)
   
   ## Checklist ##
   ### Essentials ###
   - [ ] Passed code style checking (`make lint`)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, the source of the dataset, expected performance on the test set, and a reference to the original paper if applicable
   - [ ] To my best knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [ ] Feature1, tests, (and when applicable, API doc)
   - [ ] Feature2, tests, (and when applicable, API doc)
   
   ## Comments ##
   - If this change is a backward incompatible change, why must this change be 
made.
   - Interesting edge cases to note here
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] sxjscience commented on issue #10029: Layer Norm

2018-03-07 Thread GitBox
sxjscience commented on issue #10029: Layer Norm
URL: https://github.com/apache/incubator-mxnet/pull/10029#issuecomment-371330053
 
 
   @fhieber @tdomhan You could try this after it gets merged.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] sxjscience opened a new pull request #10029: Layer Norm

2018-03-07 Thread GitBox
sxjscience opened a new pull request #10029: Layer Norm
URL: https://github.com/apache/incubator-mxnet/pull/10029
 
 
   ## Description ##
   1. Directly implement layer normalization in C++. The speed and memory cost are both better than stacking the broadcast/reduce OPs. Solves https://github.com/apache/incubator-mxnet/issues/9950
   2. Add LayerNorm in Gluon.
   3. Fix the doc of InstanceNorm. In InstanceNorm, the real axes used to normalize the input tensor are all axes excluding the 0th axis and the given axis.
   4. Fix the doc of BatchNorm: the inverse std, instead of the var, is set as the output. Should fix https://github.com/apache/incubator-mxnet/issues/9216
   ## Checklist ##
   ### Essentials ###
   - [x] Passed code style checking (`make lint`)
   - [x] Changes are complete (i.e. I finished coding on this PR)
   - [x] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [x] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, the source of the dataset, expected performance on the test set, and a reference to the original paper if applicable
   - [x] To my best knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [x] LayerNorm in C++/Gluon, tests
   - [x] Fix Doc of InstanceNorm
   - [x] Fix Doc of BatchNorm
   
   ## Comments ##
   We can improve the speed further by fusing the operators. This is left as 
future work.
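
As a reference for the computation the new operator performs, here is the standard layer-normalization formula in NumPy (an illustrative sketch only; the PR's fused C++ kernel computes the same thing with better speed and memory):

```python
import numpy as np

def layer_norm(x, gamma, beta, axis=-1, eps=1e-5):
    """Normalize x to zero mean / unit variance along `axis` (per sample),
    then apply the learned scale (gamma) and shift (beta)."""
    mean = x.mean(axis=axis, keepdims=True)
    var = x.var(axis=axis, keepdims=True)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta

x = np.arange(12.0).reshape(3, 4)
out = layer_norm(x, gamma=np.ones(4), beta=np.zeros(4))
# Each row (sample) is normalized independently, unlike BatchNorm,
# which normalizes across the batch dimension.
assert np.allclose(out.mean(axis=-1), 0.0, atol=1e-6)
```

Because the statistics are per-sample rather than per-batch, layer norm behaves identically at training and inference time.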


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] szha commented on issue #10028: fix bug for dropout with axes

2018-03-07 Thread GitBox
szha commented on issue #10028: fix bug for dropout with axes
URL: https://github.com/apache/incubator-mxnet/pull/10028#issuecomment-371324917
 
 
   Please post the fix and tests together.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] szha closed pull request #10028: fix bug for dropout with axes

2018-03-07 Thread GitBox
szha closed pull request #10028: fix bug for dropout with axes
URL: https://github.com/apache/incubator-mxnet/pull/10028
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/src/operator/nn/dropout-inl.h b/src/operator/nn/dropout-inl.h
index b57ab45891e..1af4798d1ce 100644
--- a/src/operator/nn/dropout-inl.h
+++ b/src/operator/nn/dropout-inl.h
@@ -259,7 +259,7 @@ class DropoutOp {
 return;
   }
   // initialize the mask
-  LaunchRNG(s, pgen, out.Size(),
+  LaunchRNG(s, pgen, mask.Size(),
   mask.dptr(),
   this->pkeep_);
   // broadcast mul


 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] marcoabreu commented on issue #9995: CI docker revamp; Add Jetson, Raspberry and CentOS 7 build [MXNET-42][MXNET-43][MXNET-44][MXNET-57]

2018-03-07 Thread GitBox
marcoabreu commented on issue #9995: CI docker revamp; Add Jetson, Raspberry 
and CentOS 7 build [MXNET-42][MXNET-43][MXNET-44][MXNET-57]
URL: https://github.com/apache/incubator-mxnet/pull/9995#issuecomment-371318315
 
 
   @szha @sergeykolychev @tqchen @cjolivier01 @eric-haibin-lin can you please 
review?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] alihashmiii commented on issue #8936: Issue on MXNet R installation with GPU support on Windows

2018-03-07 Thread GitBox
alihashmiii commented on issue #8936: Issue on MXNet R installation with GPU 
support on Windows
URL: 
https://github.com/apache/incubator-mxnet/issues/8936#issuecomment-371310815
 
 
   I am facing the same problem here. Why isn't GPU support easier for R MXNet? Python GPU support was so much easier.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services




[incubator-mxnet] branch master updated: add name (#10007)

2018-03-07 Thread cjolivier01
This is an automated email from the ASF dual-hosted git repository.

cjolivier01 pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 098c9d9  add name (#10007)
098c9d9 is described below

commit 098c9d9d56defd62ab4d0a1930506935fa4647c6
Author: Chris Olivier 
AuthorDate: Wed Mar 7 14:33:32 2018 -0800

add name (#10007)

* add name

* Update CONTRIBUTORS.md
---
 CONTRIBUTORS.md | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/CONTRIBUTORS.md b/CONTRIBUTORS.md
index d9f1273..8079ce4 100644
--- a/CONTRIBUTORS.md
+++ b/CONTRIBUTORS.md
@@ -39,8 +39,10 @@ The committers are the granted write access to the project.
   - Zixuan is one of major maintainers of mxnet scala package.
 * [Yuan Tang](https://github.com/terrytangyuan)
   - Yuan is one of major maintainers of mxnet scala package.
+* [Chris Olivier](https://github.com/cjolivier01)
 * [Sergey Kolychev](https://github.com/sergeykolychev)
   - Sergey is original author and current maintainer of Perl5 interface.
+* [Naveen Swamy](https://github.com/nswamy)
 
 ### Become a Committer
 MXNet is a opensource project and we are actively looking for new committers

-- 
To stop receiving notification emails like this one, please contact
cjolivie...@apache.org.


[GitHub] cjolivier01 closed pull request #10007: add name

2018-03-07 Thread GitBox
cjolivier01 closed pull request #10007: add name
URL: https://github.com/apache/incubator-mxnet/pull/10007
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/CONTRIBUTORS.md b/CONTRIBUTORS.md
index d9f12735525..8079ce41911 100644
--- a/CONTRIBUTORS.md
+++ b/CONTRIBUTORS.md
@@ -39,8 +39,10 @@ The committers are the granted write access to the project.
   - Zixuan is one of major maintainers of mxnet scala package.
 * [Yuan Tang](https://github.com/terrytangyuan)
   - Yuan is one of major maintainers of mxnet scala package.
+* [Chris Olivier](https://github.com/cjolivier01)
 * [Sergey Kolychev](https://github.com/sergeykolychev)
   - Sergey is original author and current maintainer of Perl5 interface.
+* [Naveen Swamy](https://github.com/nswamy)
 
 ### Become a Committer
 MXNet is a opensource project and we are actively looking for new committers


 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] marcoabreu commented on a change in pull request #9963: Onnx Module to import onnx models into mxnet

2018-03-07 Thread GitBox
marcoabreu commented on a change in pull request #9963: Onnx Module to import 
onnx models into mxnet
URL: https://github.com/apache/incubator-mxnet/pull/9963#discussion_r172999228
 
 

 ##
 File path: tests/python/unittest/test_layers.py
 ##
 @@ -50,5 +50,29 @@ def test_reduce_mean(self):
 numpy_op = np.mean(input1, axis=(1, 0), keepdims=True)
 npt.assert_almost_equal(output, numpy_op, decimal=5)
 
+def test_reduce_min(self):
+"""Test for ReduceMin operator"""
+node_def = helper.make_node("ReduceMin", ["input1"], ["output"], axes=[1, 0], keepdims=1)
+input1 = self._random_array([3, 10])
 
 Review comment:
   Please make use of the @with_seed decorator for all tests using randomized 
data


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] zhanghang1989 opened a new pull request #10028: fix bug for dropout with axes

2018-03-07 Thread GitBox
zhanghang1989 opened a new pull request #10028: fix bug for dropout with axes
URL: https://github.com/apache/incubator-mxnet/pull/10028
 
 
   ## Description ##
   fix bug for a previous PR https://github.com/apache/incubator-mxnet/pull/9931
   
   ## Checklist ##
   ### Essentials ###
   - [ ] Passed code style checking (`make lint`)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, the source of the dataset, expected performance on the test set, and a reference to the original paper if applicable
   - [ ] To my best knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [ ] Feature1, tests, (and when applicable, API doc)
   - [ ] Feature2, tests, (and when applicable, API doc)
   
   ## Comments ##
   - If this change is a backward incompatible change, why must this change be 
made.
   - Interesting edge cases to note here
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] OferRosenberg commented on issue #8917: Build using android.arm64 android.armv7 dockers fails, while arm64 and arm7 dockers work

2018-03-07 Thread GitBox
OferRosenberg commented on issue #8917: Build using android.arm64 android.armv7 
dockers fails, while arm64 and arm7 dockers work
URL: 
https://github.com/apache/incubator-mxnet/issues/8917#issuecomment-371295031
 
 
   Thanks! Checked it out and it works perfectly.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] OferRosenberg closed issue #8917: Build using android.arm64 android.armv7 dockers fails, while arm64 and arm7 dockers work

2018-03-07 Thread GitBox
OferRosenberg closed issue #8917: Build using android.arm64 android.armv7 
dockers fails, while arm64 and arm7 dockers work
URL: https://github.com/apache/incubator-mxnet/issues/8917
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] zhanghang1989 commented on issue #10027: Fix bug/typo for Dropout using axes

2018-03-07 Thread GitBox
zhanghang1989 commented on issue #10027: Fix bug/typo for Dropout using axes
URL: https://github.com/apache/incubator-mxnet/pull/10027#issuecomment-371293067
 
 
   will create another one.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] zhanghang1989 closed pull request #10027: Fix bug/typo for Dropout using axes

2018-03-07 Thread GitBox
zhanghang1989 closed pull request #10027: Fix bug/typo for Dropout using axes
URL: https://github.com/apache/incubator-mxnet/pull/10027
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/dmlc-core b/dmlc-core
index 282b98663f5..a1fd6834c0c 16
--- a/dmlc-core
+++ b/dmlc-core
@@ -1 +1 @@
-Subproject commit 282b98663f59df6b26f906580af610dea3046f22
+Subproject commit a1fd6834c0cd3fd2cc586deec2dc24194924cada
diff --git a/src/operator/nn/dropout-inl.h b/src/operator/nn/dropout-inl.h
index b57ab45891e..1af4798d1ce 100644
--- a/src/operator/nn/dropout-inl.h
+++ b/src/operator/nn/dropout-inl.h
@@ -259,7 +259,7 @@ class DropoutOp {
 return;
   }
   // initialize the mask
-  LaunchRNG(s, pgen, out.Size(),
+  LaunchRNG(s, pgen, mask.Size(),
   mask.dptr(),
   this->pkeep_);
   // broadcast mul


 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-mxnet-site] 01/01: Merge pull request #57 from thinksanky/fix_getting_started_install_redirection

2018-03-07 Thread liuyizhi
This is an automated email from the ASF dual-hosted git repository.

liuyizhi pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git

commit 465896d944c1020e80c67e33d5565893b2c18e76
Merge: 7083943 3d841e3
Author: Yizhi Liu 
AuthorDate: Wed Mar 7 12:59:33 2018 -0800

Merge pull request #57 from 
thinksanky/fix_getting_started_install_redirection

fixed redirection of the install page

 get_started/index.html | 1 +
 1 file changed, 1 insertion(+)

-- 
To stop receiving notification emails like this one, please contact
liuyi...@apache.org.


[GitHub] yzhliu closed pull request #57: fixed redirection of the install page

2018-03-07 Thread GitBox
yzhliu closed pull request #57: fixed redirection of the install page
URL: https://github.com/apache/incubator-mxnet-site/pull/57
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/get_started/index.html b/get_started/index.html
index a29cc0544..6c4e1f4d4 100644
--- a/get_started/index.html
+++ b/get_started/index.html
@@ -3,6 +3,7 @@
 
 
 
+
 
 
  ? mxnet  documentation


 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-mxnet-site] branch asf-site updated (7083943 -> 465896d)

2018-03-07 Thread liuyizhi
This is an automated email from the ASF dual-hosted git repository.

liuyizhi pushed a change to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git.


from 7083943  Merge pull request #56 from 
thinksanky/touched_file_to_refresh_site
 add 3d841e3  fixed redirection of the install page
 new 465896d  Merge pull request #57 from 
thinksanky/fix_getting_started_install_redirection

The 1 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 get_started/index.html | 1 +
 1 file changed, 1 insertion(+)

-- 
To stop receiving notification emails like this one, please contact
liuyi...@apache.org.


[GitHub] thinksanky opened a new pull request #57: fixed redirection of the install page

2018-03-07 Thread GitBox
thinksanky opened a new pull request #57: fixed redirection of the install page
URL: https://github.com/apache/incubator-mxnet-site/pull/57
 
 
   ## Description ##
   - Added a redirect to the install page to mitigate the issue.




[GitHub] Jerryzcn commented on a change in pull request #9977: Cpu lstm inference

2018-03-07 Thread GitBox
Jerryzcn commented on a change in pull request #9977: Cpu lstm inference
URL: https://github.com/apache/incubator-mxnet/pull/9977#discussion_r172979255
 
 

 ##
 File path: src/operator/rnn-inl.h
 ##
 @@ -287,8 +507,9 @@ class RNNProp : public OperatorProperty {
 const std::vector<int> &out_grad,
 const std::vector<int> &in_data,
 const std::vector<int> &out_data) const override {
-std::vector<int> dep = {in_data[rnn_enum::kData], in_data[rnn_enum::kParams],
-in_data[rnn_enum::kState], out_data[rnn_enum::kOut], out_grad[rnn_enum::kOut]};
+std::vector<int> dep = {in_data[rnn_enum::kData],
+  in_data[rnn_enum::kParams], in_data[rnn_enum::kState],
+  out_data[rnn_enum::kOut], out_grad[rnn_enum::kOut]};
 
 Review comment:
   It exceeds the 80-character-per-line limit.




[GitHub] Jerryzcn commented on a change in pull request #9977: Cpu lstm inference

2018-03-07 Thread GitBox
Jerryzcn commented on a change in pull request #9977: Cpu lstm inference
URL: https://github.com/apache/incubator-mxnet/pull/9977#discussion_r172978477
 
 

 ##
 File path: src/operator/rnn-inl.h
 ##
 @@ -78,10 +82,12 @@ inline int rnn_param_size(int layerNum,
   int size = rnn_single_param_size(inputSize, hiddenSize, mode);
   // get size of remaining layers
   if (bidirectional) {
-size += (layerNum - 1) * rnn_single_param_size(2 * hiddenSize, hiddenSize, mode);
+size += (layerNum - 1) * rnn_single_param_size(2 * hiddenSize,
+hiddenSize, mode);
 size *= 2;
   } else {
-size += (layerNum - 1) * rnn_single_param_size(hiddenSize, hiddenSize, mode);
+size += (layerNum - 1) * rnn_single_param_size(hiddenSize, hiddenSize,
+mode);
 
 Review comment:
   yes




[GitHub] zhanghang1989 commented on issue #10027: Fix bug/typo for Dropout using axes

2018-03-07 Thread GitBox
zhanghang1989 commented on issue #10027: Fix bug/typo for Dropout using axes
URL: https://github.com/apache/incubator-mxnet/pull/10027#issuecomment-371269252
 
 
   @szha this should fix the problem of dropout using axes.




[GitHub] zhanghang1989 opened a new pull request #10027: Fix bug/typo for Dropout using axes

2018-03-07 Thread GitBox
zhanghang1989 opened a new pull request #10027: Fix bug/typo for Dropout using 
axes
URL: https://github.com/apache/incubator-mxnet/pull/10027
 
 
   ## Description ##
   Fix bug for a recent PR https://github.com/apache/incubator-mxnet/pull/9931
   
   ## Checklist ##
   ### Essentials ###
   - [ ] Passed code style checking (`make lint`)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain the what the example does, 
the source of the dataset, expected performance on test set and reference to 
the original paper if applicable
   - [ ] To my best knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [ ] Feature1, tests, (and when applicable, API doc)
   - [ ] Feature2, tests, (and when applicable, API doc)
   
   ## Comments ##
   - If this change is a backward incompatible change, why must this change be 
made.
   - Interesting edge cases to note here
   




[GitHub] zheng-da commented on a change in pull request #9977: Cpu lstm inference

2018-03-07 Thread GitBox
zheng-da commented on a change in pull request #9977: Cpu lstm inference
URL: https://github.com/apache/incubator-mxnet/pull/9977#discussion_r172948107
 
 

 ##
 File path: src/operator/rnn-inl.h
 ##
 @@ -144,15 +149,224 @@ class RNNOp : public Operator {
 const std::vector<OpReqType> &req,
 const std::vector<TBlob> &in_grad,
 const std::vector<TBlob> &aux_args) {
-using namespace mshadow;
-using namespace mshadow::expr;
 // TODO(sbodenstein): add MShadow implementation
   }
 
  private:
   RNNParam param_;
 };  // class RNNOp
 
+template<typename DType>
+class RNNOp<cpu, DType> : public Operator {
+ public:
+  explicit RNNOp(RNNParam param) {
+this->param_ = param;
+// RNN Mode
+switch (param_.mode) {
+  case rnn_enum::kLstm:
+break;
+  default:
+LOG(FATAL) << "only LSTM is implmented on CPU";
+}
+if (param_.mode == rnn_enum::kLstm)
+  param_.lstm_q_ = true;
+else
+  param_.lstm_q_ = false;
 
 Review comment:
   it seems this check can be merged to the switch case statement above.




[GitHub] zheng-da commented on a change in pull request #9977: Cpu lstm inference

2018-03-07 Thread GitBox
zheng-da commented on a change in pull request #9977: Cpu lstm inference
URL: https://github.com/apache/incubator-mxnet/pull/9977#discussion_r172710204
 
 

 ##
 File path: src/operator/rnn-inl.h
 ##
 @@ -78,10 +82,12 @@ inline int rnn_param_size(int layerNum,
   int size = rnn_single_param_size(inputSize, hiddenSize, mode);
   // get size of remaining layers
   if (bidirectional) {
-size += (layerNum - 1) * rnn_single_param_size(2 * hiddenSize, hiddenSize, mode);
+size += (layerNum - 1) * rnn_single_param_size(2 * hiddenSize,
+hiddenSize, mode);
 size *= 2;
   } else {
-size += (layerNum - 1) * rnn_single_param_size(hiddenSize, hiddenSize, mode);
+size += (layerNum - 1) * rnn_single_param_size(hiddenSize, hiddenSize,
+mode);
 
 Review comment:
   you are just reformatting the code here?




[GitHub] zheng-da commented on a change in pull request #9977: Cpu lstm inference

2018-03-07 Thread GitBox
zheng-da commented on a change in pull request #9977: Cpu lstm inference
URL: https://github.com/apache/incubator-mxnet/pull/9977#discussion_r172944904
 
 

 ##
 File path: src/operator/rnn-inl.h
 ##
 @@ -287,8 +507,9 @@ class RNNProp : public OperatorProperty {
 const std::vector<int> &out_grad,
 const std::vector<int> &in_data,
 const std::vector<int> &out_data) const override {
-std::vector<int> dep = {in_data[rnn_enum::kData], in_data[rnn_enum::kParams],
-in_data[rnn_enum::kState], out_data[rnn_enum::kOut], out_grad[rnn_enum::kOut]};
+std::vector<int> dep = {in_data[rnn_enum::kData],
+  in_data[rnn_enum::kParams], in_data[rnn_enum::kState],
+  out_data[rnn_enum::kOut], out_grad[rnn_enum::kOut]};
 
 Review comment:
   I'm not sure why you want to change the code in this function. It seems you 
just reorganized the code a little bit.




[GitHub] zheng-da commented on a change in pull request #9977: Cpu lstm inference

2018-03-07 Thread GitBox
zheng-da commented on a change in pull request #9977: Cpu lstm inference
URL: https://github.com/apache/incubator-mxnet/pull/9977#discussion_r172712476
 
 

 ##
 File path: src/operator/rnn-inl.h
 ##
 @@ -144,15 +149,224 @@ class RNNOp : public Operator {
 const std::vector<OpReqType> &req,
 const std::vector<TBlob> &in_grad,
 const std::vector<TBlob> &aux_args) {
-using namespace mshadow;
-using namespace mshadow::expr;
 // TODO(sbodenstein): add MShadow implementation
   }
 
  private:
   RNNParam param_;
 };  // class RNNOp
 
+template<typename DType>
+class RNNOp<cpu, DType> : public Operator {
+ public:
+  explicit RNNOp(RNNParam param) {
+this->param_ = param;
+// RNN Mode
+switch (param_.mode) {
+  case rnn_enum::kLstm:
+break;
+  default:
+LOG(FATAL) << "only LSTM is implmented on CPU";
+}
+if (param_.mode == rnn_enum::kLstm)
+  param_.lstm_q_ = true;
+else
+  param_.lstm_q_ = false;
+  }
+
+  virtual void Forward(const OpContext &ctx,
+   const std::vector<TBlob> &in_data,
+   const std::vector<OpReqType> &req,
+   const std::vector<TBlob> &out_data,
+   const std::vector<TBlob> &aux_args) {
+// Layout TNC
+
+size_t in_expected = param_.lstm_q_ ? 4 : 3;
+size_t out_expected = param_.lstm_q_ ? 3 : 2;
+
+if (!param_.state_outputs)
+  LOG(FATAL) << "no state outputs is currently not supported for cpu.";
+
+CHECK_EQ(req[rnn_enum::kOut], kWriteTo);
+CHECK_EQ(in_data.size(), in_expected);
+CHECK_EQ(out_data.size(), out_expected);
+
+mshadow::Stream<cpu> *s = ctx.get_stream<cpu>();
+// get input + output tensors
+// w layout i2h_w, h2h_w, i2h_b, h2h_b
+Tensor<cpu, 3, DType> x =
+in_data[rnn_enum::kData].get<cpu, 3, DType>(s);  // TNC
+Tensor<cpu, 2, DType> w = in_data[rnn_enum::kParams].get<cpu, 2, DType>(s);
+Tensor<cpu, 3, DType> hx =
+in_data[rnn_enum::kState].get<cpu, 3, DType>(s);  // LNC
+Tensor<cpu, 3, DType> y =
+out_data[rnn_enum::kOut].get<cpu, 3, DType>(s);  // TNC
+int64_t seq_len = x.shape_[0];
+int64_t num_layers = hx.shape_[0];
+int64_t batch_size = x.shape_[1];
+int64_t h_channel = hx.shape_[2];
+int64_t in_channel = x.shape_[2];
+Tensor<cpu, 2, DType> x_flatten = in_data[rnn_enum::kData]
+  .get_with_shape<cpu, 2, DType>(
+  mshadow::Shape2(seq_len * batch_size, in_channel), s);  // (T*N)C
+Tensor<cpu, 2, DType> y_flatten = out_data[rnn_enum::kOut]
+  .get_with_shape<cpu, 2, DType>(
+  mshadow::Shape2(
+  y.shape_[0] * y.shape_[1], y.shape_[2]), s);  // (T*N)C
+
+CHECK_EQ(x.CheckContiguous(), true);
+CHECK_EQ(w.CheckContiguous(), true);
+CHECK_EQ(hx.CheckContiguous(), true);
+CHECK_EQ(y.CheckContiguous(), true);
+
+if (ctx.is_train)
+  LOG(FATAL) << "only inference mode is available for cpu at the moment.";
 
 Review comment:
   you can do CHECK(!ctx.is_train) << "..."




[GitHub] zhanghang1989 commented on issue #9989: Cannot train example gluon style transfer

2018-03-07 Thread GitBox
zhanghang1989 commented on issue #9989: Cannot train example gluon style 
transfer
URL: 
https://github.com/apache/incubator-mxnet/issues/9989#issuecomment-371252011
 
 
   @piiswrong Need some help with API changes. 
   In the style transfer example, the `set_data()` function was okay for 
back-propagation, but the code recently broke. Did a recent update break 
this? Any solutions?
   - Error message:
   ```
   mxnet.base.MXNetError: [19:08:27] src/imperative/imperative.cc:192: Check 
failed: AGInfo::IsNone(*(outputs [i])) Assigning to NDArrays that are already 
in a computational graph will cause undefined behavior when evaluating 
gradients. Please call backward first to clear the graph or do this out side of 
a record section. 
   ```
   - Link to the code:
   
https://github.com/apache/incubator-mxnet/blob/master/example/gluon/style_transfer/net.py#L252




[GitHub] sxjscience commented on issue #10016: MxNet hangs up during bind.

2018-03-07 Thread GitBox
sxjscience commented on issue #10016: MxNet hangs up during bind.
URL: 
https://github.com/apache/incubator-mxnet/issues/10016#issuecomment-371244313
 
 
   Hi @kandoiNikhil , what kind of GPU are you using? Sometimes it's due to the 
JIT compilation. You can cache the result of the JIT compilation following the 
guide in 
https://devblogs.nvidia.com/cuda-pro-tip-understand-fat-binaries-jit-caching/. 
For example, you may set `CUDA_CACHE_MAXSIZE` to a large value and set 
`CUDA_CACHE_PATH` to a position that contains enough storage.




[GitHub] zhanghang1989 commented on issue #9989: Cannot train example gluon style transfer

2018-03-07 Thread GitBox
zhanghang1989 commented on issue #9989: Cannot train example gluon style 
transfer
URL: 
https://github.com/apache/incubator-mxnet/issues/9989#issuecomment-371242715
 
 
   Hi @samhodge , you shouldn't move L82 out of the autograd.record scope, because 
the gradient of the siamese network won't back-propagate. I am testing the code and 
will get back to you.




[GitHub] sergeykolychev commented on a change in pull request #9988: [Perl] Sparse feature.

2018-03-07 Thread GitBox
sergeykolychev commented on a change in pull request #9988: [Perl] Sparse 
feature.
URL: https://github.com/apache/incubator-mxnet/pull/9988#discussion_r172942895
 
 

 ##
 File path: perl-package/AI-MXNet/lib/AI/MXNet/NDArray/Sparse.pm
 ##
 @@ -0,0 +1,1342 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+package AI::MXNet::NDArray::Sparse;
+use strict;
+use warnings;
+use AI::MXNet::Base;
+use AI::MXNet::Function::Parameters;
+use Mouse;
+extends 'AI::MXNet::NDArray';
+
+=head1 NAME
+
+AI::MXNet::NDArray::Sparse - Sparse NDArray API of MXNet
+=cut
+
+=head1 DESCRIPTION
+
+The base class of an NDArray stored in a sparse storage format.
+See AI::MXNet::NDArray::CSR and AI::MXNet::NDArray::RowSparse for more 
details.
+=cut
+
+method _new_alloc_handle(
+Stype  $stype,
+Shape  $shape,
+AI::MXNet::Context   $ctx,
+Bool $delay_alloc,
+Dtype  $dtype,
+AuxTypes $aux_types,
+Maybe[ArrayRef[Shape]]   $aux_shapes=
+)
+{
+confess("only int64 is supported for aux types")
+if (grep { $_ ne 'int64' } @$aux_types);
+my $aux_type_ids = [map { DTYPE_STR_TO_MX->{$_} } @$aux_types];
+$aux_shapes //= [map { [0] } @$aux_types];
+my $aux_shape_lens = [map { scalar(@$_) } @$aux_shapes];
+@$aux_shapes = map { @$_ } @$aux_shapes;
+my $num_aux = @{ $aux_types };
+my $handle = check_call(
+AI::MXNetCAPI::NDArrayCreateSparseEx(
+STORAGE_TYPE_STR_TO_ID->{$stype},
+$shape,
+scalar(@$shape),
+$ctx->device_type_id,
+$ctx->device_id,
+$delay_alloc,
+DTYPE_STR_TO_MX->{$dtype},
+scalar(@$aux_types),
+$aux_type_ids,
+$aux_shape_lens,
+$aux_shapes
+)
+);
+}
+
+method _class_name()
+{
+my $class = ref $self || $self;
+$class;
+}
+
+sub not_implemented { confess "Not implemented" }
 
 Review comment:
   Thanks man, there's a lot about Perl I still don't know.




[GitHub] marcoabreu opened a new issue #10026: MXNET_MKLDNN_DEBUG=1 produces errors

2018-03-07 Thread GitBox
marcoabreu opened a new issue #10026: MXNET_MKLDNN_DEBUG=1 produces errors
URL: https://github.com/apache/incubator-mxnet/issues/10026
 
 
   
http://jenkins.mxnet-ci.amazon-ml.com/blue/organizations/jenkins/incubator-mxnet/detail/PR-9995/32/pipeline/483
   
   Setting ``MXNET_MKLDNN_DEBUG=1`` as an environment variable will produce the 
following error in tests. This happens across all configurations.
   
   ```
   ==
   
   ERROR: test_gluon_model_zoo.test_models
   
   --
   
   Traceback (most recent call last):
   
 File "/usr/local/lib/python2.7/dist-packages/nose/case.py", line 197, in 
runTest
   
   self.test(*self.arg)
   
 File "/work/mxnet/tests/python/unittest/common.py", line 157, in test_new
   
   orig_test(*args, **kwargs)
   
 File "/work/mxnet/tests/python/unittest/test_gluon_model_zoo.py", line 50, 
in test_models
   
   model(mx.nd.random.uniform(shape=data_shape)).wait_to_read()
   
 File "/work/mxnet/python/mxnet/ndarray/ndarray.py", line 1650, in 
wait_to_read
   
   check_call(_LIB.MXNDArrayWaitToRead(self.handle))
   
 File "/work/mxnet/python/mxnet/base.py", line 149, in check_call
   
   raise MXNetError(py_str(_LIB.MXGetLastError()))
   
   MXNetError: [17:10:12] src/operator/nn/mkldnn/mkldnn_base.cc:395: Check 
failed: similar 
   
   
   
   Stack trace returned 10 entries:
   
   [bt] (0) 
/work/mxnet/python/mxnet/../../lib/libmxnet.so(dmlc::StackTrace[abi:cxx11]()+0x5b)
 [0x7f06ccf3745b]
   
   [bt] (1) 
/work/mxnet/python/mxnet/../../lib/libmxnet.so(dmlc::LogMessageFatal::~LogMessageFatal()+0x28)
 [0x7f06ccf38478]
   
   [bt] (2) 
/work/mxnet/python/mxnet/../../lib/libmxnet.so(mxnet::OpCheck::Run(std::function > const&, std::vector const&, std::vector const&)>, nnvm::NodeAttrs const&, 
mxnet::OpContext const&, std::vector const&, std::vector const&, std::vector const&)+0x3ca8) [0x7f06ccf54198]
   
   [bt] (3) /work/mxnet/python/mxnet/../../lib/libmxnet.so(+0x2a910d9) 
[0x7f06cf55a0d9]
   
   [bt] (4) 
/work/mxnet/python/mxnet/../../lib/libmxnet.so(std::_Function_handler > const&, std::vector const&, std::vector const&)> const&, nnvm::Op const*, 
nnvm::NodeAttrs const&, mxnet::Context const&, std::vector > const&, std::vector > const&, std::vector const&, std::vector > const&, std::vector > const&, std::vector 
const&)::{lambda(mxnet::RunContext)#1}>::_M_invoke(std::_Any_data const&, 
mxnet::RunContext&&)+0x7c) [0x7f06cf77608c]
   
   [bt] (5) /work/mxnet/python/mxnet/../../lib/libmxnet.so(+0x3148fdb) 
[0x7f06cfc11fdb]
   
   [bt] (6) 
/work/mxnet/python/mxnet/../../lib/libmxnet.so(mxnet::engine::ThreadedEngine::ExecuteOprBlock(mxnet::RunContext,
 mxnet::engine::OprBlock*)+0xcb5) [0x7f06cfc0b1a5]
   
   [bt] (7) 
/work/mxnet/python/mxnet/../../lib/libmxnet.so(std::_Function_handler), 
mxnet::engine::ThreadedEnginePerDevice::PushToExecute(mxnet::engine::OprBlock*, 
bool)::{lambda()#1}::operator()() 
const::{lambda(std::shared_ptr)#1}>::_M_invoke(std::_Any_data
 const&, std::shared_ptr&&)+0xd9) [0x7f06cfc1d309]
   
   [bt] (8) 
/work/mxnet/python/mxnet/../../lib/libmxnet.so(std::thread::_Impl (std::shared_ptr)> 
>::_M_run()+0x4a) [0x7f06cfc1c43a]
   
   [bt] (9) /usr/lib/x86_64-linux-gnu/libstdc++.so.6(+0xb8c80) [0x7f06d7ca4c80]
   
   
   
   
   
    >> begin captured stdout << -
   
   ResNetV1(
   
 (features): HybridSequential(
   
   (0): Conv2D(None -> 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 
3), bias=False)
   
   (1): BatchNorm(fix_gamma=False, use_global_stats=False, eps=1e-05, 
momentum=0.9, axis=1, in_channels=None)
   
   (2): Activation(relu)
   
   (3): MaxPool2D(size=(3, 3), stride=(2, 2), padding=(1, 1), 
ceil_mode=False)
   
   (4): HybridSequential(
   
 (0): BasicBlockV1(
   
   (body): HybridSequential(
   
 (0): Conv2D(64 -> 64, kernel_size=(3, 3), stride=(1, 1), 
padding=(1, 1), bias=False)
   
 (1): BatchNorm(fix_gamma=False, use_global_stats=False, eps=1e-05, 
momentum=0.9, axis=1, in_channels=None)
   
 (2): Activation(relu)
   
 (3): Conv2D(64 -> 64, kernel_size=(3, 3), stride=(1, 1), 
padding=(1, 1), bias=False)
   
 (4): BatchNorm(fix_gamma=False, use_global_stats=False, 

[GitHub] aaronmarkham commented on issue #10013: update on setting up Scala with MXNet and the IntelliJ IDE

2018-03-07 Thread GitBox
aaronmarkham commented on issue #10013: update on setting up Scala with MXNet 
and the IntelliJ IDE
URL: https://github.com/apache/incubator-mxnet/pull/10013#issuecomment-371228893
 
 
   https://issues.apache.org/jira/projects/MXNET/issues/MXNET-48




[GitHub] aaronmarkham commented on issue #10010: Documentation for build_version_doc scripts folder

2018-03-07 Thread GitBox
aaronmarkham commented on issue #10010: Documentation for build_version_doc 
scripts folder
URL: https://github.com/apache/incubator-mxnet/pull/10010#issuecomment-371226453
 
 
   https://issues.apache.org/jira/projects/MXNET/issues/MXNET-47




[GitHub] zheng-da commented on a change in pull request #10021: [WIP] [MXNET-33] Mkldnn pooling convention crash

2018-03-07 Thread GitBox
zheng-da commented on a change in pull request #10021: [WIP] [MXNET-33] Mkldnn 
pooling convention crash
URL: https://github.com/apache/incubator-mxnet/pull/10021#discussion_r172915279
 
 

 ##
 File path: src/operator/nn/mkldnn/mkldnn_pooling-inl.h
 ##
 @@ -92,6 +92,8 @@ inline bool SupportMKLDNNPooling(const PoolingParam ,
 
   if (param.pooling_convention == pool_enum::kValid)
 return true;
+  else
+return false;
 
 Review comment:
   maybe use "#if 0" to comment out the checks below?




[GitHub] ashokei commented on a change in pull request #10021: [MXNET-33] Mkldnn pooling convention crash

2018-03-07 Thread GitBox
ashokei commented on a change in pull request #10021: [MXNET-33] Mkldnn pooling 
convention crash
URL: https://github.com/apache/incubator-mxnet/pull/10021#discussion_r172909508
 
 

 ##
 File path: src/operator/nn/mkldnn/mkldnn_pooling-inl.h
 ##
 @@ -92,6 +92,8 @@ inline bool SupportMKLDNNPooling(const PoolingParam ,
 
   if (param.pooling_convention == pool_enum::kValid)
 return true;
+  else
+return false;
 
 Review comment:
   @piiswrong the latter code is disabled for now until we support all 
"full" pooling-convention cases. As @TaoLv mentioned, I will add a unit test 
for this failure. 
   We may refactor this code, so I'm marking this PR WIP for now.
   




[GitHub] tlby commented on a change in pull request #9988: [Perl] Sparse feature.

2018-03-07 Thread GitBox
tlby commented on a change in pull request #9988: [Perl] Sparse feature.
URL: https://github.com/apache/incubator-mxnet/pull/9988#discussion_r172895513
 
 

 ##
 File path: perl-package/AI-MXNet/lib/AI/MXNet/NDArray/Sparse.pm
 ##
 @@ -0,0 +1,1342 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+package AI::MXNet::NDArray::Sparse;
+use strict;
+use warnings;
+use AI::MXNet::Base;
+use AI::MXNet::Function::Parameters;
+use Mouse;
+extends 'AI::MXNet::NDArray';
+
+=head1 NAME
+
+AI::MXNet::NDArray::Sparse - Sparse NDArray API of MXNet
+=cut
+
+=head1 DESCRIPTION
+
+The base class of an NDArray stored in a sparse storage format.
+See AI::MXNet::NDArray::CSR and AI::MXNet::NDArray::RowSparse for more 
details.
+=cut
+
+method _new_alloc_handle(
+Stype  $stype,
+Shape  $shape,
+AI::MXNet::Context   $ctx,
+Bool $delay_alloc,
+Dtype  $dtype,
+AuxTypes $aux_types,
+Maybe[ArrayRef[Shape]]   $aux_shapes=
+)
+{
+confess("only int64 is supported for aux types")
+if (grep { $_ ne 'int64' } @$aux_types);
+my $aux_type_ids = [map { DTYPE_STR_TO_MX->{$_} } @$aux_types];
+$aux_shapes //= [map { [0] } @$aux_types];
+my $aux_shape_lens = [map { scalar(@$_) } @$aux_shapes];
+@$aux_shapes = map { @$_ } @$aux_shapes;
+my $num_aux = @{ $aux_types };
+my $handle = check_call(
+AI::MXNetCAPI::NDArrayCreateSparseEx(
+STORAGE_TYPE_STR_TO_ID->{$stype},
+$shape,
+scalar(@$shape),
+$ctx->device_type_id,
+$ctx->device_id,
+$delay_alloc,
+DTYPE_STR_TO_MX->{$dtype},
+scalar(@$aux_types),
+$aux_type_ids,
+$aux_shape_lens,
+$aux_shapes
+)
+);
+}
+
+method _class_name()
+{
+my $class = ref $self || $self;
+$class;
+}
+
+sub not_implemented { confess "Not implemented" }
 
 Review comment:
   from the perl docs:
   > When Perl 5.12 or later encounters an ellipsis statement, it parses this 
without error, but if and when you should actually try to execute it, Perl 
throws an exception with the text Unimplemented
   
   That seems to be highly aligned with the Python source, such as 
https://github.com/apache/incubator-mxnet/blob/0fe04e9d778ac9c55e933ff3f21c4ddf28a4a101/python/mxnet/ndarray/sparse.py#L118
 as a pattern for the abstract methods subclasses must provide.
   
   Your code isn't wrong, it's just reimplementing a feature the core language 
already has.




[GitHub] marcoabreu commented on issue #9552: [REQUEST FOR REVIEW | DO NOT MERGE] Model Quantization with Calibration

2018-03-07 Thread GitBox
marcoabreu commented on issue #9552: [REQUEST FOR REVIEW | DO NOT MERGE] Model 
Quantization with Calibration
URL: https://github.com/apache/incubator-mxnet/pull/9552#issuecomment-371187782
 
 
   @reminisce I have updated the slave. Please resolve all conflicts and 
trigger CI to test if everything works as expected




[GitHub] zheng-da commented on issue #9921: [DISCUSSION] module.contrib.SparseModule API

2018-03-07 Thread GitBox
zheng-da commented on issue #9921: [DISCUSSION] module.contrib.SparseModule API
URL: 
https://github.com/apache/incubator-mxnet/issues/9921#issuecomment-371185700
 
 
   @eric-haibin-lin I agree the first benefit is important. I wonder if we can 
have forward() pull the required weights from kvstore automatically. To enable 
this, we'll need the operators that support row-sparse weights to tell the 
executor which rows should be pulled; we'll probably need to add another 
attribute to NNVM for these operators.
   
   For the second benefit, I would argue that if you accept maintaining the 
entire weight matrix in each worker node, you can create an array lookup table 
(an array that maps a row id to its physical location in the row-sparse weight 
matrix) instead of a hashtable, to reduce the lookup overhead. Maintaining such 
an array lookup table requires O(nzr) computation, where nzr is the number of 
non-zero rows, for both clearing the existing table and inserting entries for a 
new row-sparse weight matrix. Maintaining the entire weight matrix in each 
worker node has similar overhead for updating rows. The lookup table uses O(n) 
memory, instead of O(n * p), the size of the entire weight matrix.
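
   The array-lookup-table idea can be sketched as follows (a minimal 
illustration, not MXNet code; the function and variable names are hypothetical):

```python
def build_lookup(n, indices):
    """Map row ids of a row-sparse matrix to physical storage positions.

    n: number of rows in the full weight matrix.
    indices: the non-zero row ids, in the order their data is stored.
    Rebuilding the table is O(nzr); each lookup is O(1) array indexing.
    The table costs O(n) memory, versus O(n * p) for a dense weight copy.
    """
    table = [-1] * n                        # -1 marks "row has no stored data"
    for pos, row_id in enumerate(indices):  # O(nzr) to (re)build
        table[row_id] = pos
    return table

indices = [2, 5, 9]                 # non-zero rows of a 12-row weight matrix
table = build_lookup(12, indices)
assert table[5] == 1                # row 5 is stored at physical position 1
assert table[3] == -1               # row 3 has no stored data
```

   Compared with a hashtable keyed by row id, this trades O(n) memory for 
branch-free constant-time lookups, which is the overhead reduction argued above.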




[GitHub] TaoLv commented on a change in pull request #10025: Language model with Google's billion words dataset

2018-03-07 Thread GitBox
TaoLv commented on a change in pull request #10025: Language model with 
Google's billion words dataset
URL: https://github.com/apache/incubator-mxnet/pull/10025#discussion_r172880571
 
 

 ##
 File path: src/operator/nn/fully_connected.cc
 ##
 @@ -214,9 +238,7 @@ If ``no_bias`` is set to be true, then the ``bias`` term 
is ignored.
 .set_attr<nnvm::FInferShape>("FInferShape", FullyConnectedShape)
 .set_attr<nnvm::FInferType>("FInferType", FullyConnectedType)
 .set_attr<FCompute>("FCompute", FullyConnectedCompute<cpu>)
-#if MXNET_USE_MKLDNN == 1
 
 Review comment:
   I see. Both sparse and MKL-DNN support need to run through this dispatch.




[GitHub] eric-haibin-lin commented on a change in pull request #10025: Language model with Google's billion words dataset

2018-03-07 Thread GitBox
eric-haibin-lin commented on a change in pull request #10025: Language model 
with Google's billion words dataset
URL: https://github.com/apache/incubator-mxnet/pull/10025#discussion_r172870624
 
 

 ##
 File path: src/operator/nn/fully_connected.cc
 ##
 @@ -214,9 +238,7 @@ If ``no_bias`` is set to be true, then the ``bias`` term 
is ignored.
 .set_attr<nnvm::FInferShape>("FInferShape", FullyConnectedShape)
 .set_attr<nnvm::FInferType>("FInferType", FullyConnectedType)
 .set_attr<FCompute>("FCompute", FullyConnectedCompute<cpu>)
-#if MXNET_USE_MKLDNN == 1
 
 Review comment:
   This is for adding sparse support for FC. See line 213.




[GitHub] eric-haibin-lin commented on a change in pull request #10025: Language model with Google's billion words dataset

2018-03-07 Thread GitBox
eric-haibin-lin commented on a change in pull request #10025: Language model 
with Google's billion words dataset
URL: https://github.com/apache/incubator-mxnet/pull/10025#discussion_r172870497
 
 

 ##
 File path: src/operator/nn/fully_connected.cc
 ##
 @@ -87,8 +90,16 @@ void FullyConnectedComputeExCPU(const nnvm::NodeAttrs& 
attrs,
 return;
   }
  FallBackCompute(FullyConnectedCompute<cpu>, attrs, ctx, inputs, req, outputs);
+#else
+  std::vector<TBlob> in_blobs(inputs.size());
+  for (size_t i = 0; i < in_blobs.size(); i++) in_blobs[i] = inputs[i].data();
+  std::vector<TBlob> out_blobs(outputs.size());
+  for (size_t i = 0; i < out_blobs.size(); i++) out_blobs[i] = outputs[i].data();
+  FullyConnectedCompute<cpu>(attrs, ctx, in_blobs, req, out_blobs);
+#endif
 
 Review comment:
   This block will only be executed when MKL is absent
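
   The compile-time dispatch in the hunk above — take the MKL-DNN path when it 
is compiled in, otherwise unwrap each array to its dense blob and call the 
plain CPU kernel — can be sketched in Python (a rough analogy only; the names 
here are hypothetical, not the MXNet API):

```python
MKLDNN_ENABLED = False  # stands in for the MXNET_USE_MKLDNN compile-time flag

def dense_fc_compute(blobs):
    # Stands in for the dense CPU kernel operating on raw blobs.
    return sum(blobs)

def fc_compute_ex(inputs):
    if MKLDNN_ENABLED:
        # With MKL-DNN compiled in, dispatch (or fall back) happens elsewhere.
        raise RuntimeError("would dispatch to the MKL-DNN kernel here")
    # MKL-DNN absent: extract the dense blob from each array-like input
    # and invoke the ordinary dense compute path directly.
    in_blobs = [x["data"] for x in inputs]
    return dense_fc_compute(in_blobs)

out = fc_compute_ex([{"data": 1.0}, {"data": 2.0}])
assert out == 3.0
```

   This mirrors the reviewer's point: the `#else` branch only ever runs in 
builds where the MKL-DNN path does not exist.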




[GitHub] TaoLv commented on a change in pull request #10025: Language model with Google's billion words dataset

2018-03-07 Thread GitBox
TaoLv commented on a change in pull request #10025: Language model with 
Google's billion words dataset
URL: https://github.com/apache/incubator-mxnet/pull/10025#discussion_r172866473
 
 

 ##
 File path: src/operator/nn/fully_connected.cc
 ##
 @@ -214,9 +238,7 @@ If ``no_bias`` is set to be true, then the ``bias`` term 
is ignored.
 .set_attr<nnvm::FInferShape>("FInferShape", FullyConnectedShape)
 .set_attr<nnvm::FInferType>("FInferType", FullyConnectedType)
 .set_attr<FCompute>("FCompute", FullyConnectedCompute<cpu>)
-#if MXNET_USE_MKLDNN == 1
 
 Review comment:
   Why remove this?




[GitHub] TaoLv commented on a change in pull request #10025: Language model with Google's billion words dataset

2018-03-07 Thread GitBox
TaoLv commented on a change in pull request #10025: Language model with 
Google's billion words dataset
URL: https://github.com/apache/incubator-mxnet/pull/10025#discussion_r172867957
 
 

 ##
 File path: src/operator/nn/fully_connected.cc
 ##
 @@ -87,8 +90,16 @@ void FullyConnectedComputeExCPU(const nnvm::NodeAttrs& 
attrs,
 return;
   }
  FallBackCompute(FullyConnectedCompute<cpu>, attrs, ctx, inputs, req, outputs);
+#else
+  std::vector<TBlob> in_blobs(inputs.size());
+  for (size_t i = 0; i < in_blobs.size(); i++) in_blobs[i] = inputs[i].data();
+  std::vector<TBlob> out_blobs(outputs.size());
+  for (size_t i = 0; i < out_blobs.size(); i++) out_blobs[i] = outputs[i].data();
+  FullyConnectedCompute<cpu>(attrs, ctx, in_blobs, req, out_blobs);
+#endif
 
 Review comment:
   I think `FallBackCompute` should be used to fall back the computation to the 
original CPU implementation.




[GitHub] Ldpe2G commented on issue #9984: [MXNET-38]add reshape predicator function to c_predict_api

2018-03-07 Thread GitBox
Ldpe2G commented on issue #9984: [MXNET-38]add reshape predicator function to 
c_predict_api
URL: https://github.com/apache/incubator-mxnet/pull/9984#issuecomment-371159253
 
 
   @cjolivier01 I have updated the PR.




[GitHub] eric-haibin-lin opened a new pull request #10025: Language model with Google's billion words dataset

2018-03-07 Thread GitBox
eric-haibin-lin opened a new pull request #10025: Language model with Google's 
billion words dataset
URL: https://github.com/apache/incubator-mxnet/pull/10025
 
 
   ## Description ##
   This example reproduces the result (~42 perplexity) of [Exploring the Limits 
of Language Modeling](https://arxiv.org/pdf/1602.02410.pdf) on the GBW dataset. 
   See `README.md` for details. 
   @mli @szha @zheng-da @piiswrong 
   
   ## Checklist ##
   ### Essentials ###
   - [ ] Passed code style checking (`make lint`)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, 
the source of the dataset, expected performance on the test set, and a reference 
to the original paper if applicable
   - [ ] To my best knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [ ] Feature1, tests, (and when applicable, API doc)
   - [ ] Feature2, tests, (and when applicable, API doc)
   
   ## Comments ##
   - If this change is a backward incompatible change, why must this change be 
made.
   - Interesting edge cases to note here
   




[GitHub] marcoabreu closed pull request #10023: R graph.viz fix

2018-03-07 Thread GitBox
marcoabreu closed pull request #10023: R graph.viz fix
URL: https://github.com/apache/incubator-mxnet/pull/10023
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

diff --git a/R-package/R/viz.graph.R b/R-package/R/viz.graph.R
index 6d13de0af1d..abc83236bb3 100644
--- a/R-package/R/viz.graph.R
+++ b/R-package/R/viz.graph.R
@@ -8,7 +8,6 @@
 #' @importFrom stringr str_trim
 #' @importFrom jsonlite fromJSON
 #' @importFrom DiagrammeR create_graph
-#' @importFrom DiagrammeR set_global_graph_attrs 
 #' @importFrom DiagrammeR add_global_graph_attrs
 #' @importFrom DiagrammeR create_node_df
 #' @importFrom DiagrammeR create_edge_df
@@ -63,91 +62,91 @@ graph.viz <- function(symbol, shape=NULL, direction="TD", 
type="graph", graph.wi
 )
   }
   
-  model_list<- fromJSON(symbol$as.json())
-  model_nodes<- model_list$nodes
-  model_nodes$id<- 1:nrow(model_nodes)-1
-  model_nodes$level<- model_nodes$ID
+  model_list <- fromJSON(symbol$as.json())
+  model_nodes <- model_list$nodes
+  model_nodes$id <- seq_len(nrow(model_nodes))-1
+  model_nodes$level <- model_nodes$ID
   
   # extract IDs from string list
-  tuple_str <- function(str) sapply(str_extract_all(str, "\\d+"), function(x) 
paste0(x, collapse="X"))
+  tuple_str <- function(str) vapply(str_extract_all(str, "\\d+"),
+function(x) paste0(x, collapse="X"),
+character(1))
   
   ### substitute op for heads
-  op_id<- sort(unique(model_list$heads[1,]+1))
-  op_null<- which(model_nodes$op=="null")
-  op_substitute<- intersect(op_id, op_null)
-  model_nodes$op[op_substitute]<- model_nodes$name[op_substitute]
-  
-  model_nodes$color<- apply(model_nodes["op"], 1, get.color)
-  model_nodes$shape<- apply(model_nodes["op"], 1, get.shape)
-  
-  label_paste <- paste0(
-model_nodes$op,
-"\n",
-model_nodes$name,
-"\n",
-model_nodes$attr$num_hidden %>% str_replace_na() %>% 
str_replace_all(pattern = "NA", ""),
-model_nodes$attr$act_type %>% str_replace_na() %>% str_replace_all(pattern 
= "NA", ""),
-model_nodes$attr$pool_type %>% str_replace_na() %>% 
str_replace_all(pattern = "NA", ""),
-model_nodes$attr$kernel %>% tuple_str %>% str_replace_na() %>% 
str_replace_all(pattern = "NA", ""),
-" / ",
-model_nodes$attr$stride %>% tuple_str %>% str_replace_na() %>% 
str_replace_all(pattern = "NA", ""),
-", ",
-model_nodes$attr$num_filter %>% str_replace_na() %>% 
str_replace_all(pattern = "NA", "")
-  ) %>% 
+  op_id <- sort(unique(model_list$heads[1,]+1))
+  op_null <- which(model_nodes$op=="null")
+  op_substitute <- intersect(op_id, op_null)
+  model_nodes$op[op_substitute] <- model_nodes$name[op_substitute]
+  
+  model_nodes$color <- apply(model_nodes["op"], 1, get.color)
+  model_nodes$shape <- apply(model_nodes["op"], 1, get.shape)
+  
+  label_paste <- paste0(model_nodes$op,
+"\n",
+model_nodes$name,
+"\n",
+model_nodes$attr$num_hidden %>% str_replace_na() %>% 
str_replace_all(pattern = "NA", ""),
+model_nodes$attr$act_type %>% str_replace_na() %>% 
str_replace_all(pattern = "NA", ""),
+model_nodes$attr$pool_type %>% str_replace_na() %>% 
str_replace_all(pattern = "NA", ""),
+model_nodes$attr$kernel %>% tuple_str %>% 
str_replace_na() %>% str_replace_all(pattern = "NA", ""),
+" / ",
+model_nodes$attr$stride %>% tuple_str %>% 
str_replace_na() %>% str_replace_all(pattern = "NA", ""),
+", ",
+model_nodes$attr$num_filter %>% str_replace_na() %>% 
str_replace_all(pattern = "NA", "")) %>% 
 str_replace_all(pattern = "[^[:alnum:]]+$", "")  %>% 
 str_trim
   
-  model_nodes$label<- label_paste
+  model_nodes$label <- label_paste
   
   id.to.keep <- model_nodes$id[!model_nodes$op=="null"]
   nodes_df <- model_nodes[model_nodes$id %in% id.to.keep, c("id", "label", 
"shape", "color")]
   
   ### remapping for DiagrammeR convention
-  nodes_df$id<- nodes_df$id
-  nodes_df$id_graph<- 1:nrow(nodes_df)
-  id_dic<- nodes_df$id_graph
-  names(id_dic)<- as.character(nodes_df$id)
-  
-  edges_id<- model_nodes$id[!sapply(model_nodes$inputs, length)==0 & 
!model_nodes$op=="null"]
-  edges_id<- id_dic[as.character(edges_id)]
-  edges<- model_nodes$inputs[!sapply(model_nodes$inputs, length)==0 & 
!model_nodes$op=="null"]
-  edges<- sapply(edges, function(x)intersect(as.numeric(x[, 1]), id.to.keep), 
simplify = F)
-  names(edges)<- edges_id
-  
-  edges_df<- data.frame(
-from=unlist(edges),
-to=rep(names(edges), time=sapply(edges, length)),
-arrows = "to",
-

[incubator-mxnet] branch v1.0.0 updated: R graph.viz fix (#10023)

2018-03-07 Thread marcoabreu
This is an automated email from the ASF dual-hosted git repository.

marcoabreu pushed a commit to branch v1.0.0
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/v1.0.0 by this push:
 new 3df9bf8  R graph.viz fix (#10023)
3df9bf8 is described below

commit 3df9bf802021d5aa67c609c6736acee94aaf3a48
Author: jeremiedb 
AuthorDate: Wed Mar 7 08:28:23 2018 -0500

R graph.viz fix (#10023)

* R graph.viz fix

* sub
---
 R-package/R/viz.graph.R | 129 
 1 file changed, 64 insertions(+), 65 deletions(-)


[GitHub] marcoabreu commented on issue #10020: How can I build C++ package?

2018-03-07 Thread GitBox
marcoabreu commented on issue #10020: How can I build C++ package?
URL: 
https://github.com/apache/incubator-mxnet/issues/10020#issuecomment-371121048
 
 
   @aaronmarkham see comment about website




[GitHub] marcoabreu commented on issue #9803: R Metrics

2018-03-07 Thread GitBox
marcoabreu commented on issue #9803: R Metrics
URL: https://github.com/apache/incubator-mxnet/pull/9803#issuecomment-371117644
 
 
   I understand. In that case I'm fine with merging as long as the problem is 
tracked somewhere. 




[GitHub] marcoabreu commented on issue #9906: Add CPU optimized docker which will be compiled with MKL-DNN

2018-03-07 Thread GitBox
marcoabreu commented on issue #9906: Add CPU optimized docker which will be 
compiled with MKL-DNN
URL: https://github.com/apache/incubator-mxnet/pull/9906#issuecomment-371115687
 
 
   I can really only speak from a CI perspective, maybe @eric-haibin-lin or 
@szha could shed a bit of light into the situation with the other dockerfiles.
   
   In terms of CI we are using the dockerfiles at tests/ci_build, but they will 
be migrated as soon as https://github.com/apache/incubator-mxnet/pull/9995 is 
merged. 




[GitHub] edmBernard commented on issue #10020: How can I build C++ package?

2018-03-07 Thread GitBox
edmBernard commented on issue #10020: How can I build C++ package?
URL: 
https://github.com/apache/incubator-mxnet/issues/10020#issuecomment-37564
 
 
   You can use this parameter when you run your make command:
   `USE_CPP_PACKAGE=1`




[GitHub] asitstands closed pull request #9991: Random shuffle implementation

2018-03-07 Thread GitBox
asitstands closed pull request #9991: Random shuffle implementation
URL: https://github.com/apache/incubator-mxnet/pull/9991
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

diff --git a/docs/api/python/ndarray/random.md 
b/docs/api/python/ndarray/random.md
index ae9e69f758f..4341a3ce2cd 100644
--- a/docs/api/python/ndarray/random.md
+++ b/docs/api/python/ndarray/random.md
@@ -35,6 +35,8 @@ In the rest of this document, we list routines provided by 
the `ndarray.random`
 normal
 poisson
 uniform
+multinomial
+shuffle
 mxnet.random.seed
 ```
 
diff --git a/docs/api/python/symbol/random.md b/docs/api/python/symbol/random.md
index a3492f6f840..22c686ff2fd 100644
--- a/docs/api/python/symbol/random.md
+++ b/docs/api/python/symbol/random.md
@@ -35,6 +35,8 @@ In the rest of this document, we list routines provided by 
the `symbol.random` p
 normal
 poisson
 uniform
+multinomial
+shuffle
 mxnet.random.seed
 ```
 
diff --git a/python/mxnet/ndarray/random.py b/python/mxnet/ndarray/random.py
index af125753e5e..49e32d6fd42 100644
--- a/python/mxnet/ndarray/random.py
+++ b/python/mxnet/ndarray/random.py
@@ -24,7 +24,7 @@
 
 
 __all__ = ['uniform', 'normal', 'poisson', 'exponential', 'gamma', 
'multinomial',
-   'negative_binomial', 'generalized_negative_binomial']
+   'negative_binomial', 'generalized_negative_binomial', 'shuffle']
 
 
 def _random_helper(random, sampler, params, shape, dtype, ctx, out, kwargs):
@@ -431,3 +431,32 @@ def multinomial(data, shape=_Null, get_prob=False, 
out=None, **kwargs):
 
 """
 return _internal._sample_multinomial(data, shape, get_prob, out=out, 
**kwargs)
+
+
+def shuffle(data, out=None, **kwargs):
+"""Shuffle the elements randomly.
+
+This shuffles the elements along the last axis, i.e., for each element,
+all indices except the last one are preserved but the last one changes 
randomly.
+
+Parameters
+--
+data : NDArray
+Input data array.
+out : NDArray
+Array to store the result.
+   For in-place shuffle, set this to the same array assigned to `data`.
+
+Examples
+
+>>> data = mx.nd.array([[0, 1, 2], [3, 4, 5]])
+>>> mx.nd.random.shuffle(data)
+[[ 0.  2.  1.]
+ [ 5.  4.  3.]]
+
+>>> mx.nd.random.shuffle(data)
+[[ 1.  2.  0.]
+ [ 3.  5.  4.]]
+
+"""
+return _internal._shuffle(data, out, **kwargs)
diff --git a/python/mxnet/symbol/random.py b/python/mxnet/symbol/random.py
index f0d05ad0561..76b28900b60 100644
--- a/python/mxnet/symbol/random.py
+++ b/python/mxnet/symbol/random.py
@@ -247,3 +247,30 @@ def multinomial(data, shape=_Null, get_prob=True, 
**kwargs):
 reward as head gradient w.r.t. this array to estimate gradient.
 """
 return _internal._sample_multinomial(data, shape, get_prob, **kwargs)
+
+def shuffle(data, **kwargs):
+"""Shuffle the elements randomly.
+
+This shuffles the elements along the last axis, i.e., for each element,
+all indices except the last one are preserved but the last one changes 
randomly.
+
+Parameters
+--
+data : NDArray
+Input data array.
+
+Examples
+
+>>> data = mx.nd.array([[0, 1, 2], [3, 4, 5]])
+>>> a = mx.sym.Variable('a')
+>>> b = mx.sym.random.shuffle(a)
+>>> b.eval(a=data)
+[[ 0.  2.  1.]
+ [ 5.  4.  3.]]
+
+>>> b.eval(a=data)
+[[ 1.  2.  0.]
+ [ 3.  5.  4.]]
+
+"""
+return _internal._shuffle(data, **kwargs)
diff --git a/src/operator/random/shuffle_op.cc 
b/src/operator/random/shuffle_op.cc
new file mode 100644
index 000..073797a88b9
--- /dev/null
+++ b/src/operator/random/shuffle_op.cc
@@ -0,0 +1,90 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * Copyright (c) 2018 by Contributors
+ * \file shuffle_op.cc
+ * \brief Operator to shuffle elements of an NDArray
+ */
+#if (__GNUC__ > 4 && !defined(__clang__major__)) || 

[GitHub] asitstands commented on issue #9991: Random shuffle implementation

2018-03-07 Thread GitBox
asitstands commented on issue #9991: Random shuffle implementation
URL: https://github.com/apache/incubator-mxnet/pull/9991#issuecomment-371105396
 
 
   Ok, I'll make a new PR with an implementation that complies with numpy.




[GitHub] zlin3000 opened a new pull request #10024: fixed calling error while preparing data in ssd

2018-03-07 Thread GitBox
zlin3000 opened a new pull request #10024: fixed calling error while preparing 
data in ssd
URL: https://github.com/apache/incubator-mxnet/pull/10024
 
 
   Fixed a calling error when invoking im2rec.py from prepare_dataset.py in ssd.
   Added thread support in prepare_dataset.py.
   
   ## Description ##
   Since im2rec.py has changed, the calling method in prepare_dataset.py in ssd 
is invalid. This PR modifies the code to convert data correctly, and adds 
thread support to improve conversion speed.
   
   ## Checklist ##
   ### Essentials ###
   - [ ] Passed code style checking (`make lint`)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, 
the source of the dataset, expected performance on the test set, and a reference 
to the original paper if applicable
   - [ ] To my best knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [ ] Feature1, tests, (and when applicable, API doc)
   - [ ] Feature2, tests, (and when applicable, API doc)
   
   ## Comments ##
   - If this change is a backward incompatible change, why must this change be 
made.
   - Interesting edge cases to note here
   



