Re: [VOTE] Release Apache MXNet (incubating) version 1.6.0.rc1

2020-01-10 Thread Jun Wu
+1 (binding)

Built from source. Ran all the GPU tests and the test_numpy*.py CPU tests
without problems.



Re: [VOTE] Release Apache MXNet (incubating) version 1.6.0.rc1

2020-01-10 Thread Skalicky, Sam
We can enable building nightlies for feature branches too.

Sam



Re: [VOTE] Release Apache MXNet (incubating) version 1.6.0.rc1

2020-01-10 Thread Lin Yuan
We can release one cpu-mkl and one CUDA wheel for testing various
applications. Other people can build from source if they want other flavors.

Lin



Re: CD with windows need a special jenkins slave machine like restricted-utility

2020-01-10 Thread shiwen hu
Use the x64-hosted MSVC compiler: `cmake -T host=x64`.
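For context: the C1002 "out of heap space" failure usually means the default 32-bit-hosted cl.exe exhausted its address space, and `-T host=x64` selects the 64-bit-hosted toolchain instead. A sketch of the invocation (the generator name is illustrative; match it to the installed VS version):

```shell
# Use the 64-bit-hosted MSVC toolchain so cl.exe is not confined to a
# 32-bit address space (avoids "fatal error C1002: compiler is out of
# heap space in pass 2").
cmake -G "Visual Studio 15 2017 Win64" -T host=x64 ..
```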

Pedro Larroy  wrote on Friday, January 10, 2020 at 7:28 AM:

> Is there a solution for this error in VS2017?
>
> c:\users\administrator\mxnet\src\operator\mxnet_op.h(943) : fatal error
> C1002: compiler is out of heap space in pass 2
>
>
>
> On Tue, Jan 7, 2020 at 5:11 PM shiwen hu  wrote:
>
> > >
> > > I personally encountered a problem where VS2015 couldn't compile with a
> > > high CUDA version, but I can't remember the details. We can continue to
> > > use 2015 until we encounter problems.
> > >
> >
>


Re: [VOTE] Release Apache MXNet (incubating) version 1.6.0.rc1

2020-01-10 Thread Karan Jariwala
Yes, I agree with your point. But we will require many flavors of pip
wheel:

MKL / without MKL
CUDA / no CUDA
Linux/Windows/Mac

Thanks,
Karan

>


Re: [VOTE] Release Apache MXNet (incubating) version 1.6.0.rc1

2020-01-10 Thread Haibin Lin
Shall we provide pip wheels for later release votes?

Not everyone knows how to build MXNet from source (and building from source
also takes a very long time). Providing a pip wheel would lower the bar for
users who want to test MXNet and participate in voting.

Best,
Haibin

>


Re: [VOTE] Release Apache MXNet (incubating) version 1.6.0.rc1

2020-01-10 Thread Haibin Lin
+1

Built from source with USE_CUDA=1 on Ubuntu. Ran the gluon-nlp unit tests and
they passed.

>


Re: [apache/incubator-mxnet] [mxnet 2.0][item 4.8][RFC] Gluon Data API Extension and Fixes(Part 1) (#17263)

2020-01-10 Thread Przemyslaw Tredak
What about `mx.io.ImageRecordIter`? Also, what about the return type of those
iterators? `mx.io` iterators return `mx.io.DataBatch`; will that be changed too?

@JanuszL FYI since DALI MXNet plugin produces `mx.io.DataBatch` and may be 
affected.

-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/apache/incubator-mxnet/issues/17263#issuecomment-573246015

Re: [VOTE] Release Apache MXNet (incubating) version 1.6.0.rc1

2020-01-10 Thread Karan Jariwala
+1

Tested MXNet with and without MKL-DNN on Ubuntu 16.04 with Horovod 0.18.2.
No regression seen between 1.5.1 and 1.6.0.rc1 when running horovod_MXNet
integration test.


Thanks,

Karan



Re: [apache/incubator-mxnet] [mxnet 2.0][item 4.8][RFC] Gluon Data API Extension and Fixes(Part 2) (#17269)

2020-01-10 Thread Joshua Z. Zhang
@szha @eric-haibin-lin @sxjscience @szhengac Request for comments regarding NLP
data loading.

-- 
You are receiving this because you were mentioned.
Reply to this email directly or view it on GitHub:
https://github.com/apache/incubator-mxnet/issues/17269#issuecomment-573242957

[apache/incubator-mxnet] [mxnet 2.0][item 4.8][RFC] Gluon Data API Extension and Fixes(Part 2) (#17269)

2020-01-10 Thread Joshua Z. Zhang
## Description
This is part 2 of the Gluon Data API extension and fixes, which mainly focuses
on speeding up the current data loading pipeline using the Gluon dataset and
dataloader.

## Motivation

The current data loading pipeline is the major bottleneck for many training 
tasks. We can summarize the entire flow as:

```bash
| Dataset.__getitem__ -> 
| Transform.__call__()/forward() ->
| Batchify ->
| (optional communicate through shared_mem) ->
| split_and_load(ctxs) ->
| 
-> 
```
where there are performance concerns:
- the performance of python dataset/transform functions isn't satisfactory
- it's not easy to embrace multithreading to speed up dataloading due to the
global interpreter lock
- python multiprocessing is unfortunately slow and error prone, not to mention
that the shared memory implementations on different OSes are quite different
and very annoying (e.g., it's very likely to run out of shared memory if not
properly taken care of)
- currently memory planning for batchify is nonexistent, causing frequent
alloc/dealloc of large chunks of memory if the batch size is big
- batchify followed by split and load can be optimized into partial_batchify
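For illustration, the flow above can be mimicked as a toy pipeline in plain Python (no mxnet dependency; all names are illustrative stand-ins). Every stage sits on the critical path of every batch, which is why the per-stage overheads listed above add up:

```python
# Toy stand-ins for the stages in the flow chart above.
class Dataset:
    def __init__(self, data):
        self.data = data

    def __getitem__(self, idx):        # Dataset.__getitem__
        return self.data[idx]

def transform(sample):                 # Transform.__call__()/forward()
    return sample * 2

def batchify(samples):                 # Batchify: collate samples into a batch
    return list(samples)

def split_and_load(batch, n_ctx):      # split the batch across n_ctx devices
    k = (len(batch) + n_ctx - 1) // n_ctx
    return [batch[i * k:(i + 1) * k] for i in range(n_ctx)]

dataset = Dataset(list(range(8)))
batch = batchify(transform(dataset[i]) for i in range(4))
shards = split_and_load(batch, n_ctx=2)
print(shards)  # [[0, 2], [4, 6]]
```

In the real pipeline each of these steps crosses the Python interpreter (and possibly a process boundary) once per batch; moving them into C++ removes that per-batch overhead.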

## Proposal
To alleviate the existing troubles, I propose a hybrid solution:
- provide C++ Datasets that cover the most common use cases
```python
from gluon.data.dataset import TupleDataset, ImageFolderDataset, ArrayDataset
# as long as TupleDataset, ImageSequenceDataset, ArrayDataset are supported
# by the backend
dataset = TupleDataset([ImageSequenceDataset(img_paths),
                        ArrayDataset(image_labels)])
# dataset is an image classification dataset that is fully supported in C++
# with TupleDataset we can combine as many data sources as needed

# a C++-backed Dataset can have a magic __handle__ method to return the C++
# handle for reference
class TupleDataset:
    def __init__(self, datasets):
        if all(callable(getattr(dataset, '__handle__', None))
               for dataset in datasets):
            # all supported by the backend
            self._tuple_dataset = check_call(
                _LIB.MXTupleDatasetCreate(
                    [dataset.__handle__() for dataset in datasets]))
        else:
            self._tuple_dataset = None

    def __handle__(self):
        return self._tuple_dataset

```
- provide common C++ batchify functions that are split- and context-aware.
Batchify with a memory planner is TBD.
- provide a C++ `MultithreadingDataLoader` which inherits the same arguments as
`gluon.data.DataLoader` but uses mxnet-internal multithreading rather than
python multiprocessing.
- fall back to python multiprocessing whenever
  - the dataset is not fully supported by the backend (e.g., there are custom
    python datasets)
  - the Transform is not fully hybridizable
  - the Batchify is not fully supported by the backend

Users will continue to use the existing `gluon.data.DataLoader`, and the
conversion will be applied automatically:
```python

loader = gluon.data.DataLoader(hybrid_dataset.transform(hybrid_transform),
                               batch_size=32, batchify_fn=hybrid_batchify)

class DataLoader:
    def __init__(self, dataset, ...):
        if (isinstance(dataset, _LazyTransformDataset)
                and is_hybrid(dataset._transform) and is_hybrid(dataset)
                and is_hybrid(batchify_fn)):
            self._mt_dataloader = check_call(
                _LIB.MXMultiThreadDataLoaderCreate(...))

    def __iter__(self):
        if self._mt_dataloader:
            return self._mt_dataloader
        else:
            # fall back to the single-threaded or multiprocessing dataloader
            ...

```

With this change, mxnet 2.0 will get a smooth transition to mixed data loaders.
Please comment with specific examples that this proposal fails to accommodate.
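To make the fallback rule concrete, here is a minimal plain-Python sketch of the dispatch decision (no mxnet dependency; `is_hybrid` and the class names are hypothetical stand-ins for the checks described above):

```python
def is_hybrid(component):
    # a component is backend-supported when it exposes a __handle__ method
    # that returns a non-None backend handle
    handle = getattr(component, '__handle__', None)
    return callable(handle) and handle() is not None

class DataLoader:
    def __init__(self, dataset, transform, batchify_fn):
        if all(is_hybrid(c) for c in (dataset, transform, batchify_fn)):
            # every component is backend-supported: C++ multithreaded path
            self.backend = 'cpp_multithreaded'
        else:
            # custom python pieces present: existing multiprocessing path
            self.backend = 'python_multiprocessing'

class HybridComponent:
    def __init__(self):
        self._h = object()             # stand-in for a real C++ handle
    def __handle__(self):
        return self._h

class PyDataset:                       # custom python dataset: no __handle__
    pass

h = HybridComponent()
print(DataLoader(h, h, h).backend)            # cpp_multithreaded
print(DataLoader(PyDataset(), h, h).backend)  # python_multiprocessing
```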

-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/apache/incubator-mxnet/issues/17269

Re: [VOTE] Release Apache MXNet (incubating) version 1.6.0.rc1

2020-01-10 Thread Markus Weimer
+1 (binding)

I tested on Ubuntu 18.04 on the Windows Subsystem for Linux.

Tested:
  * Built from source using the instructions here [0]
  * Ran the tests in `./build/tests/mxnet_unit_tests`
  * SHA512 of the archive

Not tested:
  * Language bindings
  * CUDA or other GPU acceleration
  * LICENSE and compliance status
  * Signature of the archive
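For reference, the SHA512 check in a list like the one above can be scripted as follows (a sketch using an illustrative stand-in file; substitute the actual release tarball and its published .sha512 file):

```shell
# Create a stand-in for the release artifact, then verify it the same way
# the real tarball is checked against its published digest file.
printf 'release-bits' > mxnet-src.tar.gz
sha512sum mxnet-src.tar.gz > mxnet-src.tar.gz.sha512
sha512sum -c mxnet-src.tar.gz.sha512   # prints "mxnet-src.tar.gz: OK"
```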


Re: [VOTE] Release Apache MXNet (incubating) version 1.6.0.rc1

2020-01-10 Thread Qing Lan
+1 (binding)

Built 1.6.0.rc1 on my Mac with MKLDNN

Scala build/test passed.

Thanks,
Qing




Re: [VOTE] Release Apache MXNet (incubating) version 1.6.0.rc1

2020-01-10 Thread Chaitanya Bapat
+1

Built from the dist [1] on an Ubuntu 16.04 DL AMI for CPU + MKLDNN.
Tested:
1. OpPerf (benchmark utility): promising results (faster forward times for
certain ops compared to 1.4.0 and 1.5.1)
2. Large tensor support (built with the USE_INT64_TENSOR_SIZE = ON flag):
tests pass

Thanks Przemyslaw for leading 1.6.0! It's taken a while, but we're close to
the finish line! Awesome work!

Thanks
Chai

[1] https://dist.apache.org/repos/dist/dev/incubator/mxnet/1.6.0.rc1/

On Thu, 9 Jan 2020 at 22:05, Chen, Ciyong  wrote:

> +1
>
> Build from source on CentOS 7.6 with GCC 4.8.5 with MKLDNN.
> Unit tests passed and imagenet examples (with MKL-DNN subgraph backend)
> looked good on performance and accuracy in both FP32 and INT8 mode, RNN
> training worked.
>
> Thanks,
> -Ciyong
>
> -Original Message-
> From: Lai Wei 
> Sent: Wednesday, January 8, 2020 8:56 AM
> To: dev@mxnet.incubator.apache.org
> Cc: d...@mxnet.apache.org
> Subject: Re: [VOTE] Release Apache MXNet (incubating) version 1.6.0.rc1
>
> +1
> Build from source on Ubuntu with CUDA/CUDNN/MKLDNN and tested with
> keras-mxnet.
> Unit tests passed and example works on CPU/GPU.
>
>
> Best Regards
>
> Lai
>
>
> On Tue, Jan 7, 2020 at 11:49 AM Lin Yuan  wrote:
>
> > Correction: it was built from source on Ubuntu 16.04
> >
> > On Tue, Jan 7, 2020 at 11:42 AM Lin Yuan  wrote:
> >
> > > +1
> > >
> > > Build from source on Ubuntu 18 with CUDA/CUDNN/NCCL on and verified
> > > it works with Horovod 0.18.2
> > >
> > > On Tue, Jan 7, 2020 at 9:55 AM Przemysław Trędak
> > > 
> > > wrote:
> > >
> > >> Dear MXNet community,
> > >>
> > >> This is the vote to release Apache MXNet (incubating) version 1.6.0.
> > >> Voting starts today and will close on Friday 1/10/2020 23:59 PST.
> > >>
> > >> Link to release notes:
> > >> https://cwiki.apache.org/confluence/display/MXNET/1.6.0+Release+notes
> > >>
> > >> Link to release candidate:
> > >> https://github.com/apache/incubator-mxnet/releases/tag/1.6.0.rc1
> > >>
> > >> Link to source and signatures on apache dist server:
> > >> https://dist.apache.org/repos/dist/dev/incubator/mxnet/1.6.0.rc1/
> > >>
> > >> The differences compared to the previous release candidate 1.6.0.rc0:
> > >> * Fix for RNN gradient calculation for MKLDNN ([v1.6.x] Cherry-pick
> > >> MKL-DNN Rnn operator enhancements to v1.6.x (#17225))
> > >> * Fix for Windows CMake build (Backport #16980 #17031 #17018 #17019 to
> > >> 1.6 branch (#17213))
> > >> * CPU counterpart to contrib multihead attention operators
> > >> (Interleaved MHA for CPU path (#17138) (#17211))
> > >> * Fix for #16060 (fix norm sparse fallback (#17149))
> > >> * Fix for inconsistent names in estimator API (fix parameter names in
> > >> the estimator api (#17051) (#17162))
> > >> * Fixes for OpenMP (Backport 3rdparty/openmp fixes (#17193))
> > >> * Fix for pointwise fusion speed for large networks (which was the
> > >> reason for the -1 in the vote for rc0) as well as fixes for
> > >> nondeterminism in the sum of squares operator and trainer parameter
> > >> order (Backport #17002, #17068 and #17114 to 1.6 branch (#17137))
> > >>
> > >>
> > >> Please remember to TEST first before voting accordingly:
> > >> +1 = approve
> > >> +0 = no opinion
> > >> -1 = disapprove (provide reason)
> > >>
> > >>
> > >> Best regards,
> > >> Przemyslaw Tredak
> > >>
> > >
> >
>


-- 
*Chaitanya Prakash Bapat*
*+1 (973) 953-6299*




Re: [apache/incubator-mxnet] [RFC] Deferred compute in imperative interface to unify imperative and symbolic interface (#16376)

2020-01-10 Thread SunDoge
Is there any progress? I really like the `static_shape` part. Currently the
symbol has no `shape` attribute, which makes it hard to use some ops in a
HybridBlock, for example:

```python
def hybrid_forward(self, F, feat):
    _B, C, H, W = feat.shape
    x = F.linspace(-1, 1, H)
```

even though I know C, H, and W will never change and I will never access the
batch size B. I only need the shape once, and it should be cached. This RFC
may fix that.
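A plain-Python sketch of the "cache the shape once" behavior the comment asks for (no mxnet dependency; names are illustrative): record the static dims on the first call, while a concrete shape is available, and reuse them afterwards regardless of batch size:

```python
class Block:
    def __init__(self):
        self._chw = None                  # cached (C, H, W); batch dim excluded

    def forward(self, feat_shape):
        if self._chw is None:
            self._chw = feat_shape[1:]    # cache everything except batch size
        c, h, w = self._chw
        return h                          # e.g. the H fed into linspace(-1, 1, H)

blk = Block()
print(blk.forward((8, 3, 32, 64)))    # 32 (cached on first call)
print(blk.forward((16, 3, 32, 64)))   # 32 (batch-size change is irrelevant)
```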


-- 
You are receiving this because you were mentioned.
Reply to this email directly or view it on GitHub:
https://github.com/apache/incubator-mxnet/issues/16376#issuecomment-573116971

Re: Stopping nightly releases to Pypi

2020-01-10 Thread Sheng Zha
The size of a change doesn't necessarily reflect the time one spends navigating
the code base and finding the solution. Also, I tend to believe that everyone
genuinely wants what's best for the project, just from different perspectives.

Let's focus on improving the CD solution so that security concerns can be 
addressed too.

-sz

On 2020/01/09 21:57:30, Chris Olivier  wrote: 
> If this tiny fix is representative of the bulk of the reasoning behind all
> the CD churn recently, then this seems to be of some concern.
> 
> -Chris
> 
> On Thu, Jan 9, 2020 at 6:32 AM Marco de Abreu 
> wrote:
> 
> > Great, thanks a lot sheng!
> >
> > -Marco
> >
> > Sheng Zha  wrote on Thu, Jan 9, 2020, 14:28:
> >
> > > I'm fixing the CD pipeline in
> > > https://github.com/apache/incubator-mxnet/pull/17259/files and will
> > > update the s3 publish path so that it's friendly for automatically
> > > generating such page.
> > >
> > > -sz
> > >
> > > On 2020/01/06 18:19:52, "Lausen, Leonard" 
> > > wrote:
> > > > Consider a user finds a bug in a nightly version, but we can't narrow
> > > > down the version of mxnet used as the name is constant over time. Or
> > > > users want to revert to the previous nightly version installed but
> > > > don't know which date it was from due to the constant name.
> > > >
> > > > Instead I suggest we introduce an autogenerated page like
> > > > https://download.pytorch.org/whl/nightly/cu101/torch_nightly.html
> > > >
> > > > Then "pip install -f URLTOPAGE mxnet" will install the latest available
> > > version.
> > > > Maybe the team maintaining the S3 bucket can reconsider creating such a
> > > > page for the intermediate time until the CD-based nightly build is
> > > > operating.
> > > >
> > > > On Mon, 2020-01-06 at 10:01 -0800, Lin Yuan wrote:
> > > > > +1 for a nightly pip with fixed name.
> > > > >
> > > > > We need this to track mxnet integration with other packages such as
> > > Horovod.
> > > > >
> > > > > Sam, when do you think we can have this nightly build with a fixed
> > > name?
> > > > >
> > > > > Thanks,
> > > > >
> > > > > Lin
> > > > >
> > > > > On Sun, Jan 5, 2020 at 7:48 PM Skalicky, Sam
> > > 
> > > > > wrote:
> > > > >
> > > > > > Hi Tao,
> > > > > >
> > > > > > We dont have this yet, but we did think about putting the latest
> > > wheels in
> > > > > > a specific place in the s3 bucket so they are always updated.
> > > Initially we
> > > > > > decided not to do this since the main MXNet CD should have been
> > > fixed. But
> > > > > > since its still not fixed yet, we might try and go ahead and do
> > this.
> > > > > >
> > > > > > Sam
> > > > > >
> > > > > > On Jan 5, 2020, at 6:02 PM, Lv, Tao A  wrote:
> > > > > >
> > > > > > Hi,
> > > > > >
> > > > > > How to install the latest available build of a flavor without
> > > specifying
> > > > > > the build date? Something like `pip install mxnet --pre`.
> > > > > >
> > > > > > Thanks,
> > > > > > -tao
> > > > > >
> > > > > > -Original Message-
> > > > > > From: Skalicky, Sam 
> > > > > > Sent: Monday, January 6, 2020 2:09 AM
> > > > > > To: dev@mxnet.incubator.apache.org
> > > > > > Subject: Re: Stopping nightly releases to Pypi
> > > > > >
> > > > > > Hi Haibin,
> > > > > >
> > > > > > You typed the correct URLs, the cu100 build has been failing since
> > > > > > December 30th but other builds have succeeded. The wheels are being
> > > > > > delivered into a public bucket that anyone with an AWS account can
> > > access
> > > > > > and go poke around, here’s the URL for web access:
> > > > > >
> > > > > >
> > > > > >
> > >
> > https://s3.console.aws.amazon.com/s3/buckets/apache-mxnet/dist/2020-01-01/dist/?region=us-west-2=overview
> > > > > >
> > > > > > You will have to log into your AWS account to access it however
> > > (which
> > > > > > means you’ll need an AWS account).
> > > > > >
> > > > > > It looks like only the following flavors are available for
> > > 2020-01-01:
> > > > > > mxnet
> > > > > > mxnet-cu92
> > > > > > mxnet-cu92mkl
> > > > > > mxnet-mkl
> > > > > >
> > > > > > Sam
> > > > > >
> > > > > > On Jan 4, 2020, at 9:06 PM, Haibin Lin  wrote:
> > > > > >
> > > > > > I was trying the nightly builds, but none of them is available:
> > > > > >
> > > > > > pip3 install
> > > > > >
> > > > > >
> > >
> > https://apache-mxnet.s3-us-west-2.amazonaws.com/dist/2020-01-01/dist/mxnet_cu100-1.6.0b20200101-py2.py3-none-manylinux1_x86_64.whl
> > > > > > --user
> > > > > > <
> > > > > >
> > >
> > https://apache-mxnet.s3-us-west-2.amazonaws.com/dist/2020-01-01/dist/mxnet_cu100-1.6.0b20200101-py2.py3-none-manylinux1_x86_64.whl--user
> > > > > > >
> > > > > > pip3 install
> > > > > >
> > > > > >
> > >
> >