using conan to manage Apache Incubator MXNet project dependencies

2018-11-25 Thread Konstantin Ivlev
hello,

this email is related to the following PR and JIRA ticket:
- [MXNET-1229] use OpenBLAS, lapack & OpenCV from conan

- use conan to manage project dependencies


conan is an open-source package manager for C++
projects. it allows managing project dependencies in a transparent and
declarative manner.

currently, the apache incubator-mxnet project uses several different ways
to manage its dependencies:

- download GitHub archives during the build
- OpenBLAS 
- OpenCV 
- conda  (alternative way to GitHub archives)
- download from CMake
- Intel Math Kernel Library  (MKL)
- Git submodules
- cub 
- dlpack 
- dmlc-core 
- googletest 
- mkldnn 
- mshadow 
- onnx-tensorrt 
- openmp 
- ps-lite 
- tvm 

this is very heterogeneous and hard to manage/maintain: multiple different
commands are needed to install the dependencies, and there are multiple
places to look for dependency versions and their updates.

with conan, this may become much more straightforward, as dependencies will
be declared in a single place (the conanfile) and installed via a single
command (conan install).
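to make the proposed workflow concrete, here is a minimal sketch of the two-command setup (assuming the conanfile.py from the attached patch sits in the repository root, and conan 1.x CLI syntax):

```shell
# hypothetical workflow sketch; package names/versions come from the
# conanfile.py in the attached patch
mkdir -p build && cd build

# resolve and install OpenBLAS, OpenCV and lapack from conan remotes,
# building from source where no prebuilt binary exists; this generates
# conanbuildinfo.cmake for the CMake integration shown in the patch
conan install .. --build=missing

# a plain CMake build then picks up conanbuildinfo.cmake automatically
cmake ..
cmake --build .
```

the CMakeLists.txt change in the patch makes this opt-in: if conanbuildinfo.cmake is absent, the build falls back to the existing dependency mechanisms.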

as the project is very complex and has lots of dependencies, for the first
prototype I've used only a few of the dependencies from conan: OpenCV,
OpenBLAS and lapack.
others may then be added one by one, but they first have to be packaged
(not all of them are packaged yet; e.g. GoogleTest is available, while MKL
is not).

I attach a patch which adds initial conan support as a proof of concept.
I also attach two simple build scripts which I've used for testing (one for
Windows, one for Linux / Mac OS X). Google Mail blocks .sh and .cmd extensions,
so you'll need to rename the files.
let me know if you have any further questions.

yours sincerely, Konstantin
From 0449ef0bc521a3fba09f9042c1b60fd171a73d60 Mon Sep 17 00:00:00 2001
From: SSE4 
Date: Sun, 25 Nov 2018 03:04:36 +0700
Subject: [PATCH 1/2] - use OpenBLAS, lapack & OpenCV from conan

Signed-off-by: SSE4 
---
 CMakeLists.txt   |  6 ++
 cmake/Modules/FindOpenBLAS.cmake |  7 +++
 conanfile.py | 11 +++
 3 files changed, 24 insertions(+)
 create mode 100644 conanfile.py

diff --git a/CMakeLists.txt b/CMakeLists.txt
index 3b8bbd2e027..15ab2ceb2e9 100644
--- a/CMakeLists.txt
+++ b/CMakeLists.txt
@@ -2,6 +2,12 @@ cmake_minimum_required(VERSION 3.0.2)
 
 project(mxnet C CXX)
 
+if(EXISTS ${CMAKE_CURRENT_BINARY_DIR}/conanbuildinfo.cmake)
+  include(${CMAKE_CURRENT_BINARY_DIR}/conanbuildinfo.cmake)
+  conan_basic_setup(TARGETS)
+  message(STATUS "using conan")
+endif()
+
 if(EXISTS ${CMAKE_CURRENT_SOURCE_DIR}/build/private/local_config.cmake)
   include(${CMAKE_CURRENT_SOURCE_DIR}/build/private/local_config.cmake)
 endif()
diff --git a/cmake/Modules/FindOpenBLAS.cmake b/cmake/Modules/FindOpenBLAS.cmake
index a3a79caae46..cdbb4d38e32 100644
--- a/cmake/Modules/FindOpenBLAS.cmake
+++ b/cmake/Modules/FindOpenBLAS.cmake
@@ -15,6 +15,13 @@
 # specific language governing permissions and limitations
 # under the License.
 
+if(TARGET CONAN_PKG::openblas)
+  set(OpenBLAS_FOUND ON)
+  set(OpenBLAS_LIB CONAN_PKG::openblas CONAN_PKG::lapack)
+  set(OpenBLAS_INCLUDE_DIR ${CONAN_INCLUDE_DIRS_OPENBLAS})
+  return()
+endif()
+
 file(TO_CMAKE_PATH "$ENV{OpenBLAS_HOME}" OpenBLAS_HOME)
 file(TO_CMAKE_PATH "$ENV{OpenBLAS}" OpenBLAS_DIR)
 
diff --git a/conanfile.py b/conanfile.py
new file mode 100644
index 000..4c7b4a04b94
--- /dev/null
+++ b/conanfile.py
@@ -0,0 +1,11 @@
+from conans import ConanFile
+
+class IncubatorMXNetConan(ConanFile):
+    settings = "os", "compiler", "build_type", "arch"
+    requires = "openblas/0.2.20@conan/stable", "opencv/3.4.3@conan/stable", "lapack/3.7.1@conan/stable"
+    generators = ["cmake"]
+
+    def configure(self):
+        if self.settings.compiler == "Visual Studio":
+            self.options["lapack"].visual_studio = True
+            self.options["lapack"].shared = True

From ca1f60695bcb5ef94bbbaadce475abe43d9a853f Mon Sep 17 00:00:00 2001
From: SSE4 
Date: Sun, 25 Nov 2018 17:32:11 +0700
Subject: [PATCH 2/2] - add license to the 

Re: CI impaired

2018-11-25 Thread kellen sunderland
Sorry, [1] meant to reference
https://issues.jenkins-ci.org/browse/JENKINS-37984 .

On Sun, Nov 25, 2018 at 5:41 PM kellen sunderland <
kellen.sunderl...@gmail.com> wrote:

> Marco and I ran into another urgent issue over the weekend that was
> causing builds to fail.  This issue was unrelated to any feature
> development work, or other CI fixes applied recently, but it did require
> quite a bit of work from Marco (and a little from me) to fix.
>
> We spent enough time on the problem that it caused us to take a step back
> and consider how we could both fix issues in CI and support the 1.4 release
> with the least impact possible on MXNet devs.  Marco had planned to make a
> significant change to the CI to fix a long-standing Jenkins error [1], but
> we feel that most developers would prioritize having a stable build
> environment for the next few weeks over having this fix in place.
>
> To properly introduce a new CI system the intent was to do a gradual
> blue/green roll out of the fix.  To manage this rollout would have taken
> operational effort and double compute load as we run systems in parallel.
> This risks outages due to scaling limits, and we’d rather make this change
> during a period of low-developer activity, i.e. shortly after the 1.4
> release.
>
> This means that from now until the 1.4 release, in order to reduce
> complexity MXNet developers should only see a single Jenkins verification
> check, and a single Travis check.
>
>


Re: CI impaired

2018-11-25 Thread kellen sunderland
Marco and I ran into another urgent issue over the weekend that was causing
builds to fail.  This issue was unrelated to any feature development work,
or other CI fixes applied recently, but it did require quite a bit of work
from Marco (and a little from me) to fix.

We spent enough time on the problem that it caused us to take a step back
and consider how we could both fix issues in CI and support the 1.4 release
with the least impact possible on MXNet devs.  Marco had planned to make a
significant change to the CI to fix a long-standing Jenkins error [1], but
we feel that most developers would prioritize having a stable build
environment for the next few weeks over having this fix in place.

To properly introduce a new CI system the intent was to do a gradual
blue/green roll out of the fix.  To manage this rollout would have taken
operational effort and double compute load as we run systems in parallel.
This risks outages due to scaling limits, and we’d rather make this change
during a period of low-developer activity, i.e. shortly after the 1.4
release.

This means that from now until the 1.4 release, in order to reduce
complexity MXNet developers should only see a single Jenkins verification
check, and a single Travis check.


Re: Include MKLDNN into default mxnet pip package

2018-11-25 Thread Lv, Tao A
Hi Steffen, 

I think all the commits on the MKL-DNN master branch are well tested by the MKL-DNN 
development team. If we really want to have a release commit in the coming 1.4 
mxnet release, my suggestion is the MKL-DNN 0.17 release.

Thank you,
Tao 

Sent from my iPhone
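for reference, pinning the MKL-DNN submodule to a release tag rather than a tip-of-master commit would be a small mechanical change; a hedged sketch (the submodule path 3rdparty/mkldnn and the tag name v0.17 are assumptions, not confirmed from the repository):

```shell
# move the MKL-DNN submodule from an arbitrary master commit to a release tag
cd 3rdparty/mkldnn
git fetch --tags
git checkout v0.17   # assumed release tag name
cd ../..

# record the new pinned commit in the superproject
git add 3rdparty/mkldnn
git commit -m "Pin MKL-DNN submodule to the v0.17 release"
```

the trade-off discussed in this thread remains: a pinned release gives reproducibility and tested behavior, at the cost of waiting for the next release to pick up fixes landing on master.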

> On Nov 26, 2018, at 8:09 AM, Steffen Rochel  wrote:
> 
> +1 to make MKL-DNN default.
> I'm tracking  https://github.com/apache/incubator-mxnet/issues/13369 as
> open issue to be addressed for 1.4.0
> I do agree that we should move to a model to include released dependencies
> instead of just taking bleeding edge snapshots.
> However, speed of development is important as well.
> As a compromise for 1.4.0 release with MKL-DNN: can the MKL-DNN development
> team provide us with a well tested tag/commit id to include in 1.4.0
> release?
> Steffen
> 
>> On Wed, Nov 21, 2018 at 11:42 PM Lv, Tao A  wrote:
>> 
>> Thanks for the information, Kellen and Naveen.
>> 
>> Better than onnx-tensorrt, MKL-DNN has already provided versioning and
>> release tags. My concern is that as MKL-DNN is still under intensive
>> development, if it has a new feature or bug fix on its master branch, do we
>> really want to wait for next release to get it supported in MXNet?
>> 
>> Take the LSTM regression as an example, probably MKL-DNN will give a fix
>> or improvement on its master branch soon, do we need to wait for 0.18
>> release to get it fixed for mxnet user? AFAIK, tensorflow is also using
>> normal commit id, not release, as the dependency for MKL-DNN.
>> 
>> Regarding the LSTM regression, we are using internal JIRA tickets rather
>> than github issues to track the defects of MKL-DNN. But I agree with you,
>> we need update the progress of it in Alex's issue.
>> 
>> Thanks,
>> -tao
>> 
>> -Original Message-
>> From: kellen sunderland [mailto:kellen.sunderl...@gmail.com]
>> Sent: Thursday, November 22, 2018 10:55 AM
>> To: dev@mxnet.incubator.apache.org
>> Subject: Re: Include MKLDNN into default mxnet pip package
>> 
>> Agree with your point about other repos also not being based on versioning
>> Tao.  I would point out that I've given some that I've worked with similar
>> feedback: https://github.com/onnx/onnx-tensorrt/issues/68
>> 
>>> On Wed, Nov 21, 2018 at 6:48 PM Naveen Swamy  wrote:
>>> 
>>> Tao,
>>> 
>>> You are right there are many submodules in 3rd party. We have to start
>>> somewhere and I believe this one is a good candidate to start with.
>>> This is not to cater to release of MXNet or to tie them with the
>>> releases of the submodules but instead to pick only stable releases
>>> and not to pick up bleeding edge commits from the tip of the master,
>>> this gives us confidence in the submodule that MXNet users are
>>> depending on that especially if we make MKLDNN the default.
>>> 
>>> Good to know it is known already as a regression.Alex has created this
>>> issue https://github.com/apache/incubator-mxnet/issues/13369, please
>>> add details and link the corresponding issue in MKLDNN(I couldn't find).
>>> 
>>> -Naveen
>>> 
 On Wed, Nov 21, 2018 at 6:04 PM Lv, Tao A  wrote:
 
 Here are my answers for the questions from Kellen and Naveen about
 MKL-DNN. It doesn't mean that I'm supportive for making MKL-DNN
 default here.
 
 @Kellen,
 
 FYI, here is a list for those platforms which are officially
 supported by MKL-DNN.
 https://github.com/intel/mkl-dnn#system-requirements
 
 Most of computation intensive kernels in MKL-DNN are JITed. So they
 are supposed to generate code according to the platform during
 runtime. For non-JIT code in MKL-DNN, same as other code in MXNet,
 it will generate instructions according to the options/flags of
 compiler. We can set -DARCH_OPT_FLAGS when build MKL-DNN to avoid
 optimization for compiling machine. That's exactly what we are doing
>> for MKL-DNN build in MXNet.
>>> Even
 without MKL-DNN, I noticed there were issues about illegal
 instructions
>>> of
 MXNet when users import the pip package on a lower end machine which
 probably only supports SSE.
 
 @Naveen,
 
 The LSTM issue has already been identified as a regression from the
>>> recent
 version of MKL-DNN. Hopefully it will be fixed soon with a new
 update of MKL-DNN.
 
 MXNet has many submodule dependencies under the 3rd party folder.
 Seems
>>> we
 don't require release versions for most of these dependencies. The
>>> release
 period of MKL-DNN and MXNet are not matched very well. I think it
 would
>>> be
 a risk for MXNet release if it hardly depends on the release of a
 submodule, no need to say depends on the releases of all submodules.
 
 -tao
 
 -Original Message-
 From: Naveen Swamy [mailto:mnnav...@gmail.com]
 Sent: Thursday, November 22, 2018 9:08 AM
 To: dev@mxnet.incubator.apache.org
 Cc: d...@mxnet.apache.org
 Subject: Re: Include MKLDNN into default 

Re: Include MKLDNN into default mxnet pip package

2018-11-25 Thread Steffen Rochel
+1 to make MKL-DNN default.
I'm tracking  https://github.com/apache/incubator-mxnet/issues/13369 as
open issue to be addressed for 1.4.0
I do agree that we should move to a model to include released dependencies
instead of just taking bleeding edge snapshots.
However, speed of development is important as well.
As a compromise for 1.4.0 release with MKL-DNN: can the MKL-DNN development
team provide us with a well tested tag/commit id to include in 1.4.0
release?
Steffen

On Wed, Nov 21, 2018 at 11:42 PM Lv, Tao A  wrote:

> Thanks for the information, Kellen and Naveen.
>
> Better than onnx-tensorrt, MKL-DNN has already provided versioning and
> release tags. My concern is that as MKL-DNN is still under intensive
> development, if it has a new feature or bug fix on its master branch, do we
> really want to wait for next release to get it supported in MXNet?
>
> Take the LSTM regression as an example, probably MKL-DNN will give a fix
> or improvement on its master branch soon, do we need to wait for 0.18
> release to get it fixed for mxnet user? AFAIK, tensorflow is also using
> normal commit id, not release, as the dependency for MKL-DNN.
>
> Regarding the LSTM regression, we are using internal JIRA tickets rather
> than github issues to track the defects of MKL-DNN. But I agree with you,
> we need update the progress of it in Alex's issue.
>
> Thanks,
> -tao
>
> -Original Message-
> From: kellen sunderland [mailto:kellen.sunderl...@gmail.com]
> Sent: Thursday, November 22, 2018 10:55 AM
> To: dev@mxnet.incubator.apache.org
> Subject: Re: Include MKLDNN into default mxnet pip package
>
> Agree with your point about other repos also not being based on versioning
> Tao.  I would point out that I've given some that I've worked with similar
> feedback: https://github.com/onnx/onnx-tensorrt/issues/68
>
> On Wed, Nov 21, 2018 at 6:48 PM Naveen Swamy  wrote:
>
> > Tao,
> >
> > You are right there are many submodules in 3rd party. We have to start
> > somewhere and I believe this one is a good candidate to start with.
> > This is not to cater to release of MXNet or to tie them with the
> > releases of the submodules but instead to pick only stable releases
> > and not to pick up bleeding edge commits from the tip of the master,
> > this gives us confidence in the submodule that MXNet users are
> > depending on that especially if we make MKLDNN the default.
> >
> > Good to know it is known already as a regression.Alex has created this
> > issue https://github.com/apache/incubator-mxnet/issues/13369, please
> > add details and link the corresponding issue in MKLDNN(I couldn't find).
> >
> > -Naveen
> >
> > On Wed, Nov 21, 2018 at 6:04 PM Lv, Tao A  wrote:
> >
> > > Here are my answers for the questions from Kellen and Naveen about
> > > MKL-DNN. It doesn't mean that I'm supportive for making MKL-DNN
> > > default here.
> > >
> > > @Kellen,
> > >
> > > FYI, here is a list for those platforms which are officially
> > > supported by MKL-DNN.
> > > https://github.com/intel/mkl-dnn#system-requirements
> > >
> > > Most of computation intensive kernels in MKL-DNN are JITed. So they
> > > are supposed to generate code according to the platform during
> > > runtime. For non-JIT code in MKL-DNN, same as other code in MXNet,
> > > it will generate instructions according to the options/flags of
> > > compiler. We can set -DARCH_OPT_FLAGS when build MKL-DNN to avoid
> > > optimization for compiling machine. That's exactly what we are doing
> for MKL-DNN build in MXNet.
> > Even
> > > without MKL-DNN, I noticed there were issues about illegal
> > > instructions
> > of
> > > MXNet when users import the pip package on a lower end machine which
> > > probably only supports SSE.
> > >
> > > @Naveen,
> > >
> > > The LSTM issue has already been identified as a regression from the
> > recent
> > > version of MKL-DNN. Hopefully it will be fixed soon with a new
> > > update of MKL-DNN.
> > >
> > > MXNet has many submodule dependencies under the 3rd party folder.
> > > Seems
> > we
> > > don't require release versions for most of these dependencies. The
> > release
> > > period of MKL-DNN and MXNet are not matched very well. I think it
> > > would
> > be
> > > a risk for MXNet release if it hardly depends on the release of a
> > > submodule, no need to say depends on the releases of all submodules.
> > >
> > > -tao
> > >
> > > -Original Message-
> > > From: Naveen Swamy [mailto:mnnav...@gmail.com]
> > > Sent: Thursday, November 22, 2018 9:08 AM
> > > To: dev@mxnet.incubator.apache.org
> > > Cc: d...@mxnet.apache.org
> > > Subject: Re: Include MKLDNN into default mxnet pip package
> > >
> > > Hi Alex,
> > >
> > > Thanks for promptly running the numbers on AMD and reporting here.
> > >
> > > Can you please update the AMD numbers here for posterity
> > >
> > https://cwiki.apache.org/confluence/display/MXNET/MXNet+with+Intel+MKL
> > -DNN+-+Performance+Benchmarking
> > > ?
> > >
> > > are there any outstanding issues when MKLDNN 

Re: CI impaired

2018-11-25 Thread Steffen Rochel
Hi Marco - suggest to retrigger PRs, if needed in stages:
- pr-awaiting-merge
- pr-awaiting-review
that would cover 78 PRs. In any case I would exclude pr-work-in-progress.

Steffen

On Sat, Nov 24, 2018 at 9:11 PM kellen sunderland <
kellen.sunderl...@gmail.com> wrote:

> Hey Marco, I'm still having quite a few issues passing PRs.  Would you be
> able to at least test a handful of PRs and make sure they pass/fail tests
> as you expect?
>
> On Sat, Nov 24, 2018, 7:01 PM Marco de Abreu
> 
> > Hello Steffen,
> >
> > thank you for bringing up these PRs.
> >
> > I had to abort the builds during the outage which means that the jobs
> > didn't finish and not even the status propagation could have finished
> > (hence they show pending instead of failure or aborted).
> >
> > Recently, we merged a PR that adds utility slaves. This will ensure that
> > status updates will always be posted, no matter whether the main queue
> > hangs or not. This means that the status would then be properly reflected
> > and there should be no hanging pending runs.
> >
> > I could retrigger all PRs to kick off another round of validation, but
> this
> > would result in 240 jobs (2 main pipelines times 120 open PRs) to run.
> > Since we are currently in the pre-release stage, I wanted to avoid
> putting
> > the system under such heavy load.
> >
> > Instead, I'd kindly like to request the PR creators to make a new commit
> to
> > trigger the pipelines. In order to merge a PR, only PR-merge has to pass
> > and I tried to retrigger all PRs that have been aborted during the
> outage.
> > It might have been possible that I missed a few.
> >
> > Since it's still the weekend and there's not much going on, I can use the
> > time to trigger all PRs. Please advise whether you think I should move
> > forward (I expect the CI to finish all PRs within 6-10 hours) or if it's
> > fine to ask people to retrigger themselves.
> >
> > Please excuse the caused inconveniences.
> >
> > Best regards,
> > Marco
> >
> >
> > Am So., 25. Nov. 2018, 03:48 hat Steffen Rochel  >
> > geschrieben:
> >
> > > Thanks Marco for the updates and resolving the issues.
> > > However, I do see a number of PR waiting to be merged with inconsistent
> > PR
> > > validation status check.
> > > E.g. https://github.com/apache/incubator-mxnet/pull/13041 shows 9
> > pending
> > > checks being queued. However, when you look at the details, either the
> > > checks have passed or failed (centos-cpu, edge, unix-cpu, window-cpu,
> > > windows-gpu failed, required pr-merge which includes edge, gpu tests
> > > passed).
> > > Similar also for other PR with label pr-awaiting-merge (
> > >
> > >
> >
> https://github.com/apache/incubator-mxnet/pulls?utf8=%E2%9C%93&q=is%3Apr+is%3Aopen+label%3Apr-awaiting-merge
> > > )
> > > Please advice on resolution.
> > >
> > > Regards,
> > > Steffen
> > >
> > > On Thu, Nov 22, 2018 at 12:09 PM Marco de Abreu
> > >  wrote:
> > >
> > > > Thanks everybody, I really appreciate it!
> > > >
> > > > Today was a good day, there were no incidents and everything appears
> to
> > > be
> > > > stable. In the meantime I did a deep dive on why we had such a
> > > significant
> > > > performance decrease in our compilation jobs - which then
> clogged
> > up
> > > > the queue and resulted in 1000 jobs waiting to be scheduled.
> > > >
> > > > The reason was the way how we use ccache to speed up our compilation
> > > jobs.
> > > > Usually, this yields us a huge performance improvement (CPU openblas,
> > for
> > > > example, goes from 30 minutes down to ~3min, ARMv7 from 30 minutes
> down
> > > to
> > > > ~1.5min, etc.). Unfortunately in this case, ccache was our limiting
> > > factor.
> > > > Here's some background about how we operate our cache:
> > > >
> > > > We use EFS to have a distributed ccache between all of our
> > > > unrestricted-prod-slaves. EFS is classified for almost unlimited
> > > > scalability (being consumed by thousands of instances in parallel
> [1])
> > > with
> > > > a theoretical throughput of over 10Gbps. One thing I didn't know
> when I
> > > > designed this approach was the method how throughput is being
> granted.
> > > > Similar to T2-CPU-Credits, EFS uses BurstCredits to allow you higher
> > > > throughput (default is 50MiB/s) [2]. Due to the high load, we
> consumed
> > > all
> > > > of our credits - here's a very interesting graph: [3].
> > > >
> > > > To avoid similar incidents in future, I have taken the following
> > actions:
> > > > 1. I switched EFS from burst-mode to provisioned throughput with
> > 300MB/s
> > > > (in the graph at [3] you can see how our IO immediately increases -
> and
> > > > thus our CI gets faster - as soon as I added provisioned throughput).
> > > > 2. I created internal follow-up tickets to add monitoring and
> automated
> > > > actions.
> > > >
> > > > First, we should be notified if we are running low on credits to
> > kick-off
> > > > an investigation. Second (nice to have), we could have a
> > lambda-function

Re: [Announce] Upcoming Apache MXNet (incubating) 1.4.0 release

2018-11-25 Thread Steffen Rochel
Dear MXNet community,

I will be the release manager for the upcoming Apache MXNet 1.4.0 release.
Sergey Kolychev will be co-managing the release and providing help from the
committers' side.
A release candidate will be cut on November 29, 2018 and voting will start
December 7, 2018. Release notes have been drafted here [1]. If you have any
additional features in progress and would like to include them in this
release, please ensure they have been merged by November 27, 2018. Release
schedule is available here [2].

Feel free to add any other comments/suggestions. Please help to review and
merge outstanding PRs and resolve issues impacting the quality of the
1.4.0 release.

Regards,

Steffen

[1]
https://cwiki.apache.org/confluence/display/MXNET/Apache+MXNet+%28incubating%29+1.4.0+Release+Notes

[2] 
https://cwiki.apache.org/confluence/display/MXNET/Apache+MXNet+%28incubating%29+1.4.0+Release+Plan+and+Status




On Tue, Nov 20, 2018 at 7:15 PM kellen sunderland <
kellen.sunderl...@gmail.com> wrote:

> Spoke too soon[1], looks like others have been adding Turing support as
> well (thanks to those helping with this).  I believe there's still a few
> changes we'd have to make to claim support though (mshadow CMake changes,
> PyPi package creation tweaks).
>
> 1:
>
> https://github.com/apache/incubator-mxnet/commit/2c3357443ec3d49a11e93c89f278264ce10c2f08
>
> On Tue, Nov 20, 2018 at 7:00 PM kellen sunderland <
> kellen.sunderl...@gmail.com> wrote:
>
> > Hey Steffen, I'd like to be able to merge this PR for version 1.4:
> > https://github.com/apache/incubator-mxnet/pull/13310 . It fixes a
> > regression in master which causes incorrect feature vectors to be output
> > when using the TensorRT feature.  (Thanks to Nathalie for helping me
> track
> > down the root cause of the issue).   I'm currently blocked on a CI issue
> I
> > haven't seen before, but hope to have it resolved by EOW.
> >
> > One call-out I would make is that we currently don't support Turing
> > architecture (sm_75).  I've been slowly trying to add support, but I
> don't
> > think I'd have capacity to do this done by EOW.  Does anyone feel
> strongly
> > we need this in the 1.4 release?  From my perspective this will already
> be
> > a strong release without it.
> >
> > On Tue, Nov 20, 2018 at 6:42 PM Steffen Rochel 
> > wrote:
> >
> >> Thanks Patrick, lets target to get the PR's merged this week.
> >>
> >> Call for contributions from the community: Right now we have 10 PR
> >> awaiting
> >> merge
> >> <
> >>
> https://github.com/apache/incubator-mxnet/pulls?utf8=%E2%9C%93&q=is%3Apr+is%3Aopen+label%3Apr-awaiting-merge+
> >> >
> >> and
> >> we have 61 open PR awaiting review.
> >> <
> >>
> https://github.com/apache/incubator-mxnet/pulls?utf8=%E2%9C%93&q=is%3Apr+is%3Aopen+label%3Apr-awaiting-review
> >> >
> >> I would appreciate if you all can help to review the open PR and the
> >> committers can drive the merge before code freeze for 1.4.0.
> >>
> >> The contributors on the Java API are making progress, but not all
> >> performance issues are resolved. With some luck it should be possible to
> >> code freeze towards end of this week.
> >>
> >> Are there other critical features/bugs/PR you think need to be included
> in
> >> 1.4.0? If so, please communicate as soon as possible.
> >>
> >> Regards,
> >> Steffen
> >>
> >> On Mon, Nov 19, 2018 at 8:26 PM Zhao, Patric 
> >> wrote:
> >>
> >> > Thanks, Steffen. I think there is NO open issue to block the MKLDNN to
> >> GA
> >> > now.
> >> >
> >> > BTW, several quantization related PRs (#13297,#13260) are under the
> >> review
> >> > and I think it can be merged in this week.
> >> >
> >> > Thanks,
> >> >
> >> > --Patric
> >> >
> >> >
> >> > > -Original Message-
> >> > > From: Steffen Rochel [mailto:steffenroc...@gmail.com]
> >> > > Sent: Tuesday, November 20, 2018 2:57 AM
> >> > > To: dev@mxnet.incubator.apache.org
> >> > > Subject: Re: [Announce] Upcoming Apache MXNet (incubating) 1.4.0
> >> release
> >> > >
> >> > > On Friday the contributors working on Java API discovered a
> potential
> >> > > performance problem with inference using Java API vs. Python.
> >> > Investigation
> >> > > is ongoing.
> >> > > As the Java API is one of the main features for the upcoming
> release,
> >> I
> >> > > suggest to post-pone the code freeze towards end of this week.
> >> > >
> >> > > Please provide feedback and concern about the change in dates for
> code
> >> > > freeze and 1.4.0 release. I will provide updates on progress
> resolving
> >> > the
> >> > > potential performance problem.
> >> > >
> >> > > Patrick - do you think it is possible to resolve the remaining
> issues
> >> on
> >> > MKL-
> >> > > DNN this week, so we can consider GA for MKL-DNN with 1.4.0?
> >> > >
> >> > > Regards,
> >> > > Steffen
> >> > >
> >> > > On Thu, Nov 15, 2018 at 5:26 AM Anton Chernov 
> >> > > wrote:
> >> > >
> >> > > > I'd like to remind everyone that 'code freeze' would mean cutting
> a
> >> > > > v1.4.x release branch and 

Re: Scala Symbol API Question

2018-11-25 Thread Qing Lan
Hi Sam, 

please join the MXNet Slack. We also have a #MXNet-Scala channel there where 
you can ask more questions about Scala/Java.

Apart from that, I have filed an issue for you:
https://github.com/apache/incubator-mxnet/issues/13403

Thanks,
Qing

On 11/23/18, 11:22 PM, "Sam Bean"  wrote:

Hello, I have some questions about the Scala API for the Symbol library.

I'm trying to figure out how to do something like this
https://github.com/ufoym/mxnet/blob/master/example/vae/VAE.py#L83, however
it seems the Scala Symbol API does not allow the mixing of symbols and
constants like the python library does.

It seems like if I want to use constants in my loss functions I'm going to
have to have a very large argsDict when I go to train since every variable
in the loss function definition will have to be symbolic. Is there a better
way to do this?

-- 
Sam Bean
*StockX*
*Tech Lead, Machine Learning and Personalization*
*––*
samb...@stockx.com
stockx.com