[GitHub] [incubator-singa] chrishkchris commented on a change in pull request #468: Distributted module

2019-07-31 Thread GitBox
chrishkchris commented on a change in pull request #468: Distributted module
URL: https://github.com/apache/incubator-singa/pull/468#discussion_r309496476
 
 

 ##
 File path: src/CMakeLists.txt
 ##
 @@ -36,6 +36,9 @@ AUX_SOURCE_DIRECTORY(core/scheduler core_source)
 AUX_SOURCE_DIRECTORY(core/tensor core_source)
 LIST(APPEND singa_sources ${core_source})
 
 
 Review comment:
   The build log is here:
   
   ubuntu@ip-172-31-18-113:~/incubator-singa/build$ rm -rf *
   ubuntu@ip-172-31-18-113:~/incubator-singa/build$ cmake -D 
CMAKE_PREFIX_PATH="/usr/local/cuda/lib64;/usr/local/cuda/" -DENABLE_TEST=OFF 
-DUSE_CUDA=ON -DUSE_PYTHON3=ON -DUSE_MKLDNN=ON -DUSE_MODULES=OFF -DUSE_DIST=ON 
..
   -- The C compiler identification is GNU 5.4.0
   -- The CXX compiler identification is GNU 5.4.0
   -- Check for working C compiler: /usr/bin/cc
   -- Check for working C compiler: /usr/bin/cc -- works
   -- Detecting C compiler ABI info
   -- Detecting C compiler ABI info - done
   -- Detecting C compile features
   -- Detecting C compile features - done
   -- Check for working CXX compiler: /usr/bin/c++
   -- Check for working CXX compiler: /usr/bin/c++ -- works
   -- Detecting CXX compiler ABI info
   -- Detecting CXX compiler ABI info - done
   -- Detecting CXX compile features
   -- Detecting CXX compile features - done
   -- Looking for pthread.h
   -- Looking for pthread.h - found
   -- Looking for pthread_create
   -- Looking for pthread_create - not found
   -- Looking for pthread_create in pthreads
   -- Looking for pthread_create in pthreads - not found
   -- Looking for pthread_create in pthread
   -- Looking for pthread_create in pthread - found
   -- Found Threads: TRUE
   -- Found Protobuf: /usr/local/lib/libprotobuf.so;-lpthread (found suitable 
version "3.0.0", minimum required is "3.0")
   -- Found CBLAS: /usr/local/include
   -- Found GLOG: /usr/include
   -- Found cuda_v10.0
   -- Found CUDNN: /usr/local/cuda/include
   -- Found Cudnn_7401 at /usr/local/cuda/include 
/usr/local/cuda/lib64/libcudnn.so
   -- Found PythonInterp: /usr/bin/python3 (found suitable version "3.5.2", 
minimum required is "3")
   -- Found PythonLibs: /usr/lib/x86_64-linux-gnu/libpython3.5m.so (found 
suitable version "3.5.2", minimum required is "3")
   -- Found SWIG: /usr/local/bin/swig (found suitable version "3.0.12", minimum 
required is "3.0.10")
   -- Found MKLDNN at /usr/local/include
   -- Found MPI at /home/ubuntu/mpich-3.3/build/include
   -- Found MPI lib at /home/ubuntu/mpich-3.3/build/lib/libmpi.so
   -- Found all lib at 
/usr/local/lib/libprotobuf.so;/usr/local/lib/libopenblas.so;/usr/lib/x86_64-linux-gnu/libglog.so;/usr/local/cuda/lib64/libcudnn.so;/usr/local/cuda/lib64/libcudart.so;/usr/local/cuda/lib64/libcurand.so;/usr/local/cuda/lib64/libcublas.so;/home/ubuntu/incubator-singa/build/lib/libcnmem.a;/usr/local/lib/libmkldnn.so;/home/ubuntu/mpich-3.3/build/lib/libmpi.so;/home/ubuntu/mpich-3.3/build/lib/libmpicxx.so
   -- Found NCCL at /usr/local/cuda/include
   -- Found NCCL lib at /usr/local/cuda/lib/libnccl.so
   -- Configuring done
   -- Generating done
   -- Build files have been written to: /home/ubuntu/incubator-singa/build
   ubuntu@ip-172-31-18-113:~/incubator-singa/build$ make -j2
   Scanning dependencies of target cnmem
   Scanning dependencies of target copy_protobuf
   [  1%] Creating directories for 'cnmem'
   [  2%] Running C++ protocol buffer compiler on 
/home/ubuntu/incubator-singa/src/proto/model.proto
   [libprotobuf WARNING google/protobuf/compiler/parser.cc:547] No syntax 
specified for the proto file: model.proto. Please use 'syntax = "proto2";' or 
'syntax = "proto3";' to specify a syntax version. (Defaulted to proto2 syntax.)
   [  3%] Performing download step (git clone) for 'cnmem'
   Cloning into 'cnmem'...
   [  4%] Running C++ protocol buffer compiler on 
/home/ubuntu/incubator-singa/src/proto/caffe.proto
   [  5%] Running C++ protocol buffer compiler on 
/home/ubuntu/incubator-singa/src/proto/core.proto
   [libprotobuf WARNING google/protobuf/compiler/parser.cc:547] No syntax 
specified for the proto file: core.proto. Please use 'syntax = "proto2";' or 
'syntax = "proto3";' to specify a syntax version. (Defaulted to proto2 syntax.)
   [  6%] Running C++ protocol buffer compiler on 
/home/ubuntu/incubator-singa/src/proto/io.proto
   [libprotobuf WARNING google/protobuf/compiler/parser.cc:547] No syntax 
specified for the proto file: io.proto. Please use 'syntax = "proto2";' or 
'syntax = "proto3";' to specify a syntax version. (Defaulted to proto2 syntax.)
   [  7%] Copying Protobuf headers
   [  7%] Built target copy_protobuf
   [  8%] Building NVCC (Device) object 
src/CMakeFiles/cuda_compile_1.dir/core/tensor/cuda_compile_1_generated_math_kernel.cu.o
   Scanning dependencies of target singa_objects
   [  9%] Building CXX object src/CMakeFiles/singa_objects.dir/caffe.pb.cc.o
   Already on 'master'
   Your branch is up-to-date with 'origin/master'.
   [ 10%] No patch 

[GitHub] [incubator-singa] chrishkchris commented on a change in pull request #468: Distributted module

2019-07-31 Thread GitBox
chrishkchris commented on a change in pull request #468: Distributted module
URL: https://github.com/apache/incubator-singa/pull/468#discussion_r309329911
 
 

 ##
 File path: src/CMakeLists.txt
 ##
 @@ -36,6 +36,9 @@ AUX_SOURCE_DIRECTORY(core/scheduler core_source)
 AUX_SOURCE_DIRECTORY(core/tensor core_source)
 LIST(APPEND singa_sources ${core_source})
 
 
 Review comment:
   I updated also some files to include USE_DIST, see the following grep result 
on USE_DIST:
   
   ubuntu@ip-172-31-18-113:~/incubator-singa$ git grep USE_DIST
   CMakeLists.txt:OPTION(USE_DIST "Use nccl distributed module" OFF)
   cmake/Dependencies.cmake:IF(USE_DIST)
   cmake/Templates/singa_config.h.in:#cmakedefine USE_DIST
   include/singa/dist/communicator.h:#ifdef USE_DIST
   include/singa/dist/communicator.h:#endif // USE_DIST
   src/CMakeLists.txt:IF (USE_DIST)
   src/CMakeLists.txt:ENDIF (USE_DIST)
   src/api/config.i:#define USE_DIST 0
   src/api/config.i.in:#cmakedefine01 USE_DIST
   src/api/dist_communicator.i:#if USE_DIST
   src/api/dist_communicator.i:#endif  // USE_DIST
   src/dist/communicator.cc:#ifdef USE_DIST
   src/dist/communicator.cc:#endif // USE_DIST
   
   Note that the default is OFF if we do not set -DUSE_DIST=ON
   
   The test was on version 1.2 although I set the displayed value in CMakeLists 
to be version 2.0. I will still need to test the dist module on singa version 
2.0 and add partitioning of dataset according to MPI rank, etc.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-singa] chrishkchris commented on a change in pull request #468: Distributted module

2019-07-31 Thread GitBox
chrishkchris commented on a change in pull request #468: Distributted module
URL: https://github.com/apache/incubator-singa/pull/468#discussion_r309329911
 
 

 ##
 File path: src/CMakeLists.txt
 ##
 @@ -36,6 +36,9 @@ AUX_SOURCE_DIRECTORY(core/scheduler core_source)
 AUX_SOURCE_DIRECTORY(core/tensor core_source)
 LIST(APPEND singa_sources ${core_source})
 
 
 Review comment:
   I also updated some files to include USE_DIST; see the following grep result for USE_DIST:
   
   ubuntu@ip-172-31-18-113:~/incubator-singa$ git grep USE_DIST
   CMakeLists.txt:OPTION(USE_DIST "Use nccl distributed module" OFF)
   cmake/Dependencies.cmake:IF(USE_DIST)
   cmake/Templates/singa_config.h.in:#cmakedefine USE_DIST
   include/singa/dist/communicator.h:#ifdef USE_DIST
   include/singa/dist/communicator.h:#endif // USE_DIST
   src/CMakeLists.txt:IF (USE_DIST)
   src/CMakeLists.txt:ENDIF (USE_DIST)
   src/api/config.i:#define USE_DIST 1
   src/api/config.i.in:#cmakedefine01 USE_DIST
   src/api/dist_communicator.i:#if USE_DIST
   src/api/dist_communicator.i:#endif  // USE_DIST
   src/dist/communicator.cc:#ifdef USE_DIST
   src/dist/communicator.cc:#endif // USE_DIST
   
   Note that the default is OFF if we do not set -DUSE_DIST=ON.
   
   The test was on version 1.2, although I set the displayed value in CMakeLists.txt to version 2.0. I still need to test the dist module on SINGA version 2.0 and add partitioning of the dataset according to MPI rank, etc.
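
   A minimal sketch of such per-rank partitioning (this is not SINGA API; the
   rank and world size here come from mpi4py, which is my assumption for the
   illustration):

   ```python
   import numpy as np
   from mpi4py import MPI

   comm = MPI.COMM_WORLD
   rank, world_size = comm.Get_rank(), comm.Get_size()

   data = np.arange(1000)          # stand-in for the full training set
   shard = data[rank::world_size]  # rank k takes samples k, k+P, k+2P, ...
   ```

   Each rank would then train only on its own shard, so no two workers see the
   same sample within an epoch.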




[GitHub] [incubator-singa] ShichengChen opened a new pull request #496: Mean

2019-07-31 Thread GitBox
ShichengChen opened a new pull request #496: Mean
URL: https://github.com/apache/incubator-singa/pull/496
 
 
   Implement ONNX Operators following 
https://github.com/onnx/onnx/blob/master/docs/Operators.md




[GitHub] [incubator-singa] chrishkchris commented on a change in pull request #468: Distributted module

2019-07-31 Thread GitBox
chrishkchris commented on a change in pull request #468: Distributted module
URL: https://github.com/apache/incubator-singa/pull/468#discussion_r309144703
 
 

 ##
 File path: src/CMakeLists.txt
 ##
 @@ -36,6 +36,9 @@ AUX_SOURCE_DIRECTORY(core/scheduler core_source)
 AUX_SOURCE_DIRECTORY(core/tensor core_source)
 LIST(APPEND singa_sources ${core_source})
 
 
 Review comment:
   Changed the files cmake/Dependencies.cmake and src/CMakeLists.txt.
   You can use cmake -DUSE_DIST=ON to turn on the distributed module.
   
   However, there are some bugs (mainly a segmentation fault) if I add the #ifdef USE_DIST guard to communicator.h and communicator.cc.
   I will update the other files as well (e.g. #cmakedefine and #if USE_DIST in many files) once I have removed the bug.




[GitHub] [incubator-singa] chrishkchris commented on a change in pull request #493: SINGA-473 Autograd Trigonometry: Backward Test

2019-07-31 Thread GitBox
chrishkchris commented on a change in pull request #493: SINGA-473 Autograd 
Trigonometry: Backward Test
URL: https://github.com/apache/incubator-singa/pull/493#discussion_r309137122
 
 

 ##
 File path: test/python/test_operation.py
 ##
 @@ -65,6 +65,17 @@ def prepare_inputs_targets_for_rnn_test():
    targets = [t0, t1, t2]
    return inputs, targets, h0

+def numpy_unary_ops_backward(func, x, dy, h=0.0005):
 
 Review comment:
   Changed the code to compute the gradient explicitly; the accuracy check is accurate to 5 decimal places.
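
   For reference, a minimal numpy sketch of this kind of check (the helper body
   is my assumption, matching only the signature shown in the diff above; the
   explicit gradient is illustrated with tanh):

   ```python
   import numpy as np

   def numpy_unary_ops_backward(func, x, dy, h=0.0005):
       # Central-difference estimate of d(func)/dx, chained with the
       # upstream gradient dy (elementwise op => diagonal Jacobian).
       return dy * (func(x + h) - func(x - h)) / (2.0 * h)

   x = np.linspace(-1.0, 1.0, 6)  # float64 keeps the difference quotient stable
   dy = np.ones_like(x)
   numerical = numpy_unary_ops_backward(np.tanh, x, dy)
   explicit = dy * (1.0 - np.tanh(x) ** 2)  # explicit tanh gradient
   np.testing.assert_array_almost_equal(numerical, explicit, decimal=5)
   ```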




[GitHub] [incubator-singa] pinpom commented on a change in pull request #494: SINGA-475 add SoftPlus operator

2019-07-31 Thread GitBox
pinpom commented on a change in pull request #494: SINGA-475 add SoftPlus 
operator
URL: https://github.com/apache/incubator-singa/pull/494#discussion_r309088705
 
 

 ##
 File path: test/python/test_operation.py
 ##
 @@ -610,6 +610,17 @@ def test_Atanh_gpu(self):
        np.testing.assert_array_almost_equal(tensor.to_numpy(result), XT, decimal=5)
        self.check_shape(dx.shape(), (3, 2))

+    def test_SoftPlus(self):
+        X = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0]).reshape(3, 2).astype(np.float32)
+        XT = np.log(np.exp(X) + 1)
+        x = tensor.from_numpy(X)
+        x.to_device(gpu_dev)
+
+        result = autograd.softplus(x)
+        dx = result.creator.backward(x.data)
 
 Review comment:
   Yes, just added the gradient test.
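
   A self-contained numpy sketch of what the gradient test verifies (softplus
   has the explicit gradient sigmoid(x); the names below are illustrative, not
   the test's actual code):

   ```python
   import numpy as np

   X = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0]).reshape(3, 2)
   Y = np.log(np.exp(X) + 1)      # forward: softplus(x) = log(1 + exp(x))
   DX = 1.0 / (1.0 + np.exp(-X))  # analytic gradient: sigmoid(x)

   # numerical check via central differences, accurate to 5 decimals
   h = 1e-3
   DX_num = (np.log(np.exp(X + h) + 1) - np.log(np.exp(X - h) + 1)) / (2 * h)
   np.testing.assert_array_almost_equal(DX, DX_num, decimal=5)
   ```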


[jira] [Updated] (SINGA-476) Autograd operators for ONNX

2019-07-31 Thread zhangzhaoqi (JIRA)


 [ https://issues.apache.org/jira/browse/SINGA-476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

zhangzhaoqi updated SINGA-476:
--
Description: 
For demo purposes, we need to implement these three models and their components, as follows:
h2. [Tiny yolov2|https://arxiv.org/pdf/1612.08242.pdf]

Add
 BatchNormalization
 Conv
 LeakyRelu
 MaxPool
 Mul
h2. [Arcface|https://arxiv.org/pdf/1801.07698.pdf]

Acos
 Add
 BatchNormalization
 Conv
 Cos
 Dropout
 Flatten
 Gemm
 Identity
 InstanceNormalization
 LpNormalization
 Mul
 PRelu
 Reshape
 Sub
h2. [BIDAF|https://arxiv.org/pdf/1611.01603.pdf]

Abs
 Add
 Add
 ArgMax
 Cast
 Ceil
 Clip
 Compress
 Concat
 ConstantOfShape
 Conv
 Dropout
 Gather
 Hardmax
 Log
 LSTM
 MatMul
 ReduceMax
 ReduceSum
 Relu
 Shape
 Sigmoid
 Slice
 Squeeze
 Sub
 Sum
 Transpose
 Unsqueeze

 

In summary, we have already implemented 13 ops, and 27 ops remain to be implemented:
h2. Already implemented:

-Acos-
 -BatchNormalization-
 -Cos-
 -Conv-
 -LeakyRelu-
 -LSTM-
 -Abs-
 -MaxPool-
 -Flatten-
 -Add-
 -MatMul-
 -Relu-
 -Sigmoid-
h2. To be implemented:

ArgMax
 Cast
 Ceil
 Clip
 Compress
 Concat
 ConstantOfShape
 Dropout
 Gather
 Gemm
 Hardmax
 Identity
 InstanceNormalization
 Log
 LpNormalization
 Mul
 PRelu
 ReduceMax
 ReduceSum
 Reshape
 Shape
 Slice
 Squeeze
 Sub
 Sum
 Transpose
 Unsqueeze

Please refer to the [ONNX Operator Schemas|https://github.com/onnx/onnx/blob/master/docs/Operators.md] for more detailed information.



> Autograd operators for ONNX
> ---
>
> Key: SINGA-476
> URL: https://issues.apache.org/jira/browse/SINGA-476
> Project: Singa
>  Issue Type: New Feature
>  Reporter: zhangzhaoqi
>  Priority: Critical
> Attachments: arcface(based resnet100).png, bidaf.png, tiny_yolov2.png
>
>





[jira] [Updated] (SINGA-476) Autograd operators for ONNX

2019-07-31 Thread zhangzhaoqi (JIRA)


 [ https://issues.apache.org/jira/browse/SINGA-476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

zhangzhaoqi updated SINGA-476:
--
Description: 
For demo purposes, we need to implement these three models; these are their components:
h2. [Tiny yolov2|https://arxiv.org/pdf/1612.08242.pdf]

Add
BatchNormalization
Conv
LeakyRelu
MaxPool
Mul
h2. [Arcface|https://arxiv.org/pdf/1801.07698.pdf]

Acos
Add
BatchNormalization
Conv
Cos
Dropout
Flatten
Gemm
Identity
InstanceNormalization
LpNormalization
Mul
PRelu
Reshape
Sub
h2. [BIDAF|https://arxiv.org/pdf/1611.01603.pdf]

Abs
Add
Add
ArgMax
Cast
Ceil
Clip
Compress
Concat
ConstantOfShape
Conv
Dropout
Gather
Hardmax
Log
LSTM
MatMul
ReduceMax
ReduceSum
Relu
Shape
Sigmoid
Slice
Squeeze
Sub
Sum
Transpose
Unsqueeze

 

In summary, we have already implemented 13 ops, and 27 ops remain to be implemented:
h2. Already implemented:

-Acos-
-BatchNormalization-
-Cos-
-Conv-
-LeakyRelu-
-LSTM-
-Abs-
-MaxPool-
-Flatten-
-Add-
-MatMul-
-Relu-
-Sigmoid-
h2. To be implemented:

ArgMax
Cast
Ceil
Clip
Compress
Concat
ConstantOfShape
Dropout
Gather
Gemm
Hardmax
Identity
InstanceNormalization
Log
LpNormalization
Mul
PRelu
ReduceMax
ReduceSum
Reshape
Shape
Slice
Squeeze
Sub
Sum
Transpose
Unsqueeze

Please refer to the [ONNX Operator Schemas|https://github.com/onnx/onnx/blob/master/docs/Operators.md] for more detailed information.

  was:
For demo purposes, we need to implement these three models; these are their components:
h2. [Tiny yolov2|https://arxiv.org/pdf/1612.08242.pdf]

MaxPooling2D
 Conv2D
 BatchNormalization
 LeakyReLU
 Reshape
h2. [Arcface|https://arxiv.org/pdf/1801.07698.pdf]

Conv2D
 BatchNormalization
 relu
 MaxPooling2D
 Dropout
 Flatten
 Dense
 Softmax
 l2_normalize
 acos
 cos
h2. [BIDAF|https://arxiv.org/pdf/1611.01603.pdf]

K.stack
 Softmax
 K.expand_dims
 K.sum
 Constant
 Dense
 Lambda(lambda x: 1.0 - x, output_shape=(dim,))
 Multiply
 Add
 K.concatenate
 K.shape
 K.max
 K.tile
 K.squeeze
 linear
 TimeDistributed
 Bidirectional(LSTM

 

 

In summary, we have already implemented 12 ops, and 16 ops remain to be implemented:
h2. Already implemented:

-LSTM-
 -Multiply-
 -Add-
 -linear-
 -relu-
 -acos-
 -cos-
 -LeakyReLU-
 -Softmax-
 -MaxPooling2D-
 -Conv2D-
 -BatchNormalization-
h2. To be implemented:

Reshape
 Flatten
 Dropout
 max
 shape
 concatenate
 Constant
 L2Normalization
 Expand
 tile
 squeeze
 Dense*
 TimeDistributed*
 Bidirectional*
 Stack*
 Lambda*

* means this op doesn't have a corresponding op in the ONNX op set; it therefore needs a converter function built from basic ONNX ops, as sketched below.
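
As an illustration of such a converter (a sketch using the official onnx helper API; the tensor names x, W, b, y are made up), a Keras-style Dense layer y = x·Wᵀ + b can be emitted as a single basic Gemm node:
{code:python}
from onnx import helper

# Dense has no ONNX op of its own, so a converter rewrites it in terms
# of an existing basic op: Gemm computes alpha*A'*B' + beta*C.
dense_as_gemm = helper.make_node(
    "Gemm",
    inputs=["x", "W", "b"],
    outputs=["y"],
    alpha=1.0, beta=1.0, transB=1,  # transB=1 gives x @ W.T + b
)
print(dense_as_gemm)
{code}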

 


> Autograd operators for ONNX
> ---
>
> Key: SINGA-476
> URL: https://issues.apache.org/jira/browse/SINGA-476
> Project: Singa
>  Issue Type: New Feature
>  Reporter: zhangzhaoqi
>  Priority: Critical
> Attachments: arcface(based resnet100).png, bidaf.png, tiny_yolov2.png
>
>





[jira] [Updated] (SINGA-476) Autograd operators for ONNX

2019-07-31 Thread zhangzhaoqi (JIRA)


 [ https://issues.apache.org/jira/browse/SINGA-476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

zhangzhaoqi updated SINGA-476:
--
Attachment: bidaf.png

> Autograd operators for ONNX
> ---
>
> Key: SINGA-476
> URL: https://issues.apache.org/jira/browse/SINGA-476
> Project: Singa
>  Issue Type: New Feature
>  Reporter: zhangzhaoqi
>  Priority: Critical
> Attachments: arcface(based resnet100).png, bidaf.png, tiny_yolov2.png
>
>





[jira] [Updated] (SINGA-476) Autograd operators for ONNX

2019-07-31 Thread zhangzhaoqi (JIRA)


 [ https://issues.apache.org/jira/browse/SINGA-476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

zhangzhaoqi updated SINGA-476:
--
Attachment: arcface(based resnet100).png

> Autograd operators for ONNX
> ---
>
> Key: SINGA-476
> URL: https://issues.apache.org/jira/browse/SINGA-476
> Project: Singa
>  Issue Type: New Feature
>  Reporter: zhangzhaoqi
>  Priority: Critical
> Attachments: arcface(based resnet100).png, tiny_yolov2.png
>
>





[jira] [Updated] (SINGA-476) Autograd operators for ONNX

2019-07-31 Thread zhangzhaoqi (JIRA)


 [ https://issues.apache.org/jira/browse/SINGA-476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

zhangzhaoqi updated SINGA-476:
--
Attachment: tiny_yolov2.png

> Autograd operators for ONNX
> ---
>
> Key: SINGA-476
> URL: https://issues.apache.org/jira/browse/SINGA-476
> Project: Singa
>  Issue Type: New Feature
>  Reporter: zhangzhaoqi
>  Priority: Critical
> Attachments: arcface(based resnet100).png, tiny_yolov2.png
>
>


