[jira] [Updated] (SINGA-476) Autograd operators for ONNX

2019-07-30 Thread zhangzhaoqi (JIRA)


 [ 
https://issues.apache.org/jira/browse/SINGA-476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhangzhaoqi updated SINGA-476:
--
Description: 
For demo purposes, we need to implement these three models; their components 
are:
h2. [Tiny yolov2|https://arxiv.org/pdf/1612.08242.pdf]

MaxPooling2D
 Conv2D
 BatchNormalization
 LeakyReLU
 Reshape
h2. [Arcface|https://arxiv.org/pdf/1801.07698.pdf]

Conv2D
 BatchNormalization
 relu
 MaxPooling2D
 Dropout
 Flatten
 Dense
 Softmax
 l2_normalize
 acos
 cos
h2. [BIDAF|https://arxiv.org/pdf/1611.01603.pdf]

K.stack
 Softmax
 K.expand_dims
 K.sum
 Constant
 Dense
 Lambda(lambda x: 1.0 - x, output_shape=(dim,))
 Multiply
 Add
 K.concatenate
 K.shape
 K.max
 K.tile
 K.squeeze
 linear
 TimeDistributed
 Bidirectional(LSTM

 

 

In summary, we have already implemented 12 ops, and 16 more still need to be 
implemented:
h2. Already implemented:

-LSTM-
 -Multiply-
 -Add-
 -linear-
 -relu-
 -acos-
 -cos-
 -LeakyReLU-
 -Softmax-
 -MaxPooling2D-
 -Conv2D-
 -BatchNormalization-
h2. To be implemented:

Reshape
 Flatten
 Dropout
 max
 shape
 concatenate
 Constant
 L2Normalization
 Expand
 tile
 squeeze
 Dense*
 TimeDistributed*
 Bidirectional*
 Stack*
 Lambda*

*means this op has no corresponding op in the ONNX op set, so it needs a 
converter function composed of basic ONNX ops.
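
As a rough sketch of what such a converter function could look like (illustrative only, not SINGA's actual converter code; lowering Dense to Gemm and Lambda(1.0 - x) to Constant + Sub are assumptions), the missing ops can be expressed with existing ONNX nodes via onnx.helper:
{code:python}
# Illustrative sketch only: lowering ops with no direct ONNX counterpart into
# basic ONNX ops. onnx.helper.make_node/make_tensor are the real ONNX APIs; the
# converter function names and the exact decompositions are assumptions.
from onnx import helper, TensorProto

def convert_dense(x, w, b, y):
    """Dense/fully-connected layer as a single Gemm node: y = x * W + b."""
    return [helper.make_node("Gemm", inputs=[x, w, b], outputs=[y],
                             alpha=1.0, beta=1.0)]

def convert_lambda_one_minus(x, y):
    """Lambda(lambda x: 1.0 - x) as Constant + Sub."""
    one = helper.make_node(
        "Constant", inputs=[], outputs=["one"],
        value=helper.make_tensor("one", TensorProto.FLOAT, dims=[], vals=[1.0]))
    sub = helper.make_node("Sub", inputs=["one", x], outputs=[y])
    return [one, sub]
{code}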

 

  was:
For demo purposes, we need to implement these three models; their components 
are:
h2. [Tiny yolov2|https://arxiv.org/pdf/1612.08242.pdf]

MaxPooling2D
 Conv2D
 BatchNormalization
 LeakyReLU
 Reshape
h2. [Arcface|https://arxiv.org/pdf/1801.07698.pdf]

Conv2D
 BatchNormalization
 relu
 MaxPooling2D
 Dropout
 Flatten
 Dense
 Softmax
 l2_normalize
 acos
 cos
h2. [BIDAF|https://arxiv.org/pdf/1611.01603.pdf]

K.stack
 Softmax
 K.expand_dims
 K.sum
 Constant
 Dense
 Lambda(lambda x: 1.0 - x, output_shape=(dim,))
 Multiply
 Add
 K.concatenate
 K.shape
 K.max
 K.tile
 K.squeeze
 linear
 TimeDistributed
 Bidirectional(LSTM

 

 

In summary, we have already implemented 12 ops, and 16 more still need to be 
implemented:
h2. Already implemented:

-LSTM-
 -Multiply-
 -Add-
 -linear-
 -relu-
 -acos-
 -cos-
 -LeakyReLU-
 -Softmax-
 -MaxPooling2D-
 -Conv2D-
 -BatchNormalization-
h2.  To be implemented:

Reshape
 Flatten
 Dropout
 max
 shape
 concatenate
 Constant
 L2Normalization
 Expand
 tile
 squeeze
 Dense*
 TimeDistributed*
 Bidirectional*
 Stack*
 Lambda*

*means this op has no corresponding op in the ONNX op set, so it needs a 
converter function composed of basic ONNX ops.

 


> Autograd operators for ONNX
> ---
>
> Key: SINGA-476
> URL: https://issues.apache.org/jira/browse/SINGA-476
> Project: Singa
>  Issue Type: New Feature
>Reporter: zhangzhaoqi
>Priority: Critical
>



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (SINGA-476) Autograd operators for ONNX

2019-07-30 Thread zhangzhaoqi (JIRA)


 [ 
https://issues.apache.org/jira/browse/SINGA-476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhangzhaoqi updated SINGA-476:
--
Description: 
For demo purposes, we need to implement these three models; their components 
are:
h2. [Tiny yolov2|https://arxiv.org/pdf/1612.08242.pdf]

MaxPooling2D
 Conv2D
 BatchNormalization
 LeakyReLU
 Reshape
h2. [Arcface|https://arxiv.org/pdf/1801.07698.pdf]

Conv2D
 BatchNormalization
 relu
 MaxPooling2D
 Dropout
 Flatten
 Dense
 Softmax
 l2_normalize
 acos
 cos
h2. [BIDAF|https://arxiv.org/pdf/1611.01603.pdf]

K.stack
 Softmax
 K.expand_dims
 K.sum
 Constant
 Dense
 Lambda(lambda x: 1.0 - x, output_shape=(dim,))
 Multiply
 Add
 K.concatenate
 K.shape
 K.max
 K.tile
 K.squeeze
 linear
 TimeDistributed
 Bidirectional(LSTM

 

 

In summary, we have already implemented 12 ops, and 16 more still need to be 
implemented:
h2. Already implemented:

-LSTM-
 -Multiply-
 -Add-
 -linear-
 -relu-
 -acos-
 -cos-
 -LeakyReLU-
 -Softmax-
 -MaxPooling2D-
 -Conv2D-
 -BatchNormalization-
h2.  To be implemented:

Reshape
 Flatten
 Dropout
 max
 shape
 concatenate
 Constant
 L2Normalization
 Expand
 tile
 squeeze
 Dense*
 TimeDistributed*
 Bidirectional*
 Stack*
 Lambda*

*means this op has no corresponding op in the ONNX op set, so it needs a 
converter function composed of basic ONNX ops.

 

  was:
For demo purposes, we need to implement these three models; their components 
are:
h2. [Tiny yolov2|https://arxiv.org/pdf/1612.08242.pdf]

MaxPooling2D
 Conv2D
 BatchNormalization
 LeakyReLU
 Reshape
h2. [Arcface|https://arxiv.org/pdf/1801.07698.pdf]

Conv2D
 BatchNormalization
 relu
 MaxPooling2D
 Dropout
 Flatten
 Dense
 Softmax
 l2_normalize
 acos
 cos
h2. [BIDAF|https://arxiv.org/pdf/1611.01603.pdf]

K.stack
 Softmax
 K.expand_dims
 K.sum
 Constant
 Dense
 Lambda(lambda x: 1.0 - x, output_shape=(dim,))
 Multiply
 Add
 K.concatenate
 K.shape
 K.max
 K.tile
 K.squeeze
 linear
 TimeDistributed
 Bidirectional(LSTM

 

 

In summary, we have already implemented 12 ops, and 16 more still need to be 
implemented:
h2. Already implemented:

-LSTM-
 -Multiply-
 -Add-
 -linear-
 -relu-
 -acos-
 -cos-
 -LeakyReLU-
 -Softmax-
 -MaxPooling2D-
 -Conv2D-
 -BatchNormalization-
h2.  To be implemented:

Reshape
 Flatten
 Dropout
 max
 shape
 concatenate
 Constant
 L2Normalization
 Expand
 tile
 squeeze
 Dense*
 TimeDistributed*
 Bidirectional*
 Stack*
 Lambda*

*means this op has no corresponding op in the ONNX op set, so it needs a 
converter function composed of basic ONNX ops.

 


> Autograd operators for ONNX
> ---
>
> Key: SINGA-476
> URL: https://issues.apache.org/jira/browse/SINGA-476
> Project: Singa
>  Issue Type: New Feature
>Reporter: zhangzhaoqi
>Priority: Critical
>





[jira] [Updated] (SINGA-476) Autograd operators for ONNX

2019-07-30 Thread zhangzhaoqi (JIRA)


 [ 
https://issues.apache.org/jira/browse/SINGA-476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhangzhaoqi updated SINGA-476:
--
Description: 
For demo purposes, we need to implement these three models; their components 
are:
h2. [Tiny yolov2|https://arxiv.org/pdf/1612.08242.pdf]

MaxPooling2D
 Conv2D
 BatchNormalization
 LeakyReLU
 Reshape
h2. [Arcface|https://arxiv.org/pdf/1801.07698.pdf]

Conv2D
 BatchNormalization
 relu
 MaxPooling2D
 Dropout
 Flatten
 Dense
 Softmax
 l2_normalize
 acos
 cos
h2. [BIDAF|https://arxiv.org/pdf/1611.01603.pdf]

K.stack
 Softmax
 K.expand_dims
 K.sum
 Constant
 Dense
 Lambda(lambda x: 1.0 - x, output_shape=(dim,))
 Multiply
 Add
 K.concatenate
 K.shape
 K.max
 K.tile
 K.squeeze
 linear
 TimeDistributed
 Bidirectional(LSTM

 

 

In summary, we have already implemented 12 ops, and 16 more still need to be 
implemented:
h2. Already implemented:

-LSTM-
 -Multiply-
 -Add-
 -linear-
 -relu-
 -acos-
 -cos-
 -LeakyReLU-
 -Softmax-
 -MaxPooling2D-
 -Conv2D-
 -BatchNormalization-
h2.  To be implemented:

Reshape
 Flatten
 Dropout
 max
 shape
 concatenate
 Constant
 L2Normalization
 Expand
 tile
 squeeze
 Dense*
 TimeDistributed*
 Bidirectional*
 Stack*
 Lambda*

*means this op has no corresponding op in the ONNX op set, so it needs a 
converter function composed of basic ONNX ops.

 

  was:
For demo purposes, we need to implement these three models; their components 
are:
h2. [Tiny yolov2|https://arxiv.org/pdf/1612.08242.pdf]

MaxPooling2D
 Conv2D
 BatchNormalization
 LeakyReLU
 Reshape
h2. [Arcface|https://arxiv.org/abs/1801.07698]

Conv2D
 BatchNormalization
 relu
 MaxPooling2D
 Dropout
 Flatten
 Dense
 Softmax
 l2_normalize
 acos
 cos
h2. [BIDAF|https://arxiv.org/pdf/1611.01603]

K.stack
 Softmax
 K.expand_dims
 K.sum
 Constant
 Dense
 Lambda(lambda x: 1.0 - x, output_shape=(dim,))
 Multiply
 Add
 K.concatenate
 K.shape
 K.max
 K.tile
 K.squeeze
 linear
 TimeDistributed
 Bidirectional(LSTM

 

 

In summary, we have already implemented 12 ops, and 16 more still need to be 
implemented:
h2. Already implemented:

-LSTM-
 -Multiply-
 -Add-
 -linear-
 -relu-
 -acos-
 -cos-
 -LeakyReLU-
 -Softmax-
 -MaxPooling2D-
 -Conv2D-
 -BatchNormalization-
h2.  To be implemented:

Reshape
 Flatten
 Dropout
 max
 shape
 concatenate
 Constant
 L2Normalization
 Expand
 tile
 squeeze
 Dense*
 TimeDistributed*
 Bidirectional*
 Stack*
 Lambda*

*means this op has no corresponding op in the ONNX op set, so it needs a 
converter function composed of basic ONNX ops.

 


> Autograd operators for ONNX
> ---
>
> Key: SINGA-476
> URL: https://issues.apache.org/jira/browse/SINGA-476
> Project: Singa
>  Issue Type: New Feature
>Reporter: zhangzhaoqi
>Priority: Critical
>





[jira] [Updated] (SINGA-476) Autograd operators for ONNX

2019-07-30 Thread zhangzhaoqi (JIRA)


 [ 
https://issues.apache.org/jira/browse/SINGA-476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhangzhaoqi updated SINGA-476:
--
Description: 
For demo purposes, we need to implement these three models; their components 
are:
h2. [Tiny yolov2|https://arxiv.org/pdf/1612.08242.pdf]

MaxPooling2D
 Conv2D
 BatchNormalization
 LeakyReLU
 Reshape
h2. [Arcface|https://arxiv.org/abs/1801.07698]

Conv2D
 BatchNormalization
 relu
 MaxPooling2D
 Dropout
 Flatten
 Dense
 Softmax
 l2_normalize
 acos
 cos
h2. [BIDAF|https://arxiv.org/pdf/1611.01603]

K.stack
 Softmax
 K.expand_dims
 K.sum
 Constant
 Dense
 Lambda(lambda x: 1.0 - x, output_shape=(dim,))
 Multiply
 Add
 K.concatenate
 K.shape
 K.max
 K.tile
 K.squeeze
 linear
 TimeDistributed
 Bidirectional(LSTM

 

 

In summary, we have already implemented 12 ops, and 16 more still need to be 
implemented:
h2. Already implemented:

-LSTM-
 -Multiply-
 -Add-
 -linear-
 -relu-
 -acos-
 -cos-
 -LeakyReLU-
 -Softmax-
 -MaxPooling2D-
 -Conv2D-
 -BatchNormalization-
h2.  To be implemented:

Reshape
 Flatten
 Dropout
 max
 shape
 concatenate
 Constant
 L2Normalization
 Expand
 tile
 squeeze
 Dense*
 TimeDistributed*
 Bidirectional*
 Stack*
 Lambda*

*means this op has no corresponding op in the ONNX op set, so it needs a 
converter function composed of basic ONNX ops.

 

  was:
For demo purposes, we need to implement these three models; their components 
are:
h2. [Tiny yolov2|https://arxiv.org/pdf/1612.08242.pdf]

MaxPooling2D
 Conv2D
 BatchNormalization
 LeakyReLU
 Reshape
h2. [Arcface|https://arxiv.org/abs/1801.07698]

Conv2D
 BatchNormalization
 relu
 MaxPooling2D
 Dropout
 Flatten
 Dense
 Softmax
 l2_normalize
 acos
 cos
h2. [BIDAF|https://arxiv.org/pdf/1611.01603]

K.stack
 Softmax
 K.expand_dims
 K.sum
 Constant
 Dense
 Lambda(lambda x: 1.0 - x, output_shape=(dim,))
 Multiply
 Add
 K.concatenate
 K.shape
 K.max
 K.tile
 K.squeeze
 linear
 TimeDistributed
 Bidirectional(LSTM

 

 

In summary, we have already implemented 12 ops, and 16 more still need to be 
implemented:
h2. Already implemented:

-LSTM-
 -Multiply-
 -Add-
 -linear-
 -relu-
 -acos-
 -cos-
 -LeakyReLU-
 -Softmax-
 -MaxPooling2D-
 -Conv2D-
 -BatchNormalization-
h2.  To be implemented:

Reshape
 Flatten
 Dropout
 max
 shape
 concatenate
 Constant
 L2Normalization
 Expand
 tile
 squeeze
 Dense*
 TimeDistributed*
 Bidirectional*
 Stack*
 Lambda*

*means this op has no corresponding op in the ONNX op set, so it needs a 
converter function composed of basic ONNX ops.

 


> Autograd operators for ONNX
> ---
>
> Key: SINGA-476
> URL: https://issues.apache.org/jira/browse/SINGA-476
> Project: Singa
>  Issue Type: New Feature
>Reporter: zhangzhaoqi
>Priority: Critical
>





[jira] [Updated] (SINGA-476) Autograd operators for ONNX

2019-07-30 Thread zhangzhaoqi (JIRA)


 [ 
https://issues.apache.org/jira/browse/SINGA-476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhangzhaoqi updated SINGA-476:
--
Description: 
For demo purposes, we need to implement these three models; their components 
are:
h2. [Tiny yolov2|https://arxiv.org/pdf/1612.08242.pdf]

MaxPooling2D
 Conv2D
 BatchNormalization
 LeakyReLU
 Reshape
h2. [Arcface|https://arxiv.org/abs/1801.07698]

Conv2D
 BatchNormalization
 relu
 MaxPooling2D
 Dropout
 Flatten
 Dense
 Softmax
 l2_normalize
 acos
 cos
h2. [BIDAF|https://arxiv.org/pdf/1611.01603]

K.stack
 Softmax
 K.expand_dims
 K.sum
 Constant
 Dense
 Lambda(lambda x: 1.0 - x, output_shape=(dim,))
 Multiply
 Add
 K.concatenate
 K.shape
 K.max
 K.tile
 K.squeeze
 linear
 TimeDistributed
 Bidirectional(LSTM

 

 

In summary, we have already implemented 12 ops, and 16 more still need to be 
implemented:
h2. Already implemented:

-LSTM-
 -Multiply-
 -Add-
 -linear-
 -relu-
 -acos-
 -cos-
 -LeakyReLU-
 -Softmax-
 -MaxPooling2D-
 -Conv2D-
 -BatchNormalization-
h2.  To be implemented:

Reshape
 Flatten
 Dropout
 max
 shape
 concatenate
 Constant
 L2Normalization
 Expand
 tile
 squeeze
 Dense*
 TimeDistributed*
 Bidirectional*
 Stack*
 Lambda*

*means this op has no corresponding op in the ONNX op set, so it needs a 
converter function composed of basic ONNX ops.

 

  was:
For demo purposes, we need to implement these three models; their components 
are:
h2. [Tiny yolov2|https://arxiv.org/pdf/1612.08242.pdf]

MaxPooling2D
Conv2D
BatchNormalization
LeakyReLU
Reshape
h2. [Arcface|https://arxiv.org/abs/1801.07698]

Conv2D
BatchNormalization
relu
MaxPooling2D
Dropout
Flatten
Dense
Softmax
l2_normalize
acos
cos
h2. [BIDAF|https://arxiv.org/pdf/1611.01603]

K.stack
Softmax
K.expand_dims
K.sum
Constant
Dense
Lambda(lambda x: 1.0 - x, output_shape=(dim,))
Multiply
Add
K.concatenate
K.shape
K.max
K.tile
K.squeeze
linear
TimeDistributed
Bidirectional(LSTM
h2. In summary, 
h2. Already implemented:

-LSTM-
 -Multiply-
 -Add-
 -linear-
 -relu-
 -acos-
 -cos-
 -LeakyReLU-
 -Softmax-
 -MaxPooling2D-
 -Conv2D-
 -BatchNormalization-
h2.  To be implemented:

Reshape
 Flatten
 Dropout
 max
 shape
 concatenate
 Constant
 L2Normalization
 Expand
 tile
 squeeze
 Dense*
 TimeDistributed*
 Bidirectional*
 Stack*
 Lambda*

*means this op has no corresponding op in the ONNX op set, so it needs a 
converter function composed of basic ONNX ops.

 


> Autograd operators for ONNX
> ---
>
> Key: SINGA-476
> URL: https://issues.apache.org/jira/browse/SINGA-476
> Project: Singa
>  Issue Type: New Feature
>Reporter: zhangzhaoqi
>Priority: Critical
>





[jira] [Updated] (SINGA-476) Autograd operators for ONNX

2019-07-30 Thread zhangzhaoqi (JIRA)


 [ 
https://issues.apache.org/jira/browse/SINGA-476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhangzhaoqi updated SINGA-476:
--
Description: 
For demo purposes, we need to implement these three models; their components 
are:
h2. [Tiny yolov2|https://arxiv.org/pdf/1612.08242.pdf]

MaxPooling2D
Conv2D
BatchNormalization
LeakyReLU
Reshape
h2. [Arcface|https://arxiv.org/abs/1801.07698]

Conv2D
BatchNormalization
relu
MaxPooling2D
Dropout
Flatten
Dense
Softmax
l2_normalize
acos
cos
h2. [BIDAF|https://arxiv.org/pdf/1611.01603]

K.stack
Softmax
K.expand_dims
K.sum
Constant
Dense
Lambda(lambda x: 1.0 - x, output_shape=(dim,))
Multiply
Add
K.concatenate
K.shape
K.max
K.tile
K.squeeze
linear
TimeDistributed
Bidirectional(LSTM
h2. In summary, 
h2. Already implemented:

-LSTM-
 -Multiply-
 -Add-
 -linear-
 -relu-
 -acos-
 -cos-
 -LeakyReLU-
 -Softmax-
 -MaxPooling2D-
 -Conv2D-
 -BatchNormalization-
h2.  To be implemented:

Reshape
 Flatten
 Dropout
 max
 shape
 concatenate
 Constant
 L2Normalization
 Expand
 tile
 squeeze
 Dense*
 TimeDistributed*
 Bidirectional*
 Stack*
 Lambda*

*means this op has no corresponding op in the ONNX op set, so it needs a 
converter function composed of basic ONNX ops.

 

  was:
Already implemented:

-LSTM-
 -Multiply-
 -Add-
 -linear-
 -relu-
 -acos-
 -cos-
 -LeakyReLU-
 -Softmax-
 -MaxPooling2D-
 -Conv2D-
 -BatchNormalization-

 

To be implemented:

Reshape
 Flatten
 Dropout
 max
 shape
 concatenate
 Constant
 L2Normalization
 Expand
 tile
 squeeze
 Dense*
 TimeDistributed*
 Bidirectional*
 Stack*
 Lambda*

*means this op has no corresponding op in the ONNX op set, so it needs a 
converter function composed of basic ONNX ops.


> Autograd operators for ONNX
> ---
>
> Key: SINGA-476
> URL: https://issues.apache.org/jira/browse/SINGA-476
> Project: Singa
>  Issue Type: New Feature
>Reporter: zhangzhaoqi
>Priority: Critical
>





[jira] [Created] (SINGA-476) Autograd operators for ONNX

2019-07-30 Thread zhangzhaoqi (JIRA)
zhangzhaoqi created SINGA-476:
-

 Summary: Autograd operators for ONNX
 Key: SINGA-476
 URL: https://issues.apache.org/jira/browse/SINGA-476
 Project: Singa
  Issue Type: New Feature
Reporter: zhangzhaoqi


Already implemented:

-LSTM-
-Multiply-
-Add-
-linear-
-relu-
-acos-
-cos-
-LeakyReLU-
-Softmax-
-MaxPooling2D-
-Conv2D-
-BatchNormalization-

 

To be implemented:

 

Reshape
Flatten
Dropout
max
shape
concatenate
Constant
L2Normalization
Expand
tile
squeeze
Dense*
TimeDistributed*
Bidirectional*
Stack*
Lambda*

*means this op has no corresponding op in the ONNX op set, so it needs a 
converter function composed of basic ONNX ops.





[jira] [Updated] (SINGA-476) Autograd operators for ONNX

2019-07-30 Thread zhangzhaoqi (JIRA)


 [ 
https://issues.apache.org/jira/browse/SINGA-476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhangzhaoqi updated SINGA-476:
--
Description: 
Already implemented:

-LSTM-
 -Multiply-
 -Add-
 -linear-
 -relu-
 -acos-
 -cos-
 -LeakyReLU-
 -Softmax-
 -MaxPooling2D-
 -Conv2D-
 -BatchNormalization-

 

To be implemented:

Reshape
 Flatten
 Dropout
 max
 shape
 concatenate
 Constant
 L2Normalization
 Expand
 tile
 squeeze
 Dense*
 TimeDistributed*
 Bidirectional*
 Stack*
 Lambda*

*means this op has no corresponding op in the ONNX op set, so it needs a 
converter function composed of basic ONNX ops.

  was:
Already implemented:

-LSTM-
-Multiply-
-Add-
-linear-
-relu-
-acos-
-cos-
-LeakyReLU-
-Softmax-
-MaxPooling2D-
-Conv2D-
-BatchNormalization-

 

To be implemented:

 

Reshape
Flatten
Dropout
max
shape
concatenate
Constant
L2Normalization
Expand
tile
squeeze
Dense*
TimeDistributed*
Bidirectional*
Stack*
Lambda*

*means this op has no corresponding op in the ONNX op set, so it needs a 
converter function composed of basic ONNX ops.


> Autograd operators for ONNX
> ---
>
> Key: SINGA-476
> URL: https://issues.apache.org/jira/browse/SINGA-476
> Project: Singa
>  Issue Type: New Feature
>Reporter: zhangzhaoqi
>Priority: Critical
>





[GitHub] [incubator-singa] chrishkchris commented on a change in pull request #493: SINGA-473 Autograd Trigonometry: Backward Test

2019-07-30 Thread GitBox
chrishkchris commented on a change in pull request #493: SINGA-473 Autograd 
Trigonometry: Backward Test
URL: https://github.com/apache/incubator-singa/pull/493#discussion_r309006617
 
 

 ##
 File path: test/python/test_operation.py
 ##
 @@ -65,6 +65,17 @@ def prepare_inputs_targets_for_rnn_test():
 targets = [t0, t1, t2]
 return inputs, targets, h0
 
+def numpy_unary_ops_backward(func, x, dy, h=0.0005):
 
 Review comment:
   OK, I will change the code using one of your two methods (the link provided 
or compute the gradient explicitly).
   
  The code I wrote computes only the diagonal of the gradient matrix, which is 
only suitable for unary operators where each output is determined by a single 
input, without coupling from other inputs.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-singa] pinpom opened a new pull request #495: SINGA-475 add SoftSign operator

2019-07-30 Thread GitBox
pinpom opened a new pull request #495: SINGA-475 add SoftSign operator
URL: https://github.com/apache/incubator-singa/pull/495
 
 
   




[GitHub] [incubator-singa] nudles commented on a change in pull request #492: Make singa use multiple memory pools

2019-07-30 Thread GitBox
nudles commented on a change in pull request #492: Make singa use multiple 
memory pools
URL: https://github.com/apache/incubator-singa/pull/492#discussion_r308742630
 
 

 ##
 File path: include/singa/core/device.h
 ##
 @@ -295,7 +295,9 @@ class Platform {
   /// Create a set of CudaGPU Device using given GPU IDs.
   static const std::vector>
   CreateCudaGPUsOn(const std::vector , size_t init_size = 0);
-  
+
+static std::vector > allRet;
 
 Review comment:
   what is allRet and retUsed?
   pls add some comments.




[GitHub] [incubator-singa] nudles commented on a change in pull request #492: Make singa use multiple memory pools

2019-07-30 Thread GitBox
nudles commented on a change in pull request #492: Make singa use multiple 
memory pools
URL: https://github.com/apache/incubator-singa/pull/492#discussion_r308745750
 
 

 ##
 File path: include/singa/core/device.h
 ##
 @@ -295,7 +295,9 @@ class Platform {
   /// Create a set of CudaGPU Device using given GPU IDs.
   static const std::vector>
   CreateCudaGPUsOn(const std::vector , size_t init_size = 0);
-  
+
+static std::vector > allRet;
 
 Review comment:
   should they be static members of Platform?




[GitHub] [incubator-singa] nudles commented on a change in pull request #494: SINGA-475 add SoftPlus operator

2019-07-30 Thread GitBox
nudles commented on a change in pull request #494: SINGA-475 add SoftPlus 
operator
URL: https://github.com/apache/incubator-singa/pull/494#discussion_r308742158
 
 

 ##
 File path: test/python/test_operation.py
 ##
 @@ -610,6 +610,17 @@ def test_Atanh_gpu(self):
 np.testing.assert_array_almost_equal(tensor.to_numpy(result), XT, 
decimal=5)
 self.check_shape(dx.shape(), (3, 2))
 
+def test_SoftPlus(self):
+X=np.array([1.0,2.0,3.0,4.0,5.0,6.0]).reshape(3,2).astype(np.float32)
+XT=np.log(np.exp(X) + 1)
+x=tensor.from_numpy(X)
+x.to_device(gpu_dev)
+
+result=autograd.softplus(x)
+dx=result.creator.backward(x.data)
 
 Review comment:
   pls also test the correctness of the gradients.
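
    For what it's worth, a minimal sketch of such a gradient check using NumPy only (it assumes the analytic form d/dx softplus(x) = sigmoid(x) and does not reproduce the SINGA tensor/autograd calls from the PR):
    ```python
    import numpy as np

    def softplus(x):
        return np.log1p(np.exp(x))

    def numerical_grad(f, x, h=1e-4):
        # central differences, elementwise
        return (f(x + h) - f(x - h)) / (2.0 * h)

    X = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
    analytic = 1.0 / (1.0 + np.exp(-X))   # d softplus(x) / dx = sigmoid(x)
    np.testing.assert_array_almost_equal(analytic, numerical_grad(softplus, X),
                                         decimal=5)
    ```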




[GitHub] [incubator-singa] nudles commented on a change in pull request #493: SINGA-473 Autograd Trigonometry: Backward Test

2019-07-30 Thread GitBox
nudles commented on a change in pull request #493: SINGA-473 Autograd 
Trigonometry: Backward Test
URL: https://github.com/apache/incubator-singa/pull/493#discussion_r308740781
 
 

 ##
 File path: test/python/test_operation.py
 ##
 @@ -65,6 +65,17 @@ def prepare_inputs_targets_for_rnn_test():
 targets = [t0, t1, t2]
 return inputs, targets, h0
 
+def numpy_unary_ops_backward(func, x, dy, h=0.0005):
 
 Review comment:
   This is not correct.
    You can either use this 
[one](http://cs231n.github.io/optimization-1/#gradcompute), or compute the 
gradient explicitly, e.g., the gradient of cos() is -sin().
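
    For reference, a minimal sketch of the suggested numerical check (central differences as in the linked note; the helper name mirrors `numpy_unary_ops_backward` from the PR, but this body is illustrative and assumes an elementwise unary op, so the Jacobian is diagonal):
    ```python
    import numpy as np

    def numpy_unary_ops_backward(func, x, dy, h=5e-4):
        # For an elementwise unary op the Jacobian is diagonal, so
        # dL/dx = dy * f'(x), with f'(x) estimated by central differences.
        return dy * (func(x + h) - func(x - h)) / (2.0 * h)

    # e.g. cos: the analytic gradient is dy * (-sin(x))
    x, dy = np.random.randn(3, 2), np.random.randn(3, 2)
    np.testing.assert_array_almost_equal(
        numpy_unary_ops_backward(np.cos, x, dy), dy * (-np.sin(x)), decimal=5)
    ```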




[GitHub] [incubator-singa] pinpom opened a new pull request #494: SINGA-475 add SoftPlus operator

2019-07-30 Thread GitBox
pinpom opened a new pull request #494: SINGA-475 add SoftPlus operator
URL: https://github.com/apache/incubator-singa/pull/494
 
 
   

