[GitHub] [incubator-singa] ShichengChen commented on a change in pull request #524: SINGA-474 prelu, add, equal, selu, elu operator

2019-08-24 Thread GitBox
ShichengChen commented on a change in pull request #524: SINGA-474 
prelu,add,equal,selu,elu operator
URL: https://github.com/apache/incubator-singa/pull/524#discussion_r317383901
 
 

 ##
 File path: python/singa/autograd.py
 ##
 @@ -608,21 +608,177 @@ def backward(self, dy):
 def reshape(a, shape):
     return Reshape(shape)(a)[0]
 
+class PRelu(Operation):
+
+    def __init__(self):
+        super(PRelu, self).__init__()
+
+    def forward(self, x, slope):
+        mask0 = singa.LTFloat(x, 0.0)
+        if training:
+            self.input = x
+            self.slope = slope
+            self.mask0 = mask0
+        x1 = singa.__mul__(x, mask0)
+        x1 *= slope
+        x2 = singa.ReLU(x)
+        x1 += x2
+        return x1
+
+    def backward(self, dy):
+        dx1mask = singa.GEFloat(self.input, 0.0)
+        dx2 = singa.__mul__(self.mask0, self.slope)
+        dx = singa.__add__(dx1mask, dx2)
+        return singa.__mul__(dy, dx), singa.__mul__(dy, singa.__mul__(self.mask0, self.input))
+
+
+def prelu(x, slope):
+    return PRelu()(x, slope)[0]
 
 class Add(Operation):
     def __init__(self):
         super(Add, self).__init__()
 
     def forward(self, a, b):
+        # up till now, the dimensions of tensors a and b should be less than 3
+        self.shape0 = list(a.shape())
+        self.shape1 = list(b.shape())
+        assert len(self.shape0) <= 2 and len(self.shape1) <= 2, \
+            "up till now, the dimensions of tensors a and b should be less than 3"
         return singa.__add__(a, b)
 
     def backward(self, dy):
-        return dy, dy
+        if type(dy) == float:
+            return dy, dy
+        db = CTensor(list(dy.shape()), dy.device())
+        db.CopyData(dy)
+        for i in range(len(self.shape0) - len(self.shape1)):
+            db = singa.Sum(db, 0)
+        return dy, db
 
 
 def add(a, b):
     return Add()(a, b)[0]
 
+class Elu(Operation):
+    def __init__(self, alpha=1):
+        super(Elu, self).__init__()
+        self.alpha = alpha
+
+    def forward(self, x):
+        """Do forward propagation.
+        Store the x if requires gradient.
+        Args:
+            x (CTensor): matrix
+        Returns:
+            a CTensor for the result
+        """
+        # f(x) = alpha * (exp(x) - 1.) for x < 0, f(x) = x for x >= 0
+        if training:
+            self.input = x
+        x1 = singa.LTFloat(x, 0.0)
+        x1 = singa.__mul__(x, x1)
+        x1 = singa.MultFloat(singa.SubFloat(singa.Exp(x1), 1.0), self.alpha)
+        x2 = singa.ReLU(x)
+        x1 = singa.__add__(x1, x2)
 
 Review comment:
   yes


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-singa] ShichengChen commented on a change in pull request #524: SINGA-474 prelu, add, equal, selu, elu operator

2019-08-24 Thread GitBox
ShichengChen commented on a change in pull request #524: SINGA-474 
prelu,add,equal,selu,elu operator
URL: https://github.com/apache/incubator-singa/pull/524#discussion_r317383903
 
 

 ##
 File path: python/singa/autograd.py
 ##
 @@ -608,21 +608,177 @@ def backward(self, dy):
 def reshape(a, shape):
     return Reshape(shape)(a)[0]
 
+class PRelu(Operation):
+
+    def __init__(self):
+        super(PRelu, self).__init__()
+
+    def forward(self, x, slope):
+        mask0 = singa.LTFloat(x, 0.0)
+        if training:
+            self.input = x
+            self.slope = slope
+            self.mask0 = mask0
+        x1 = singa.__mul__(x, mask0)
+        x1 *= slope
+        x2 = singa.ReLU(x)
+        x1 += x2
+        return x1
+
+    def backward(self, dy):
+        dx1mask = singa.GEFloat(self.input, 0.0)
+        dx2 = singa.__mul__(self.mask0, self.slope)
+        dx = singa.__add__(dx1mask, dx2)
+        return singa.__mul__(dy, dx), singa.__mul__(dy, singa.__mul__(self.mask0, self.input))
+
+
+def prelu(x, slope):
+    return PRelu()(x, slope)[0]
 
 class Add(Operation):
     def __init__(self):
         super(Add, self).__init__()
 
     def forward(self, a, b):
+        # up till now, the dimensions of tensors a and b should be less than 3
+        self.shape0 = list(a.shape())
+        self.shape1 = list(b.shape())
+        assert len(self.shape0) <= 2 and len(self.shape1) <= 2, \
+            "up till now, the dimensions of tensors a and b should be less than 3"
         return singa.__add__(a, b)
 
     def backward(self, dy):
-        return dy, dy
+        if type(dy) == float:
+            return dy, dy
+        db = CTensor(list(dy.shape()), dy.device())
+        db.CopyData(dy)
+        for i in range(len(self.shape0) - len(self.shape1)):
+            db = singa.Sum(db, 0)
+        return dy, db
 
 
 def add(a, b):
     return Add()(a, b)[0]
 
+class Elu(Operation):
+    def __init__(self, alpha=1):
+        super(Elu, self).__init__()
+        self.alpha = alpha
+
+    def forward(self, x):
+        """Do forward propagation.
+        Store the x if requires gradient.
+        Args:
+            x (CTensor): matrix
+        Returns:
+            a CTensor for the result
+        """
+        # f(x) = alpha * (exp(x) - 1.) for x < 0, f(x) = x for x >= 0
+        if training:
+            self.input = x
+        x1 = singa.LTFloat(x, 0.0)
+        x1 = singa.__mul__(x, x1)
+        x1 = singa.MultFloat(singa.SubFloat(singa.Exp(x1), 1.0), self.alpha)
+        x2 = singa.ReLU(x)
+        x1 = singa.__add__(x1, x2)
+        return x1
+
+    def backward(self, dy):
+        """
+        Args:
+            dy (CTensor): data for the dL / dy, L is the loss
+        Returns:
+            a tuple for dx
+        """
+        dx1mask = singa.LTFloat(self.input, 0.0)
+        dx1 = singa.MultFloat(singa.Exp(self.input), self.alpha)
+        dx1 = singa.__mul__(dx1mask, dx1)
+
+        dx2mask = singa.GEFloat(self.input, 0.0)
+
+        dx = singa.__add__(dx1, dx2mask)
+        return singa.__mul__(dy, dx)
 
 Review comment:
   yes
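
For reference, the ELU rule in the quoted diff can be sanity-checked against plain NumPy. This is a minimal sketch of the same forward/backward math, not SINGA's implementation:

```python
import numpy as np

def elu_forward(x, alpha=1.0):
    # f(x) = alpha * (exp(x) - 1.) for x < 0, f(x) = x for x >= 0
    return np.where(x < 0, alpha * (np.exp(x) - 1.0), x)

def elu_backward(x, dy, alpha=1.0):
    # f'(x) = alpha * exp(x) for x < 0, 1 for x >= 0
    return dy * np.where(x < 0, alpha * np.exp(x), 1.0)

x = np.random.randn(3, 2)
dy = np.random.randn(3, 2)
print(elu_forward(x))
print(elu_backward(x, dy))
```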




[GitHub] [incubator-singa] ShichengChen commented on a change in pull request #524: SINGA-474 prelu, add, equal, selu, elu operator

2019-08-24 Thread GitBox
ShichengChen commented on a change in pull request #524: SINGA-474 
prelu,add,equal,selu,elu operator
URL: https://github.com/apache/incubator-singa/pull/524#discussion_r317383894
 
 

 ##
 File path: python/singa/autograd.py
 ##
 @@ -608,21 +608,177 @@ def backward(self, dy):
 def reshape(a, shape):
     return Reshape(shape)(a)[0]
 
+class PRelu(Operation):
+
+    def __init__(self):
+        super(PRelu, self).__init__()
+
+    def forward(self, x, slope):
+        mask0 = singa.LTFloat(x, 0.0)
+        if training:
+            self.input = x
+            self.slope = slope
+            self.mask0 = mask0
+        x1 = singa.__mul__(x, mask0)
+        x1 *= slope
+        x2 = singa.ReLU(x)
+        x1 += x2
+        return x1
+
+    def backward(self, dy):
+        dx1mask = singa.GEFloat(self.input, 0.0)
+        dx2 = singa.__mul__(self.mask0, self.slope)
+        dx = singa.__add__(dx1mask, dx2)
+        return singa.__mul__(dy, dx), singa.__mul__(dy, singa.__mul__(self.mask0, self.input))
+
+
+def prelu(x, slope):
+    return PRelu()(x, slope)[0]
 
 class Add(Operation):
     def __init__(self):
         super(Add, self).__init__()
 
     def forward(self, a, b):
+        # up till now, the dimensions of tensors a and b should be less than 3
+        self.shape0 = list(a.shape())
+        self.shape1 = list(b.shape())
+        assert len(self.shape0) <= 2 and len(self.shape1) <= 2, \
+            "up till now, the dimensions of tensors a and b should be less than 3"
         return singa.__add__(a, b)
 
     def backward(self, dy):
-        return dy, dy
+        if type(dy) == float:
+            return dy, dy
 
 Review comment:
   yes




[GitHub] [incubator-singa] ShichengChen commented on a change in pull request #524: SINGA-474 prelu, add, equal, selu, elu operator

2019-08-24 Thread GitBox
ShichengChen commented on a change in pull request #524: SINGA-474 
prelu,add,equal,selu,elu operator
URL: https://github.com/apache/incubator-singa/pull/524#discussion_r317383759
 
 

 ##
 File path: test/python/test_operation.py
 ##
 @@ -1587,25 +1587,148 @@ def test_min_gpu(self):
 
        np.testing.assert_array_almost_equal(tensor.to_numpy(tensor.from_raw_tensor(dx1)), DX1, decimal=5)
 
 
-    def test_HardSigmoid(self):
-        def test_helper(gpu=False):
-            x = np.random.randn(3, 2)
-            # y = max(0, min(1, alpha * x + gamma))
-            a = 0.2
-            g = 0.5
-            y = np.clip(x * 0.2 + 0.5, 0, 1)
-            grad = (0 < (np.clip(x * 0.2 + 0.5, 0, 1)) * (np.clip(x * 0.2 + 0.5, 0, 1) < 1)) * 0.2
-            x = tensor.from_numpy(x)
-            if gpu:
-                x.to_device(gpu_dev)
-            result = autograd.hardsigmoid(x, a, g)
-            dy = tensor.from_numpy(np.random.randn((3, 2)).astype(np.float32))
-            dx = result.creator.backward(dy.data)
-            np.testing.assert_array_almost_equal(tensor.to_numpy(result), y, decimal=5)
-            np.testing.assert_array_almost_equal(tensor.to_numpy(tensor.from_raw_tensor(dx)), grad, decimal=5)
-        test_helper(False)
-        test_helper(True)
+    def test_HardSigmoid(self):
+        def test_helper(gpu=False):
+            x = np.random.randn(3, 2)
+            # y = max(0, min(1, alpha * x + gamma))
+            a = 0.2
+            g = 0.5
+            y = np.clip(x * 0.2 + 0.5, 0, 1)
+            dy = np.random.randn(3, 2)
+            grad = (0 < (np.clip(x * 0.2 + 0.5, 0, 1)) * (np.clip(x * 0.2 + 0.5, 0, 1) < 1)) * 0.2 * dy
+            x = tensor.from_numpy(x)
+            dy = tensor.from_numpy(dy)
+            if gpu:
+                x.to_device(gpu_dev)
+                dy.to_device(gpu_dev)
+            result = autograd.hardsigmoid(x, a, g)
+            dx = result.creator.backward(dy.data)
+            np.testing.assert_array_almost_equal(tensor.to_numpy(result), y, decimal=5)
+            np.testing.assert_array_almost_equal(tensor.to_numpy(tensor.from_raw_tensor(dx)), grad, decimal=5)
+        test_helper(False)
+        test_helper(True)
+
+    @unittest.skipIf(not singa_wrap.USE_CUDA, 'CUDA is not enabled')
+    def test_prelu(self):
+        def helper(gpu):
+            x = np.random.randn(3, 2)
+            slope = np.random.randn(3, 2)
+            y = np.clip(x, 0, np.inf) + np.clip(x, -np.inf, 0) * slope
+            dy = np.random.randn(3, 2)
+            x0 = x.copy()
+            x0[x0 > 0] = 1
+            x0[x0 < 1] = 0
+            grad0 = (x0 + (1 - x0) * slope) * dy
+            grad1 = (1 - x0) * x * dy
+            x = tensor.from_numpy(x)
+            slope = tensor.from_numpy(slope)
+            dy = tensor.from_numpy(dy)
+            if gpu:
+                x.to_device(gpu_dev)
+                slope.to_device(gpu_dev)
+                dy.to_device(gpu_dev)
+            result = autograd.prelu(x, slope)
+            dx0, dx1 = result.creator.backward(dy.data)
+            np.testing.assert_array_almost_equal(tensor.to_numpy(result), y, decimal=5)
+            np.testing.assert_array_almost_equal(tensor.to_numpy(tensor.from_raw_tensor(dx0)), grad0, decimal=5)
+            np.testing.assert_array_almost_equal(tensor.to_numpy(tensor.from_raw_tensor(dx1)), grad1, decimal=5)
+        helper(False)
+        helper(True)
+
+    @unittest.skipIf(not singa_wrap.USE_CUDA, 'CUDA is not enabled')
+    def test_SeLU(self):
+        def test_helper(gpu):
+            x = np.random.randn(3, 2)
+            a = 0.2
+            g = 0.3
+            y = np.clip(x, 0, np.inf) * g + (np.exp(np.clip(x, -np.inf, 0)) - 1) * a * g
+            dy = np.random.randn(3, 2)
+            grad = (np.exp(np.clip(x, -np.inf, 0))) * g
+            grad[x <= 0] = grad[x <= 0] * a
+            grad *= dy
+            x = tensor.from_numpy(x)
+
+            result = autograd.selu(x, a, g)
+            dy = tensor.from_numpy(dy)
+            if gpu:
+                dy.to_device(gpu_dev)
+                x.to_device(gpu_dev)
+            dx = result.creator.backward(dy.data)
+
+            np.testing.assert_array_almost_equal(tensor.to_numpy(result), y, decimal=5)
+            np.testing.assert_array_almost_equal(tensor.to_numpy(tensor.from_raw_tensor(dx)), grad, decimal=5)
+        test_helper(False)
+        test_helper(True)
+
+    @unittest.skipIf(not singa_wrap.USE_CUDA, 'CUDA is not enabled')
+    def test_Equal(self):
+        def test_helper(gpu):
+            x0 = np.random.randn(3, 2)
+            x1 = np.random.randn(3, 2)
+            y = np.equal(x0, x1)
+            x0 = tensor.from_numpy(x0)
+            x1 = tensor.from_numpy(x1)
+            if gpu:
+                x0.to_device(gpu_dev)
+                x1.to_device(gpu_dev)
+
+            result = autograd.equal(x0, x1)
+
+            np.testing.assert_array_almost_equal(tensor.to_numpy(result), y, decimal=5)
+        test_helper(False)
+        test_helper(True)
+
+    @unittest.skipIf(not singa_wrap.USE_CUDA, 'CUDA is no

[GitHub] [incubator-singa] chrishkchris commented on a change in pull request #468: Distributted module

2019-08-24 Thread GitBox
chrishkchris commented on a change in pull request #468: Distributted module
URL: https://github.com/apache/incubator-singa/pull/468#discussion_r317382891
 
 

 ##
 File path: examples/autograd/mnist_dist.py
 ##
 @@ -0,0 +1,251 @@
+#
 
 Review comment:
  I see. I will modify the code.




[GitHub] [incubator-singa] chrishkchris commented on a change in pull request #468: Distributted module

2019-08-24 Thread GitBox
chrishkchris commented on a change in pull request #468: Distributted module
URL: https://github.com/apache/incubator-singa/pull/468#discussion_r317382900
 
 

 ##
 File path: python/singa/autograd.py
 ##
 @@ -1286,25 +1287,26 @@ def set_params(self, **parameters):
 
 
 class _BatchNorm2d(Operation):
-    def __init__(self, handle, name=None):
+    def __init__(self, handle, running_mean, running_var, name=None):
         super(_BatchNorm2d, self).__init__(name)
         self.handle = handle
+        self.running_mean = running_mean.data
+        self.running_var = running_var.data
 
-    def forward(self, x, scale, bias, running_mean, running_var):
-        self.running_mean = running_mean
-        self.running_var = running_var
+    def forward(self, x, scale, bias):
         if training:
 
             if isinstance(self.handle, singa.CudnnBatchNormHandle):
                 y, mean, var = singa.GpuBatchNormForwardTraining(
-                    self.handle, x, scale, bias, running_mean, running_var
+                    self.handle, x, scale, bias, self.running_mean, self.running_var
 
 Review comment:
  I see. I will modify the code.




[GitHub] [incubator-singa] chrishkchris commented on a change in pull request #468: Distributted module

2019-08-24 Thread GitBox
chrishkchris commented on a change in pull request #468: Distributted module
URL: https://github.com/apache/incubator-singa/pull/468#discussion_r317382034
 
 

 ##
 File path: CMakeLists.txt
 ##
 @@ -30,7 +30,7 @@ LIST(APPEND CMAKE_MODULE_PATH ${PROJECT_SOURCE_DIR}/cmake/Thirdparty)
 
 # Flags
 IF(UNIX)
-    SET(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=c++11 -g -O2 -fPIC -Wall -pthread")
+    SET(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=c++11 -O3 -fPIC -Wall -pthread")
 
 Review comment:
   ok, changed




[GitHub] [incubator-singa] chrishkchris commented on a change in pull request #468: Distributted module

2019-08-24 Thread GitBox
chrishkchris commented on a change in pull request #468: Distributted module
URL: https://github.com/apache/incubator-singa/pull/468#discussion_r317381887
 
 

 ##
 File path: src/api/config.i
 ##
 @@ -0,0 +1,34 @@
+// Licensed to the Apache Software Foundation (ASF) under one
 
 Review comment:
  Yes, it is generated directly from "config.i.in" by cmake.
  I have deleted the file "config.i".




[jira] [Commented] (SINGA-456) Adding more PGP Keys

2019-08-24 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SINGA-456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16915115#comment-16915115
 ] 

ASF subversion and git services commented on SINGA-456:
---

Commit a5a3221b45621163c9f1ad601a91be2daaeeb30c in incubator-singa's branch 
refs/heads/master from Wei Wang
[ https://gitbox.apache.org/repos/asf?p=incubator-singa.git;h=a5a3221 ]

Merge pull request #530 from moazreyad/SINGA-456

SINGA-456 Add KEYS file

> Adding more PGP Keys
> 
>
> Key: SINGA-456
> URL: https://issues.apache.org/jira/browse/SINGA-456
> Project: Singa
>  Issue Type: Improvement
>Reporter: Moaz Reyad
>Priority: Major
> Attachments: KEYS
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Currently the SINGA [KEYS|https://www.apache.org/dist/incubator/singa/KEYS] 
> file has only one PGP key, which is expiring this September (it needs to be 
> updated). This means only one person can sign the releases, while other 
> projects, such as CouchDB, have several keys in their 
> [KEYS|https://www.apache.org/dist/couchdb/KEYS] file.
> It would be useful if every active Apache committer on the team created a PGP 
> key, uploaded the Public Key Primary Fingerprint to their account using the 
> [Apache Account Utility|https://id.apache.org/], and then appended the new key 
> to the SINGA KEYS file.
> Furthermore, the keys themselves can be signed for more trust. The SINGA team 
> can exchange key signatures or organize a [key signing 
> party|https://www.apache.org/dev/release-signing#key-signing-party]. This 
> will help add more SINGA committers to the [Apache Web of 
> Trust|https://www.apache.org/dev/release-signing#web-of-trust].
> I attach with this issue the KEYS file, with my key appended at the end.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (SINGA-456) Adding more PGP Keys

2019-08-24 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SINGA-456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16915114#comment-16915114
 ] 

ASF subversion and git services commented on SINGA-456:
---

Commit a5a3221b45621163c9f1ad601a91be2daaeeb30c in incubator-singa's branch 
refs/heads/master from Wei Wang
[ https://gitbox.apache.org/repos/asf?p=incubator-singa.git;h=a5a3221 ]

Merge pull request #530 from moazreyad/SINGA-456

SINGA-456 Add KEYS file

> Adding more PGP Keys
> 
>
> Key: SINGA-456
> URL: https://issues.apache.org/jira/browse/SINGA-456
> Project: Singa
>  Issue Type: Improvement
>Reporter: Moaz Reyad
>Priority: Major
> Attachments: KEYS
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>





[jira] [Commented] (SINGA-456) Adding more PGP Keys

2019-08-24 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SINGA-456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16915113#comment-16915113
 ] 

ASF subversion and git services commented on SINGA-456:
---

Commit c28df010ecdb79a334674982a3881fb9a224347a in incubator-singa's branch 
refs/heads/master from Moaz Reyad
[ https://gitbox.apache.org/repos/asf?p=incubator-singa.git;h=c28df01 ]

SINGA-456 Add KEYS file


> Adding more PGP Keys
> 
>
> Key: SINGA-456
> URL: https://issues.apache.org/jira/browse/SINGA-456
> Project: Singa
>  Issue Type: Improvement
>Reporter: Moaz Reyad
>Priority: Major
> Attachments: KEYS
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>





[GitHub] [incubator-singa] nudles merged pull request #530: SINGA-456 Add KEYS file

2019-08-24 Thread GitBox
nudles merged pull request #530: SINGA-456 Add KEYS file
URL: https://github.com/apache/incubator-singa/pull/530
 
 
   




[GitHub] [incubator-singa] moazreyad opened a new pull request #530: SINGA-456 Add KEYS file

2019-08-24 Thread GitBox
moazreyad opened a new pull request #530: SINGA-456 Add KEYS file
URL: https://github.com/apache/incubator-singa/pull/530
 
 
   




[jira] [Commented] (SINGA-456) Adding more PGP Keys

2019-08-24 Thread Moaz Reyad (Jira)


[ 
https://issues.apache.org/jira/browse/SINGA-456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16915107#comment-16915107
 ] 

Moaz Reyad commented on SINGA-456:
--

Reminder: the key will expire next month and should be extended. Otherwise the 
next release of SINGA cannot be correctly signed.

I also noticed that many Apache projects keep the KEYS file in GitHub, such as 
[MXNET|https://github.com/apache/incubator-mxnet/blob/master/KEYS], 
[NIFI|https://github.com/apache/nifi/blob/master/KEYS], 
[Tika|https://github.com/apache/tika/blob/master/KEYS], etc. So it may be a 
good idea to add the SINGA KEYS file to GitHub as well.

> Adding more PGP Keys
> 
>
> Key: SINGA-456
> URL: https://issues.apache.org/jira/browse/SINGA-456
> Project: Singa
>  Issue Type: Improvement
>Reporter: Moaz Reyad
>Priority: Major
> Attachments: KEYS
>
>





[GitHub] [incubator-singa] nudles commented on a change in pull request #524: SINGA-474 prelu, add, equal, selu, elu operator

2019-08-24 Thread GitBox
nudles commented on a change in pull request #524: SINGA-474 
prelu,add,equal,selu,elu operator
URL: https://github.com/apache/incubator-singa/pull/524#discussion_r317377067
 
 

 ##
 File path: python/singa/autograd.py
 ##
 @@ -608,21 +608,177 @@ def backward(self, dy):
 def reshape(a, shape):
     return Reshape(shape)(a)[0]
 
+class PRelu(Operation):
+
+    def __init__(self):
+        super(PRelu, self).__init__()
+
+    def forward(self, x, slope):
+        mask0 = singa.LTFloat(x, 0.0)
+        if training:
+            self.input = x
+            self.slope = slope
+            self.mask0 = mask0
+        x1 = singa.__mul__(x, mask0)
+        x1 *= slope
+        x2 = singa.ReLU(x)
+        x1 += x2
+        return x1
+
+    def backward(self, dy):
+        dx1mask = singa.GEFloat(self.input, 0.0)
+        dx2 = singa.__mul__(self.mask0, self.slope)
+        dx = singa.__add__(dx1mask, dx2)
+        return singa.__mul__(dy, dx), singa.__mul__(dy, singa.__mul__(self.mask0, self.input))
+
+
+def prelu(x, slope):
+    return PRelu()(x, slope)[0]
 
 class Add(Operation):
     def __init__(self):
         super(Add, self).__init__()
 
     def forward(self, a, b):
+        # up till now, the dimensions of tensors a and b should be less than 3
+        self.shape0 = list(a.shape())
+        self.shape1 = list(b.shape())
+        assert len(self.shape0) <= 2 and len(self.shape1) <= 2, \
+            "up till now, the dimensions of tensors a and b should be less than 3"
         return singa.__add__(a, b)
 
     def backward(self, dy):
-        return dy, dy
+        if type(dy) == float:
+            return dy, dy
+        db = CTensor(list(dy.shape()), dy.device())
+        db.CopyData(dy)
+        for i in range(len(self.shape0) - len(self.shape1)):
+            db = singa.Sum(db, 0)
+        return dy, db
 
 
 def add(a, b):
     return Add()(a, b)[0]
 
+class Elu(Operation):
+    def __init__(self, alpha=1):
+        super(Elu, self).__init__()
+        self.alpha = alpha
+
+    def forward(self, x):
+        """Do forward propagation.
+        Store the x if requires gradient.
+        Args:
+            x (CTensor): matrix
+        Returns:
+            a CTensor for the result
+        """
+        # f(x) = alpha * (exp(x) - 1.) for x < 0, f(x) = x for x >= 0
+        if training:
+            self.input = x
+        x1 = singa.LTFloat(x, 0.0)
+        x1 = singa.__mul__(x, x1)
+        x1 = singa.MultFloat(singa.SubFloat(singa.Exp(x1), 1.0), self.alpha)
+        x2 = singa.ReLU(x)
+        x1 = singa.__add__(x1, x2)
 
 Review comment:
   can we use ```x1 += x2``` here?
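
For reference, the suggestion is the in-place form of the last line in Elu.forward. A minimal sketch, assuming singa tensors support `+=` the way the PRelu code above already uses it:

```python
# inside Elu.forward, replacing x1 = singa.__add__(x1, x2):
x2 = singa.ReLU(x)
x1 += x2  # in-place add; may avoid allocating an extra intermediate tensor
return x1
```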




[GitHub] [incubator-singa] nudles commented on a change in pull request #524: SINGA-474 prelu, add, equal, selu, elu operator

2019-08-24 Thread GitBox
nudles commented on a change in pull request #524: SINGA-474 
prelu,add,equal,selu,elu operator
URL: https://github.com/apache/incubator-singa/pull/524#discussion_r317377059
 
 

 ##
 File path: python/singa/autograd.py
 ##
 @@ -608,21 +608,177 @@ def backward(self, dy):
 def reshape(a, shape):
     return Reshape(shape)(a)[0]
 
+class PRelu(Operation):
+
+    def __init__(self):
+        super(PRelu, self).__init__()
+
+    def forward(self, x, slope):
+        mask0 = singa.LTFloat(x, 0.0)
+        if training:
+            self.input = x
+            self.slope = slope
+            self.mask0 = mask0
+        x1 = singa.__mul__(x, mask0)
+        x1 *= slope
+        x2 = singa.ReLU(x)
+        x1 += x2
+        return x1
+
+    def backward(self, dy):
+        dx1mask = singa.GEFloat(self.input, 0.0)
+        dx2 = singa.__mul__(self.mask0, self.slope)
+        dx = singa.__add__(dx1mask, dx2)
+        return singa.__mul__(dy, dx), singa.__mul__(dy, singa.__mul__(self.mask0, self.input))
+
+
+def prelu(x, slope):
+    return PRelu()(x, slope)[0]
 
 class Add(Operation):
     def __init__(self):
         super(Add, self).__init__()
 
     def forward(self, a, b):
+        # up till now, the dimensions of tensors a and b should be less than 3
+        self.shape0 = list(a.shape())
+        self.shape1 = list(b.shape())
+        assert len(self.shape0) <= 2 and len(self.shape1) <= 2, \
+            "up till now, the dimensions of tensors a and b should be less than 3"
         return singa.__add__(a, b)
 
     def backward(self, dy):
-        return dy, dy
+        if type(dy) == float:
+            return dy, dy
 
 Review comment:
   this implementation assumes that a has the same shape as dy?
   If so, we need to assert it.
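
For reference, a NumPy sketch of the assumption being questioned and of the reduction Add.backward performs for the broadcast input (the shapes are illustrative):

```python
import numpy as np

a = np.random.randn(3, 2)   # first input
b = np.random.randn(2)      # second input, broadcast along axis 0 in forward
dy = np.random.randn(3, 2)  # upstream gradient, same shape as the output

da = dy                # only valid if a.shape == dy.shape -- hence the requested assert
db = dy.sum(axis=0)    # sum over the broadcast leading axis, mirroring singa.Sum(db, 0)
assert da.shape == a.shape and db.shape == b.shape
```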




[GitHub] [incubator-singa] nudles commented on a change in pull request #524: SINGA-474 prelu, add, equal, selu, elu operator

2019-08-24 Thread GitBox
nudles commented on a change in pull request #524: SINGA-474 
prelu,add,equal,selu,elu operator
URL: https://github.com/apache/incubator-singa/pull/524#discussion_r317377086
 
 

 ##
 File path: test/python/test_operation.py
 ##
 @@ -1587,25 +1587,148 @@ def test_min_gpu(self):
 
        np.testing.assert_array_almost_equal(tensor.to_numpy(tensor.from_raw_tensor(dx1)), DX1, decimal=5)
 
 
-    def test_HardSigmoid(self):
-        def test_helper(gpu=False):
-            x = np.random.randn(3, 2)
-            # y = max(0, min(1, alpha * x + gamma))
-            a = 0.2
-            g = 0.5
-            y = np.clip(x * 0.2 + 0.5, 0, 1)
-            grad = (0 < (np.clip(x * 0.2 + 0.5, 0, 1)) * (np.clip(x * 0.2 + 0.5, 0, 1) < 1)) * 0.2
-            x = tensor.from_numpy(x)
-            if gpu:
-                x.to_device(gpu_dev)
-            result = autograd.hardsigmoid(x, a, g)
-            dy = tensor.from_numpy(np.random.randn((3, 2)).astype(np.float32))
-            dx = result.creator.backward(dy.data)
-            np.testing.assert_array_almost_equal(tensor.to_numpy(result), y, decimal=5)
-            np.testing.assert_array_almost_equal(tensor.to_numpy(tensor.from_raw_tensor(dx)), grad, decimal=5)
-        test_helper(False)
-        test_helper(True)
+    def test_HardSigmoid(self):
+        def test_helper(gpu=False):
+            x = np.random.randn(3, 2)
+            # y = max(0, min(1, alpha * x + gamma))
+            a = 0.2
+            g = 0.5
+            y = np.clip(x * 0.2 + 0.5, 0, 1)
+            dy = np.random.randn(3, 2)
+            grad = (0 < (np.clip(x * 0.2 + 0.5, 0, 1)) * (np.clip(x * 0.2 + 0.5, 0, 1) < 1)) * 0.2 * dy
+            x = tensor.from_numpy(x)
+            dy = tensor.from_numpy(dy)
+            if gpu:
+                x.to_device(gpu_dev)
+                dy.to_device(gpu_dev)
+            result = autograd.hardsigmoid(x, a, g)
+            dx = result.creator.backward(dy.data)
+            np.testing.assert_array_almost_equal(tensor.to_numpy(result), y, decimal=5)
+            np.testing.assert_array_almost_equal(tensor.to_numpy(tensor.from_raw_tensor(dx)), grad, decimal=5)
+        test_helper(False)
+        test_helper(True)
+
+    @unittest.skipIf(not singa_wrap.USE_CUDA, 'CUDA is not enabled')
+    def test_prelu(self):
+        def helper(gpu):
+            x = np.random.randn(3, 2)
+            slope = np.random.randn(3, 2)
+            y = np.clip(x, 0, np.inf) + np.clip(x, -np.inf, 0) * slope
+            dy = np.random.randn(3, 2)
+            x0 = x.copy()
+            x0[x0 > 0] = 1
+            x0[x0 < 1] = 0
+            grad0 = (x0 + (1 - x0) * slope) * dy
+            grad1 = (1 - x0) * x * dy
+            x = tensor.from_numpy(x)
+            slope = tensor.from_numpy(slope)
+            dy = tensor.from_numpy(dy)
+            if gpu:
+                x.to_device(gpu_dev)
+                slope.to_device(gpu_dev)
+                dy.to_device(gpu_dev)
+            result = autograd.prelu(x, slope)
+            dx0, dx1 = result.creator.backward(dy.data)
+            np.testing.assert_array_almost_equal(tensor.to_numpy(result), y, decimal=5)
+            np.testing.assert_array_almost_equal(tensor.to_numpy(tensor.from_raw_tensor(dx0)), grad0, decimal=5)
+            np.testing.assert_array_almost_equal(tensor.to_numpy(tensor.from_raw_tensor(dx1)), grad1, decimal=5)
+        helper(False)
+        helper(True)
+
+    @unittest.skipIf(not singa_wrap.USE_CUDA, 'CUDA is not enabled')
+    def test_SeLU(self):
+        def test_helper(gpu):
+            x = np.random.randn(3, 2)
+            a = 0.2
+            g = 0.3
+            y = np.clip(x, 0, np.inf) * g + (np.exp(np.clip(x, -np.inf, 0)) - 1) * a * g
+            dy = np.random.randn(3, 2)
+            grad = (np.exp(np.clip(x, -np.inf, 0))) * g
+            grad[x <= 0] = grad[x <= 0] * a
+            grad *= dy
+            x = tensor.from_numpy(x)
+
+            result = autograd.selu(x, a, g)
+            dy = tensor.from_numpy(dy)
+            if gpu:
+                dy.to_device(gpu_dev)
+                x.to_device(gpu_dev)
+            dx = result.creator.backward(dy.data)
+
+            np.testing.assert_array_almost_equal(tensor.to_numpy(result), y, decimal=5)
+            np.testing.assert_array_almost_equal(tensor.to_numpy(tensor.from_raw_tensor(dx)), grad, decimal=5)
+        test_helper(False)
+        test_helper(True)
+
+    @unittest.skipIf(not singa_wrap.USE_CUDA, 'CUDA is not enabled')
+    def test_Equal(self):
+        def test_helper(gpu):
+            x0 = np.random.randn(3, 2)
+            x1 = np.random.randn(3, 2)
+            y = np.equal(x0, x1)
+            x0 = tensor.from_numpy(x0)
+            x1 = tensor.from_numpy(x1)
+            if gpu:
+                x0.to_device(gpu_dev)
+                x1.to_device(gpu_dev)
+
+            result = autograd.equal(x0, x1)
+
+            np.testing.assert_array_almost_equal(tensor.to_numpy(result), y, decimal=5)
+        test_helper(False)
+        test_helper(True)
+
+    @unittest.skipIf(not singa_wrap.USE_CUDA, 'CUDA is not enab

[GitHub] [incubator-singa] nudles commented on a change in pull request #524: SINGA-474 prelu, add, equal, selu, elu operator

2019-08-24 Thread GitBox
nudles commented on a change in pull request #524: SINGA-474 
prelu,add,equal,selu,elu operator
URL: https://github.com/apache/incubator-singa/pull/524#discussion_r317377077
 
 

 ##
 File path: python/singa/autograd.py
 ##
 @@ -608,21 +608,177 @@ def backward(self, dy):
 def reshape(a, shape):
     return Reshape(shape)(a)[0]
 
+class PRelu(Operation):
+
+    def __init__(self):
+        super(PRelu, self).__init__()
+
+    def forward(self, x, slope):
+        mask0 = singa.LTFloat(x, 0.0)
+        if training:
+            self.input = x
+            self.slope = slope
+            self.mask0 = mask0
+        x1 = singa.__mul__(x, mask0)
+        x1 *= slope
+        x2 = singa.ReLU(x)
+        x1 += x2
+        return x1
+
+    def backward(self, dy):
+        dx1mask = singa.GEFloat(self.input, 0.0)
+        dx2 = singa.__mul__(self.mask0, self.slope)
+        dx = singa.__add__(dx1mask, dx2)
+        return singa.__mul__(dy, dx), singa.__mul__(dy, singa.__mul__(self.mask0, self.input))
+
+
+def prelu(x, slope):
+    return PRelu()(x, slope)[0]
 
 class Add(Operation):
     def __init__(self):
         super(Add, self).__init__()
 
     def forward(self, a, b):
+        # up till now, the dimensions of tensors a and b should be less than 3
+        self.shape0 = list(a.shape())
+        self.shape1 = list(b.shape())
+        assert len(self.shape0) <= 2 and len(self.shape1) <= 2, \
+            "up till now, the dimensions of tensors a and b should be less than 3"
         return singa.__add__(a, b)
 
     def backward(self, dy):
-        return dy, dy
+        if type(dy) == float:
+            return dy, dy
+        db = CTensor(list(dy.shape()), dy.device())
+        db.CopyData(dy)
+        for i in range(len(self.shape0) - len(self.shape1)):
+            db = singa.Sum(db, 0)
+        return dy, db
 
 
 def add(a, b):
     return Add()(a, b)[0]
 
+class Elu(Operation):
+    def __init__(self, alpha=1):
+        super(Elu, self).__init__()
+        self.alpha = alpha
+
+    def forward(self, x):
+        """Do forward propagation.
+        Store the x if requires gradient.
+        Args:
+            x (CTensor): matrix
+        Returns:
+            a CTensor for the result
+        """
+        # f(x) = alpha * (exp(x) - 1.) for x < 0, f(x) = x for x >= 0
+        if training:
+            self.input = x
+        x1 = singa.LTFloat(x, 0.0)
+        x1 = singa.__mul__(x, x1)
+        x1 = singa.MultFloat(singa.SubFloat(singa.Exp(x1), 1.0), self.alpha)
+        x2 = singa.ReLU(x)
+        x1 = singa.__add__(x1, x2)
+        return x1
+
+    def backward(self, dy):
+        """
+        Args:
+            dy (CTensor): data for the dL / dy, L is the loss
+        Returns:
+            a tuple for dx
+        """
+        dx1mask = singa.LTFloat(self.input, 0.0)
+        dx1 = singa.MultFloat(singa.Exp(self.input), self.alpha)
+        dx1 = singa.__mul__(dx1mask, dx1)
+
+        dx2mask = singa.GEFloat(self.input, 0.0)
+
+        dx = singa.__add__(dx1, dx2mask)
+        return singa.__mul__(dy, dx)
 
 Review comment:
   can we use ```dx *= dy``` here?
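
For reference, a minimal sketch of the suggested in-place form of the Elu.backward tail, assuming singa tensors support `*=` the way PRelu's ```x1 *= slope``` above does:

```python
# inside Elu.backward, replacing return singa.__mul__(dy, dx):
dx = singa.__add__(dx1, dx2mask)
dx *= dy  # in-place multiply instead of allocating a new tensor
return dx
```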




[GitHub] [incubator-singa] nudles commented on a change in pull request #468: Distributted module

2019-08-24 Thread GitBox
nudles commented on a change in pull request #468: Distributted module
URL: https://github.com/apache/incubator-singa/pull/468#discussion_r317377016
 
 

 ##
 File path: python/singa/autograd.py
 ##
 @@ -1286,25 +1287,26 @@ def set_params(self, **parameters):
 
 
 class _BatchNorm2d(Operation):
-    def __init__(self, handle, name=None):
+    def __init__(self, handle, running_mean, running_var, name=None):
         super(_BatchNorm2d, self).__init__(name)
         self.handle = handle
+        self.running_mean = running_mean.data
+        self.running_var = running_var.data
 
-    def forward(self, x, scale, bias, running_mean, running_var):
-        self.running_mean = running_mean
-        self.running_var = running_var
+    def forward(self, x, scale, bias):
         if training:
 
             if isinstance(self.handle, singa.CudnnBatchNormHandle):
                 y, mean, var = singa.GpuBatchNormForwardTraining(
-                    self.handle, x, scale, bias, running_mean, running_var
+                    self.handle, x, scale, bias, self.running_mean, self.running_var
 
 Review comment:
   for all the cudnn checks in this file, we need to replace them with the CPU check.




[GitHub] [incubator-singa] nudles commented on a change in pull request #468: Distributted module

2019-08-24 Thread GitBox
nudles commented on a change in pull request #468: Distributted module
URL: https://github.com/apache/incubator-singa/pull/468#discussion_r317377008
 
 

 ##
 File path: python/singa/autograd.py
 ##
 @@ -1286,25 +1287,26 @@ def set_params(self, **parameters):
 
 
 class _BatchNorm2d(Operation):
-    def __init__(self, handle, name=None):
+    def __init__(self, handle, running_mean, running_var, name=None):
         super(_BatchNorm2d, self).__init__(name)
         self.handle = handle
+        self.running_mean = running_mean.data
+        self.running_var = running_var.data
 
-    def forward(self, x, scale, bias, running_mean, running_var):
-        self.running_mean = running_mean
-        self.running_var = running_var
+    def forward(self, x, scale, bias):
         if training:
 
             if isinstance(self.handle, singa.CudnnBatchNormHandle):
                 y, mean, var = singa.GpuBatchNormForwardTraining(
-                    self.handle, x, scale, bias, self.running_mean, self.running_var
+                    self.handle, x, scale, bias, self.running_mean, self.running_var
 
 Review comment:
   please do the CPU check ```if isinstance(self.handle, singa.CpuBatchNormHandle):```, because if there is no GPU, there will be an error like "Cudnn... is unknown".
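
For reference, a minimal sketch of the requested dispatch inside _BatchNorm2d.forward, so the CuDNN path is only touched when a GPU handle is actually present (the CPU-path call is a hypothetical placeholder, not SINGA's exact API):

```python
if isinstance(self.handle, singa.CpuBatchNormHandle):
    # CPU path: safe even when SINGA is built without CUDA/CuDNN
    y, mean, var = cpu_batch_norm_forward_training(  # hypothetical helper
        self.handle, x, scale, bias, self.running_mean, self.running_var)
else:
    # GPU path, e.g. a singa.CudnnBatchNormHandle
    y, mean, var = singa.GpuBatchNormForwardTraining(
        self.handle, x, scale, bias, self.running_mean, self.running_var)
```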




[GitHub] [incubator-singa] nudles commented on a change in pull request #468: Distributted module

2019-08-24 Thread GitBox
nudles commented on a change in pull request #468: Distributted module
URL: https://github.com/apache/incubator-singa/pull/468#discussion_r317375031
 
 

 ##
 File path: CMakeLists.txt
 ##
 @@ -30,7 +30,7 @@ LIST(APPEND CMAKE_MODULE_PATH ${PROJECT_SOURCE_DIR}/cmake/Thirdparty)
 
 # Flags
 IF(UNIX)
-    SET(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=c++11 -g -O2 -fPIC -Wall -pthread")
+    SET(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=c++11 -O3 -fPIC -Wall -pthread")
 
 Review comment:
   please change it back to "-g -O2".




[GitHub] [incubator-singa] nudles commented on a change in pull request #468: Distributted module

2019-08-24 Thread GitBox
nudles commented on a change in pull request #468: Distributted module
URL: https://github.com/apache/incubator-singa/pull/468#discussion_r317375284
 
 

 ##
 File path: src/api/config.i
 ##
 @@ -0,0 +1,34 @@
+// Licensed to the Apache Software Foundation (ASF) under one
 
 Review comment:
   is this file generated by cmake?
   If yes, it should not be put into the git repo.




[GitHub] [incubator-singa] nudles commented on a change in pull request #468: Distributted module

2019-08-24 Thread GitBox
nudles commented on a change in pull request #468: Distributted module
URL: https://github.com/apache/incubator-singa/pull/468#discussion_r317375169
 
 

 ##
 File path: examples/autograd/mnist_dist.py
 ##
 @@ -0,0 +1,251 @@
+#
 
 Review comment:
   can we combine mnist_dist.py and mnist.py?
   e.g., put the model construction, data preprocessing, and training code into mnist.py;
   mnist_dist.py then imports those functions and passes the dist opt into train() to conduct distributed training.
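
A hedged sketch of the suggested split; the function names are illustrative, not the actual example's API:

```python
# mnist.py -- shared model construction, data preprocessing, and training loop
def build_model():
    ...  # construct the network with the autograd layers

def load_data():
    ...  # download / normalize MNIST

def train(model, data, optimizer, max_epoch=10):
    ...  # one loop used by both runs; only the optimizer object differs

# mnist_dist.py -- reuses the shared pieces and swaps in the distributed optimizer
# from mnist import build_model, load_data, train
# train(build_model(), load_data(), DistOpt(SGD(lr=0.05)))  # DistOpt/SGD illustrative
```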




[GitHub] [incubator-singa] moazreyad opened a new pull request #529: SINGA-484 Code analysis with LGTM

2019-08-24 Thread GitBox
moazreyad opened a new pull request #529: SINGA-484 Code analysis with LGTM
URL: https://github.com/apache/incubator-singa/pull/529
 
 
   Adding code analysis badges to README.




[jira] [Created] (SINGA-484) Code analysis with LGTM

2019-08-24 Thread Moaz Reyad (Jira)
Moaz Reyad created SINGA-484:


 Summary: Code analysis with LGTM
 Key: SINGA-484
 URL: https://issues.apache.org/jira/browse/SINGA-484
 Project: Singa
  Issue Type: Improvement
Reporter: Moaz Reyad


As the SINGA codebase grows, we need to keep the code quality high to avoid 
bugs and security issues.

[LGTM|https://lgtm.com/] is a free tool for continuous security analysis.

At the time of creating this issue, SINGA's C++ and Python code are both at 
[grade|https://lgtm.com/help/lgtm/project-scoring-grading] D (on a scale from 
A+ down to E, this is a low grade).

There are some [alerts for 
SINGA|https://lgtm.com/projects/g/apache/incubator-singa/alerts/?mode=list] 
that should be fixed.





Re: [VOTE] Graduate Apache SINGA as TLP

2019-08-24 Thread Moaz Reyad
+1 from me also.

The vote can be closed and I will send the RESULT email soon.

But it would be useful if the mentors participated in this vote before we
close it.

best regards,
Moaz

On Thu, Aug 22, 2019 at 4:34 AM zhongle  wrote:

> +1
>
> Best,
> zl
>
> > On Aug 22, 2019, at 10:24, Beng Chin OOI  wrote:
> >
> >
> > +1
> >
> > regards
> > Beng Chin
> >
> >
> >> On 2019-08-22 10:22, Wang Wei wrote:
> >> +1
> >> Thanks, Moaz!
> >> Regards,
> >> Wei
> >>> On Wed, Aug 21, 2019 at 8:53 PM Moaz Reyad  wrote:
> >>> Dear All,
> >>> Apache SINGA entered the incubator in March 2015. Since then, the
> community
> >>> has grown and several releases were published.
> >>> The last incubator report listed SINGA as "Nearing graduation":
> >>> https://cwiki.apache.org/confluence/display/INCUBATOR/June2019
> >>> The maturity assessment of SINGA can be found here:
> >>>
> https://cwiki.apache.org/confluence/display/SINGA/Maturity+model+assessment
> >>> The graduation resolution can be found here:
> >>>
> https://cwiki.apache.org/confluence/display/SINGA/Graduation+Resolution
> >>> Please take a minute to vote on whether or not Apache SINGA should
> graduate
> >>> to a Top Level Project by responding with one of the following:
> >>> [ ] +1 Apache SINGA should graduate.
> >>> [ ] +0 No opinion
> >>> [ ] -1 Apache SINGA should not graduate (please provide the reason)
> >>> The VOTE is open for a minimum of 72 hours.
> >>> Thank you,
> >>> Moaz
>
>