[ https://issues.apache.org/jira/browse/SINGA-182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15326238#comment-15326238 ]
ASF subversion and git services commented on SINGA-182:
-------------------------------------------------------

Commit 564c88ad95e976e6067198c832f4fcd9a8878cd7 in incubator-singa's branch refs/heads/dev from [~wangwei.cs]
[ https://git-wip-us.apache.org/repos/asf?p=incubator-singa.git;h=564c88a ]

SINGA-182 Clean math function APIs and implementations

Clean tensor.h/.cc and tensor_math.h, tensor_math_cpp.h: re-order the functions by (type, name), where type is a) element-wise function, b) matrix function, c) random function, d) blas function. Implement GEMV using cblas and cublas.

> Clean math function APIs and implementations
> --------------------------------------------
>
>         Key: SINGA-182
>         URL: https://issues.apache.org/jira/browse/SINGA-182
>     Project: Singa
>  Issue Type: Improvement
>    Reporter: wangwei
>
> Since we support different types of hardware devices using the
> corresponding programming languages, e.g., cpp, cuda and opencl,
> we need different math function implementations.
> It is important to keep all math functions consistent in terms of their
> APIs and variable names. Here are some guidelines for making them
> consistent:
> 1. All function names should be like XxxYyy or XY, i.e., capitalize the
> first letter.
> 2. Order functions by function name in alphabetical order.
> 3. The function argument order is {code}[const basic type] [const Blob] [mutable Blob]{code}
> 4. For function argument names, use 'num' for the total number of elements in
> element-wise operations; use 'in1' and 'in2' for input blobs; use 'out' for the
> output blob or value. There are exceptions for some functions, e.g.,
> {code}
> void Scale(const float alpha, const Blob* in, Blob* out);
> {code}
> For such cases, use v, alpha, etc. for scalar arguments.
> For blas functions, follow the BLAS conventions for argument names.
> 5. In the implementation, for a Blob argument xxx, name its raw pointer
> xxxPtr.
> 6. Add an argument 'Context *ctx' to every cuda kernel function.
> 7.
> Name the kernel functions as KernelXxxx.
> 8. Use size_t for the number of elements, rows, or columns.
> 9. Use the same name for the Tensor-level and Blob-level math functions.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)