[ 
https://issues.apache.org/jira/browse/SINGA-126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15227949#comment-15227949
 ] 

ASF subversion and git services commented on SINGA-126:
-------------------------------------------------------

Commit 8130b7ed14e6b556749bf49f0e3ca2a2f6f00e2d in incubator-singa's branch 
refs/heads/master from chonho
[ https://git-wip-us.apache.org/repos/asf?p=incubator-singa.git;h=8130b7e ]

SINGA-126 Python Binding for Interactive Training

- add 2 example python scripts for interactive training
  . train_mnist.py
  . train_cifar10.py

- add methods/class in singa/layer.py
  . ComputeFeature, ComputeGradient, Feed, Setup
  . GetParams, SetParams, GetData
  . Dummy()

- add methods in singa/model.py
  . save_model_parameter
  . load_model_parameter

- add Feed function in src/neuralnet/neuron_layer/dummy.cc
  . corresponds to class Dummy() in layer.py
    note: DummyInputLayer and kDummyInput are removed

- add functions in src/worker.cc
  . Checkpoint
  . InitNetParams

- add CreateXXX functions to set up singa::XXX from string proto
  . XXX are Layer, Updater, Worker

- update tool/python/singa/driver.i for wrapper

- include cifar10_mean_image in examples/datasets/
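The interactive-training pattern these additions enable — feeding data into dummy layers, driving the forward and backward passes one layer at a time, and reading parameters back — can be sketched in plain Python. This is a self-contained illustration of the pattern only; the classes and method signatures below are hypothetical stand-ins, not the singa API.

```python
# Sketch of per-layer interactive training. The classes below are
# hypothetical stand-ins, NOT the singa API; method names mirror the
# Feed / ComputeFeature / ComputeGradient / GetParams methods listed above.

class Dummy:
    """Holds raw input or label data fed in from Python."""
    def __init__(self):
        self.data = None

    def feed(self, data):  # analogous to the Feed method added in this commit
        self.data = data


class Dense:
    """One-weight linear layer y = w * x, trained by explicit per-step calls."""
    def __init__(self, w=0.0):
        self.w = w
        self.data = None

    def compute_feature(self, src):  # forward pass (cf. ComputeFeature)
        self.x = src.data
        self.data = [self.w * v for v in self.x]

    def compute_gradient(self, label, lr=0.1):  # backward + SGD (cf. ComputeGradient)
        # mean gradient of the squared error (w*x - y)^2 with respect to w
        g = sum(2.0 * (d - t) * v for d, t, v in zip(self.data, label.data, self.x))
        self.w -= lr * g / len(self.x)

    def get_params(self):  # analogous to GetParams
        return {"w": self.w}


x_in, y_in = Dummy(), Dummy()
dense = Dense()
for step in range(50):
    x_in.feed([1.0, 2.0, 3.0])
    y_in.feed([2.0, 4.0, 6.0])  # targets satisfy y = 2 * x
    dense.compute_feature(x_in)
    dense.compute_gradient(y_in)
```

Because every forward and backward call happens in user code, the loop can inspect `dense.data` or `dense.get_params()` between any two steps, which is the debugging workflow the issue below asks for.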


> Improve Python Binding for interactive training
> -----------------------------------------------
>
>                 Key: SINGA-126
>                 URL: https://issues.apache.org/jira/browse/SINGA-126
>             Project: Singa
>          Issue Type: Improvement
>            Reporter: wangwei
>            Assignee: Lee Chonho
>              Labels: binding, debugging, interative, python
>
> Currently, python APIs only configure the layer and model. All objects are 
> created after the JobProto is passed to Driver. Hence, users cannot query 
> the layer object returned by
> {code}
> conv1 = Convolution2D()
> {code}
> to get its internal data (e.g, feature and param values). This internal data 
> is useful for debugging.
> To support this feature, we need to create the SINGA::Layer object and store 
> it in conv1.
> Users can write their own BP algorithm like this,
> {code}
> data = numpy.loadtxt("csv.txt")
> x, y = data[:, 1:], data[:, 0]
> input = Dummy()  # dummy layer to get input data
> label = Dummy()  # dummy layer to get label
> conv = Convolution2D(...)
> pool = Pool2D()
> inner = Dense()
> loss = ...
> for i in range(x.shape[0] / batchsize):
>    xb, yb = ...
>    input.SetData(xb)
>    label.SetData(yb)
>    conv.ComputeFeature(input)
>    pool.ComputeFeature(conv)
>    inner.ComputeFeature(pool)
>    loss.ComputeGradient(inner, label)
>    ....
> {code}
> In this way, users know exactly how the training is conducted, and can access 
> the internal data of each layer directly, e.g., conv.data(), conv.GetParams().
> We may also learn from chainer to call the ComputeGradient functions 
> automatically for the backward pass.
> This feature requires the python APIs for singa::Layer.
> This is easy for training with a single worker. For multiple workers, we 
> need to think more.
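The chainer-style automatic backward pass suggested in the issue can be sketched as a small operation tape: each forward call records its layer, and the backward pass replays ComputeGradient calls in reverse order. This is a generic illustration with hypothetical stand-in classes, not singa code.

```python
# Minimal tape-based reverse pass, showing how ComputeGradient calls could
# be invoked automatically. Generic sketch; NOT the singa or chainer API.

class Tape:
    def __init__(self):
        self.ops = []  # layers in forward order

    def record(self, layer):
        self.ops.append(layer)

    def backward(self):
        # replay gradient computation in reverse forward order
        for layer in reversed(self.ops):
            layer.compute_gradient()


class Layer:
    def __init__(self, name, log):
        self.name, self.log = name, log

    def compute_feature(self, tape):
        tape.record(self)  # register this layer for the backward pass

    def compute_gradient(self):
        self.log.append(self.name)


log = []
tape = Tape()
for name in ["conv", "pool", "inner", "loss"]:
    Layer(name, log).compute_feature(tape)
tape.backward()
print(log)  # gradients run in reverse: ['loss', 'inner', 'pool', 'conv']
```

With such a tape, users would keep the explicit forward calls (and the per-layer inspection they allow) while delegating the easy-to-get-wrong backward ordering to the framework.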



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
