piiswrong closed pull request #9005: update python-howto examples
URL: https://github.com/apache/incubator-mxnet/pull/9005
This is a PR merged from a forked repository. As GitHub hides the original diff on merge, it is displayed below for the sake of provenance:

diff --git a/example/python-howto/README.md b/example/python-howto/README.md
index 2499c2ab07..29652408e0 100644
--- a/example/python-howto/README.md
+++ b/example/python-howto/README.md
@@ -1,15 +1,17 @@
 Python Howto Examples
 =====================
-* [Configuring Net to get Multiple Ouputs](multiple_outputs.py)
+
+* [Configuring Net to Get Multiple Ouputs](multiple_outputs.py)
 * [Configuring Image Record Iterator](data_iter.py)
+* [Monitor Intermediate Outputs in the Network](monitor_weights.py)
 * Set break point in C++ code of the symbol using gdb under Linux:
   * Build mxnet with following values:
 
    ```
    DEBUG=1
-   CUDA=0 #to make sure convolution-inl.h will be used
-   CUDNN=0 #to make sure convolution-inl.h will be used
+   USE_CUDA=0 # to make sure convolution-inl.h will be used
+   USE_CUDNN=0 # to make sure convolution-inl.h will be used
    ```
   * run python under gdb: ```gdb --args python debug_conv.py```
diff --git a/example/python-howto/monitor_weights.py b/example/python-howto/monitor_weights.py
index a8b255196d..ab77b4908b 100644
--- a/example/python-howto/monitor_weights.py
+++ b/example/python-howto/monitor_weights.py
@@ -25,6 +25,7 @@
 import numpy as np
 import logging
 
+# network
 data = mx.symbol.Variable('data')
 fc1 = mx.symbol.FullyConnected(data = data, name='fc1', num_hidden=128)
 act1 = mx.symbol.Activation(data = fc1, name='relu1', act_type="relu")
@@ -34,20 +35,16 @@
 mlp = mx.symbol.SoftmaxOutput(data = fc3, name = 'softmax')
 
 # data
-
 train, val = MNISTIterator(batch_size=100, input_shape = (784,))
-# train
-
-logging.basicConfig(level=logging.DEBUG)
-
-model = mx.model.FeedForward(
-    ctx = mx.cpu(), symbol = mlp, num_epoch = 20,
-    learning_rate = 0.1, momentum = 0.9, wd = 0.00001)
-
+# monitor
 def norm_stat(d):
     return mx.nd.norm(d)/np.sqrt(d.size)
 mon = mx.mon.Monitor(100, norm_stat)
-model.fit(X=train, eval_data=val, monitor=mon,
-          batch_end_callback = mx.callback.Speedometer(100, 100))
+# train with monitor
+logging.basicConfig(level=logging.DEBUG)
+module = mx.module.Module(context=mx.cpu(), symbol=mlp)
+module.fit(train_data=train, eval_data=val, monitor=mon, num_epoch=2,
+           batch_end_callback = mx.callback.Speedometer(100, 100),
+           optimizer_params=(('learning_rate', 0.1), ('momentum', 0.9), ('wd', 0.00001)))

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

With regards,
Apache Git Services
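Editor's note on the statistic the PR's monitor installs: `norm_stat` reports the L2 norm of each monitored array divided by the square root of its element count, i.e. the root-mean-square of its entries, so values are comparable across layers of different sizes. Below is a minimal NumPy sketch of that statistic, independent of MXNet; the array names and shapes are hypothetical stand-ins for what `mx.mon.Monitor` would actually intercept during training.

```python
import numpy as np

def norm_stat(d):
    # Same statistic as the diff's mx.nd.norm(d)/np.sqrt(d.size):
    # L2 norm scaled by sqrt(element count), i.e. the RMS of the entries.
    return np.linalg.norm(d) / np.sqrt(d.size)

# Hypothetical stand-ins for arrays the Monitor would see every 100 batches.
arrays = {
    "fc1_weight": np.full((128, 784), 0.5),  # all entries 0.5 -> RMS is 0.5
    "fc1_output": np.zeros((100, 128)),      # all zeros -> RMS is 0.0
}
for name, arr in arrays.items():
    print(name, norm_stat(arr))
```

Because the statistic is size-normalized, a vanishing or exploding value in the monitor's log points directly at a layer whose weights or activations are drifting, regardless of how large that layer is.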