ThomasDelteil commented on a change in pull request #13411: [WIP] Gluon end to 
end tutorial
URL: https://github.com/apache/incubator-mxnet/pull/13411#discussion_r236903849
 
 

 ##########
 File path: docs/tutorials/gluon/gluon_from_experiment_to_deploymen.md
 ##########
 @@ -0,0 +1,400 @@
+# Gluon: from experiment to deployment, an end-to-end example
+
+## Overview
+
+The MXNet Gluon API comes with a lot of great features, and it provides everything you need to go from experiment to deployment. In this tutorial, we will walk you through a common use case: building a model with Gluon, training it on your data, and deploying it for inference.
+
+Let's say you want to build a service that provides flower species recognition. A common situation is that you don't have enough data to train a good model such as ResNet50 from scratch. What you can do is take a pre-trained model from Gluon, tweak it according to your needs, fine-tune it on your small dataset, and deploy the resulting model to integrate with your service.
+
+We will use the [Oxford 102 Category Flower Dataset](http://www.robots.ox.ac.uk/~vgg/data/flowers/102/) as an example to show you the steps.
+
+## Prepare training data
+
+You can use this 
[script](https://github.com/Arsey/keras-transfer-learning-for-oxford102/blob/master/bootstrap.py)
 to download and organize your data into train, test, and validation sets. 
Simply run:
+```bash
+python bootstrap.py
+```
+
+Now your data will be organized into the following format, with all images belonging to the same category placed in the same folder:
+```
+data
+├── train
+│   ├── 0
+│   │   ├── image_06736.jpg
+│   │   ├── image_06741.jpg
+...
+│   ├── 1
+│   │   ├── image_06755.jpg
+│   │   ├── image_06899.jpg
+...
+├── test
+│   ├── 0
+│   │   ├── image_00731.jpg
+│   │   ├── image_0002.jpg
+...
+│   ├── 1
+│   │   ├── image_00036.jpg
+│   │   ├── image_05011.jpg
+
+```
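+
+Gluon's `ImageFolderDataset` will later infer the class labels directly from these sub-folder names. As an optional sanity check (a small illustrative snippet, not required for the rest of the tutorial), you can load one split and inspect it:
+
+```python
+from mxnet.gluon.data.vision import ImageFolderDataset
+
+# each sub-folder of ./data/train becomes one class; labels are assigned
+# from the sorted folder names
+dataset = ImageFolderDataset('./data/train')
+print(len(dataset))         # total number of training images
+print(dataset.synsets[:5])  # first few class (folder) names
+img, label = dataset[0]     # raw image NDArray in (H, W, C) layout and its integer label
+print(img.shape, label)
+```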
+
+
+## Training using Gluon
+### Define Hyper-parameters
+First, let's import the necessary packages:
+```python
+import mxnet as mx
+import numpy as np
+import os, time
+
+from mxnet import gluon, init
+from mxnet import autograd as ag
+from mxnet.gluon import nn
+from mxnet.gluon.data.vision import transforms
+from gluoncv.model_zoo import get_model
+```
+
+and define the hyper-parameters we will use for fine-tuning:
+```python
+classes = 102
+
+epochs = 1
+lr = 0.001
+per_device_batch_size = 32
+momentum = 0.9
+wd = 0.0001
+
+lr_factor = 0.75
+lr_steps = [10, 20, 30, np.inf]
+
+num_gpus = 0
+num_workers = 1
+ctx = [mx.gpu(i) for i in range(num_gpus)] if num_gpus > 0 else [mx.cpu()]
+batch_size = per_device_batch_size * max(num_gpus, 1)
+```
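+
+With `num_gpus = 0` the context list falls back to the CPU and the effective batch size stays at `per_device_batch_size`; if you have GPUs available, you only need to change `num_gpus`. As a quick optional check:
+
+```python
+print(ctx)         # e.g. [cpu(0)] when num_gpus = 0
+print(batch_size)  # 32 here, since per_device_batch_size * max(num_gpus, 1) = 32 * 1
+```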
+
+### Data pre-processing
+
+We can use the Gluon `Dataset`, `DataLoader`, and `transforms` APIs to load the images and apply data augmentation:
+```python
+jitter_param = 0.4
+lighting_param = 0.1
+
+transform_train = transforms.Compose([
+    transforms.RandomResizedCrop(224),
+    transforms.RandomFlipLeftRight(),
+    transforms.RandomColorJitter(brightness=jitter_param, contrast=jitter_param,
+                                 saturation=jitter_param),
+    transforms.RandomLighting(lighting_param),
+    transforms.ToTensor(),
+    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
+])
+
+transform_test = transforms.Compose([
+    transforms.Resize(256),
+    transforms.CenterCrop(224),
+    transforms.ToTensor(),
+    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
+])
+
+
+path = './data'
+train_path = os.path.join(path, 'train')
+val_path = os.path.join(path, 'valid')
+test_path = os.path.join(path, 'test')
+
+train_data = gluon.data.DataLoader(
+    gluon.data.vision.ImageFolderDataset(train_path).transform_first(transform_train),
+    batch_size=batch_size, shuffle=True, num_workers=num_workers)
+
+val_data = gluon.data.DataLoader(
+    gluon.data.vision.ImageFolderDataset(val_path).transform_first(transform_test),
+    batch_size=batch_size, shuffle=False, num_workers=num_workers)
+
+test_data = gluon.data.DataLoader(
+    gluon.data.vision.ImageFolderDataset(test_path).transform_first(transform_test),
+    batch_size=batch_size, shuffle=False, num_workers=num_workers)
+```
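+
+Each batch yielded by the `DataLoader` is a tuple of images and labels. As an optional quick check (illustrative only), you can pull a single batch and verify that the shapes match the transforms above:
+
+```python
+# images come out as float32 tensors of shape (batch_size, 3, 224, 224)
+# after ToTensor/Normalize; labels are the integer class indices
+for data, label in train_data:
+    print(data.shape, data.dtype)
+    print(label.shape, label[:5])
+    break
+```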
+
+
+### Loading pre-trained model
+
+We will use a pre-trained ResNet50_v2 model; all you need to do is redefine the last output layer for your use case: specify the number of classes in your data and initialize the weights. You can also add extra layers to the network according to your needs.
+
+One important step before training is to hybridize your model, which converts your imperative code into an MXNet symbolic graph. A symbolic model is much more efficient to train, and it also lets you serialize and save the network architecture and parameters for inference.
+
+```python
+model_name = 'ResNet50_v2'
+finetune_net = get_model(model_name, pretrained=True)
+# replace the output layer with a new Dense layer sized for our number of classes
+with finetune_net.name_scope():
+    finetune_net.output = nn.Dense(classes)
+# initialize only the new output layer and move all parameters to the target context
+finetune_net.output.initialize(init.Xavier(), ctx=ctx)
+finetune_net.collect_params().reset_ctx(ctx)
+finetune_net.hybridize()
+
+trainer = gluon.Trainer(finetune_net.collect_params(), 'sgd', {
+                        'learning_rate': lr, 'momentum': momentum, 'wd': wd})
+metric = mx.metric.Accuracy()
+L = gluon.loss.SoftmaxCrossEntropyLoss()
+```
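+
+As a side note, the `lr_factor` and `lr_steps` defined earlier describe a step decay of the learning rate. A minimal sketch of how such a schedule can be applied with `Trainer.set_learning_rate` (illustrative only; `epoch` and `lr_counter` are hypothetical loop variables, not part of the code above):
+
+```python
+lr_counter = 0
+for epoch in range(epochs):
+    # when we reach the next boundary in lr_steps, scale the learning rate down by lr_factor
+    if epoch == lr_steps[lr_counter]:
+        trainer.set_learning_rate(trainer.learning_rate * lr_factor)
+        lr_counter += 1
+    # ... the usual forward/backward/step over train_data goes here ...
+```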
+
+### Fine-tuning the model on your custom dataset
+
+Now let's define the evaluation function and start fine-tuning.
+
+```python
+def test(net, val_data, ctx):
+    metric = mx.metric.Accuracy()
+    for i, batch in enumerate(val_data):
 
 Review comment:
   please use the following notation:
   `for i, (data, label) in enumerate(dataloader):`
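
  For example, the whole test function could read along these lines (just a sketch, untested):

  ```python
  def test(net, val_data, ctx):
      metric = mx.metric.Accuracy()
      for i, (data, label) in enumerate(val_data):
          data = gluon.utils.split_and_load(data, ctx_list=ctx, even_split=False)
          label = gluon.utils.split_and_load(label, ctx_list=ctx, even_split=False)
          outputs = [net(x) for x in data]
          metric.update(label, outputs)
      return metric.get()
  ```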

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services
