Hi @LewsTherin511, thanks for the overview!

The reason we chose only pencils and pens is that their shapes are quite
similar, so it seemed like a good dataset for exploring how to improve
accuracy through retraining.

1. First, we used the same tool, LabelImg, to annotate the images of pencils
and pens. We also used `imgaug` to create augmented images.

2. Next, we trained our first model with
[train_ssd.py](https://gluon-cv.mxnet.io/build/examples_detection/train_ssd_voc.html).
For the custom dataset, we declared a `VOCLike` class to be used in `get_dataset()`:

```python
from gluoncv.data import VOCDetection

class VOCLike(VOCDetection):
    # Override the VOC class list with our two custom classes.
    CLASSES = ["pencil", "pen"]

    def __init__(self, root, splits, transform=None, index_map=None,
                 preload_label=True):
        super(VOCLike, self).__init__(root, splits, transform, index_map,
                                      preload_label)
```

3. After running a few predictions on images outside the training and
validation datasets, we found a few images that the model failed to recognize,
and we annotated these new images with LabelImg + augmentation, as in step 1.

However, the question comes when retraining the model with more data for the
same classes:

1. Do we simply add the new images to the existing dataset in the VOC folder
and train again using the same script, `train_ssd.py`?

2. Or do we retrain the existing model with
[finetune_detection.py](https://gluon-cv.mxnet.io/build/examples_detection/finetune_detection.html)?
That script uses the custom model `ssd_512_mobilenet1.0_custom` instead.
However, we noticed that it calls `net.reset_class(classes)`, which does not
seem applicable to us, since we are training for the same classes?





---
[Visit 
Topic](https://discuss.mxnet.io/t/retraining-ssd-for-pencils-and-pens/6528/3) 
or reply to this email to respond.
