[GitHub] indhub commented on a change in pull request #10959: [MXNET-423] Gluon Model Zoo Pre Trained Model tutorial
indhub commented on a change in pull request #10959: [MXNET-423] Gluon Model Zoo Pre Trained Model tutorial URL: https://github.com/apache/incubator-mxnet/pull/10959#discussion_r189128940 ## File path: docs/tutorials/gluon/pretrained_models.md ## @@ -0,0 +1,374 @@ + +# Using pre-trained models in MXNet + +In this tutorial we will see how to use multiple pre-trained models with Apache MXNet. First, let's download three image classification models from the Apache MXNet [Gluon model zoo](https://mxnet.incubator.apache.org/api/python/gluon/model_zoo.html). +* **DenseNet-121** ([research paper](https://arxiv.org/abs/1608.06993)), improved the state of the art on the [ImageNet dataset](http://image-net.org/challenges/LSVRC) in 2016. +* **MobileNet** ([research paper](https://arxiv.org/abs/1704.04861)), a streamlined architecture that uses depth-wise separable convolutions to build lightweight deep neural networks, well suited to mobile applications. +* **ResNet-18** ([research paper](https://arxiv.org/abs/1512.03385v1)), whose deeper -152 variant was the 2015 winner in multiple categories. + +Why would you want to try multiple models? Why not just pick the one with the best accuracy? As we will see later in the tutorial, even though these models have been trained on the same dataset and optimized for maximum accuracy, they behave slightly differently on specific images. In addition, prediction speed and memory footprint can vary, and that is an important factor for many applications. By trying a few pre-trained models, you have an opportunity to find the one that best fits your business problem. 
+ + +```python +import json + +import matplotlib.pyplot as plt +import mxnet as mx +from mxnet import gluon, nd +from mxnet.gluon.model_zoo import vision +import numpy as np +%matplotlib inline +``` + +## Loading the model + +The [Gluon Model Zoo](https://mxnet.incubator.apache.org/api/python/gluon/model_zoo.html) provides a collection of off-the-shelf models. You can get an ImageNet pre-trained model by using `pretrained=True`. +If you want to train on your own classification problem from scratch, you can get an untrained network with a specific number of classes using the `classes` parameter: for example, `net = vision.resnet18_v1(classes=10)`. However, note that you cannot use the `pretrained` and `classes` parameters at the same time. If you want to use pre-trained weights as the initialization of your network except for the last layer, have a look at the last section of this tutorial. + +We can specify the *context* where we want to run the model: the default behavior is to use a CPU context. There are two reasons for this: +* First, this will allow you to test the notebook even if your machine is not equipped with a GPU :) +* Second, we're going to predict a single image and we don't have any specific performance requirements. For production applications where you'd want to predict large batches of images with the best possible throughput, a GPU could definitely be the way to go. +* If you want to use a GPU, make sure you have installed a GPU-enabled version of MXNet via pip, or you will get an error when using the `mx.gpu()` context. 
Refer to the [install instructions](http://mxnet.incubator.apache.org/install/index.html) + + +```python +# We set the context to CPU; you can switch to GPU if you have one and have installed a compatible version of MXNet +ctx = mx.cpu() +``` + + +```python +# We load the three models +densenet121 = vision.densenet121(pretrained=True, ctx=ctx) +mobileNet = vision.mobilenet0_5(pretrained=True, ctx=ctx) +resnet18 = vision.resnet18_v1(pretrained=True, ctx=ctx) +``` + +We can look at the description of the MobileNet network, for example, which has a relatively simple yet deep architecture. + + +```python +print(mobileNet) +``` + +MobileNet( + (features): HybridSequential( +(0): Conv2D(3 -> 16, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False) +(1): BatchNorm(axis=1, eps=1e-05, momentum=0.9, fix_gamma=False, use_global_stats=False, in_channels=16) +(2): Activation(relu) +(3): Conv2D(1 -> 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=16, bias=False) +(4): BatchNorm(axis=1, eps=1e-05, momentum=0.9, fix_gamma=False, use_global_stats=False, in_channels=16) +(5): Activation(relu) +(6): Conv2D(16 -> 32, kernel_size=(1, 1), stride=(1, 1), bias=False) +(7): BatchNorm(axis=1, eps=1e-05, momentum=0.9, fix_gamma=False, use_global_stats=False, in_channels=32) +(8): Activation(relu) +(9): Conv2D(1 -> 32, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), groups=32, bias=False) +(10): BatchNorm(axis=1, eps=1e-05, momentum=0.9, fix_gamma=False, use_global_stats=False, in_channels=32) +(11): Activation(relu) +(12): Conv2D(32 -> 64, kernel_size=(1, 1), stride=(1, 1), bias=False) +
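Once a loaded model produces raw class scores, the tutorial's later sections convert them to probabilities and keep the most likely labels. The post-processing can be sketched in plain Python (a stdlib-only illustration; `softmax` and `top_k` are hypothetical helper names, not MXNet API calls):

```python
import math

def softmax(logits):
    # Subtract the max logit for numerical stability before exponentiating.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def top_k(probs, labels, k=3):
    # Pair each probability with its label and keep the k most likely.
    ranked = sorted(zip(probs, labels), reverse=True)
    return ranked[:k]

# Toy scores for four made-up classes, standing in for a model's output.
logits = [2.0, 1.0, 0.1, -1.0]
labels = ["cat", "dog", "car", "tree"]
probs = softmax(logits)
print(top_k(probs, labels, k=2))
```

In the real tutorial this step would consume the 1000-element output of one of the networks loaded above; the logic is identical.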
[GitHub] indhub commented on a change in pull request #10959: [MXNET-423] Gluon Model Zoo Pre Trained Model tutorial
indhub commented on a change in pull request #10959: [MXNET-423] Gluon Model Zoo Pre Trained Model tutorial URL: https://github.com/apache/incubator-mxnet/pull/10959#discussion_r188791651 ## File path: docs/tutorials/gluon/pretrained_models.md
[GitHub] indhub commented on a change in pull request #10959: [MXNET-423] Gluon Model Zoo Pre Trained Model tutorial
indhub commented on a change in pull request #10959: [MXNET-423] Gluon Model Zoo Pre Trained Model tutorial URL: https://github.com/apache/incubator-mxnet/pull/10959#discussion_r188788759 ## File path: docs/tutorials/gluon/pretrained_models.md

Quoted line: +If you want to train on your own classification problem from scratch, you can get an untrained network with a specific number of classes using the `classes=10` for example

Review comment: using the `classes` parameter. For example,

This is an automated message from the Apache Git Service. To respond to the message, please log on GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
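The constraint behind this review thread (a network cannot take both `pretrained=True` and a custom `classes` count, because the pre-trained ImageNet weights fix the output layer at 1000 classes) can be sketched as a simple guard in plain Python. `make_classifier` and its return value are purely illustrative, not the actual model-zoo implementation:

```python
def make_classifier(pretrained=False, classes=None):
    # Pre-trained ImageNet weights fix the output layer at 1000 classes,
    # so a custom `classes` count is incompatible with `pretrained=True`.
    if pretrained and classes is not None and classes != 1000:
        raise ValueError(
            "cannot combine pretrained weights with a custom number of classes"
        )
    return {"classes": classes if classes is not None else 1000,
            "pretrained": pretrained}

print(make_classifier(pretrained=True))  # full 1000-class ImageNet head
print(make_classifier(classes=10))       # untrained 10-class network
```

To keep pre-trained features but retarget the output layer, the tutorial's last section describes replacing only the final layer instead of passing `classes`.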
[GitHub] indhub commented on a change in pull request #10959: [MXNET-423] Gluon Model Zoo Pre Trained Model tutorial
indhub commented on a change in pull request #10959: [MXNET-423] Gluon Model Zoo Pre Trained Model tutorial URL: https://github.com/apache/incubator-mxnet/pull/10959#discussion_r188792363 ## File path: docs/tutorials/gluon/pretrained_models.md
[GitHub] indhub commented on a change in pull request #10959: [MXNET-423] Gluon Model Zoo Pre Trained Model tutorial
indhub commented on a change in pull request #10959: [MXNET-423] Gluon Model Zoo Pre Trained Model tutorial URL: https://github.com/apache/incubator-mxnet/pull/10959#discussion_r188791808 ## File path: docs/tutorials/gluon/pretrained_models.md