Re: [GitHub] szha closed pull request #11154: Revert "[MXNET-503] Website landing page for MMS (#11037)"

2018-06-10 Thread Mu Li
Hi Sheng,

I suggest putting down a reason for such actions in the future. Otherwise they may
confuse other contributors; e.g., Steffen raised his concern in a private thread.

Best
Mu

> On Jun 10, 2018, at 7:42 PM, Sheng Zha wrote:
> 
> Thanks, Henri. I was reverting a commit from a PR that another committer
> didn't intend to merge, and only realized afterwards that he had. Given that
> it wasn't convenient for him to revert it himself, and given the negative
> effect, I committed the revert and cc'd the original committer in the PR,
> both as a notification and as proof of the claim.
> 
> -sz

Re: [GitHub] szha closed pull request #11154: Revert "[MXNET-503] Website landing page for MMS (#11037)"

2018-06-10 Thread Sheng Zha
Thanks, Henri. I was reverting a commit from a PR that another committer
didn't intend to merge, and only realized afterwards that he had. Given that
it wasn't convenient for him to revert it himself, and given the negative
effect, I committed the revert and cc'd the original committer in the PR,
both as a notification and as proof of the claim.

-sz
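
Mechanically, the workflow described above (one committer committing a revert
of another's unintended merge) is just `git revert`, which records a new
commit that undoes an earlier one without rewriting history. A minimal sketch
in a throwaway repository; all names, messages, and the reverted change here
are stand-ins, not the actual MXNet history:

```shell
# Sketch: undoing a commit with `git revert` in a throwaway repository.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q

# Helper so commits work even without a global git identity configured.
g() { git -c user.name=demo -c user.email=demo@example.com "$@"; }

g commit -q --allow-empty -m "base"      # starting point
echo "landing page" > index.md           # stand-in for the unintended change
git add index.md
g commit -q -m "Add landing page (stand-in for the reverted PR)"

# The revert: a *new* commit that undoes the previous one; history is kept,
# so the original committer can still be cc'd on exactly what was undone.
g revert --no-edit HEAD

test ! -f index.md && echo "revert removed index.md"
```

Note that for a merge commit (as opposed to an ordinary commit), `git revert`
additionally needs `-m 1` to pick the mainline parent.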


Re: [GitHub] szha closed pull request #11154: Revert "[MXNET-503] Website landing page for MMS (#11037)"

2018-06-10 Thread Hen
It wasn't clear why this commit was reverted. Things that stood out as
odd:

* I didn't see an email to dev@ on the topic of a revert.
* Rather than reverting, a minor item requiring a fix could simply have been
fixed; a major item should have been raised on dev@.
* I didn't see a reason to revert in the revert PR (11154).
* The original PR has github:szha asking for github:piiswrong to review
with no context; I'm concerned that it was implied that the commit could
not go in without this review.
* I don't see anything in the original PR that would warrant a revert; at
best, 'github:john-andrilla' was asked whether "a flexible, scalable,
multi-framework serving solution" was okay.
* I find it odd that github:lupesko is a reviewer.

Hen



On Tue, Jun 5, 2018 at 5:08 PM, GitBox wrote:

> szha closed pull request #11154: Revert "[MXNET-503] Website landing page
> for MMS (#11037)"
> URL: https://github.com/apache/incubator-mxnet/pull/11154
>
>
>
>
> This is a PR merged from a forked repository.
> As GitHub hides the original diff on merge, it is displayed below for
> the sake of provenance:
>
> As this is a foreign pull request (from a fork), the diff is supplied
> below (as it won't show otherwise due to GitHub magic):
>
> diff --git a/docs/mms/index.md b/docs/mms/index.md
> deleted file mode 100644
> index ff6edae414b..000
> --- a/docs/mms/index.md
> +++ /dev/null
> @@ -1,114 +0,0 @@
> -# Model Server for Apache MXNet (incubating)
> -
> -[Model Server for Apache MXNet (incubating)](https://github.com/awslabs/mxnet-model-server), otherwise known as MXNet Model Server
> (MMS), is an open source project aimed at providing a simple yet scalable
> solution for model inference. It is a set of command line tools for
> packaging model archives and serving them. The tools are written in Python,
> and have been extended to support containers for easy deployment and
> scaling. MMS also supports basic logging and advanced metrics with Amazon
> CloudWatch integration.
> -
> -
> -## Multi-Framework Model Support with ONNX
> -
> -MMS supports both *symbolic* MXNet and *imperative* Gluon models. While
> the name implies that MMS is just for MXNet, it is in fact much more
> flexible, as it can support models in the [ONNX](https://onnx.ai) format.
> This means that models created and trained in PyTorch, Caffe2, or other
> ONNX-supporting frameworks can be served with MMS.
> -
> -To find out more about MXNet's support for ONNX models and using ONNX
> with MMS, refer to the following resources:
> -
> -* [MXNet-ONNX Docs](../api/python/contrib/onnx.md)
> -* [Export an ONNX Model to Serve with MMS](https://github.com/awslabs/mxnet-model-server/docs/export_from_onnx.md)
> -
> -## Getting Started
> -
> -To install MMS with ONNX support, make sure you have Python installed,
> then for Ubuntu run:
> -
> -```bash
> -sudo apt-get install protobuf-compiler libprotoc-dev
> -pip install mxnet-model-server
> -```
> -
> -Or for Mac run:
> -
> -```bash
> -conda install -c conda-forge protobuf
> -pip install mxnet-model-server
> -```
> -
> -
> -## Serving a Model
> -
> -To serve a model you must first create or download a model archive. Visit
> the [model zoo](https://github.com/awslabs/mxnet-model-server/docs/model_zoo.md) to browse the models. MMS options can be explored as
> follows:
> -
> -```bash
> -mxnet-model-server --help
> -```
> -
> -Here is an easy example for serving an object classification model. You
> can use any URI and the model will be downloaded first, then served from
> that location:
> -
> -```bash
> -mxnet-model-server \
> -  --models squeezenet=https://s3.amazonaws.com/model-server/models/squeezenet_v1.1/squeezenet_v1.1.model
> -```
> -
> -
> -### Test Inference on a Model
> -
> -Assuming you have run the previous `mxnet-model-server` command to start
> serving the object classification model, you can now upload an image to its
> `predict` REST API endpoint. The following will download a picture of a
> kitten, then upload it to the prediction endpoint.
> -
> -```bash
> -curl -O https://s3.amazonaws.com/model-server/inputs/kitten.jpg
> -curl -X POST http://127.0.0.1:8080/squeezenet/predict -F "data=@kitten.jpg"
> -```
> -
> -The predict endpoint will return a prediction response in JSON. It will
> look something like the following result:
> -
> -```
> -{
> -  "prediction": [
> -[
> -  {
> -"class": "n02124075 Egyptian cat",
> -"probability": 0.9408261179924011
> -  },
> -...
> -```
> -
> -For more examples of serving models visit the following resources:
> -
> -* [Quickstart: Model Serving](https://github.com/awslabs/mxnet-model-server/README.md#serve-a-model)
> -* [Running the Model Server](https://github.com/awslabs/mxnet-model-server/docs/server.md)
> -
> -
> -## Create a Model Archive
> -
> -Creating a model archive involves rounding up the required model
> artifacts, then using the `mxnet-model-export` command line interface. The
> process for creating