Re: Cambricon MLU support for MXNet.

2018-12-16 Thread Chris Olivier
Small point: mshadow is being deprecated, so you probably shouldn't invest
too much time in it. Just an FYI.

On Sun, Dec 16, 2018 at 6:33 PM 张昊翀  wrote:



Re: Cambricon MLU support for MXNet.

2018-12-16 Thread Hagay Lupesko
Welcome to the MXNet community, Haochong!
It's exciting to learn about your plans to contribute to MXNet!

I highly recommend that you document your proposal and technical design in
MXNet's design proposals wiki [1], where you can go into details and ask for
comprehensive feedback from the community.

Cheers,
Hagay

[1] https://cwiki.apache.org/confluence/display/MXNET/Design+Proposals

On Sun, Dec 16, 2018 at 6:33 PM 张昊翀  wrote:



Cambricon MLU support for MXNet.

2018-12-16 Thread 张昊翀
Dear MXNet community,

We are from Cambricon, a leading supplier of artificial intelligence chips. We
have two product lines: IP products (e.g., Cambricon 1A/1H) and chip products
(e.g., the MLU100, released in May 2018).

We are now adapting MXNet to Cambricon products. As a follow-up, we plan to
open-source this work, and we hope to merge these new features into the master
branch of MXNet so that they become part of MXNet's long-term support. We
firmly believe that these MLU features will promote the development of the
MXNet community.
To this end, we are ready to accept rigorous review by the MXNet community. We
also need advice from the community to achieve a high-quality implementation.
On this basis, we very much hope to establish broad, long-term cooperation
with the community.

To achieve these goals, we would like to stay in touch with the community on
several issues. We look forward to your valuable feedback.

1. The MLU100 mainly focuses on inference, so we plan to support the inference
part of MXNet first. Training support for MXNet on the MLU will be released
later. Is that acceptable to the MXNet community?

2. Although the MLU can support various operators and networks, all operators
submitted to the community should undergo rigorous stress testing to guarantee
high quality. We therefore plan to release a small number of supported
operators and networks at first, and more will be added continuously. Is that
acceptable, or do we have to support all networks in the ModelZoo in the first
release?

3. Currently we plan to support both Python and C++ APIs. More details on 
supported APIs will be provided in a follow-up proposal.

4. We need to modify mshadow in order to support tensor memory operations (a
rough, purely illustrative sketch follows this list).

5. To enable the community to run and fully test our code, we want to provide
a complete test environment. At present, we are considering the following
three options:
A) Provide several remote servers to the community and integrate them with the
community's Jenkins.
B) Provide a cloud platform to the community.
C) Donate MLU100 hardware to the community's testing platform. However, we do
not know the specific procedure for such a donation and would appreciate
guidance. We would also like to know how MXNet's test servers are managed.
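
To make point 4 more concrete, here is a minimal, purely illustrative sketch of
the kind of change we have in mind. As we understand it, mshadow identifies
devices with small tag structs (mshadow::cpu, mshadow::gpu) and dispatches
memory operations on those tags, so an MLU port would add an analogous tag plus
device-specific allocation. The names below (mlu, the kDevMask value,
AllocDeviceMemory) are placeholders of our own, not existing mshadow or
Cambricon APIs.

#include <cstddef>

namespace mshadow {
// Hypothetical device tag, following the pattern of mshadow::cpu / mshadow::gpu.
struct mlu {
  // This device is not the CPU.
  static const bool kDevCPU = false;
  // Device flag; must not collide with the existing cpu and gpu masks.
  static const int kDevMask = 1 << 2;
};
}  // namespace mshadow

// Device-specific tensor memory operations would then be specialized on the
// device tag, analogous to how mshadow dispatches CPU/GPU allocation today.
template <typename Device>
void* AllocDeviceMemory(std::size_t nbytes);

template <>
void* AllocDeviceMemory<mshadow::mlu>(std::size_t nbytes) {
  // Placeholder body: a real port would call into the MLU runtime allocator.
  (void)nbytes;
  return nullptr;
}

The real changes would of course live in mshadow's own allocation and copy
paths; this sketch is only meant to show where the device abstraction sits.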

A proposal with more technical details will be submitted to the community
before we release the code.

Questions and suggestions beyond the points above are also welcome. Thanks!

More about Cambricon:
Cambricon is an artificial intelligence computing pioneer that engineered and
successfully commercialized the world's first dedicated machine learning
processor. The company's mission is to bring its AI processors from edge to
cloud, enriching and advancing human life. Dr. Tianshi Chen, the founder and
CEO of Cambricon, brings over 10 years of experience in the fields of
microprocessor architecture and artificial intelligence.
In 2016, Cambricon released the Cambricon 1A processor, the first commercial
machine-learning-specific processor in the world. Later, at the 3rd World
Internet Conference, the Cambricon 1A processor was selected as one of the
“World Leading Internet Scientific and Technological Achievements”. In May
2018, Cambricon released the MLU100, a machine learning chip that is now in
mass production. By offering revolutionary technology and products, Cambricon
has established and maintains active relationships with various companies in
the AI industry.


Regards,
Haochong Zhang
Cambricon MXNet Development Team




Re: Proposal for a recurrent architecture meeting and long term direction

2018-12-16 Thread Pedro Larroy
Hi

I think you make good points. We can address your concerns by sending notes
to the mailing list and using the wiki / RFCs appropriately, so the community
can follow along and asynchronous participation is still possible.

I would say let's try it and see if the meetings bring value; we can always
stop if they don't, or hold them more often if the time is not enough. I think
once a month is a conservative choice that doesn't take too much time from our
already busy lives.

Other open source projects, such as IPFS, hold these kinds of sessions, and it
seems to work for them.

It's also an opportunity to share what we are working on and collaborate more.

Pedro.

On Sun, Dec 16, 2018 at 7:04 PM Tianqi Chen  wrote:


MLConf 2019 Call for Abstracts/Talks

2018-12-16 Thread Carin Meier
The Machine Learning Conference has a Call for Abstracts/Talks open until
12/31.

https://mlconf.com/abstract-guideline

It would be a great opportunity to talk about the power of MXNet.
Please consider submitting!

- Carin


Re: Proposal for a recurrent architecture meeting and long term direction

2018-12-16 Thread Tianqi Chen
I feel that an online meeting may not address most of the issues raised in a
short amount of time (1 hour), and it still suffers from the problem of not
being publicly archivable.

Maybe we can instead try a more asynchronous way (RFC discussions in issues
and/or discussion on dev@)? Just my two cents, and I am not blocking the
proposal.

Tianqi


On Fri, Dec 14, 2018 at 5:34 AM Pedro Larroy 
wrote:

> Hi MXNetters
>
> To address the project growth and increased contributions I'm
> proposing a monthly meeting / hangout to have community discussions
> about MXNet architecture and mid / longer term technical directions
> that require coordination beyond single PRs.
>
> TOPICS:
>
> The goal of this series is to address topics including but not limited to:
>  - How to best integrate features that have a big impact on the project
>  - Discussion about long term technical direction
>  - Addressing technical debt / needed refactoring.
>  - Support for other architectures / frameworks, e.g. ARM, CUDA, etc.
>  - Build system improvements and tooling such as code coverage, static
> analysis etc.
>  - Performance discussions.
>  - Live discussion to address exceptional PRs with complex changes
> that are better discussed live than in written form.
>
> FREQUENCY:
>
> I propose to hold this meeting on the second Monday of the month at
> 11 am PST / 20:00 CET.
>
> So the tentative date for the first one would be on January 14th.
>
> If this arbitrary date does not work for you, please say so. In that case
> we can set up a Doodle poll to find a slot that works for the interested
> parties.
>
> I have opened a group calendar for our meetings, hangouts and other
> events related to MXNet.
>
>
> https://calendar.google.com/calendar/embed?src=6co88bqo3n4bjsbt1qrqmsvj4o%40group.calendar.google.com
>
> Pedro.
>


Re: Proposal for a recurrent architecture meeting and long term direction

2018-12-16 Thread Gavin M Bell
Nice!!! 👍🏾

Perfect!


"I'm trying real hard to be the shepherd." -Jules Winnfield


> On Dec 14, 2018, at 3:04 PM, Marco de Abreu  wrote:
> 
> That's a great idea, Pedro!
> 
> -Marco