https://github.com/apache/incubator-mxnet/issues/10898
On 5/10/18, 10:26 PM, "Zheng, Da" wrote:
I didn't create an issue for this. I think Sheng can provide more details.
Best,
Da
On 5/10/18, 10:10 PM, "Anirudh" wrote:
Hi Da,
Thanks for reporting this issue. Do you have a GitHub issue open for this?
I agree that we don't need to block the release
Hello,
Scientists like to develop models with Gluon or PyTorch and hand the models
over to engineers for deployment. It takes a lot of effort to deploy these
models because engineers usually need to reimplement them (this is
especially true for NLP and speech models). Recently, PyTorch
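(Aside on the hand-off described above: a minimal sketch of the usual Gluon route, hybridizing a network and exporting it to a symbol/params pair that a non-Python frontend can load; the network and file name here are illustrative, not from the original mail.)

    import mxnet as mx
    from mxnet.gluon import nn

    # Illustrative Gluon model.
    net = nn.HybridSequential()
    net.add(nn.Dense(128, activation='relu'))
    net.add(nn.Dense(10))
    net.initialize()

    # Hybridize so the imperative graph can be cached symbolically,
    # then run one forward pass to trigger graph construction.
    net.hybridize()
    net(mx.nd.random.uniform(shape=(1, 784)))

    # Writes model-symbol.json and model-0000.params, which
    # non-Python frontends can load for deployment.
    net.export('model')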
Hello,
It has been reported that MXNet v1.2 has compilation errors on older Macs. The fix
is on the way.
We have been discussing that, since the problem only exists on uncommon
hardware, we may not need to block the v1.2 release. Instead, we can do a
patch release in the near future.
Best,
Da
Good suggestion Kellen!
I like the idea; it would solve an existing deficiency in MXNet that has
been worked around so far. As an example, the recently added Scala
inference API (part of 1.2RC) implemented a dispatcher in Scala to
work around that limitation.
Would be great to better understand
Does anyone have contact info for github.com/Awyan?
It seems we have an old documentation site posted at:
https://newdocs.readthedocs.io/en/latest/
and only @Awyan, who is absent, has permissions for the site. See
https://github.com/apache/incubator-mxnet/issues/10409 for more details.
If no
Hello MXNet developers,
I’ve recently been speaking with users who’d like to run parallel inference
requests with MXNet on their service. They’ll do this on GPUs, and due to
resource constraints, they’d like to do this without duplicating their
model’s weights in memory. They’d also like to run
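(One route that exists today for the weight-sharing part is the Module API's shared_module argument; a minimal sketch, assuming a symbolic model — the network below is illustrative, not the users' actual setup.)

    import mxnet as mx

    # Illustrative symbol; in practice this is the user's network.
    data = mx.sym.Variable('data')
    fc = mx.sym.FullyConnected(data, num_hidden=10, name='fc')
    out = mx.sym.SoftmaxOutput(fc, name='softmax')

    # The first module owns the parameter arrays on the GPU.
    mod_a = mx.mod.Module(out, context=mx.gpu(0))
    mod_a.bind(data_shapes=[('data', (1, 784))], for_training=False)
    mod_a.init_params()

    # A second module bound with shared_module reuses mod_a's
    # parameter memory instead of allocating a second copy of the weights.
    mod_b = mx.mod.Module(out, context=mx.gpu(0))
    mod_b.bind(data_shapes=[('data', (1, 784))],
               for_training=False, shared_module=mod_a)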
Hey Andrew, thanks for the write-up. I think having a Java binding will be
very useful for enterprise users. The doc looks good, but two things I'm
curious about:
How are you planning to handle thread-safe inference? It'll be great if
you can hide the complexity of dealing with dispatch threading
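(For what it's worth, a minimal sketch of the dispatch-thread pattern under discussion, in Python for brevity; the Dispatcher class and predict_fn are hypothetical, not part of any MXNet API. A single worker thread owns the engine, and caller threads hand it work through a queue.)

    import queue
    import threading

    class Dispatcher:
        """Serializes all inference onto one thread, since the MXNet
        engine is not safe to drive from many threads at once."""

        def __init__(self, predict_fn):
            self._predict = predict_fn   # e.g. a bound module's forward pass
            self._requests = queue.Queue()
            worker = threading.Thread(target=self._loop, daemon=True)
            worker.start()

        def _loop(self):
            # Only this thread ever touches the MXNet engine.
            while True:
                batch, reply = self._requests.get()
                reply.put(self._predict(batch))

        def predict(self, batch):
            # Safe to call from any thread; blocks until the result arrives.
            reply = queue.Queue(maxsize=1)
            self._requests.put((batch, reply))
            return reply.get()

Hiding this behind the binding's public API would mean callers never see the queue or the worker thread at all.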