Please join me in welcoming Zhennan Qin (https://github.com/ZhennanQin) from
Intel as a new committer.

Zhennan is the main author of the work accelerating MXNet/MKLDNN inference
through operator fusion and model quantization. His work has put MXNet in an
advantageous position for inference workloads on Intel CPUs compared with
other DL frameworks.