When I reviewed MXNet v0.7 in 2016, I felt that it was a promising deep learning framework with excellent scalability (nearly linear on GPU clusters), good auto-differentiation, and state-of-the-art support for CUDA GPUs. I also felt that it needed better documentation and tutorials, along with many more examples in its model zoo. In addition, I would have liked to see a high-level interface for MXNet, which I imagined would be Keras.
