Tutel MoE: An Optimized Mixture-of-Experts Implementation
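As a sketch of the core idea a mixture-of-experts (MoE) layer implements — a gate scores each input token and routes it to the best-scoring expert, then scales that expert's output by the gate probability — here is a minimal top-1 routing example in plain Python. All names here are hypothetical and purely illustrative; this does not reflect Tutel's actual API, which is optimized and GPU-based.

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def moe_forward(token, gate_weights, experts):
    # Gate: score each expert for this token (dot product), then softmax.
    scores = [sum(w * x for w, x in zip(gw, token)) for gw in gate_weights]
    probs = softmax(scores)
    # Top-1 routing: dispatch the token to the highest-probability expert.
    k = max(range(len(probs)), key=lambda i: probs[i])
    # Combine: weight the chosen expert's output by its gate probability.
    out = experts[k](token)
    return [probs[k] * y for y in out], k

# Two toy "experts": fixed linear maps on 2-d tokens.
experts = [
    lambda t: [2 * t[0], 2 * t[1]],   # expert 0: doubles the token
    lambda t: [-t[0], -t[1]],         # expert 1: negates the token
]
# Expert 0's gate row favors dim 0; expert 1's favors dim 1.
gate_weights = [[1.0, 0.0], [0.0, 1.0]]

out, chosen = moe_forward([3.0, 1.0], gate_weights, experts)
print(chosen)  # expert 0 wins: gate score 3.0 vs 1.0
```

Real implementations such as Tutel route whole batches at once with top-k gating, capacity limits, and all-to-all communication across GPUs; the routing-then-combine structure is the same.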
MIT License
Maximal Update Parametrization (µP)
Common PyTorch Modules
MII, powered by DeepSpeed, enables low-latency, high-throughput inference.