Maximal Update Parametrization (µP)
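µP prescribes width-dependent scaling of initialization and learning rates so that feature updates stay O(1) as model width grows, letting hyperparameters tuned on a small model transfer to a wide one. A minimal sketch in plain Python of the common rules for a hidden (matrix-like) layer trained with Adam — the helper name and base values are illustrative, not the `mup` library's API:

```python
import math

def mup_scaled_hparams(base_width, width, base_lr=1e-3, base_std=0.02):
    """Illustrative µP scaling for a hidden weight matrix under Adam.

    Relative to a tuned base model, µP shrinks the hidden-layer Adam
    learning rate by the width ratio and the init std by the square
    root of the width ratio.  (Hypothetical helper, not `mup`'s API.)
    """
    m = width / base_width  # width multiplier vs. the tuned base model
    return {
        "lr": base_lr / m,                    # hidden-layer Adam lr ∝ 1/m
        "init_std": base_std / math.sqrt(m),  # init std ∝ 1/sqrt(m)
    }

# Doubling the width halves the hidden-layer lr
# and scales the init std by 1/sqrt(2).
hp = mup_scaled_hparams(base_width=256, width=512)
print(hp["lr"])  # 0.0005
```

Under these rules a learning rate found by sweeping the narrow base model can be reused at larger widths without re-tuning.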
MII makes low-latency and high-throughput inference possible, powered by DeepSpeed.
Common PyTorch Modules
Tutel MoE: An Optimized Mixture-of-Experts Implementation
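The core of any Mixture-of-Experts layer, which systems like Tutel optimize, is the dispatch step: a gate scores each token against every expert and routes the token to the top-scoring one. A toy top-1 gating sketch in plain Python — a conceptual illustration, not Tutel's actual interface:

```python
def top1_route(gate_scores):
    """Route each token to its highest-scoring expert (top-1 gating).

    gate_scores: per-token lists of expert scores.
    Returns (assignments, per-expert token counts) -- a toy version of
    the dispatch an MoE layer performs before expert computation.
    (Illustrative sketch; not Tutel's API.)
    """
    assignments = []
    counts = {}
    for scores in gate_scores:
        expert = max(range(len(scores)), key=scores.__getitem__)
        assignments.append(expert)
        counts[expert] = counts.get(expert, 0) + 1
    return assignments, counts

# Three tokens, two experts: tokens 0 and 2 go to expert 1, token 1 to expert 0.
scores = [[0.1, 0.9], [0.8, 0.2], [0.3, 0.7]]
print(top1_route(scores))  # ([1, 0, 1], {1: 2, 0: 1})
```

Real implementations add per-expert capacity limits and a load-balancing loss on top of this dispatch, and the optimized kernels batch the routed tokens for parallel expert execution.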