A simple implementation of the MuZero algorithm for the Connect 4 game
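The environment half of such a project is a plain Connect 4 simulator. A minimal sketch of the board logic (drop a disc, detect four in a row) is below; the names `drop` and `has_won` are illustrative, not the repo's actual API, and the MuZero search/network side is omitted.

```python
def drop(board, col, player):
    """Drop `player`'s disc into `col`; board is 6 rows x 7 cols, row 0 = bottom."""
    for row in range(6):
        if board[row][col] == 0:
            board[row][col] = player
            return row
    raise ValueError("column full")

def has_won(board, player):
    """Check every horizontal, vertical and diagonal four-in-a-row line."""
    for r in range(6):
        for c in range(7):
            for dr, dc in ((0, 1), (1, 0), (1, 1), (1, -1)):
                cells = [(r + i * dr, c + i * dc) for i in range(4)]
                if all(0 <= rr < 6 and 0 <= cc < 7 and board[rr][cc] == player
                       for rr, cc in cells):
                    return True
    return False

# Player 1 stacks column 0 while player 2 scatters: a vertical win for player 1.
board = [[0] * 7 for _ in range(6)]
for col, player in zip((0, 1, 0, 2, 0, 3, 0), (1, 2, 1, 2, 1, 2, 1)):
    drop(board, col, player)
```

In a MuZero-style setup this simulator only generates training games; the learned dynamics model, not the simulator, is what the tree search rolls out.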
Together Mixture-of-Agents (MoA) – 65.1% on AlpacaEval with OSS models
OpenMMLab Foundational Library for Training Deep Learning Models
NSGA2, NSGA3, R-NSGA3, MOEAD, Genetic Algorithms (GA), Differential Evolution (DE), CMAES, PSO
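Of the algorithms listed, Differential Evolution (DE) is the simplest to show end to end. The sketch below is a generic pure-Python DE/rand/1/bin loop, not code from the repository, minimizing the sphere function as an example.

```python
import random

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9, iters=200, seed=0):
    """Minimize f over the box `bounds` with the classic DE/rand/1/bin scheme."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(iters):
        for i in range(pop_size):
            # Pick three distinct donors different from the target vector i.
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = rng.randrange(dim)  # ensure at least one mutated coordinate
            trial = []
            for j in range(dim):
                if rng.random() < CR or j == j_rand:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    lo, hi = bounds[j]
                    v = min(max(v, lo), hi)  # clip to the search box
                else:
                    v = pop[i][j]
                trial.append(v)
            ft = f(trial)
            if ft <= fit[i]:  # greedy one-to-one selection
                pop[i], fit[i] = trial, ft
    best = min(range(pop_size), key=fit.__getitem__)
    return pop[best], fit[best]

# Minimize the 3-D sphere function; the optimum is 0 at the origin.
x, fx = differential_evolution(lambda x: sum(v * v for v in x), [(-5, 5)] * 3)
```

The multi-objective methods in the list (NSGA-II/III, MOEA/D) replace the greedy selection step with non-dominated sorting or decomposition, but keep the same population-and-variation loop.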
[ICML 2021] DouZero: Mastering DouDizhu with Self-Play Deep Reinforcement Learning | DouDizhu AI
A chess library for Python, with move generation and validation, PGN parsing and writing, Polyglo...
A family of open-source Mixture-of-Experts (MoE) large language models
MusePose: a Pose-Driven Image-to-Video Framework for Virtual Human Generation
Character Animation (AnimateAnyone, Face Reenactment)
A comparison of parameter space noise methods for exploration in deep reinforcement learning
MuZero
Mixture-of-Experts for Large Vision-Language Models