Luotuo-Chinese-LLM

骆驼(Luotuo): Open Sourced Chinese Language Models. Developed by 陈启源 @ 华中师范大学 & 李鲁鲁 @ 商汤科技 & 冷子昂 @ 商汤科技

Apache-2.0 License




The name Luotuo (骆驼, "camel") comes from Meta's LLaMA and Stanford's Alpaca: both animals belong to the order Artiodactyla, family Camelidae, and so does the camel.


[2023-07-12] …

[2023-06-07] Chat … ; see also our translations of Andrew Ng's Prompt Engineering course and the LangChain course.

[2023-05-20] … 3.5B …

[2023-05-06] …

[2023-04-27] Chinese Generative Agents released: a town of 25 agents, runnable in Colab.

[2023-04-16] … Colab …

Quick start: each sub-project ships with a Colab notebook and, where applicable, a Gradio demo: the Chat models (驼铃-B, plus a GLM-based Gradio demo), 骆驼QA (QA task, v0.1), 骆驼嵌入 (LuotuoBERT), 驼铃-C (tuned from GLM-6B), the RPG agent town (3~5 agents), and Luotuo 0.3.

Around March 20, 2023, we came across the Alpaca-LoRA project. On March 21, while searching GitHub for LLaMA and Tokenizer, we found Japanese-Alpaca-LoRA, which applies the same LoRA recipe to tune LLaMA for another language.

We then built the Chinese-alpaca-lora demo the same way, training on Alpaca data machine-translated with GPT, and put up a Gradio demo.

(Note: a Tokenizer-related bug affected releases before 1.0.)
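Concretely, the recipe is LoRA on top of LLaMA-7B over instruction/input/output records. Below is a minimal sketch in the Alpaca-LoRA style, assuming the translated data is a JSON list of such records; the file name, base checkpoint, prompt format, and hyperparameters are illustrative assumptions, not the project's exact training script:

```python
# Sketch of Alpaca-LoRA-style tuning on GPT-translated Alpaca data.
# Assumptions: data file name, base checkpoint, prompt format, hyperparameters.
import torch
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (DataCollatorForLanguageModeling, LlamaForCausalLM,
                          LlamaTokenizer, Trainer, TrainingArguments)

base = "decapoda-research/llama-7b-hf"        # base weights commonly used in 2023
tokenizer = LlamaTokenizer.from_pretrained(base)
tokenizer.pad_token_id = 0                    # LLaMA ships without a pad token

model = LlamaForCausalLM.from_pretrained(base, torch_dtype=torch.float16)
model = get_peft_model(model, LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],      # the usual Alpaca-LoRA targets
    task_type="CAUSAL_LM",
))

# Translated data: [{"instruction": ..., "input": ..., "output": ...}, ...]
data = load_dataset("json", data_files="trans_chinese_alpaca_data.json")

def tokenize(example):
    # Simplified prompt: real Alpaca templates add fixed scaffolding text.
    text = f"{example['instruction']}\n{example['input']}\n{example['output']}"
    return tokenizer(text, truncation=True, max_length=512)

train = data["train"].map(tokenize, remove_columns=data["train"].column_names)

Trainer(
    model=model,
    train_dataset=train,
    args=TrainingArguments(output_dir="luotuo-lora", num_train_epochs=3,
                           per_device_train_batch_size=4, fp16=True),
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()
```

Only the small LoRA matrices are trained while the 7B base stays frozen, which is what makes this kind of tuning affordable on a single GPU.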

Below are sample outputs from Luotuo 0.1 and 0.3:

Input: …
Luotuo-Output: …

Input: …
Luotuo-0.1-Output: …
Luotuo-0.3-Output: …
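To reproduce outputs like these, the released LoRA adapter is loaded on top of the base LLaMA. A sketch, assuming a Hugging Face repo id of the form silk-road/luotuo-lora-7b-0.3 and an arbitrary test prompt; check the release notes for the exact id:

```python
# Sketch: base LLaMA plus the Luotuo LoRA adapter for generation.
import torch
from peft import PeftModel
from transformers import LlamaForCausalLM, LlamaTokenizer

base = "decapoda-research/llama-7b-hf"
tokenizer = LlamaTokenizer.from_pretrained(base)
model = LlamaForCausalLM.from_pretrained(base, torch_dtype=torch.float16,
                                         device_map="auto")
# Adapter repo id is an assumption; see the project's release notes.
model = PeftModel.from_pretrained(model, "silk-road/luotuo-lora-7b-0.3")

inputs = tokenizer("中国的首都在哪里？", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```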

After the project passed 100 stars, we decided to tune ChatGLM (GLM) as well, again with LoRA. The result is CamelBell-B (驼铃-B), our ChatGLM-based Chat model.

Sample from CamelBell-B:

Instruction: …?
Answer: …
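For reference, ChatGLM-family models expose a chat interface directly on the model object, as in the upstream THUDM model card; a sketch with an arbitrary test question (the CamelBell checkpoint path is a placeholder):

```python
# Querying a ChatGLM-6B-family model via its built-in chat API.
from transformers import AutoModel, AutoTokenizer

path = "THUDM/chatglm-6b"   # or a local CamelBell-B checkpoint (placeholder)
tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True)
model = AutoModel.from_pretrained(path, trust_remote_code=True).half().cuda().eval()

response, history = model.chat(tokenizer, "中国的首都在哪里？", history=[])
print(response)
```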

CamelBell-C (驼铃-C) is the news-summarization variant, trained on the CNewSum dataset (see the project list below).

Sample from CamelBell-C:

Instruction: Please summarize the following news:
Input: (a multi-paragraph Chinese report on the February 13 men's singles final of a 2023 ATP 250 event, decided 6(4)-7 / 7-6(3) / 7-6(12))
Answer: …

After ChatGLM2 came out, we moved tuning onto ChatGLM2 as well.

Watching token embeddings during tuning led us to two follow-up projects: closed QA (骆驼QA) and text embedding (骆驼嵌入).

For the embedding model, we distilled OpenAI's embedding API into a BERT model, querying the OpenAI API for the teacher vectors (… 360 …) and building a demo on top.

LuotuoBERT is published on Hugging Face, covering (…).
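In code, the distillation idea is simply: fit a BERT student so that its sentence vectors match teacher vectors precomputed with the OpenAI embedding API. A minimal sketch, assuming [CLS] pooling, an MSE objective, and a 1536-dimensional projection to match text-embedding-ada-002; none of this is the LuotuoBERT training code:

```python
# Sketch: distilling precomputed OpenAI embeddings into a Chinese BERT student.
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")
student = BertModel.from_pretrained("bert-base-chinese")
project = nn.Linear(student.config.hidden_size, 1536)  # match ada-002's width

def student_embed(texts):
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    cls = student(**batch).last_hidden_state[:, 0]      # [CLS] pooling (assumption)
    return project(cls)

params = list(student.parameters()) + list(project.parameters())
optimizer = torch.optim.AdamW(params, lr=1e-5)

def distill_step(texts, teacher_vectors):
    """teacher_vectors: [B, 1536] tensor from the OpenAI embedding API."""
    loss = nn.functional.mse_loss(student_embed(texts), teacher_vectors)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Since the teacher is called only once per training text, the API cost is paid offline; at inference time the student runs entirely locally.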

骆驼QA (Luotuo-QA) is a closed-QA project: given a passage plus a question about it, the model must answer from that passage. 骆驼QA-B is by ….

Luotuo-QA-B was finetuned from ChatGLM2, pairs with Langchain-style retrieval, and shares its lead and host with the ChatHaruhi project.
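Closed QA is easiest to see as a prompt template: the passage is placed in the context and the model answers only from it. A sketch (the template wording is an assumption, not the 骆驼QA training format):

```python
# Sketch of a closed-QA prompt: the answer must come from the given passage.
def make_qa_prompt(passage: str, question: str) -> str:
    return (
        "请根据以下文本回答问题。\n"   # "Answer the question from the text below."
        f"文本: {passage}\n"           # passage
        f"问题: {question}\n"          # question
        "回答:"                        # answer
    )

# Fed to the tuned model, e.g. via ChatGLM2's chat interface:
# response, _ = model.chat(tokenizer, make_qa_prompt(passage, question), history=[])
```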

Not everything needs training: many applications can be built by prompting ChatGPT directly, so several of our sub-projects are pure prompt engineering, steering ChatGPT with different prompts.
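A prompt-only sub-project often reduces to a single API call: a fixed system prompt plus the user's turn. A sketch in the openai-python 0.x style that was current in 2023 (model choice and parameters are illustrative; the key is read from the OPENAI_API_KEY environment variable):

```python
# Sketch: prompt engineering as one ChatCompletion call (openai-python 0.x).
import openai  # reads OPENAI_API_KEY from the environment

def ask(system_prompt: str, user_message: str) -> str:
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
        temperature=0.7,
    )
    return resp["choices"][0]["message"]["content"]
```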

We forked Stanford's Generative Agents at https://github.com/LC1332/Chinese-generative-agents, translated the comments, and provide Colab and Hugging Face versions. The town simulates 25 ChatGPT-driven agents.
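Stripped to its core, each agent in such a town is a persona, a memory list, and a ChatGPT call deciding the next action. A heavily simplified sketch of that loop, reusing the hypothetical `ask` helper from the snippet above (the real Generative Agents add reflection and relevance-weighted memory retrieval):

```python
# Sketch: one step of a ChatGPT-driven agent with a naive memory stream.
class Agent:
    def __init__(self, name: str, persona: str):
        self.name, self.persona = name, persona
        self.memory: list[str] = []

    def step(self, observation: str) -> str:
        self.memory.append(observation)
        recent = "\n".join(self.memory[-10:])  # recency-only retrieval (simplified)
        action = ask(
            f"你是{self.name}。{self.persona}",         # "You are {name}. {persona}"
            f"最近的记忆:\n{recent}\n你接下来做什么?",  # "Recent memories... next action?"
        )
        self.memory.append(f"{self.name}: {action}")
        return action

agents = [Agent(f"村民{i}", "住在小镇上。") for i in range(25)]  # the 25-agent town
```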

In May, we worked with DataWhale on the Prompt Engineering course (building on a fork of DataWhale's repo). The course, by Andrew Ng, teaches how to prompt GPT models effectively. After Prompt Engineering, we also translated the LangChain course.

Much of this happened in spare time around 996 work schedules. … 2638 …

With DataWhale (the running joke being "Deadline, Deadline"), ChatHaruhi went on to take top-3 places in several competitions, including a hackathon.

Chat凉宫春日 (ChatHaruhi): the technical report (… 30 …) is on arXiv; the project grew out of prompting, appeared at WAIC, … 7B ….

We also plan to try LaMini-style small models (3B / 1B / 300M).

Follow-up work on BERT embeddings, QA, and Chat continues, including combining vision with LuotuoBert (… haruhi …) and ChatSOS ….

If you are highly motivated, you are welcome to join our paper-reading sessions.

Sponsorships

Top 3 Sponsors

| Time | Sponsor | Amount |
| --- | --- | --- |
| 2023/6/20 | Xiuhan | 3000 |
| 2023/3/28 | ** | 2000 |
| 2023/4/2 | Tand | 1580 |

Current balance: 12653.03. See sponsorship_and_balance.md for the detailed ledger.


This began as an exercise project for us, and we originally planned to train only up to version 1.0. The enthusiasm of the community, however, exceeded our expectations. If you would like to sponsor the project, scan the QR code to add the Alipay account, leaving your name.

All funds will be used for data annotation, training compute, or producing future merchandise.

Our work builds on the following projects:

- ChatGLM-6B: the base model for the CamelBell (驼铃) tunes.
- ptuning-v2: p-tuning-v2 for GLM.
- GLM-Tuning: by Chengxi Guo; LoRA and p-tuning for GLM.
- Alpaca: Stanford's instruction-following model built on LLaMA.
- Alpaca-LoRA: reproduces Alpaca with LoRA tuning on LLaMA.
- Alpaca-ChToken: by Yiming Cui and Ziqing Yang; extends the LLaMA tokenizer with Chinese tokens.
- BELLE-7B (Open in Colab): BELLE (…) and its benchmark.
- RWKV-LM: the RWKV language model.
- Baize-7B: a LLaMA-based chat model.
- Vicuna: 7B and 13B models; the 13B Int4 version runs in Colab.
- DeepSpeed: RLHF finetuning support.
- Phoenix: cites our work.
- OpenInstruct: cites our work. Thanks!
- Guanaco: dataset by JosephusCheung, used for Luotuo 0.3.
- CNewSum: news-summarization dataset (UCSB), used to train 驼铃-C.
- Coco-CN: by li-xirong; Chinese COCO data; … GPT … Coco ….
- CoQA: the CoQA dataset, translated with GPT ….

Contributors

See contributions.md for each contributor's specific contributions.


Citation

Please cite the repo if you use the data or code in this repo.

@misc{luotuo,
  author={Ziang Leng and Qiyuan Chen and Cheng Li},
  title = {Luotuo: An Instruction-following Chinese Language model, LoRA tuning on LLaMA},
  year = {2023},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/LC1332/Luotuo-Chinese-LLM}},
}