A demo of a GPT2 model trained and run for inference with LightSeq
A Chinese punctuation model that adds punctuation marks to text.
A minimal PyTorch re-implementation of the OpenAI GPT (Generative Pretrained Transformer) training
Code for the paper "Fine-tune BERT for Extractive Summarization"
A PyTorch implementation based on YOLOv4 of the paper "Complex-YOLO: Real-time 3D Object Detec...
GPT2 for Chinese chitchat (implements the MMI idea from DialoGPT)
Chinese version of GPT2 training code, using BERT tokenizer.
Code for the 2020 Tencent College Algorithm Contest; ranked 1st on the online leaderboard.
A Korean sentence-spacing (deletion/insertion) model. Written so that you can train it yourself after preparing your own data.
An implementation of GPT2 training with TPU support
ToolkenGPT: Augmenting Frozen Language Models with Massive Tools via Tool Embeddings - NeurIPS 20...
Unofficial PyTorch implementation of Masked Autoencoders Are Scalable Vision Learners