An implementation of training for GPT2, with TPU support
MIT License
Disclaimer: This is not the official GPT2 implementation! I've done my best to follow the specifications of the original GPT2 model as closely as possible, but be warned that I have not been able to replicate the full performance of the original model using this code. I don't know why this is; I haven't been able to track down any bug that could be causing it.
An implementation of training for GPT2 that supports both GPUs and TPUs. The dataset scripts are a bit hacky and will probably need to be adapted to your needs.
For GPUs:
pip3 install tensorflow-gpu regex
For TPUs:
pip3 install tensorflow regex google-api-python-client oauth2client
For downloading the models:
pip3 install requests tqdm
For generating the dataset (in addition to TensorFlow):
pip3 install ftfy tqdm newspaper3k
If you want to use my models, I currently have "117M", "PrettyBig" and "1.5B" on offer. 117M was trained on a single v2 TPU for a week (so probably with less compute than the original OpenAI model); PrettyBig is slightly bigger than 345M and was trained on a v2-256 pod for a week. I was originally also planning to release my version of the 1.5B model, but decided against it; you can read about my reasoning here. Since OpenAI has released their model, I have now also released my (inferior) 1.5B model, which was trained on a v3-512 pod for a week.
python3 download_model.py PrettyBig
This will create two directories, one named after the model and another named "encoder". Change the "model_dir" and "encoder_path" parameters in the .json corresponding to your model to point to these paths, respectively.
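For reference, the relevant fields in your model's .json would then look something like this (the paths shown are hypothetical examples, not the repo's defaults):

```json
{
    "model_dir": "PrettyBig",
    "encoder_path": "encoder"
}
```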
If you only want the encoder, use:
python3 download_model.py encoder
To predict, you can either pass the prompt directly on the command line or have it read from a file (useful for prompts that include newlines). Text is output to the console and to the file specified in the "predict_path" parameter. You need a model checkpoint and a copy of the BPE encoder at an accessible location for this to work (change the "model_dir" and "encoder_path" parameters in the .json).
From command line:
python3 main.py --model Your-Model.json [--top_k Top-K-Truncation] --predict_text "Hello there! My name is"
From file:
python3 main.py --model Your-Model.json [--top_k Top-K-Truncation] --predict_file input.txt
The optional top_k parameter causes the model to consider only the k most likely tokens at each step. Setting this to around 40 tends to produce better results, but with less variety.
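To make the idea concrete, here is a minimal sketch of top-k truncation on a logit vector (illustrative only; assumed to mirror what --top_k does, not taken from this repo's code):

```python
import numpy as np

def top_k_logits(logits, k):
    """Keep only the k highest logits; mask the rest to -inf so they
    can never be sampled. k == 0 is taken to mean "no truncation"."""
    if k == 0:
        return logits
    kth_largest = np.sort(logits)[-k]
    return np.where(logits < kth_largest, -np.inf, logits)

logits = np.array([1.0, 3.0, 2.0, 0.5])
print(top_k_logits(logits, 2))  # only the logits 3.0 and 2.0 stay finite
```

After masking, a softmax over the result assigns zero probability to all but the top k tokens.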
Prediction on TPUs is not supported.
To train a model, define its parameters in a .json file (see examples) and then simply call
python3 main.py --model Your-Model.json [--tpu Your-TPU-Name]
Using a TPU is optional; the code runs fine on GPUs without modification. (Note: Evaluation doesn't work on TPU pods and must be commented out.)
This assumes you have a version of the openwebtext corpus stored in an accessible location. If you don't, see below for how to generate your own version.
GPT2 is trained on the webtext corpus, which is basically all websites linked to from Reddit with at least 3 karma. Since the dataset is huge and contains a lot of copyrighted material, I can't provide a download here. Instead, I'll describe how I got it. Be aware that it cost me around 500€ in cloud compute resources to download and process the whole thing, though I'm not claiming I was optimally efficient.
You can also use your own text files as training data, but you'll need to modify some code by hand.
import glob
import os

base_dir = "/home/connor/my_text_dir" # Path to where your .txt files are located
files_per = 175000 # How many txt files to put in one tfrecord, not too important
name = "my-custom-data" # Name of output files will be name_i.tfrecords where i is the number of the file
output_dir = "/home/connor/output" # Where to place the .tfrecords files
log_dir = "logs" # Some logs will be placed here to support restarting if the encoding is interrupted
files = glob.glob(os.path.join(base_dir, "**/*.txt"), recursive=True) # This needs to result in a list of paths to all of your txt files
processes = 64 # Number of encoding processes to run
encoder_path = "/home/connor/encoder" # Path to encoder files
minimum_size = 128 # The minimum length (in BPE tokens) a file is allowed to have, otherwise it is discarded.
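To make the role of files_per and the output naming concrete, here is a small illustrative sketch (hypothetical; the actual dataset script's chunking may differ in details):

```python
# With files_per = 3, seven input .txt files are packed into three
# shards named name_0.tfrecords through name_2.tfrecords.
files_per = 3
name = "my-custom-data"
txt_files = ["doc{}.txt".format(j) for j in range(7)]

shards = [txt_files[i:i + files_per] for i in range(0, len(txt_files), files_per)]
shard_names = ["{}_{}.tfrecords".format(name, i) for i in range(len(shards))]
print(shard_names)
```

The last shard simply ends up smaller, which is why the exact value of files_per is not too important.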
def my_input(params, eval=False):
    if not eval:
        numbers = [0, 3, 4, 5, 6, 7, 8, 9] # A random subset of files for train
    else:
        numbers = [1, 2] # Random subset for eval
    files = [os.path.join(params["data_path"], "my-custom-data_{}.tfrecords".format(i)) for i in numbers] # Generates the list of files
    return bpe_text(params["batch_size"], files, amount=params["n_ctx"], iterations=params["iterations"], stitch=9, batch=True)
inputs = {
"openwebtext": openwebtext, # Standard OpenWebtext input
"openwebtext_longbiased": openwebtext_longbiased, # OpenWebtext with a bias towards showing more long (>512 tokens) examples
"openwebtext_long": openwebtext_long, # OpenWebtext that only shows long examples
"my_input": my_input,
}
[...]
"iterations": 500,
"n_embd": 768,
"input": "my_input",
"model": "GPT2",
[...]
Because passing two dozen parameters over the command line would be tedious, you pass all the model parameters in a .json file. Note that all paths also support Google Cloud Storage (gs://) paths, and must be gs:// paths if you're running on TPUs.
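As a quick sanity check before launching a run, the .json can be written and read back with the standard library (a minimal sketch using a hypothetical parameter file; the repo's own loading code may differ):

```python
import json

# Hypothetical minimal parameter file mirroring the fragment above.
params = {"iterations": 500, "n_embd": 768, "input": "my_input", "model": "GPT2"}

with open("Your-Model.json", "w") as f:
    json.dump(params, f, indent=4)

with open("Your-Model.json") as f:
    loaded = json.load(f)
print(loaded["model"])  # GPT2
```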
Values you'll definitely want to change:
Values you'll probably want to change:
Model parameters:
Training parameters: