# Learning TSP Requires Rethinking Generalization (CP 2021)

This code is released under the MIT License.
This repository contains code for the paper "Learning TSP Requires Rethinking Generalization" by Chaitanya K. Joshi, Quentin Cappart, Louis-Martin Rousseau, Thomas Laurent, accepted to the 27th International Conference on Principles and Practice of Constraint Programming (CP 2021).
Towards a controlled study of neural combinatorial optimization, we unify several state-of-the-art architectures and learning paradigms into one experimental pipeline and provide the first principled investigation into zero-shot generalization to large instances.
We open-source our framework and datasets to encourage the community to go beyond evaluating performance on fixed TSP sizes, develop more expressive and scale-invariant GNNs, as well as study transfer learning for combinatorial problems.
## Installation

We ran our code on Ubuntu 16.04, using Python 3.6.7, PyTorch 1.2.0 and CUDA 10.0. We highly recommend installation via Anaconda.
```sh
# Clone the repository.
git clone https://github.com/chaitjo/learning-tsp.git
cd learning-tsp

# Set up a new conda environment and activate it.
conda create -n tsp python=3.6.7
source activate tsp

# Install all dependencies and Jupyter Lab (for using notebooks).
conda install pytorch=1.2.0 cudatoolkit=10.0 -c pytorch
conda install numpy scipy cython tqdm scikit-learn matplotlib seaborn tensorboard pandas
conda install jupyterlab -c conda-forge
pip install tensorboard_logger

# Download datasets and unpack them into the data/tsp directory.
pip install gdown
gdown https://drive.google.com/uc?id=152mpCze-v4d0m9kdsCeVkLdHFkjeDeF5
mkdir -p data/tsp
tar -xvzf tsp-data.tar.gz -C data/tsp/
```
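To sanity-check the download, the snippet below parses one instance from a validation file. This is a minimal sketch that assumes the Concorde-solved files use a space-separated `x1 y1 ... xn yn output t1 ... tn t1` line format (2D node coordinates, the literal token `output`, then a 1-indexed optimal tour that returns to the start); the file name is just an example from the archive.

```python
# Minimal sketch: inspect one instance from a downloaded dataset file.
# Assumed line format: "x1 y1 ... xn yn output t1 t2 ... t1".
with open("data/tsp/tsp20_val_concorde.txt") as f:
    tokens = f.readline().split()

sep = tokens.index("output")  # separator between coordinates and tour
coords = [(float(tokens[i]), float(tokens[i + 1])) for i in range(0, sep, 2)]
tour = [int(t) - 1 for t in tokens[sep + 1:]]  # convert to 0-indexed

print(len(coords), "nodes; tour:", tour)
```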
## Usage

For reproducing experiments, we provide a set of scripts for training, fine-tuning and evaluation in the `scripts/` directory. Pre-trained models for some experiments described in the paper can be found in the `pretrained/` directory. Refer to `options.py` for descriptions of each option. High-level commands are as follows:
```sh
# Training
CUDA_VISIBLE_DEVICES=<available-gpu-ids> python run.py \
    --problem <tsp/tspsl> \
    --model <attention/nar> \
    --encoder <gnn/gat/mlp> \
    --baseline <rollout/critic> \
    --min_size <20/50/100> \
    --max_size <50/100/200> \
    --batch_size 128 \
    --train_dataset data/tsp/tsp<20/50/100/20-50>_train_concorde.txt \
    --val_datasets data/tsp/tsp20_val_concorde.txt data/tsp/tsp50_val_concorde.txt data/tsp/tsp100_val_concorde.txt \
    --lr_model 1e-4 \
    --run_name <custom_run_name>
```
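For example, a plausible invocation (filling in the template above; the run name is arbitrary) that trains an autoregressive GNN encoder model on variable TSP20-50 with the rollout baseline:

```sh
CUDA_VISIBLE_DEVICES=0 python run.py \
    --problem tsp \
    --model attention \
    --encoder gnn \
    --baseline rollout \
    --min_size 20 \
    --max_size 50 \
    --batch_size 128 \
    --train_dataset data/tsp/tsp20-50_train_concorde.txt \
    --val_datasets data/tsp/tsp20_val_concorde.txt data/tsp/tsp50_val_concorde.txt data/tsp/tsp100_val_concorde.txt \
    --lr_model 1e-4 \
    --run_name ar_gnn_tsp20-50
```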
```sh
# Evaluation
CUDA_VISIBLE_DEVICES=<available-gpu-ids> python eval.py data/tsp/tsp10-200_concorde.txt \
    --model outputs/<custom_run_name>_<datetime>/ \
    --decode_strategy <greedy/sample/bs> \
    --eval_batch_size <128/1/16> \
    --width <1/128/1280>
```
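For instance, greedy evaluation (taking the first value of each slash-separated option above, and assuming the run name from the training example; the `<datetime>` suffix depends on your run) might look like:

```sh
CUDA_VISIBLE_DEVICES=0 python eval.py data/tsp/tsp10-200_concorde.txt \
    --model outputs/ar_gnn_tsp20-50_<datetime>/ \
    --decode_strategy greedy \
    --eval_batch_size 128 \
    --width 1
```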
## Citation

```bibtex
@inproceedings{joshi2021learning,
    title={Learning TSP Requires Rethinking Generalization},
    author={Joshi, Chaitanya K and Cappart, Quentin and Rousseau, Louis-Martin and Laurent, Thomas},
    booktitle={International Conference on Principles and Practice of Constraint Programming},
    year={2021}
}
```
## Acknowledgements and Related Work

Our codebase is a modified clone of Wouter Kool's excellent repository for the paper "Attention, Learn to Solve Routing Problems!", and incorporates ideas from the following papers, among others: