GPTComet: AI-Powered Git Commit Message Generator
MIT License
GPTComet is a Python library designed to automate the process of generating commit messages for Git repositories. It leverages the power of AI to create meaningful commit messages based on the changes made in the codebase.
To use GPTComet, you need to have Python installed on your system. You can install the library using pip:

```shell
pip install gptcomet
```
Installing with `pipx` is recommended on macOS or Linux:

```shell
pipx install gptcomet
```
After installing GPTComet, you will have two commands: `gptcomet` and `gmsg`.
```shell
$ pipx install gptcomet
  installed package gptcomet 0.0.3, installed using Python 3.12.3
  These apps are now globally available
    - gmsg
    - gptcomet
done! ✨ 🌟 ✨
```
To use gptcomet, set the following configuration values:

- `provider`: The provider of the language model (default `openai`).
- `api_base`: The base URL of the API (default `https://api.openai.com/v1`).
- `api_key`: The API key for the provider.
- `model`: The model used for generating commit messages (default `text-davinci-003`).
- `retries`: The number of retries for the API request (default `2`).

Then run `gmsg gen commit`.
The following are the available commands for GPTComet:

- `gmsg config`: Configuration management command group.
  - `set`: Set a configuration value.
  - `get`: Get a configuration value.
  - `list`: List all configuration values.
  - `reset`: Reset the configuration to its default values.
  - `keys`: List all supported keys.
  - `append`: Append a value to a configuration key (list values only, such as `file_ignore`).
  - `remove`: Remove a value from a configuration key (list values only, such as `file_ignore`).
- `gmsg hook`: Hook management command group (prototype phase).
  - `install`: Install the GPTComet hook.
  - `uninstall`: Uninstall the GPTComet hook.
  - `status`: Check the status of the GPTComet hook.
- `gmsg gen`: Generate messages from changes/diffs.
  - `commit`: Generate a commit message based on the changes made in the code.
  - `pr`: Generate a pull request message based on the changes made in the code.

The configuration file for GPTComet is `gptcomet.yaml`. The file should contain the following keys:
- `provider`: The provider of the language model (default `openai`).
- `file_ignore`: The files to ignore when generating a commit.
- `api_base`: The base URL of the API (default `https://api.openai.com/v1`).
- `api_key`: The API key for the provider.
- `model`: The model used for generating commit messages (default `text-davinci-003`).
- `retries`: The number of retries for the API request (default `2`).
- `proxy`: The proxy URL for the provider.
- `max_tokens`: The maximum number of tokens for the provider.
- `top_p`: The top_p parameter for the provider (default `0.7`).
- `temperature`: The temperature parameter for the provider (default `0.7`).
- `frequency_penalty`: The frequency_penalty parameter for the provider (default `0`).
- `extra_headers`: Extra headers for the provider, as a JSON string.
- `prompt.brief_commit_message`: The prompt for generating brief commit messages.
- `prompt.translation`: The prompt for translating commit messages to a target language.
- `output.lang`: The language of the commit message (default `en`).

The default value of `file_ignore` is:
- "bun.lockb"
- "Cargo.lock"
- "composer.lock"
- "Gemfile.lock"
- "package-lock.json"
- "pnpm-lock.yaml"
- "poetry.lock"
- "yarn.lock"
- "pdm.lock"
- "Pipfile.lock"
- "*.py[cod]"
You can add more ignore patterns with the `gmsg config append file_ignore <xxx>` command. `<xxx>` uses the same syntax as `.gitignore`; for example, `*.so` ignores all files with the `.so` suffix.
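To illustrate how such gitignore-style patterns behave, here is a rough sketch using Python's `fnmatch` (GPTComet's actual matching may differ in edge cases):

```python
from fnmatch import fnmatch

# A few patterns in the same style as the default file_ignore list.
ignore_patterns = ["package-lock.json", "*.py[cod]", "*.so"]

def is_ignored(filename: str) -> bool:
    # True if the filename matches any ignore pattern.
    return any(fnmatch(filename, pat) for pat in ignore_patterns)

print(is_ignored("module.pyc"))   # True: matches *.py[cod]
print(is_ignored("libfoo.so"))    # True: matches *.so
print(is_ignored("main.py"))      # False
```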
This project uses `litellm` as the bridge to LLM providers, so many providers are supported.
If you are using `openai`, leave `api_base` at its default and set your `api_key` in the config section.
If you are using an `openai`-class provider, or a provider compatible with the `openai` interface, you can set the provider to `openai` and configure your custom `api_base`, `api_key`, and `model`.
For example, OpenRouter's API is compatible with the `openai` interface, so you can set `provider` to `openai`, `api_base` to `https://openrouter.ai/api/v1`, `api_key` to the key from your keys page, and `model` to `meta-llama/llama-3.1-8b-instruct:free` or another model you prefer:
```shell
gmsg config set openai.api_base https://openrouter.ai/api/v1
gmsg config set openai.api_key YOUR_API_KEY
gmsg config set openai.model meta-llama/llama-3.1-8b-instruct:free
gmsg config set openai.max_tokens 13000
```
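Under the hood, an OpenAI-compatible provider such as OpenRouter accepts the standard chat-completions request shape. The following is an illustrative sketch of the payload these settings map onto (GPTComet itself goes through `litellm`; the prompt text here is invented for the example):

```python
# Illustrative only: the request shape an OpenAI-compatible endpoint
# expects, using the OpenRouter settings from the commands above.
api_base = "https://openrouter.ai/api/v1"
payload = {
    "model": "meta-llama/llama-3.1-8b-instruct:free",
    "messages": [
        # Hypothetical prompt; GPTComet builds its own from your diff.
        {"role": "user", "content": "Write a brief commit message for this diff: ..."}
    ],
    "max_tokens": 13000,
}
url = f"{api_base}/chat/completions"
print(url)  # https://openrouter.ai/api/v1/chat/completions
```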
SiliconFlow provides an interface similar to OpenRouter, so you can likewise set `provider` to `openai` and `api_base` to `https://api.siliconflow.cn/v1`.
Note that the maximum token limit varies between models, and the API will return an error if `max_tokens` is too large. You can use `gmsg config keys` to check the supported keys.
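Keys such as `openai.api_key` are dotted paths into a nested configuration. A minimal sketch of how such lookups could work (assumed behavior for illustration, not GPTComet's code):

```python
def get_dotted(config: dict, key: str, default=None):
    """Walk a nested dict using a dotted key like 'openai.api_key'."""
    node = config
    for part in key.split("."):
        if not isinstance(node, dict) or part not in node:
            return default
        node = node[part]
    return node

def set_dotted(config: dict, key: str, value) -> None:
    """Create intermediate dicts as needed, then set the leaf value."""
    *parents, leaf = key.split(".")
    node = config
    for part in parents:
        node = node.setdefault(part, {})
    node[leaf] = value

config = {"provider": "openai"}
set_dotted(config, "openai.api_key", "YOUR_API_KEY")
print(get_dotted(config, "openai.api_key"))  # YOUR_API_KEY
print(get_dotted(config, "openai.model", "gpt-3.5-turbo"))  # gpt-3.5-turbo
```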
Here is an example of how to use GPTComet. Running

```shell
gmsg config set openai.api_key YOUR_API_KEY
```

generates a config file at `~/.local/gptcomet/gptcomet.yaml` that includes:

```
provider = "openai"
api_base = "https://api.openai.com/v1"
api_key = "YOUR_API_KEY"
model = "gpt-3.5-turbo"
retries = 2
output.lang = "en"
```
Then generate a commit message:

```shell
gmsg generate commit
```
Note: Replace `YOUR_API_KEY` with your actual API key for the provider.
If you'd like to contribute to GPTComet, feel free to fork this project and submit a pull request.
First, fork the project and clone your repo:

```shell
git clone https://github.com/<yourname>/gptcomet
```
Second, make sure you have `pdm` installed; you can install it with `pip`, `brew`, or another method from its installation docs.
Use `just` to install the dependencies. `just` is a handy way to save and run project-specific commands (docs: https://github.com/casey/just):

```shell
just install
```

Or use `pdm` directly: `pdm install`.
Then, you can submit a pull request.
GPTComet is licensed under the MIT License.
If you have any questions or suggestions, feel free to get in touch.