A Reactive CLI that generates git commit messages with Ollama, ChatGPT, Gemini, Claude, Mistral and other AI
# Install globally
npm install -g aicommit2
# Set up at least one AI provider
aicommit2 config set OPENAI.key=<your-key>
# Use in your git repository
git add .
aicommit2
aicommit2 is a reactive CLI tool that automatically generates Git commit messages using various AI models. It supports simultaneous requests to multiple AI providers, allowing users to select the most suitable commit message. The core functionalities and architecture of this project are inspired by AICommits.
- Multi-AI Support: Integrates with OpenAI, Anthropic Claude, Google Gemini, Mistral AI, Cohere, Groq, Ollama and more.
- OpenAI API Compatibility: Support for any service that implements the OpenAI API specification.
- Reactive CLI: Enables simultaneous requests to multiple AIs and selection of the best commit message.
- Git Hook Integration: Can be used as a prepare-commit-msg hook.
- Custom Prompt: Supports user-defined system prompt templates.
- OpenAI
- Anthropic Claude
- Gemini
- Mistral & Codestral
- Cohere
- Groq
- Perplexity
- DeepSeek
- OpenAI API Compatibility
⚠️ The minimum supported version of Node.js is v18. Check your Node.js version with `node --version`.
- Install aicommit2:
npm install -g aicommit2
- Set up API keys (at least ONE key must be set):
aicommit2 config set OPENAI.key=<your key>
aicommit2 config set ANTHROPIC.key=<your key>
# ... (similar commands for other providers)
- Run aicommit2 with your staged files in git repository:
git add <files...>
aicommit2
👉 Tip: Use the `aic2` alias if `aicommit2` is too long for you.
git clone https://github.com/tak-bro/aicommit2.git
cd aicommit2
npm install
npm run build
npm install -g .
Add the feature to your `devcontainer.json` file:
"features": {
"ghcr.io/kvokka/features/aicommit2:1": {}
}
This CLI tool runs `git diff` to grab all your latest code changes, sends them to the configured AI, then returns the AI-generated commit message.
If the diff becomes too large, AI will not function properly. If you encounter an error saying the message is too long or it's not a valid commit message, try reducing the commit unit.
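Since it is hard to know in advance when a diff is too large, you can gauge the size of the staged diff before running the tool. A minimal sketch using standard git and shell tools (the 8000-character threshold is illustrative, not a documented limit):

```shell
# Gauge the size of the staged diff before invoking aicommit2.
# The 8000-character threshold is illustrative, not a documented limit.
size=$(git diff --cached | wc -c | tr -d ' ')
echo "staged diff is $size characters"
if [ "$size" -gt 8000 ]; then
    echo "diff may be too large; consider staging and committing in smaller units"
fi
```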
You can call `aicommit2` directly to generate a commit message for your staged changes:
git add <files...>
aicommit2
`aicommit2` passes down unknown flags to `git commit`, so you can pass in `commit` flags.
For example, you can stage all changes in tracked files as you commit:
aicommit2 --all # or -a
- `--locale` or `-l`: Locale to use for the generated commit messages (default: `en`)
- `--all` or `-a`: Automatically stage changes in tracked files for the commit (default: `false`)
- `--type` or `-t`: Git commit message format (default: `conventional`). It supports `conventional` and `gitmoji`
- `--confirm` or `-y`: Skip confirmation when committing after message generation (default: `false`)
- `--clipboard` or `-c`: Copy the selected message to the clipboard (default: `false`)
    - If you give this option, aicommit2 will not commit.
- `--generate` or `-g`: Number of messages to generate (default: `1`)
    - Warning: This uses more tokens, meaning it costs more.
- `--exclude` or `-x`: Files to exclude from AI analysis
- `--hook-mode`: Run as a Git hook, typically used with the prepare-commit-msg hook (default: `false`)
    - This mode is automatically enabled when running through the Git hook system
    - See the Git hook section for more details
- `--pre-commit`: Run in pre-commit framework mode (default: `false`)
    - This option is specifically for use with the pre-commit framework
    - See the Integration with pre-commit framework section for setup instructions
Example:
aicommit2 --locale "jp" --all --type "conventional" --generate 3 --clipboard --exclude "*.json" --exclude "*.ts"
You can also integrate aicommit2 with Git via the `prepare-commit-msg` hook. This lets you use Git like you normally would, and edit the commit message before committing.
In the Git repository you want to install the hook in:
aicommit2 hook install
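Once the hook is installed, the normal commit flow triggers generation automatically. A sketch (`src/app.ts` is a hypothetical file):

```
git add src/app.ts
git commit   # the hook runs aicommit2 and pre-fills the editor with the generated message
```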
If you prefer to set up the hook manually, create or edit the `.git/hooks/prepare-commit-msg` file:
#!/bin/sh
# your-other-hook "$@"
aicommit2 --hook-mode "$@"
Make the hook executable:
chmod +x .git/hooks/prepare-commit-msg
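If you set the hook up manually, you can sanity-check the result with standard shell tools (a minimal sketch):

```shell
# Verify the prepare-commit-msg hook exists and is executable
test -x .git/hooks/prepare-commit-msg && echo "hook installed"
```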
If you're using the pre-commit framework, you can add aicommit2 to your `.pre-commit-config.yaml`:
repos:
- repo: local
hooks:
- id: aicommit2
name: AI Commit Message Generator
entry: aicommit2 --pre-commit
language: node
stages: [prepare-commit-msg]
always_run: true
Make sure you have:
- Installed pre-commit: `brew install pre-commit`
- Installed aicommit2 globally: `npm install -g aicommit2`
- Run `pre-commit install --hook-type prepare-commit-msg` to set up the hook

Note: The `--pre-commit` flag is specifically designed for use with the pre-commit framework and ensures proper integration with other pre-commit hooks.
In the Git repository you want to uninstall the hook from:
aicommit2 hook uninstall
Or manually delete the `.git/hooks/prepare-commit-msg` file.
- READ: `aicommit2 config get <key>`
- SET: `aicommit2 config set <key>=<value>`
Example:
aicommit2 config get OPENAI
aicommit2 config get GEMINI.key
aicommit2 config set OPENAI.generate=3 GEMINI.temperature=0.5
You can configure API keys using environment variables. This is particularly useful for CI/CD environments or when you don't want to store keys in the configuration file.
# OpenAI
OPENAI_API_KEY="your-openai-key"
# Anthropic
ANTHROPIC_API_KEY="your-anthropic-key"
# Google
GEMINI_API_KEY="your-gemini-key"
# Mistral AI
MISTRAL_API_KEY="your-mistral-key"
CODESTRAL_API_KEY="your-codestral-key"
# Other Providers
COHERE_API_KEY="your-cohere-key"
GROQ_API_KEY="your-groq-key"
PERPLEXITY_API_KEY="your-perplexity-key"
DEEPSEEK_API_KEY="your-deepseek-key"
Usage Example:
OPENAI_API_KEY="your-openai-key" ANTHROPIC_API_KEY="your-anthropic-key" aicommit2
Note: Environment variables take precedence over configuration file settings.
- Command-line arguments: use the format `--[Model].[Key]=value`
aicommit2 --OPENAI.locale="jp" --GEMINI.temperature="0.5"
- Configuration file: use INI format in the `~/.aicommit2` file or use the `set` command. Example `~/.aicommit2`:
# General Settings
logging=true
generate=2
temperature=1.0
# Model-Specific Settings
[OPENAI]
key="<your-api-key>"
temperature=0.8
generate=1
systemPromptPath="<your-prompt-path>"
[GEMINI]
key="<your-api-key>"
generate=5
includeBody=true
[OLLAMA]
temperature=0.7
model[]=llama3.2
model[]=codestral
The priority of settings is: Command-line Arguments > Environment Variables > Model-Specific Settings > General Settings > Default Values.
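As a sketch of this precedence, assume the example `~/.aicommit2` above (general `generate=2`, `generate=1` under `[OPENAI]`):

```
aicommit2                        # OPENAI generates 1 message; other models generate 2
aicommit2 --OPENAI.generate=3    # the command-line argument wins: OPENAI generates 3
```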
The following settings can be applied to most models, but support may vary. Please check the documentation for each specific model to confirm which settings are supported.
| Setting | Description | Default |
|---|---|---|
| `systemPrompt` | System prompt text | - |
| `systemPromptPath` | Path to system prompt file | - |
| `exclude` | Files to exclude from AI analysis | - |
| `type` | Type of commit message to generate | `conventional` |
| `locale` | Locale for the generated commit messages | `en` |
| `generate` | Number of commit messages to generate | `1` |
| `logging` | Enable logging | `true` |
| `includeBody` | Whether the commit message includes body | `false` |
| `maxLength` | Maximum character length of the subject of the generated commit message | `50` |
| `timeout` | Request timeout (milliseconds) | `10000` |
| `temperature` | Model's creativity (0.0 - 2.0) | `0.7` |
| `maxTokens` | Maximum number of tokens to generate | `1024` |
| `topP` | Nucleus sampling | `0.9` |
| `codeReview` | Whether to include an automated code review in the process | `false` |
| `codeReviewPromptPath` | Path to code review prompt file | - |
| `disabled` | Whether a specific model is enabled or disabled | `false` |
👉 Tip: To set the General Settings for each model, use the following commands.
aicommit2 config set OPENAI.locale="jp"
aicommit2 config set CODESTRAL.type="gitmoji"
aicommit2 config set GEMINI.includeBody=true
- Allow users to specify a custom system prompt
aicommit2 config set systemPrompt="Generate git commit message."
`systemPrompt` takes precedence over `systemPromptPath`; the two do not apply at the same time.
- Allow users to specify a custom file path for their own system prompt template
- Please see Custom Prompt Template
aicommit2 config set systemPromptPath="/path/to/user/prompt.txt"
- Files to exclude from AI analysis
- It is applied together with the CLI `--exclude` option: files excluded through `--exclude` on the CLI and through the `exclude` general setting are combined.
aicommit2 config set exclude="*.ts"
aicommit2 config set exclude="*.ts,*.json"
NOTE: The `exclude` option is not supported per model. It is only supported in General Settings.
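For example, the general setting and the CLI flag combine for a single run (a sketch):

```
aicommit2 config set exclude="*.lock"
aicommit2 --exclude "*.json"     # this run excludes both *.lock and *.json files
```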
Default: `conventional`
Supported: `conventional`, `gitmoji`
The type of commit message to generate. Set this to "conventional" to generate commit messages that follow the Conventional Commits specification:
aicommit2 config set type="conventional"
Default: en
The locale to use for the generated commit messages. Consult the list of codes in: https://wikipedia.org/wiki/List_of_ISO_639_language_codes.
aicommit2 config set locale="jp"
Default: 1
The number of commit messages to generate to pick from.
Note, this will use more tokens as it generates more results.
aicommit2 config set generate=2
Default: true
Option that allows users to decide whether to generate a log file capturing the responses.
The log files will be stored in the `~/.aicommit2_log` directory (in the user's home).
- You can remove all logs with the command below.
aicommit2 log removeAll
Default: false
This option determines whether the commit message includes a body. If you want to include a body in the message, set it to `true`.
aicommit2 config set includeBody="true"
aicommit2 config set includeBody="false"
The maximum character length of the subject of the generated commit message.
Default: 50
aicommit2 config set maxLength=100
The timeout for network requests in milliseconds.
Default: `10000` (10 seconds)
aicommit2 config set timeout=20000 # 20s
Note: Each AI provider has its own default timeout value, and if the configured timeout is less than the provider's default, the setting will be ignored.
The temperature (0.0-2.0) is used to control the randomness of the output.
Default: 0.7
aicommit2 config set temperature=0.3
The maximum number of tokens that the AI models can generate.
Default: 1024
aicommit2 config set maxTokens=3000
Default: 0.9
Nucleus sampling, where the model considers the results of the tokens with top_p probability mass.
aicommit2 config set topP=0.2
Default: false
This option determines whether a specific model is enabled or disabled. If you want to disable a particular model, set this option to `true`.
To disable a model, use the following commands:
aicommit2 config set GEMINI.disabled="true"
aicommit2 config set GROQ.disabled="true"
Default: false
The `codeReview` parameter determines whether to include an automated code review in the process.
aicommit2 config set codeReview=true
NOTE: When enabled, aicommit2 will perform a code review before generating commit messages.
- The `codeReview` feature is currently experimental.
- This feature performs a code review before generating commit messages.
- Using this feature will significantly increase the overall processing time.
- It may significantly impact performance and cost.
- The code review process consumes a large number of tokens.
- Allows users to specify a custom prompt file path for code review
aicommit2 config set codeReviewPromptPath="/path/to/user/prompt.txt"
| | timeout | temperature | maxTokens | topP |
|---|---|---|---|---|
| OpenAI | ✓ | ✓ | ✓ | ✓ |
| Anthropic Claude | ✓ | ✓ | ✓ | ✓ |
| Gemini | ✓ | ✓ | ✓ | |
| Mistral AI | ✓ | ✓ | ✓ | ✓ |
| Codestral | ✓ | ✓ | ✓ | ✓ |
| Cohere | ✓ | ✓ | ✓ | ✓ |
| Groq | ✓ | ✓ | ✓ | ✓ |
| Perplexity | ✓ | ✓ | ✓ | ✓ |
| DeepSeek | ✓ | ✓ | ✓ | ✓ |
| Ollama | ✓ | ✓ | ✓ | |
| OpenAI API-Compatible | ✓ | ✓ | ✓ | ✓ |
All AI providers support the following options in General Settings:
- `systemPrompt`, `systemPromptPath`, `codeReview`, `codeReviewPromptPath`, `exclude`, `type`, `locale`, `generate`, `logging`, `includeBody`, `maxLength`
aicommit2 config set \
generate=2 \
topP=0.8 \
maxTokens=1024 \
temperature=0.7 \
OPENAI.key="sk-..." OPENAI.model="gpt-4o" OPENAI.temperature=0.5 \
ANTHROPIC.key="sk-..." ANTHROPIC.model="claude-3-haiku" ANTHROPIC.maxTokens=2000 \
MISTRAL.key="your-key" MISTRAL.model="codestral-latest" \
OLLAMA.model="llama3.2" OLLAMA.numCtx=4096 OLLAMA.watchMode=true
🔍 Detailed Support Info: Check each provider's documentation for specific limits and behaviors.
aicommit2 supports custom prompt templates through the `systemPromptPath` option. This feature allows you to define your own prompt structure, giving you more control over the commit message generation process.
To use a custom prompt template, specify the path to your template file when running the tool:
aicommit2 config set systemPromptPath="/path/to/user/prompt.txt"
aicommit2 config set OPENAI.systemPromptPath="/path/to/another-prompt.txt"
For the above command, OpenAI uses the prompt in the `another-prompt.txt` file, and the rest of the models use `prompt.txt`.
NOTE: For the `systemPromptPath` option, set the template path, not the template content.
Your custom template can include placeholders for various commit options. Use curly braces `{}` to denote these placeholders. The following placeholders are supported:
- {locale}: The language for the commit message (string)
- {maxLength}: The maximum length for the commit message (number)
- {type}: The type of the commit message (conventional or gitmoji)
- {generate}: The number of commit messages to generate (number)
Here's an example of how your custom template might look:
Generate a {type} commit message in {locale}.
The message should not exceed {maxLength} characters.
Please provide {generate} messages.
Remember to follow these guidelines:
1. Use the imperative mood
2. Be concise and clear
3. Explain the 'why' behind the change
Please note that the following text will ALWAYS be appended to the end of your custom prompt:
Lastly, Provide your response as a JSON array containing exactly {generate} object, each with the following keys:
- "subject": The main commit message using the {type} style. It should be a concise summary of the changes.
- "body": An optional detailed explanation of the changes. If not needed, use an empty string.
- "footer": An optional footer for metadata like BREAKING CHANGES. If not needed, use an empty string.
The array must always contain {generate} element, no more and no less.
Example response format:
[
{
"subject": "fix: fix bug in user authentication process",
"body": "- Update login function to handle edge cases\n- Add additional error logging for debugging",
"footer": ""
}
]
Ensure you generate exactly {generate} commit message, even if it requires creating slightly varied versions for similar changes.
The response should be valid JSON that can be parsed without errors.
This ensures that the output is consistently formatted as a JSON array, regardless of the custom template used.
Watch Commit mode allows you to monitor Git commits in real time and automatically perform AI code reviews using the `--watch-commit` flag.
aicommit2 --watch-commit
This feature only works within Git repository directories and automatically triggers whenever a commit event occurs. When a new commit is detected, it automatically:
- Analyzes commit changes
- Performs AI code review
- Displays results in real-time
For detailed configuration of the code review feature, please refer to the codeReview section. The settings in that section are shared with this feature.
- The Watch Commit feature is currently experimental
- This feature performs AI analysis for each commit, which consumes a significant number of API tokens
- API costs can increase substantially if there are many commits
- It is recommended to carefully monitor your token usage when using this feature
- To use this feature, you must enable watch mode for at least one AI model:
aicommit2 config set [MODEL].watchMode="true"
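Putting the two steps together, a minimal sketch (Ollama is chosen here only as an example model):

```
aicommit2 config set OLLAMA.watchMode="true"
aicommit2 --watch-commit
```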
Check the installed version with:
aicommit2 --version
If it's not the latest version, run:
npm update -g aicommit2
This project uses functionalities from external APIs but is not officially affiliated with or endorsed by their providers. Users are responsible for complying with API terms, rate limits, and policies.
For bug fixes or feature implementations, please check the Contribution Guide.
Thanks goes to these wonderful people (emoji key):
- @eltociear 📖
- @ubranch 💻
- @bhodrolok 💻
- @ryicoh 💻
- @noamsto 💻
- @tdabasinskas 💻
- @gnpaone 💻
- @devxpain 💻
- @delenzhang 💻
- @kvokka 📖
If this project has been helpful, please consider giving it a Star ⭐️!
Maintainer: @tak-bro