AI GitHub Maintainer is an advanced tool that leverages artificial intelligence to automate and enhance GitHub repository maintenance. It supports multiple LLM providers and offers a wide range of features to streamline your development workflow.
- Code Analysis and Improvement
- Documentation and Testing
- Security and Compliance
- Performance Optimization
- Issue and PR Management
- Version Control and Releases
- Repository Insights
- CI/CD Integration
- Customization and Extensibility
Clone the repository:

```shell
git clone https://github.com/Likhithsai2580/ai-github-maintainer.git
cd ai-github-maintainer
```

Set up a virtual environment and install dependencies:

```shell
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
pip install -r requirements.txt
```
Copy `env.example` to `.env` and update it with your API keys and configuration:

```
GITHUB_TOKEN=your_github_token_here
OPENAI_API_KEY=your_openai_api_key_here
ANTHROPIC_API_KEY=your_anthropic_api_key_here
GROQ_API_KEY=your_groq_api_key_here
LLM_PROVIDER=openai
LLM_MODEL=gpt-3.5-turbo
```
Customize the `config.yaml` file to suit your needs, then run the application:

```shell
python main.py
```
The `config.yaml` file allows you to customize various aspects of the AI GitHub Maintainer; refer to the comments in `config.yaml` for detailed explanations of each setting.
The AI GitHub Maintainer runs on a schedule defined in `config.yaml`. By default, it performs weekly maintenance tasks on all accessible repositories. You can also trigger maintenance manually from the web interface or by running `python main.py --repo=<repository_name>`.
Advanced usage options:

- `python main.py --full-scan`: perform a comprehensive analysis of all repositories
- `python main.py --security-audit`: run a security-focused maintenance cycle
- `python main.py --generate-report`: create a detailed report of recent maintenance activities

Access the web interface at http://localhost:5000 for additional features.
To change the LLM provider, update the `LLM_PROVIDER` and `LLM_MODEL` variables in your `.env` file. Supported options:

- `LLM_PROVIDER=openai`, `LLM_MODEL=gpt-3.5-turbo` or `gpt-4`
- `LLM_PROVIDER=anthropic`, `LLM_MODEL=claude-2` or `claude-instant-1`
- `LLM_PROVIDER=groq`, `LLM_MODEL=llama2-70b-4096` or `mixtral-8x7b-32768`
- `LLM_PROVIDER=ollama`, `LLM_MODEL=llama2` or `codellama`
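As a rough sketch of how these settings might be validated at startup (the helper function and its defaults are illustrative, not part of the project):

```python
import os

# Provider/model pairs listed in the section above.
SUPPORTED = {
    "openai": {"gpt-3.5-turbo", "gpt-4"},
    "anthropic": {"claude-2", "claude-instant-1"},
    "groq": {"llama2-70b-4096", "mixtral-8x7b-32768"},
    "ollama": {"llama2", "codellama"},
}

def resolve_llm_settings():
    """Read the provider/model pair from the environment and validate it,
    falling back to the defaults shown in the sample .env above."""
    provider = os.getenv("LLM_PROVIDER", "openai")
    model = os.getenv("LLM_MODEL", "gpt-3.5-turbo")
    if provider not in SUPPORTED:
        raise ValueError(f"Unsupported LLM_PROVIDER: {provider}")
    if model not in SUPPORTED[provider]:
        raise ValueError(f"Model {model} is not valid for provider {provider}")
    return provider, model
```

Failing fast on an unknown provider/model pair gives a clearer error than a failed API call later in a maintenance run.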
Custom LLM integration:

- Add a new provider module in the `llm_providers/` directory (e.g., `custom_llm.py`)
- Register it among the `LLM_PROVIDER` options in `config.yaml`
AI GitHub Maintainer supports custom plugins to extend its functionality. For detailed information on creating and using plugins, including the key aspects of the plugin system, refer to the Plugin Development Guide.
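As a hypothetical illustration only (the real hook names and plugin base class live in the Plugin Development Guide), a plugin might contribute a maintenance check like this:

```python
# Hypothetical plugin sketch; the project's actual plugin API may differ.
class LicenseHeaderPlugin:
    name = "license_header_checker"

    def run(self, files):
        """Given a {path: contents} mapping, return the paths that are
        missing a license header."""
        return [path for path, text in files.items()
                if not text.lstrip().startswith("# Copyright")]
```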
Contributions are welcome! Please feel free to submit a Pull Request:

- Create your feature branch (`git checkout -b feature/AmazingFeature`)
- Commit your changes (`git commit -m 'Add some AmazingFeature'`)
- Push to the branch (`git push origin feature/AmazingFeature`)
- Open a Pull Request

Please ensure your code adheres to our coding standards and includes appropriate tests.
This project is licensed under the MIT License - see the LICENSE file for details.
Create custom maintenance workflows by chaining multiple plugins:

```yaml
workflows:
  security_audit:
    - security_scanner
    - dependency_checker
    - license_validator
  performance_optimization:
    - code_profiler
    - query_optimizer
    - cache_analyzer
```
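Conceptually, a workflow is just an ordered list of plugin names that a runner resolves and executes in sequence. A minimal sketch, with the plugin registry and call signature assumed rather than taken from the project:

```python
# Minimal workflow-runner sketch; the real registry and plugin
# signatures are assumptions, not the project's actual API.
PLUGINS = {
    "security_scanner": lambda repo: f"scanned {repo}",
    "dependency_checker": lambda repo: f"checked deps of {repo}",
    "license_validator": lambda repo: f"validated license of {repo}",
}

def run_workflow(workflow, repo):
    """Run each named plugin in order and collect its result."""
    return [PLUGINS[name](repo) for name in workflow]

results = run_workflow(
    ["security_scanner", "dependency_checker", "license_validator"],
    "user/repo",
)
```

Because each step only needs the previous step to have finished, chaining stays simple: failures can abort the remainder of the list.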
Integrate the AI GitHub Maintainer into your existing tools and workflows using our RESTful API:

```python
import requests

api_url = "http://localhost:5000/api/v1"
headers = {"Authorization": "Bearer your_api_token"}

# Trigger maintenance for a specific repository
response = requests.post(f"{api_url}/maintain", json={"repo": "user/repo"}, headers=headers)
print(response.json())
```
Schedule automated reports to be sent to stakeholders:

```yaml
reporting:
  schedule: "0 9 * * 1"  # Every Monday at 9 AM
  recipients:
    - [email protected]
    - [email protected]
  format: pdf
  sections:
    - security_summary
    - performance_metrics
    - code_quality_trends
```
Common issues and their solutions:

Rate limiting: if you encounter GitHub API rate limits, consider using a GitHub App instead of a personal access token for higher rate limits.

Memory usage: for large repositories, you may need to increase the available memory with the `--memory-limit` flag:

```shell
python main.py --memory-limit=8G
```
Slow performance: enable caching in `config.yaml` to speed up repeated operations:

```yaml
caching:
  enabled: true
  backend: redis
  ttl: 3600
```
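The effect of the `ttl` setting can be sketched with a tiny in-memory cache; the redis backend behaves the same way conceptually, but this standalone example is not the project's implementation:

```python
import time

class TTLCache:
    """Tiny in-memory stand-in for the redis-backed cache: entries
    expire after ttl seconds, so repeated analyses can be skipped."""

    def __init__(self, ttl=3600):
        self.ttl = ttl
        self._store = {}

    def set(self, key, value):
        self._store[key] = (value, time.monotonic())

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, stored_at = entry
        if time.monotonic() - stored_at > self.ttl:
            del self._store[key]  # expired
            return None
        return value
```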
Plugin errors: check the plugin logs in `logs/plugins.log` for detailed error messages, and ensure all plugin dependencies are installed.
Q: Can I use AI GitHub Maintainer with self-hosted GitHub Enterprise?
A: Yes, specify your GitHub Enterprise URL in `config.yaml`:

```yaml
github:
  api_url: https://github.example.com/api/v3
```
Q: How does AI GitHub Maintainer handle sensitive data?
A: Sensitive data is never stored locally and is redacted from logs. All communications with LLM providers are encrypted.
Q: Can I use AI GitHub Maintainer with other version control systems?
A: Currently, only GitHub is supported, but we plan to add support for GitLab and Bitbucket in future releases.
Further features and improvements are planned for upcoming releases.
We take security seriously. If you discover any security-related issue, please email [email protected] instead of using the issue tracker. Security features include redaction of sensitive data from logs and encrypted communication with LLM providers.
Tips for optimizing AI GitHub Maintainer performance:

Use the `--parallel` flag to run maintenance tasks concurrently:

```shell
python main.py --parallel=4
```
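Conceptually, `--parallel=4` amounts to fanning repository tasks out over a fixed-size worker pool; a standalone sketch (not the project's actual implementation):

```python
from concurrent.futures import ThreadPoolExecutor

def maintain(repo):
    # Stand-in for a real, I/O-bound maintenance task.
    return f"maintained {repo}"

def maintain_all(repos, parallel=4):
    """Run maintenance for several repositories concurrently,
    preserving input order in the results."""
    with ThreadPoolExecutor(max_workers=parallel) as pool:
        return list(pool.map(maintain, repos))

results = maintain_all(["user/app", "user/lib", "user/docs"])
```

A thread pool suits API-heavy work like this, since most time is spent waiting on GitHub and LLM responses rather than on the CPU.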
Enable result caching to speed up repeated analyses:

```yaml
caching:
  enabled: true
  backend: redis
```
Use a faster LLM provider for less complex tasks:

```yaml
llm:
  provider: groq
  model: mixtral-8x7b-32768
```
Implement custom plugins for performance-critical tasks using compiled languages (e.g., Rust, Go) and use them via the plugin system.
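One common pattern for bridging to a compiled plugin is a thin subprocess wrapper that exchanges JSON over stdin/stdout; a sketch, with the binary's CLI contract assumed:

```python
import json
import subprocess
import sys

def run_compiled_plugin(cmd, payload):
    """Send a JSON payload to an external plugin binary over stdin
    and parse its JSON reply from stdout."""
    proc = subprocess.run(
        cmd,
        input=json.dumps(payload),
        capture_output=True,
        text=True,
        check=True,  # raise if the plugin exits non-zero
    )
    return json.loads(proc.stdout)

# For demonstration, a Python one-liner stands in for a Rust/Go binary
# that echoes its input back as JSON.
echo_cmd = [sys.executable, "-c",
            "import sys, json; print(json.dumps(json.load(sys.stdin)))"]
result = run_compiled_plugin(echo_cmd, {"task": "profile", "repo": "user/repo"})
```

The JSON-over-pipes contract keeps the Python side language-agnostic: any binary that reads JSON on stdin and writes JSON on stdout can plug in.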
AI GitHub Maintainer integrates with various tools and services. Example Slack integration:

```yaml
integrations:
  slack:
    webhook_url: https://hooks.slack.com/services/T00000000/B00000000/XXXXXXXXXXXXXXXXXXXXXXXX
    channel: "#github-maintainer"  # quoted so YAML doesn't parse '#' as a comment
    notifications:
      - security_alerts
      - performance_reports
```
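A Slack incoming-webhook notification is a single JSON POST; a sketch of how the notifier might build and send it (the functions are illustrative, not the project's code):

```python
import json
import urllib.request

def build_slack_message(channel, text):
    """Assemble the JSON payload Slack incoming webhooks expect."""
    return {"channel": channel, "text": text}

def send_slack_message(webhook_url, payload):
    # POST the payload to the webhook URL configured above.
    req = urllib.request.Request(
        webhook_url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    return urllib.request.urlopen(req)

payload = build_slack_message("#github-maintainer",
                              "Security audit finished for user/repo")
# send_slack_message("https://hooks.slack.com/services/...", payload)
```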
By leveraging the power of AI and automation, AI GitHub Maintainer helps development teams maintain high-quality, secure, and efficient code repositories with minimal manual intervention.