Create AI-powered chatbots tailored to your website content, seamlessly integrate them into your website, and boost user engagement.
MIT License
https://github.com/user-attachments/assets/c204dadd-e987-4574-ae91-5e45768282d4
SiteGenie is an innovative AI chatbot meticulously designed to enhance website interactions. Trained on user-provided links, this advanced chatbot seamlessly integrates with websites, delivering instant responses and boosting user engagement. It expedites interactions by eliminating time-consuming searches, significantly improving operational efficiency.
SiteGenie aims to deliver an intuitive and effective solution for crafting, personalizing, and launching AI-driven chatbots, ultimately elevating user engagement and satisfaction across various domains and industries.
```mermaid
sequenceDiagram
    participant User
    participant System
    participant Langchain
    participant OpenAI
    participant Supabase
    participant Lambda as AWS Lambda
    participant Actions as GitHub Actions
    User ->> System: Provide website URL
    System ->> Langchain: Perform recursive data scraping
    Langchain ->> System: Return scraped content
    System ->> System: Clean content (stemming, lemmatization)
    System ->> System: Split content into documents
    System ->> OpenAI: Create vector embeddings
    OpenAI ->> System: Return vector embeddings
    System ->> Supabase: Add vector embeddings to vector store
    Supabase ->> System: Confirm addition
    System ->> User: System ready to process prompts
    User ->> System: Send prompt with context
    System ->> OpenAI: Generate response using vector store context
    OpenAI ->> System: Return generated response
    System ->> User: Return response
    Note over System, Lambda: Deployment
    System ->> Lambda: Deploy backend on AWS Lambda
    Lambda ->> System: Serverless deployment allows on-demand execution
    Note over System: Environments
    System ->> Lambda: Deploy code to staging
    System ->> Lambda: Deploy tested code to production
    Note over System: Integration testing
    System ->> System: Run integration tests for API
    System ->> System: Automate deployment script
    System ->> System: Clone repo, run tests locally, deploy to staging, run tests again
    Note over System, Actions: GitHub Actions
    System ->> Actions: Run integration tests on every commit
    Actions ->> System: Ensure code quality and functionality
    Note over System: Repositories
    System ->> System: Maintain separate frontend and backend repositories
    System ->> System: Secure main branch with branch protection rules
    System ->> System: Require a pull request and code review for merges to main
```
This sequence diagram depicts the interaction flow for website data processing and AI-driven response generation. The system first scrapes data recursively from the requested URL, cleans it, and generates vector embeddings with OpenAI; these embeddings are stored in Supabase. When a user sends a prompt, the system retrieves the stored embeddings to generate a contextually relevant response via OpenAI. The backend is deployed on AWS Lambda for scalable serverless execution. Staging and production environments, automated integration tests, and GitHub Actions runs on every commit keep code quality high, while repository practices such as branch protection and mandatory code reviews secure merges to main.
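The chunking and similarity-search steps above can be sketched in a few lines. This is an illustrative sketch only: the function names (`splitIntoChunks`, `cosineSimilarity`, `topK`) are hypothetical, and the real project delegates these steps to LangChain, OpenAI, and Supabase rather than implementing them by hand.

```typescript
// Split cleaned page text into overlapping chunks before embedding.
// Overlap keeps context that straddles a chunk boundary.
function splitIntoChunks(text: string, chunkSize = 200, overlap = 50): string[] {
  if (overlap >= chunkSize) throw new Error("overlap must be smaller than chunkSize");
  const chunks: string[] = [];
  for (let start = 0; start < text.length; start += chunkSize - overlap) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break;
  }
  return chunks;
}

// Cosine similarity between two embedding vectors.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Rank stored chunks against a query embedding and keep the top k --
// the lookup the vector store performs before the LLM is prompted.
function topK(
  query: number[],
  store: { chunk: string; embedding: number[] }[],
  k = 3
): string[] {
  return [...store]
    .sort((x, y) => cosineSimilarity(query, y.embedding) - cosineSimilarity(query, x.embedding))
    .slice(0, k)
    .map((entry) => entry.chunk);
}
```

At query time, the top-k chunks are concatenated into the prompt as context, which is what "Generate response using vector store context" refers to in the diagram.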
Clone the repository

```sh
git clone https://github.com/your_username_/Project-Name.git
```

Install dependencies

```sh
npm install
```
Create a `.env` file. Your `.env` file should include:

```env
NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY=<YOUR_CLERK_KEY>
CLERK_SECRET_KEY=<YOUR_CLERK_SECRET_KEY>
NEXT_PUBLIC_CLERK_SIGN_IN_URL=/sign-in
NEXT_PUBLIC_CLERK_SIGN_UP_URL=/sign-up
NEXT_PUBLIC_CLERK_AFTER_SIGN_IN_URL=/user-dashboard
NEXT_PUBLIC_CLERK_AFTER_SIGN_UP_URL=/user-dashboard
```
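A missing variable is a common cause of the app failing at startup. The sketch below is a hypothetical helper (not part of the repo) that reports which of the keys above are absent; you might call it with `process.env` before running the dev server.

```typescript
// The Clerk variables the .env file above is expected to define.
const REQUIRED_ENV_KEYS = [
  "NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY",
  "CLERK_SECRET_KEY",
  "NEXT_PUBLIC_CLERK_SIGN_IN_URL",
  "NEXT_PUBLIC_CLERK_SIGN_UP_URL",
  "NEXT_PUBLIC_CLERK_AFTER_SIGN_IN_URL",
  "NEXT_PUBLIC_CLERK_AFTER_SIGN_UP_URL",
];

// Return the names of any required keys missing from an env object,
// e.g. missingEnvKeys(process.env).
function missingEnvKeys(env: Record<string, string | undefined>): string[] {
  return REQUIRED_ENV_KEYS.filter((key) => !env[key]);
}
```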
> [!NOTE]
> **Contributing Guidelines**
Open a terminal and run the following git command:

```sh
git clone "COPIED_URL"
```

e.g. `git clone https://github.com/vedanti-u/db.ai.git`

Then move into the project directory and install dependencies:

```sh
cd sitegenie
npm install
```
Now create a branch using the `git checkout` command:

```sh
git checkout -b new-branch-name
```

e.g. `git checkout -b llm-prompt-support`

Name your branch after the feature you are working on. For example, if you want to add more LLM prompt support, name your branch `llm-prompt-support` (follow this naming convention, i.e. use "-" between words).
Create a `.env` file in the same format as shown in the installation steps above.
Stage your changes:

```sh
git add filename.md
```

where `filename.md` is the file you have modified or created. If you are looking to stage all the files you have modified in a particular directory, you can do so with:

```sh
git add .
```

Alternatively, you can run `git add --all` to stage all changed files.
Commit your changes, then push your branch:

```sh
git push --set-upstream origin your-branch-name
```

e.g. `git push --set-upstream origin optimise-binding`

Open a pull request; your branch will be merged after code review.