Welcome to Kafkamoon, a Kafka management application. This project demonstrates integration with Kafka APIs as part of a hiring test.
> [!NOTE]
> This README contains information on how to run the Kafkamoon application locally using Docker or Kubernetes. If you want to see all the information about the decisions made in this project, see here.
Before you begin, make sure you have the following tools installed:
To run the project locally, follow these steps:
```shell
git clone [email protected]:mcruzdev/kafkamoon.git
cd kafkamoon-api
make buildAll
docker-compose --profile local up -d
```
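Once the containers are starting, you may want to wait until the API actually answers before interacting with it. The sketch below is an assumption, not part of the project: the health URL and port are placeholders to adjust for your setup.

```shell
#!/bin/sh
# wait_for N CMD...: retry CMD up to N times, one second apart,
# returning 0 as soon as CMD succeeds and 1 if it never does.
wait_for() {
  attempts=$1; shift
  i=0
  while [ "$i" -lt "$attempts" ]; do
    if "$@" >/dev/null 2>&1; then
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  return 1
}

# Example usage (hypothetical endpoint -- adjust to your local setup):
# wait_for 30 curl -fsS http://localhost:8080/actuator/health
```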
Running with the `local` profile, you will have:
> [!NOTE]
> If you want to run only the infrastructure (Kafka) and the documentation, run the following command:

```shell
docker-compose --profile dev up -d
```
After the application is running, you can interact with the following resources:
To run the project locally on Kubernetes, follow these steps:
```shell
kind create cluster --name local --config=kind/kind-cluster.yaml
kubectl cluster-info --context kind-local
make helm
```
This installation contains:
> [!IMPORTANT]
> Keycloak is only deployed in production; see here.
Accessing Grafana:

```shell
kubectl get secret kafkamoon-grafana-operator-grafana-admin-credentials -o jsonpath="{.data['GF_SECURITY_ADMIN_PASSWORD']}" | base64 --decode
```

The output should look something like this:

```
abc123_d==%
```

> [!IMPORTANT]
> Remove the last character (`%`) from the password.
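The trailing `%` is not part of the password: it is the shell (zsh) marking output that ends without a newline, which is exactly what `base64 --decode` produces here. A quick round trip with the sample value above shows the real password:

```shell
# Encode the sample password, then decode it the way the kubectl pipeline
# above does; the decoded value contains no trailing '%'.
encoded=$(printf 'abc123_d==' | base64)
printf '%s' "$encoded" | base64 --decode   # prints: abc123_d==
# Appending an explicit newline avoids the '%' marker entirely:
printf '%s' "$encoded" | base64 --decode; echo
```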
Run the following `port-forward` command:

```shell
kubectl port-forward svc/kafkamoon-grafana-operator-grafana-service 8888:3000
```
Access Grafana through this URL.
> [!IMPORTANT]
> The username is `admin`.
Accessing the application:

Run the following `port-forward` command:

```shell
kubectl port-forward svc/kafkamoon-api 8080:80
```
First of all, you need your AWS credentials (`~/.aws/credentials`) configured locally. Run the following command:

```shell
aws configure
```
Create a bucket to store the `terraform.tfstate` file.
Configure the created bucket in the `main.tf` file.
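For reference, the stanza `main.tf` typically needs is a `backend "s3"` block like the sketch below; the bucket name, key, and region are placeholders, not the project's actual values.

```shell
# Write an example S3 backend configuration (placeholder values only)
# so you can compare it against the block expected in main.tf.
cat > /tmp/backend-example.tf <<'EOF'
terraform {
  backend "s3" {
    bucket = "my-terraform-state-bucket" # the bucket you created above
    key    = "kafkamoon/terraform.tfstate"
    region = "us-east-1"
  }
}
EOF
cat /tmp/backend-example.tf
```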
Go to the `terraform-gitops` directory:

```shell
cd terraform-gitops
```
```shell
terraform init
```
```shell
terraform plan
```
Check the `terraform plan` output.
```shell
terraform apply --auto-approve
```
```shell
aws eks update-kubeconfig --name <cluster_name> --region <region>
```
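As an illustration of substituting the placeholders, assuming a hypothetical cluster named `kafkamoon-eks` in `us-east-1` (these are not the project's real values):

```shell
# Hypothetical values -- replace with your real cluster name and region.
CLUSTER_NAME=kafkamoon-eks
REGION=us-east-1
echo "aws eks update-kubeconfig --name $CLUSTER_NAME --region $REGION"
```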
The Kafka application creates a PVC, so the following configuration is necessary to grant the required permissions.
See here how to configure the Amazon EBS CSI driver for EKS.
```shell
make helmUpdate && make helmPkg && make helmInstall
```