Slides: Knative-Tekton-OSSNA.pdf
Last Update: 2020/10/12
ibmcloud login -a cloud.ibm.com -r <REGION> -g <IAM_RESOURCE_GROUP>
ibmcloud ks cluster config -c mycluster
kubectl version --short
v1.13.0
Check your minikube version; this prints the current and latest version numbers:
minikube update-check
v1.19.2
If you already have a minikube cluster with a different configuration, you need to delete it for the new configuration to take effect, or create a new profile.
minikube delete
minikube config set cpus 2
minikube config set memory 2048
minikube config set kubernetes-version v1.19.2
minikube start
Verify the versions of kubectl and the cluster, and that you can connect to your cluster.
kubectl version --short
This setup was tested with kind v0.8.1; you can verify your version with
kind --version
Create a kind cluster that exposes port 80 on the host, to be used later by the Knative Kourier ingress. To use a different version of Kubernetes, check the image digest to use from the kind release page.
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  image: kindest/node:v1.18.2@sha256:7b27a6d0f2517ff88ba444025beae41491b016bc6af573ba467b70c5e8e0d85f
  extraPortMappings:
  - containerPort: 31080 # expose port 31080 of the node to port 80 on the host, later to be used by the kourier ingress
    hostPort: 80
kind create cluster --name knative --config kind/clusterconfig.yaml
Verify that kubectl can reach the cluster API server and that you can connect to your cluster.
kubectl cluster-info --context kind-knative
Make sure you have the following CLI tools installed: kubectl, kn, and tkn.
Set the environment variables REGISTRY_SERVER, REGISTRY_NAMESPACE, and REGISTRY_PASSWORD. The REGISTRY_NAMESPACE will most likely be your Docker Hub username. For Docker Hub, use docker.io as the value for REGISTRY_SERVER.
REGISTRY_SERVER='docker.io'
REGISTRY_NAMESPACE='REPLACEME_DOCKER_USERNAME_VALUE'
REGISTRY_PASSWORD='REPLACEME_DOCKER_PASSWORD'
cp .template.env .env
# edit the file .env with variables and credentials, then source the file
source .env
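As a quick sanity check, here is what sourcing an env file does, sketched with hypothetical values written to /tmp so it does not touch your real .env (and never commit the real .env to git):

```shell
# Hypothetical values; your real .env holds your actual credentials.
cat > /tmp/demo.env <<'EOF'
REGISTRY_SERVER='docker.io'
REGISTRY_NAMESPACE='myuser'
EOF
# "source" loads the variables into the current shell session.
source /tmp/demo.env
echo "$REGISTRY_SERVER/$REGISTRY_NAMESPACE"
```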
Set the environment variable GIT_REPO_URL to the URL of your fork, not mine.
GIT_REPO_URL='https://github.com/REPLACEME/knative-tekton'
git clone $GIT_REPO_URL
cd knative-tekton
cp .template.env .env
# edit the file .env with variables and credentials, then source the file
source .env
Install Knative Serving in namespace knative-serving
kubectl apply -f https://github.com/knative/serving/releases/download/v0.18.0/serving-crds.yaml
kubectl apply -f https://github.com/knative/serving/releases/download/v0.18.0/serving-core.yaml
kubectl wait deployment activator autoscaler controller webhook --for=condition=Available -n knative-serving --timeout=-1s
Install the Knative networking layer Kourier in the namespaces kourier-system and knative-serving
kubectl apply -f https://github.com/knative/net-kourier/releases/download/v0.18.0/kourier.yaml
kubectl wait deployment 3scale-kourier-gateway --for=condition=Available -n kourier-system --timeout=-1s
kubectl wait deployment 3scale-kourier-control --for=condition=Available -n knative-serving --timeout=-1s
Set the environment variable EXTERNAL_IP to the external IP address of the worker node.
If using minikube:
EXTERNAL_IP=$(minikube ip)
echo EXTERNAL_IP=$EXTERNAL_IP
If using kind:
EXTERNAL_IP="127.0.0.1"
If using IBM Kubernetes:
EXTERNAL_IP=$(kubectl get nodes -o jsonpath='{.items[0].status.addresses[?(@.type=="ExternalIP")].address}')
Verify the value
echo EXTERNAL_IP=$EXTERNAL_IP
Set the environment variable KNATIVE_DOMAIN to the DNS domain, using nip.io
KNATIVE_DOMAIN="$EXTERNAL_IP.nip.io"
echo KNATIVE_DOMAIN=$KNATIVE_DOMAIN
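nip.io is a wildcard DNS service: any hostname of the form <name>.<ip>.nip.io resolves back to <ip>, so no real DNS setup is needed. A quick sketch with a hypothetical IP:

```shell
# Hypothetical worker-node IP; with minikube this would be $(minikube ip).
EXTERNAL_IP="192.168.49.2"
# Any host under this domain resolves back to the IP itself.
KNATIVE_DOMAIN="$EXTERNAL_IP.nip.io"
echo "$KNATIVE_DOMAIN"
```

Any host under this domain, such as hello.192.168.49.2.nip.io, resolves to 192.168.49.2.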
Double-check that DNS is resolving
dig $KNATIVE_DOMAIN
Configure DNS for Knative Serving
kubectl patch configmap -n knative-serving config-domain -p "{\"data\": {\"$KNATIVE_DOMAIN\": \"\"}}"
Configure Kourier to listen on http port 80 on the node. If you are using kind, create a NodePort Service (node port 31080 was mapped to host port 80 in the kind cluster config):
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: kourier-ingress
  namespace: kourier-system
  labels:
    networking.knative.dev/ingress-provider: kourier
spec:
  type: NodePort
  selector:
    app: 3scale-kourier-gateway
  ports:
  - name: http2
    nodePort: 31080
    port: 80
    targetPort: 8080
EOF
If you are using minikube or IBM Kubernetes, expose Kourier using the node's external IP instead:
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: kourier-ingress
  namespace: kourier-system
  labels:
    networking.knative.dev/ingress-provider: kourier
spec:
  selector:
    app: 3scale-kourier-gateway
  ports:
  - name: http2
    port: 80
    targetPort: 8080
  externalIPs:
  - $EXTERNAL_IP
EOF
Configure Knative to use Kourier
kubectl patch configmap/config-network \
--namespace knative-serving \
--type merge \
--patch '{"data":{"ingress.class":"kourier.ingress.networking.knative.dev"}}'
Verify that Knative is installed properly: all pods should be in the Running state and the kourier-ingress service should be configured.
kubectl get pods -n knative-serving
kubectl get pods -n kourier-system
kubectl get svc -n kourier-system kourier-ingress
Set the environment variable SUB_DOMAIN to <namespace>.<domainname>, combining the Kubernetes namespace with the domain name; this way we can use any Kubernetes namespace, not just default.
CURRENT_CTX=$(kubectl config current-context)
CURRENT_NS=$(kubectl config view -o=jsonpath="{.contexts[?(@.name==\"${CURRENT_CTX}\")].context.namespace}")
if [[ -z "${CURRENT_NS}" ]]; then CURRENT_NS="default"; fi
SUB_DOMAIN="$CURRENT_NS.$KNATIVE_DOMAIN"
echo "SUB_DOMAIN=$SUB_DOMAIN"
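The resulting hostname scheme is <service>.<namespace>.<domain>. A sketch with hypothetical values:

```shell
# Hypothetical namespace and domain; a Knative Service named "hello"
# in namespace "default" gets the hostname printed below.
CURRENT_NS="default"
KNATIVE_DOMAIN="192.168.49.2.nip.io"
SUB_DOMAIN="$CURRENT_NS.$KNATIVE_DOMAIN"
echo "hello.$SUB_DOMAIN"
```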
Using the Knative CLI kn, deploy an application using a container image:
kn service create hello --image gcr.io/knative-samples/helloworld-go --autoscale-window 15s
You can set a lower window; the service is scaled to zero if no request was received during that time, for example --autoscale-window 10s.
You can list your service
kn service list hello
Use curl to invoke the Application
curl http://hello.$SUB_DOMAIN
It should print
Hello World!
You can watch the pods and see how they scale down to zero after http traffic stops to the url
kubectl get pod -l serving.knative.dev/service=hello -w
Output should look like this after a few seconds when http traffic stops:
NAME READY STATUS
hello-r4vz7-deployment-c5d4b88f7-ks95l 2/2 Running
hello-r4vz7-deployment-c5d4b88f7-ks95l 2/2 Terminating
hello-r4vz7-deployment-c5d4b88f7-ks95l 1/2 Terminating
hello-r4vz7-deployment-c5d4b88f7-ks95l 0/2 Terminating
Try to access the url again, and you will see the new pods running again.
NAME READY STATUS
hello-r4vz7-deployment-c5d4b88f7-rr8cd 0/2 Pending
hello-r4vz7-deployment-c5d4b88f7-rr8cd 0/2 ContainerCreating
hello-r4vz7-deployment-c5d4b88f7-rr8cd 1/2 Running
hello-r4vz7-deployment-c5d4b88f7-rr8cd 2/2 Running
Some people call this Serverless
Update the service hello by setting the environment variable TARGET and naming the revision hello-v1:
kn service update hello --env TARGET="World from v1" --revision-name "hello-v1"
curl http://hello.$SUB_DOMAIN
It should print
Hello World from v1!
Update the service hello by changing the environment variable TARGET, sending 25% of the traffic to the new revision hello-v2 and leaving 75% of the traffic on hello-v1:
kn service update hello \
--env TARGET="Knative from v2" \
--revision-name="hello-v2" \
--traffic hello-v1=75,hello-v2=25
Describe the service to see the traffic split details
kn service describe hello
Should print this
Name: hello
Age: 6m
URL: http://hello.$SUB_DOMAIN
Revisions:
25% hello-v2 (current @latest) [3] (27s)
Image: gcr.io/knative-samples/helloworld-go (pinned to 5ea96b)
75% hello-v1 [2] (4m)
Image: gcr.io/knative-samples/helloworld-go (pinned to 5ea96b)
Invoke the service using a while loop; you will see the message Hello Knative from v2 about 25% of the time:
while true; do
curl http://hello.$SUB_DOMAIN
sleep 0.5
done
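To eyeball the 75/25 split, capture a batch of responses and count them. The snippet below uses simulated responses so it runs without a cluster; against the live service you would fill responses with repeated curl calls as shown in the comment:

```shell
# Simulated responses (hypothetical); with a live cluster use:
#   responses=$(for i in $(seq 20); do curl -s http://hello.$SUB_DOMAIN; done)
responses='Hello World from v1!
Hello Knative from v2!
Hello World from v1!
Hello World from v1!'
# Count how many responses came from each revision.
v1_count=$(echo "$responses" | grep -c "v1")
v2_count=$(echo "$responses" | grep -c "v2")
echo "v1=$v1_count v2=$v2_count"
```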
Should print this
Hello World from v1!
Hello Knative from v2!
Hello World from v1!
Hello World from v1!
Update the service again, this time dark-launching the new version hello-v3 on a specific URL; zero traffic will go to this version from the main URL of the service:
kn service update hello \
--env TARGET="OSS NA 2020 from v3" \
--revision-name="hello-v3" \
--traffic hello-v1=75,hello-v2=25,hello-v3=0
Describe the service to see the traffic split details; v3 doesn't get any traffic
kn service describe hello
Should print this
Revisions:
+ hello-v3 (current @latest) [4] (1m)
Image: gcr.io/knative-samples/helloworld-go (pinned to 5ea96b)
25% hello-v2 [3] (6m)
Image: gcr.io/knative-samples/helloworld-go (pinned to 5ea96b)
75% hello-v1 [2] (7m)
Image: gcr.io/knative-samples/helloworld-go (pinned to 5ea96b)
The revision hello-v3 is deployed with 0% traffic and is not yet accessible by hostname routing. Update this revision with a tag to create a custom hostname so you can access the revision for testing/debugging, then invoke it directly.
Tag the revision hello-v3
kn service update hello --tag hello-v3=v3
curl http://v3-hello.$SUB_DOMAIN
It should print this
Hello OSS NA 2020 from v3!
We are happy with the dark-launched version of the application; let's make it live for 100% of the users on the default URL
kn service update hello --traffic @latest=100
List the revisions to see the traffic assigned; hello-v3 now gets 100% of the traffic
kn revision ls
Should print this
NAME SERVICE TRAFFIC TAGS GENERATION AGE CONDITIONS READY REASON
hello-v3 hello 100% v3 4 9m9s 3 OK / 4 True
hello-v2 hello 3 10m 3 OK / 4 True
hello-v1 hello 2 11m 3 OK / 4 True
If you invoke the service in a loop, you will see that 100% of the traffic is directed to revision hello-v3 of our application
while true; do
curl http://hello.$SUB_DOMAIN
sleep 0.5
done
Should print this
Hello OSS NA 2020 from v3!
Hello OSS NA 2020 from v3!
Hello OSS NA 2020 from v3!
Hello OSS NA 2020 from v3!
By using tags, the custom URLs with the tag prefix remain available in case you want to access an old revision of the application. Tag the old revisions so you can access them directly.
kn service update hello --tag hello-v1=v1 --tag hello-v2=v2
curl http://v1-hello.$SUB_DOMAIN
curl http://v2-hello.$SUB_DOMAIN
curl http://v3-hello.$SUB_DOMAIN
It should print
Hello World from v1!
Hello Knative from v2!
Hello OSS NA 2020 from v3!
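The tag-based hostnames follow the pattern <tag>-<service>.<subdomain>; a sketch with a hypothetical domain:

```shell
# Tag "v1" on service "hello" yields the prefixed hostname below.
TAG="v1"
SERVICE="hello"
SUB_DOMAIN="default.192.168.49.2.nip.io"
echo "http://$TAG-$SERVICE.$SUB_DOMAIN"
```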
Now that you have your service configured and deployed, you may want to reproduce it using a Kubernetes YAML manifest in a different namespace or cluster. You can define your Knative service with the following YAML, which you can generate with the command kn service export.
---
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
spec:
  template:
    metadata:
      name: hello-v1
    spec:
      containers:
      - image: gcr.io/knative-samples/helloworld-go
        env:
        - name: TARGET
          value: World from v1
---
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
spec:
  template:
    metadata:
      name: hello-v2
    spec:
      containers:
      - image: gcr.io/knative-samples/helloworld-go
        env:
        - name: TARGET
          value: Knative from v2
---
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
spec:
  template:
    metadata:
      name: hello-v3
    spec:
      containers:
      - image: gcr.io/knative-samples/helloworld-go
        env:
        - name: TARGET
          value: OSS NA 2020 from v3
  traffic:
  - latestRevision: false
    percent: 0
    revisionName: hello-v1
    tag: v1
  - latestRevision: false
    percent: 0
    revisionName: hello-v2
    tag: v2
  - latestRevision: true
    percent: 100
    tag: v3
If you want to deploy using YAML, delete the application with kn and redeploy with kubectl
kn service delete hello
kubectl apply -f knative/v1.yaml
kubectl wait ksvc hello --timeout=-1s --for=condition=Ready
kubectl apply -f knative/v2.yaml
kubectl wait ksvc hello --timeout=-1s --for=condition=Ready
kubectl apply -f knative/v3.yaml
kubectl wait ksvc hello --timeout=-1s --for=condition=Ready
Try the service again
while true; do
curl http://hello.$SUB_DOMAIN
sleep 0.5
done
Delete the application and all its revisions
kn service delete hello
Install Tekton Pipelines in the namespace tekton-pipelines
kubectl apply -f https://github.com/tektoncd/pipeline/releases/download/v0.17.0/release.yaml
kubectl wait deployment tekton-pipelines-controller tekton-pipelines-webhook --for=condition=Available -n tekton-pipelines
Install the Tekton Dashboard in the namespace tekton-pipelines
kubectl apply -f https://github.com/tektoncd/dashboard/releases/download/v0.10.0/tekton-dashboard-release.yaml
kubectl wait deployment tekton-dashboard --for=condition=Available -n tekton-pipelines
Expose the Tekton Dashboard using a Knative Ingress and the KNATIVE_DOMAIN:
cat <<EOF | kubectl apply -f -
apiVersion: networking.internal.knative.dev/v1alpha1
kind: Ingress
metadata:
  name: tekton-dashboard
  namespace: tekton-pipelines
  annotations:
    networking.knative.dev/ingress.class: kourier.ingress.networking.knative.dev
spec:
  rules:
  - hosts:
    - dashboard.tekton-pipelines.$KNATIVE_DOMAIN
    http:
      paths:
      - splits:
        - appendHeaders: {}
          serviceName: tekton-dashboard
          serviceNamespace: tekton-pipelines
          servicePort: 9097
    visibility: ExternalIP
EOF
Set the environment variable TEKTON_DASHBOARD_URL with the URL to access the Dashboard
TEKTON_DASHBOARD_URL=http://dashboard.tekton-pipelines.$KNATIVE_DOMAIN
echo TEKTON_DASHBOARD_URL=$TEKTON_DASHBOARD_URL
Create the registry secret regcred to be used by the ServiceAccount pipeline. Make sure you set up your container registry credentials as environment variables; check out the Setup Container Registry part of the Setup Environment section of this tutorial. The following commands will print your credentials, so make sure no one is looking over your shoulder; the printed command is what you need to run.
echo ""
echo kubectl create secret docker-registry regcred \
--docker-server=\'${REGISTRY_SERVER}\' \
--docker-username=\'${REGISTRY_NAMESPACE}\' \
--docker-password=\'${REGISTRY_PASSWORD}\'
echo "Run the above command manually ^^ this avoids problems with certain characters in your password on the shell"
NOTE: If your password has characters that are interpreted by the shell, do NOT use environment variables; explicitly enter your values in the command, wrapped in single quotes (').
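A quick illustration of why single quotes matter (hypothetical password): inside single quotes the shell treats $, !, and backticks literally, whereas inside double quotes $$ would expand to the shell's process ID.

```shell
# Hypothetical password containing shell-special characters.
REGISTRY_PASSWORD='pa$$w0rd!'
# Quoted variable expansion preserves the value exactly as assigned.
echo "$REGISTRY_PASSWORD"
```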
Verify that the secret regcred was created
kubectl describe secret regcred
Create a ServiceAccount pipeline that contains the secret regcred we just created
apiVersion: v1
kind: ServiceAccount
metadata:
  name: pipeline
secrets:
- name: regcred
Run the following command with the provided YAML
kubectl apply -f tekton/sa.yaml
Grant access on the namespace default to the ServiceAccount pipeline. If you are using a namespace other than default, edit the file tekton/rbac.yaml and provide the namespace where to create the Role and the RoleBinding; for more info check out the RBAC docs. Run the following command to grant access to the ServiceAccount pipeline:
cat tekton/rbac.yaml | sed "s/namespace: default/namespace: $CURRENT_NS/g" | kubectl apply -f -
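The sed pass simply rewrites the hard-coded namespace in tekton/rbac.yaml before applying it; a self-contained sketch with a hypothetical namespace:

```shell
# Hypothetical target namespace.
CURRENT_NS="demo"
# Same substitution the pipeline command performs on tekton/rbac.yaml.
out=$(echo "namespace: default" | sed "s/namespace: default/namespace: $CURRENT_NS/g")
echo "$out"
```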
In this repository we have a sample application; you can see the source code in ./nodejs/app.js. This application uses JavaScript to implement a web server, but you can use any language you want.
const app = require("express")()
const server = require("http").createServer(app)
const port = process.env.PORT || "8080"
const message = process.env.TARGET || 'Hello World'
app.get('/', (req, res) => res.send(message))
server.listen(port, function () {
console.log(`App listening on ${port}`)
});
I provided a Tekton Task that can download source code from git, build and push the Image to a registry.
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: build
spec:
  params:
  - name: repo-url
    description: The git repository url
  - name: revision
    description: The branch, tag, or git reference from the git repo-url location
    default: master
  - name: image
    description: "The location where to push the image in the form of <server>/<namespace>/<repository>:<tag>"
  - name: CONTEXT
    description: Path to the directory to use as context.
    default: .
  - name: BUILDER_IMAGE
    description: The location of the buildah builder image.
    default: quay.io/buildah/stable:v1.14.8
  - name: STORAGE_DRIVER
    description: Set buildah storage driver
    default: overlay
  - name: DOCKERFILE
    description: Path to the Dockerfile to build.
    default: ./Dockerfile
  - name: TLSVERIFY
    description: Verify the TLS on the registry endpoint (for push/pull to a non-TLS registry)
    default: "false"
  - name: FORMAT
    description: The format of the built container, oci or docker
    default: "oci"
  steps:
  - name: git-clone
    image: alpine/git
    script: |
      git clone $(params.repo-url) /source
      cd /source
      git checkout $(params.revision)
    volumeMounts:
    - name: source
      mountPath: /source
  - name: build-image
    image: $(params.BUILDER_IMAGE)
    workingDir: /source
    script: |
      echo "Building Image $(params.image)"
      buildah --storage-driver=$(params.STORAGE_DRIVER) bud --format=$(params.FORMAT) --tls-verify=$(params.TLSVERIFY) -f $(params.DOCKERFILE) -t $(params.image) $(params.CONTEXT)
      echo "Pushing Image $(params.image)"
      buildah --storage-driver=$(params.STORAGE_DRIVER) push --tls-verify=$(params.TLSVERIFY) --digestfile ./image-digest $(params.image) docker://$(params.image)
    securityContext:
      privileged: true
    volumeMounts:
    - name: varlibcontainers
      mountPath: /var/lib/containers
    - name: source
      mountPath: /source
  volumes:
  - name: varlibcontainers
    emptyDir: {}
  - name: source
    emptyDir: {}
Install the provided task build like this.
kubectl apply -f tekton/task-build.yaml
You can list the task that we just created using the tkn
CLI
tkn task ls
We can also get more details about the build Task using tkn task describe
tkn task describe build
Let's use the Tekton CLI to test our build Task. You need to pass the ServiceAccount pipeline to be used to run the Task, the GitHub URL to your fork (or this repository), and the directory within the repository where the application is located, in our case nodejs. The repository image name is knative-tekton.
tkn task start build --showlog \
-p repo-url=${GIT_REPO_URL} \
-p image=${REGISTRY_SERVER}/${REGISTRY_NAMESPACE}/knative-tekton \
-p CONTEXT=nodejs \
-s pipeline
You can check out the container registry and see that the image was pushed to the repository a minute ago; the following should return status code 200.
curl -s -o /dev/null -w "%{http_code}\n" https://index.$REGISTRY_SERVER/v1/repositories/$REGISTRY_NAMESPACE/knative-tekton/tags/latest
I provided a Deploy Tekton Task that can run kubectl
to deploy the Knative Application using a YAML manifest.
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: deploy
spec:
  params:
  - name: repo-url
    description: The git repository url
  - name: revision
    description: The branch, tag, or git reference from the git repo-url location
    default: master
  - name: dir
    description: Path to the directory where the yaml manifest is located.
    default: .
  - name: yaml
    description: Name of the yaml file to apply.
    default: ""
  - name: image
    description: Path to the container image
    default: ""
  - name: KUBECTL_IMAGE
    description: The location of the kubectl image.
    default: docker.io/csantanapr/kubectl
  steps:
  - name: git-clone
    image: alpine/git
    script: |
      git clone $(params.repo-url) /source
      cd /source
      git checkout $(params.revision)
    volumeMounts:
    - name: source
      mountPath: /source
  - name: kubectl-apply
    image: $(params.KUBECTL_IMAGE)
    workingDir: /source
    script: |
      if [ "$(params.image)" != "" ] && [ "$(params.yaml)" != "" ]; then
        yq w -i $(params.dir)/$(params.yaml) "spec.template.spec.containers[0].image" "$(params.image)"
        cat $(params.dir)/$(params.yaml)
      fi
      kubectl apply -f $(params.dir)/$(params.yaml)
    volumeMounts:
    - name: source
      mountPath: /source
  volumes:
  - name: source
    emptyDir: {}
Install the provided task deploy like this.
kubectl apply -f tekton/task-deploy.yaml
You can list the task that we just created using the tkn
CLI
tkn task ls
We can also get more details about the deploy Task using tkn task describe
tkn task describe deploy
I provided a YAML manifest that defines our Knative Application in knative/service.yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: demo
spec:
  template:
    spec:
      containers:
      - image: docker.io/csantanapr/knative-tekton
        imagePullPolicy: Always
        env:
        - name: TARGET
          value: Welcome to the Knative Meetup
Let's use the Tekton CLI to test our deploy Task. You need to pass the ServiceAccount pipeline to be used to run the Task, the GitHub URL to your fork (or this repository), and the directory and file name within the repository where the application yaml manifest is located, in our case knative and service.yaml.
tkn task start deploy --showlog \
-p image=${REGISTRY_SERVER}/${REGISTRY_NAMESPACE}/knative-tekton \
-p repo-url=${GIT_REPO_URL} \
-p dir=knative \
-p yaml=service.yaml \
-s pipeline
You can check out that the Knative Application was deployed
kn service list demo
If we want to build the application image and then deploy the application, we can run the Tasks build and deploy by defining a Pipeline that contains the two Tasks. Deploy the Pipeline build-deploy.
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: build-deploy
spec:
  params:
  - name: repo-url
    default: https://github.com/csantanapr/knative-tekton
  - name: revision
    default: master
  - name: image
  - name: image-tag
    default: latest
  - name: CONTEXT
    default: nodejs
  tasks:
  - name: build
    taskRef:
      name: build
    params:
    - name: image
      value: $(params.image):$(params.image-tag)
    - name: repo-url
      value: $(params.repo-url)
    - name: revision
      value: $(params.revision)
    - name: CONTEXT
      value: $(params.CONTEXT)
  - name: deploy
    runAfter: [build]
    taskRef:
      name: deploy
    params:
    - name: image
      value: $(params.image):$(params.image-tag)
    - name: repo-url
      value: $(params.repo-url)
    - name: revision
      value: $(params.revision)
    - name: dir
      value: knative
    - name: yaml
      value: service.yaml
Install the Pipeline with this command
kubectl apply -f tekton/pipeline-build-deploy.yaml
You can list the pipeline that we just created using the tkn
CLI
tkn pipeline ls
We can also get more details about the build-deploy Pipeline using tkn pipeline describe
tkn pipeline describe build-deploy
Let's use the Tekton CLI to test our build-deploy Pipeline. You need to pass the ServiceAccount pipeline to be used to run the Tasks, and the GitHub URL to your fork (or this repository). You will also pass the image location, used both to push to the registry and for Kubernetes to pull the image for the Knative Application. The directory and filename for the Knative yaml are already specified in the Pipeline definition.
tkn pipeline start build-deploy --showlog \
-p image=${REGISTRY_SERVER}/${REGISTRY_NAMESPACE}/knative-tekton \
-p repo-url=${GIT_REPO_URL} \
-s pipeline
You can inspect the results and duration by describing the last PipelineRun
tkn pipelinerun describe --last
Check that the latest Knative Application revision is ready
kn service list demo
Run the Application using the url
curl http://demo.$SUB_DOMAIN
It should print
Welcome to OSS NA 2020
Install Tekton Triggers in the namespace tekton-pipelines
kubectl apply -f https://github.com/tektoncd/triggers/releases/download/v0.8.0/release.yaml
kubectl wait deployment tekton-triggers-controller tekton-triggers-webhook --for=condition=Available -n tekton-pipelines
When the webhook fires we want to start a Pipeline. We will use a TriggerTemplate to specify which Tekton resources should be created; in our case it creates a new PipelineRun, which starts a new Pipeline run.
apiVersion: triggers.tekton.dev/v1alpha1
kind: TriggerTemplate
metadata:
  name: build-deploy
spec:
  params:
  - name: gitrevision
    description: The git revision
    default: master
  - name: gitrepositoryurl
    description: The git repository url
  - name: gittruncatedsha
  - name: image
    default: REPLACE_IMAGE
  resourcetemplates:
  - apiVersion: tekton.dev/v1beta1
    kind: PipelineRun
    metadata:
      generateName: build-deploy-run-
    spec:
      serviceAccountName: pipeline
      pipelineRef:
        name: build-deploy
      params:
      - name: revision
        value: $(params.gitrevision)
      - name: repo-url
        value: $(params.gitrepositoryurl)
      - name: image-tag
        value: $(params.gittruncatedsha)
      - name: image
        value: $(params.image)
Install the TriggerTemplate
cat tekton/trigger-template.yaml | sed "s/REPLACE_IMAGE/$REGISTRY_SERVER\/$REGISTRY_NAMESPACE\/knative-tekton/g" | kubectl apply -f -
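The sed substitution swaps the REPLACE_IMAGE placeholder in the TriggerTemplate for your registry path before applying it; a sketch with hypothetical values:

```shell
# Hypothetical registry values.
REGISTRY_SERVER='docker.io'
REGISTRY_NAMESPACE='myuser'
# Same substitution the install command performs on tekton/trigger-template.yaml.
out=$(echo "default: REPLACE_IMAGE" \
  | sed "s/REPLACE_IMAGE/$REGISTRY_SERVER\/$REGISTRY_NAMESPACE\/knative-tekton/g")
echo "$out"
```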
When the webhook fires we want to extract information from the HTTP request sent by the Git server. We will use a TriggerBinding; the extracted information is what gets passed to the TriggerTemplate.
apiVersion: triggers.tekton.dev/v1alpha1
kind: TriggerBinding
metadata:
  name: build-deploy
spec:
  params:
  - name: gitrevision
    value: $(body.head_commit.id)
  - name: gitrepositoryurl
    value: $(body.repository.url)
  - name: gittruncatedsha
    value: $(body.extensions.truncated_sha)
Install the TriggerBinding
kubectl apply -f tekton/trigger-binding.yaml
To be able to handle the HTTP request sent by the GitHub webhook, we need a web server. Tekton provides a way to define such listeners, taking the TriggerBinding and the TriggerTemplate as specification. We can specify Interceptors to handle any customization; for example, I only want to start a new Pipeline when a push happens on the master branch.
apiVersion: triggers.tekton.dev/v1alpha1
kind: EventListener
metadata:
  name: cicd
spec:
  serviceAccountName: pipeline
  triggers:
  - name: cicd-trig
    bindings:
    - ref: build-deploy
    template:
      name: build-deploy
    interceptors:
    - cel:
        filter: "header.match('X-GitHub-Event', 'push') && body.ref == 'refs/heads/master'"
        overlays:
        - key: extensions.truncated_sha
          expression: "body.head_commit.id.truncate(7)"
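The truncate(7) overlay shortens the 40-character commit SHA into the short form used as the image tag; the shell equivalent (hypothetical SHA) is:

```shell
# Hypothetical full commit SHA from the webhook payload.
SHA="2f4d8b1c9a0e7f6d5c4b3a2918e7d6c5b4a39281"
# Same effect as the CEL expression body.head_commit.id.truncate(7).
SHORT_SHA="${SHA:0:7}"
echo "$SHORT_SHA"
```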
Install the Trigger EventListener
kubectl apply -f tekton/trigger-listener.yaml
The EventListener creates a deployment and a service; you can list both using this command
kubectl get deployments,eventlistener,svc -l eventlistener=cicd
cat <<EOF | kubectl apply -f -
apiVersion: networking.internal.knative.dev/v1alpha1
kind: Ingress
metadata:
  name: el-cicd
  namespace: $CURRENT_NS
  annotations:
    networking.knative.dev/ingress.class: kourier.ingress.networking.knative.dev
spec:
  rules:
  - hosts:
    - el-cicd.$CURRENT_NS.$KNATIVE_DOMAIN
    http:
      paths:
      - splits:
        - appendHeaders: {}
          serviceName: el-cicd
          serviceNamespace: $CURRENT_NS
          servicePort: 8080
    visibility: ExternalIP
EOF
Set the environment variable GIT_WEBHOOK_URL using CURRENT_NS and KNATIVE_DOMAIN
GIT_WEBHOOK_URL=http://el-cicd.$CURRENT_NS.$KNATIVE_DOMAIN
echo GIT_WEBHOOK_URL=$GIT_WEBHOOK_URL
WARNING: Take into account that this URL is insecure; it uses http and not https. This means you should not use this type of URL in real work environments; in that case you would need to expose the EventListener service over a secure https connection.
Copy the $GIT_WEBHOOK_URL value into the Payload URL field of the webhook configuration in your GitHub repository settings.
Update the application message to My First Serverless App @ OSS NA 2020 ! and push the change to the master branch. You can also simulate the webhook push event with curl:
curl -H "X-GitHub-Event:push" -d @tekton/hook.json $GIT_WEBHOOK_URL
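The simulated push works because the TriggerBinding and the CEL interceptor only read a few fields from the payload. A minimal equivalent payload could look like this (a sketch with a hypothetical commit SHA; tekton/hook.json in the repo is the full sample):

```shell
# Build a minimal GitHub push payload with just the fields this
# tutorial's TriggerBinding and interceptor read (hypothetical SHA).
cat > /tmp/hook-demo.json <<'EOF'
{
  "ref": "refs/heads/master",
  "head_commit": { "id": "2f4d8b1c9a0e7f6d5c4b3a2918e7d6c5b4a39281" },
  "repository": { "url": "https://github.com/csantanapr/knative-tekton" }
}
EOF
cat /tmp/hook-demo.json
```

Replaying it is the same curl call with -d @/tmp/hook-demo.json.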
tkn pipeline logs -f --last
tkn pipelinerun describe --last
kn service describe demo
curl http://demo.$SUB_DOMAIN
It should print
My First Serverless App @ OSS NA 2020 !