An example of a simple application using AWS services: CLAMS := BAMS-in-the-Cloud
GPL-3.0 License
A personal learning project using a connection to a legacy event management system written in COBOL (BAMS) as a way of illustrating serverless architectures using Go, Python, Fabric, Svelte and Terraform. CLAMS so far employs the following AWS services:
This is primarily a project for me to learn Go and to establish and understand patterns for writing service and unit tests. It was used as the basis for a workshop that I first ran for HacktionLab in Todmorden in June 2022.
In the project I also attempt to use best practices around:
To use CLAMS, get the API Gateway endpoint via the AWS Console; it's also displayed in the output of the deployment script (see below). There is an example Postman collection that you can use. The endpoints provided are:
To upload data to CLAMS from BAMS, please see the Uploader utility's README and the BAMS Documentation for the Home Screen's Upload to CLAMS functionality.
In the following test and deployment sections you'll need to create a pair of credentials. Log in to the AWS Console for the account you wish to use to deploy the application, go to IAM, and choose your user. Click the Security credentials tab and then the Create access key button. This creates an AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY pair for you. You'll need these shortly. Note that you can only have two access key pairs per IAM user, and once you accept you'll no longer be able to view the AWS_SECRET_ACCESS_KEY. For utmost security, delete these at the end of a session and recreate them at the next.
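A common way to supply these credentials to the commands in the sections below is via environment variables. The values here are AWS's documented placeholder examples, not real credentials:

```shell
# Placeholder credentials -- substitute the pair created in the IAM console
export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
export AWS_SECRET_ACCESS_KEY=wJalrXUtnFxK/K7MDENG/bPxRfiCYEXAMPLEKEY
```

Exported this way, they are picked up automatically by the AWS SDK and don't need to be prefixed onto every command.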
There are five AWS Lambda functions:
As an example of packages shared between multiple Lambdas, the Lambda functions all use the shared attendee and awscfg packages located in the same parent directory as the Lambdas themselves. These can be used in your own programs along the lines of:
package main

import (
	"fmt"
	"time"

	"github.com/mikebharris/CLAMS/functions/attendee"
)

func main() {
	a := attendee.Attendee{
		AuthCode:       "ABCDEF",
		Name:           "Frank Ostrowski",
		Email:          "[email protected]",
		Telephone:      "0101 0101 01010",
		NumberOfKids:   0,
		Diet:           "I eat BASIC code for lunch",
		Financials:     attendee.Financials{AmountToPay: 10, AmountPaid: 10, AmountDue: 0},
		ArrivalDay:     "Wednesday",
		NumberOfNights: 4,
		StayingLate:    "No",
		CreatedTime:    time.Now(),
	}
	fmt.Println(a)
}
The Terraform configuration files are in the terraform directory; the frontend is hastily built in Svelte; and there is a utility to upload the latest group of attendees to SQS, which can be run on the command line or called from within BAMS.
The database is deployed using Flyway (both to AWS and into a Docker instance for the service tests). The Flyway command that is run and the SQL migration (schema version) files can both be found in the repository.
To build the Lambdas, change to the service in the lambdas directory and type:
make build
To build the Lambda for the target AWS environment, which may have a different processor architecture from your local development, type:
make target
This is normally because, for example, you are developing on an Intel Mac but deploying to an ARM64 AWS Lambda environment.
There are integration tests (aka service tests) that use Gherkin syntax to test integration between the Lambda and other dependent AWS services. The tests make use of Docker containers to emulate the various services locally, and therefore you need Docker running.
To run the integration tests, change to the service in the lambdas directory and type:
make int-test
Alternatively, you can change to the integration-tests directory and type:
cd lambdas/processor/integration-tests
go test
There are unit tests that can be run, again by changing to the service in the functions directory and typing:
make unit-test
You can run both unit and integration tests for a given service with:
make test
Included in the repository is a Go program, pipeline.go, that helps you build and test the Lambdas and run Terraform commands. It takes the following parameters:
go run pipeline.go --help
Usage of pipeline:
-account-number uint
Account number of AWS deployment target
-confirm
For destructive operations this should be set to true rather than false
-environment string
Target environment = prod, nonprod, etc (default "nonprod")
-lambdas string
Which Lambda functions to test and/or build: <name-of-lambda> or all (default "all")
-stage string
Deployment stage: unit-test, build, int-test, init, plan, apply, destroy
The --stage parameter lets you run each distinct stage of the deployment process, either locally on your development machine or from the corresponding stage of a CI/CD pipeline:
The RDS database for CLAMS requires two SSM parameters to be set up in the AWS Parameter Store. Create these as /clams/{environment}/db/username and /clams/{environment}/db/password, replacing {environment} with your target environment, for example:
Both should ideally be of type SecureString, though it doesn't matter to the deployment scripts.
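For instance, assuming a nonprod target, the parameters can be created with the AWS CLI; the username and password values here are placeholders:

```shell
# Create the database credentials in the Parameter Store (placeholder values)
aws ssm put-parameter --name "/clams/nonprod/db/username" --type SecureString --value "clamsadmin"
aws ssm put-parameter --name "/clams/nonprod/db/password" --type SecureString --value "changeme"
```

Add --overwrite to either command to update an existing parameter.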
The Route53 record requires an SSL certificate to be created using Amazon Certificate Manager (ACM).
Each deployment stage is now described in detail.
This runs all the unit tests for all the Lambdas:
go run pipeline.go --stage=unit-test
Optionally you can unit test just a single Lambda by using the --lambdas flag on the command line:
go run pipeline.go --stage=unit-test --lambdas=processor
This builds all the Lambdas:
go run pipeline.go --stage=build
Optionally you can build just a single Lambda by using the --lambdas flag on the command line:
go run pipeline.go --stage=build --lambdas=processor
This runs all the integration tests for all the Lambdas:
go run pipeline.go --stage=int-test
Optionally you can run integration tests for just a single Lambda by using the --lambdas flag on the command line:
go run pipeline.go --stage=int-test --lambdas=processor
Run Terraform init:
AWS_ACCESS_KEY_ID=XXXX AWS_SECRET_ACCESS_KEY=YYYY go run pipeline.go --stage=init --account-number=123456789012 --environment=nonprod
Run Terraform plan:
AWS_ACCESS_KEY_ID=XXXX AWS_SECRET_ACCESS_KEY=YYYY go run pipeline.go --stage=plan --account-number=123456789012 --environment=nonprod
Run Terraform apply:
AWS_ACCESS_KEY_ID=XXXX AWS_SECRET_ACCESS_KEY=YYYY go run pipeline.go --stage=apply --account-number=123456789012 --environment=nonprod --confirm=true
Run Terraform destroy:
AWS_ACCESS_KEY_ID=XXXX AWS_SECRET_ACCESS_KEY=YYYY go run pipeline.go --stage=destroy --account-number=123456789012 --environment=nonprod --confirm=true
You may get an error similar to the following:
$ go run pipeline.go --stage=init --account-number=123456789012 --environment=prod
2024/01/16 16:30:06 error running Init: exit status 1
Error: Backend configuration changed
This is normally due to switching between environments and is caused by your local Terraform tfstate file being out of sync with the remote tfstate file in S3. You can resolve it by removing the directory terraform/.terraform and re-running the init process.
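In other words, from the repository root, using the same credentials and account number as in the deployment section:

```shell
# Remove the stale local backend state...
rm -rf terraform/.terraform
# ...then re-initialise against the remote tfstate in S3
AWS_ACCESS_KEY_ID=XXXX AWS_SECRET_ACCESS_KEY=YYYY go run pipeline.go --stage=init --account-number=123456789012 --environment=prod
```

Your existing infrastructure is untouched by this; only the local cache of the backend configuration is rebuilt.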