Examples and Tutorials of Event Sourcing in NodeJS
CC-BY-SA-4.0 License
Tutorial, practical samples and other resources about Event Sourcing in NodeJS. See also my similar repositories for .NET and JVM.
Event Sourcing is a design pattern in which results of business operations are stored as a series of events.
It is an alternative way to persist data. In contrast with state-oriented persistence that only keeps the latest version of the entity state, Event Sourcing stores each state change as a separate event.
Thanks to that, no business data is lost. Each operation results in an event stored in the database. That enables extended auditing and diagnostics capabilities (both technical and business-wise). What's more, as events contain the business context, they allow wide business analysis and reporting.
In this repository I'm showing different aspects and patterns around Event Sourcing, from basic to advanced practices.
Read more in my article:
Events represent facts in the past. They carry information about something that was accomplished. They should be named in the past tense, e.g. "user added", "order confirmed". Events are not directed to a specific recipient - they're broadcasted information. It's like telling a story at a party. We hope that someone listens to us, but we may quickly realise that no one is paying attention.
Events:
Read more in my articles:
Events are logically grouped into streams. In Event Sourcing, streams are the representation of the entities. All the entity's state mutations end up as persisted events. The entity state is retrieved by reading all the stream's events and applying them one by one in the order of appearance.
A stream should have a unique identifier representing the specific object. Each event has its own unique position within a stream. This position is usually represented by a numeric, incremental value. This number can be used to define the order of the events while retrieving the state. It can be also used to detect concurrency issues.
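To illustrate that last point, here is a minimal sketch of how the expected stream position can be used as an optimistic concurrency check when appending events. The envelope type, the function name and the in-memory representation are assumptions made only for this example, not part of the samples:
// hypothetical envelope type and append function, used only for illustration
type EventEnvelope = Readonly<{
  streamId: string;
  streamPosition: number;
  type: string;
  data: Record<string, unknown>;
}>;
function appendToStream(
  stream: EventEnvelope[],
  expectedStreamPosition: number,
  newEvents: Omit<EventEnvelope, "streamPosition">[]
): EventEnvelope[] {
  const currentPosition = stream.length;
  // someone else appended events in the meantime - reject the write
  if (currentPosition !== expectedStreamPosition) {
    throw new Error(
      `Concurrency conflict: expected position ${expectedStreamPosition}, but was ${currentPosition}`
    );
  }
  // assign consecutive, incremental positions to the new events
  return [
    ...stream,
    ...newEvents.map((event, index) => ({
      ...event,
      streamPosition: currentPosition + index + 1,
    })),
  ];
}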
Technically, events are messages.
They may be represented, e.g. in JSON, Binary or XML format. Besides the data, they usually contain metadata such as:
- correlation id,
- causation id,
- etc.

A sample event JSON can look like:
{
"id": "e44f813c-1a2f-4747-aed5-086805c6450e",
"type": "invoice-issued",
"streamId": "INV/2021/11/01",
"streamPosition": 1,
"timestamp": "2021-11-01T00:05:32.000Z",
"data": {
"issuedTo": {
"name": "Oscar the Grouch",
"address": "123 Sesame Street"
},
"amount": 34.12,
"number": "INV/2021/11/01",
"issuedAt": "2021-11-01T00:05:32.000Z"
},
"metadata": {
"correlationId": "1fecc92e-3197-4191-b929-bd306e1110a4",
"causationId": "c3cf07e8-9f2f-4c2d-a8e9-f8a612b4a7f1"
}
}
This structure could be translated directly into the TypeScript class. However, to make the code less redundant and ensure that all events follow the same convention, it's worth adding the base type. It could look as follows:
export type Event<
EventType extends string = string,
EventData extends Record<string, unknown> = Record<string, unknown>
> = Readonly<{
type: Readonly<EventType>;
data: Readonly<EventData>;
}>;
Several things are going on here:
1. Event type definition (EventType extends string = string). It's added to be able to define the alias for the event type. Thanks to that, we're getting a compiler check and IntelliSense support.
2. Event data definition (EventData extends Record<string, unknown> = Record<string, unknown>). It is the way of telling the TypeScript compiler that it may expect any type, but allows you to specify your own and get a proper type check.
3. Readonly<> constructs a type with all properties set as readonly. Syntax:
Readonly<{
type: EventType;
data: EventData;
}>;
is equal to:
{
readonly type: EventType;
readonly data: EventData;
};
I prefer the former as, in my opinion, it makes the type definition less cluttered.
We're also wrapping the EventType and EventData with Readonly<>. This is needed as Readonly<> only does a shallow type copy. It won't change the nested types' definitions. So:
Readonly<{
type: "invoice-issued";
data: {
number: string;
issuedBy: string;
issuedAt: Date;
};
}>;
is the equivalent of:
{
readonly type: 'invoice-issued';
readonly data: {
number: string;
issuedBy: string;
issuedAt: Date;
}
};
while we want to have:
{
readonly type: 'invoice-issued';
readonly data: {
readonly number: string;
readonly issuedBy: string;
readonly issuedAt: Date;
}
};
Wrapping EventType and EventData with Readonly<> does that for us and enables immutability.
Note: we still need to remember to wrap nested structures inside the event data into Readonly<>
to have all properties set as readonly
.
Having that, we can define the event, e.g. as:
// alias for event type
type INVOICE_ISSUED = "invoice-issued";
// person DTO used in issued by event data
type Person = Readonly<{
name: string;
address: string;
}>;
// event type definition
type InvoiceIssued = Event<
INVOICE_ISSUED,
{
issuedTo: Person;
amount: number;
number: string;
issuedAt: Date;
}
>;
then create it as:
const invoiceIssued: InvoiceIssued = {
type: "invoice-issued",
data: {
issuedTo: {
name: "Oscar the Grouch",
address: "123 Sesame Street",
},
amount: 34.12,
number: "INV/2021/11/01",
issuedAt: new Date(),
},
};
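The base Event type above only carries the type and data. If you also want the metadata mentioned earlier (correlation id, causation id), one option (an assumption made here, not part of the original sample) is to compose it on top of the base type:
// hypothetical extension, not part of the original sample
type EventMetadata = Readonly<{
  correlationId?: string;
  causationId?: string;
}>;
type EventWithMetadata<
  EventType extends string = string,
  EventData extends Record<string, unknown> = Record<string, unknown>
> = Event<EventType, EventData> &
  Readonly<{
    metadata?: EventMetadata;
  }>;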
In Event Sourcing, the state is stored in events. Events are logically grouped into streams. Streams can be thought of as the entities' representation. Traditionally (e.g. in relational or document approach), each entity is stored as a separate record.
| Id | IssuerName | IssuerAddress | Amount | Number | IssuedAt |
| --- | --- | --- | --- | --- | --- |
| e44f813c | Oscar the Grouch | 123 Sesame Street | 34.12 | INV/2021/11/01 | 2021-11-01 |
In Event Sourcing, the entity is stored as the series of events that happened for this specific object, e.g. InvoiceInitiated
, InvoiceIssued
, InvoiceSent
.
[
{
"id": "e44f813c-1a2f-4747-aed5-086805c6450e",
"type": "invoice-initiated",
"streamId": "INV/2021/11/01",
"streamPosition": 1,
"timestamp": "2021-11-01T00:05:32.000Z",
"data": {
"issuedTo": {
"name": "Oscar the Grouch",
"address": "123 Sesame Street"
},
"amount": 34.12,
"number": "INV/2021/11/01",
"initiatedAt": "2021-11-01T00:05:32.000Z"
}
},
{
"id": "5421d67d-d0fe-4c4c-b232-ff284810fb59",
"type": "invoice-issued",
"streamId": "INV/2021/11/01",
"streamPosition": 2,
"timestamp": "2021-11-01T00:11:32.000Z",
"data": {
"issuedBy": "Cookie Monster",
"issuedAt": "2021-11-01T00:11:32.000Z"
}
},
{
"id": "637cfe0f-ed38-4595-8b17-2534cc706abf",
"type": "invoice-sent",
"streamId": "INV/2021/11/01",
"streamPosition": 3,
"timestamp": "2021-11-01T00:12:01.000Z",
"data": {
"sentVia": "email",
"sentAt": "2021-11-01T00:12:01.000Z"
}
}
]
All of those events share the stream id ("streamId": "INV/2021/11/01") and have an incremented stream position.
We can conclude that in Event Sourcing an entity is represented by a stream, i.e. a sequence of events correlated by the stream id and ordered by the stream position.
To get the current state of an entity, we need to perform the stream aggregation process: we translate the set of events into a single entity by reading all the stream's events and applying them one by one on the entity state, in the order of appearance. This process is also called state rehydration.
For this process we'll use the reduce function. It executes a reducer function (that you can provide) on each array element, resulting in a single output value. TypeScript extends it with type guarantees: we'll keep the intermediate result typed as Partial<Invoice>, assuming that the first event (InvoiceInitiated) will provide all required fields. The other events will just do a partial update (InvoiceSent only changes the status and sets the sending method and date).
Having event types defined as:
type InvoiceInitiated = Event<
"invoice-initiated",
{
number: string;
amount: number;
issuedTo: Person;
initiatedAt: Date;
}
>;
type InvoiceIssued = Event<
"invoice-issued",
{
number: string;
issuedBy: string;
issuedAt: Date;
}
>;
type InvoiceSent = Event<
"invoice-sent",
{
number: string;
sentVia: InvoiceSendMethod;
sentAt: Date;
}
>;
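For the switch statement in the reduce example below to narrow the event types correctly, it's also handy to define a union of all invoice events (a small addition assumed here, not shown in the listing above):
type InvoiceEvent = InvoiceInitiated | InvoiceIssued | InvoiceSent;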
Entity as:
type Invoice = Readonly<{
number: string;
amount: number;
status: InvoiceStatus;
issuedTo: Person;
initiatedAt: Date;
issued?: Readonly<{
by?: string;
at?: Date;
}>;
sent?: Readonly<{
via?: InvoiceSendMethod;
at?: Date;
}>;
}>;
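The Invoice entity references InvoiceStatus and InvoiceSendMethod, which are not defined in the listings above. For completeness, they could look, for instance, like this (an assumption; the exact shape may differ):
enum InvoiceStatus {
  INITIATED = "INITIATED",
  ISSUED = "ISSUED",
  SENT = "SENT",
}
enum InvoiceSendMethod {
  EMAIL = "EMAIL",
  POST = "POST",
}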
We can rebuild the state with events using the reduce function:
const result = events.reduce<Partial<Invoice>>((currentState, event) => {
switch (event.type) {
case "invoice-initiated":
return {
number: event.data.number,
amount: event.data.amount,
status: InvoiceStatus.INITIATED,
issuedTo: event.data.issuedTo,
initiatedAt: event.data.initiatedAt,
};
case "invoice-issued": {
return {
...currentState,
status: InvoiceStatus.ISSUED,
issued: {
by: event.data.issuedBy,
at: event.data.issuedAt,
},
};
}
case "invoice-sent": {
return {
...currentState,
status: InvoiceStatus.SENT,
sent: {
via: event.data.sentVia,
at: event.data.sentAt,
},
};
}
default:
throw "Unexpected event type";
}
}, {});
The only thing left is to translate Partial<Invoice>
into properly typed Invoice
. We'll use a type guard for that:
function isInvoice(invoice: Partial<Invoice>): invoice is Invoice {
return (
!!invoice.number &&
!!invoice.amount &&
!!invoice.status &&
!!invoice.issuedTo &&
!!invoice.initiatedAt &&
(!invoice.issued || (!!invoice.issued.at && !!invoice.issued.by)) &&
(!invoice.sent || (!!invoice.sent.via && !!invoice.sent.at))
);
}
if (!isInvoice(result)) throw "Invoice state is not valid!";
const invoice: Invoice = result;
Thanks to that, we have a proper type definition. We can make the stream aggregation more generic and reusable:
export function aggregateStream<Aggregate, StreamEvents extends Event>(
events: StreamEvents[],
when: (
currentState: Partial<Aggregate>,
event: StreamEvents,
currentIndex: number,
allEvents: StreamEvents[]
) => Partial<Aggregate>,
check?: (state: Partial<Aggregate>) => state is Aggregate
): Aggregate {
const state = events.reduce<Partial<Aggregate>>(when, {});
if (!check) {
console.warn("No type check method was provided in the aggregate method");
return <Aggregate>state;
}
if (!check(state)) throw "Aggregate state is not valid";
return state;
}
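With that helper, the invoice aggregation can be expressed as a single call. A minimal usage sketch, assuming InvoiceEvent is the union of the invoice event types and when wraps the switch statement from the reduce example above:
// events: the ordered list of events read from the invoice stream
const invoice = aggregateStream<Invoice, InvoiceEvent>(
  events,
  when, // (currentState, event) => Partial<Invoice>, the switch from the reduce example
  isInvoice // type guard ensuring the final state is a complete Invoice
);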
See full sample: link.
Read more in my article:
Event Sourcing is not tied to any particular storage implementation. As long as it fulfils the assumptions, it can be implemented with any backing database (relational, document, etc.). The state has to be represented by an append-only log of events. The events are stored in chronological order, and new events are appended after the previous event. Event Stores are the category of databases explicitly designed for that purpose.
The simplest (dummy and in-memory) Event Store can be defined in TypeScript as:
class EventStore {
private events: { readonly streamId: string; readonly data: string }[] = [];
appendToStream(streamId: string, ...events: any[]): void {
const serialisedEvents = events.map((event) => {
return { streamId: streamId, data: JSON.stringify(event) };
});
this.events.push(...serialisedEvents);
}
readFromStream<T = any>(streamId: string): T[] {
return this.events
.filter((event) => event.streamId === streamId)
.map<T>((event) => JSON.parse(event.data));
}
}
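A quick usage sketch of the store above, reusing the invoiceIssued event created earlier (the stream id is just an example value):
const eventStore = new EventStore();
// append the previously created event to the invoice stream
eventStore.appendToStream("INV/2021/11/01", invoiceIssued);
// read the stream back (events are returned as plain, deserialised JSON objects)
const events = eventStore.readFromStream("INV/2021/11/01");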
In the further samples, I'll use EventStoreDB. It's the battle-tested OSS database created and maintained by the Event Sourcing authorities. It supports many dev environments via gRPC clients, including NodeJS.
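For a first taste of the API, here is a hedged sketch of appending and reading events with the official @eventstore/db-client gRPC package. The connection string and stream name are assumptions; check the EventStoreDB documentation for details:
import { EventStoreDBClient, jsonEvent } from "@eventstore/db-client";
const client = EventStoreDBClient.connectionString`esdb://localhost:2113?tls=false`;
async function appendAndRead() {
  // append a new event to the invoice stream
  await client.appendToStream(
    "invoice-INV/2021/11/01",
    jsonEvent({
      type: "invoice-issued",
      data: { number: "INV/2021/11/01", issuedBy: "Cookie Monster" },
    })
  );
  // read the whole stream back, in order of appearance
  for await (const { event } of client.readStream("invoice-INV/2021/11/01")) {
    console.log(event?.type, event?.data);
  }
}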
Read more in my article:
Read also more on the Event Sourcing and CQRS topics in my blog posts:
Event Sourcing is perceived as a complex pattern. Some believe that it's like Nessie: everyone's heard about it, but hardly anyone has seen it. In fact, Event Sourcing is a pretty practical and straightforward concept. It helps build predictable applications closer to business. Nowadays, storage is cheap, and information is priceless. In Event Sourcing, no data is lost.
The workshop aims to build the knowledge of the general concept and its related patterns for the participants. The acquired knowledge will allow for the conscious design of architectural solutions and the analysis of associated risks.
You can do the workshop as a self-paced kit. That should give you a good foundation for starting your journey with Event Sourcing and learning tools like EventStoreDB.
If you'd like to get full coverage with all the nuances during a private workshop, check the training page on my blog for more details or feel free to contact me via email.
Read also more in my article Introduction to Event Sourcing - Self Paced Kit.
Follow the instructions in the exercises folders.
Install Node.js - https://nodejs.org/en/download/. NVM is recommended.
Create project:
npm init -y
ExpressJS - Web Server for REST API.
npm i express
TypeScript - We'll be doing Type Driven Development. Install TypeScript together with the type definitions for Node.js and Express, plus TS Node:
npm i -D typescript @types/express @types/node ts-node
Optionally, you can also install TypeScript globally:
npm i -g typescript
Add a TypeScript build script to package.json:
{
"scripts": {
"build:ts": "tsc"
}
}
Having tsc installed globally, you can init the TypeScript config by running:
tsc --init
Then set up tsconfig.json, e.g. as:
{
"compilerOptions": {
"target": "es2020",
"module": "commonjs",
"outDir": "./dist",
"strict": true,
"strictNullChecks": true,
"noUnusedLocals": true,
"noImplicitReturns": true,
"esModuleInterop": true,
"skipLibCheck": true,
"forceConsistentCasingInFileNames": true
},
"include": ["./src"]
}
ESLint - We'd like to have static code analysis:
npx eslint --init
✔ How would you like to use ESLint? · style
✔ What type of modules does your project use? · esm
✔ Which framework does your project use? · none
✔ Does your project use TypeScript? · No / Yes
✔ Where does your code run? · node
✔ How would you like to define a style for your project? · guide
✔ Which style guide do you want to follow? · standard
✔ What format do you want your config file to be in? · JSON
Then install the required packages with npm:
npm i -D @typescript-eslint/eslint-plugin eslint-config-standard eslint eslint-plugin-import eslint-plugin-node eslint-plugin-promise @typescript-eslint/parser
The generated .eslintrc configuration will look similar to:
{
"env": {
"es2023": true,
"node": true
},
"extends": ["standard"],
"parser": "@typescript-eslint/parser",
"parserOptions": {
"ecmaVersion": 2023,
"sourceType": "module"
},
"plugins": ["@typescript-eslint"],
"rules": {}
}
Add also a .eslintignore file to exclude files that shouldn't be analysed:
/node_modules/*
# build artifacts
dist/*
coverage/*
# data definition files
**/*.d.ts
# custom definition files
/src/types/
Prettier, as we aim to write pretty code:
npm i -D prettier eslint-config-prettier eslint-plugin-prettier
Add the Prettier config file (e.g. .prettierrc.json):
{
"tabWidth": 2,
"singleQuote": true
}
Then update the .eslintrc config to integrate Prettier:
{
"env": {
"es2020": true,
"node": true
},
"extends": [
"plugin:@typescript-eslint/recommended", <-- updated
"prettier/@typescript-eslint", <-- added
"plugin:prettier/recommended" <-- added
],
"parser": "@typescript-eslint/parser",
"parserOptions": {
"ecmaVersion": 2023,
"sourceType": "module"
},
"plugins": [
"@typescript-eslint"
],
"rules": {
}
}
Define tasks for ESLint and Prettier in package.json:
{
"scripts": {
"lint": "npm run lint:eslint && npm run lint:prettier",
"lint:prettier": "prettier --check \"src/**/**/!(*.d).{ts,json,md}\"",
"lint:eslint": "eslint src/**/*.ts"
}
}
Optionally, add scripts to fix formatting issues automatically:
{
"scripts": {
"lint:eslint": "eslint src/**/*.ts",
"prettier:fix": "prettier --write \"src/**/**/!(*.d).{ts,json,md}\""
}
}
Husky is a tool that enables running scripts on the pre-commit git hook. We'll use it to run ESLint and Prettier to make sure that the code is formatted and follows the rules.
npm i -D husky@4
Add the hook configuration to package.json:
{
"husky": {
"hooks": {
"pre-commit": "npm run lint"
}
}
}
To make sure that all is working fine, we'll create a new app (e.g. in src/index.ts):
import express, { Application, Request, Response } from "express";
import http from "http";
const app: Application = express();
const server = http.createServer(app);
app.get("/", (req: Request, res: Response) => {
res.json({ greeting: "Hello World!" });
});
const PORT = 5000;
server.listen(PORT);
server.on("listening", () => {
console.info("server up listening");
});
This will create an Express application listening on port 5000 and returning JSON with the dummy greeting "Hello World!".
Nodemon - to have hot reload of the running Express server code.
npm i -D nodemon
{
"scripts": {
"dev:start": "nodemon src/index.ts"
}
}
Start the server with:
npm run dev:start
Open http://localhost:5000 in the browser and you should get:
{ "greeting": "Hello World!" }
To configure VSCode debugging, you need to add a launch.json file in the .vscode folder.
To avoid synchronising two separate configurations, we'll reuse the existing NPM script dev:start that starts the application.
{
"version": "0.2.0",
"configurations": [
{
"name": "Debug",
"type": "node",
"request": "launch",
"runtimeExecutable": "npm",
"runtimeArgs": ["run-script", "dev:start", "--", "--inspect-brk=9229"],
"port": 9229
}
]
}
As we have TypeScript configured, we don't need any additional setup. We're reusing the native node debugging capabilities by using the --inspect-brk=9229 parameter. Read more in the Node.js documentation.
Install Jest together with ts-jest to be able to write and run tests in TypeScript:
npm i -D jest @types/jest ts-jest
npx ts-jest config:init
This will create the default jest.config.js:
module.exports = {
preset: "ts-jest",
testEnvironment: "node",
};
We'll update it to match our configuration. Without that, it'd match both the source ts files and the generated js files, running the tests twice.
module.exports = {
preset: "ts-jest",
testEnvironment: "node",
// tells Jest where are our test files
roots: ["<rootDir>/src"],
// tells Jest to use only TypeScript files
transform: {
"^.+\\.(ts|tsx)$": "ts-jest",
},
};
src/greetings/getGreeting.ts
export function getGreeting() {
return {
greeting: "Hello World!",
};
}
src/greetings/getGreetings.unit.test.ts
import { getGreeting } from "./getGreeting";
describe("getGreeting", () => {
it('should return greeting "Hello World!"', () => {
const result = getGreeting();
expect(result).toBeDefined();
expect(result.greeting).toBe("Hello World!");
});
});
Add a test script to package.json:
{
"scripts": {
"test:unit": "jest unit"
}
}
Now you can run them with:
npm run test:unit
Jest will be smart enough to find, by convention, all files with the .unit.test.ts suffix.
To be able to debug our tests, we have to add new debug configurations to launch.json. We'll be using the watch settings, so we don't have to re-run tests after we update the logic or test code.
{
"version": "0.2.0",
"configurations": [
{
"name": "Jest all tests",
"type": "node",
"request": "launch",
"program": "${workspaceRoot}/node_modules/jest/bin/jest.js",
"args": ["--verbose", "-i", "--no-cache", "--watchAll"],
"console": "integratedTerminal",
"internalConsoleOptions": "neverOpen"
},
{
"name": "Jest current test",
"type": "node",
"request": "launch",
"program": "${workspaceFolder}/node_modules/jest/bin/jest",
"args": [
"${fileBasename}",
"--verbose",
"-i",
"--no-cache",
"--watchAll"
],
"console": "integratedTerminal",
"internalConsoleOptions": "neverOpen"
}
]
}
SuperTest is a useful library that allows testing Express HTTP applications.
To install it run:
npm i -D supertest @types/supertest
SuperTest takes the Express application as input. We have to structure our code to export it, e.g.:
import express, { Application, Request, Response } from "express";
import { getGreeting } from "./greetings/getGreeting";
const app: Application = express();
app.get("/", (_req: Request, res: Response) => {
res.json(getGreeting());
});
export default app;
Our updated index (src/index.ts) will look like:
import app from "./app";
import http from "http";
const server = http.createServer(app);
const PORT = 5000;
server.listen(PORT);
server.on("listening", () => {
console.info("server up listening");
});
Let's create the test for the default route. For that, create a file, e.g. getGreetings.api.test.ts
. We'll be using a different suffix, api.test.ts
, as those tests are not unit but integration/acceptance. They will be running the Express server. Having the Express app extracted, we can use the SuperTest
library as:
import request from "supertest";
import app from "../app";
describe("GET /", () => {
it('should return greeting "Hello World!"', () => {
return request(app)
.get("/")
.expect("Content-Type", /json/)
.expect(200, { greeting: "Hello World!" });
});
});
SuperTest wraps the Express app, making API calls easier. It also provides a set of useful methods to check the response params.
As the final step, we'll add a separate NPM script to package.json for running API tests, and also a script to run all of them.
{
"scripts": {
"test": "npm run test:unit && npm run test:api", // <-- added
"test:unit": "jest unit",
"test:api": "jest api" // <-- added
}
}
It's important to have your changes verified during the pull request process. We'll use GitHub Actions as a sample of how to do that. You need to create the .github/workflows folder and put a new file there (e.g. samples_simple.yml). This file will contain the YAML configuration for your action:
The simplest setup will look like this:
name: Node.js Continuous Integration
on:
# run it on push to the default repository branch
push:
branches: [main]
# run it during pull request
pull_request:
defaults:
run:
# relative path to the place where source code (with package.json) is located
working-directory: samples/simple
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Use Node.js 20.10.x
uses: actions/setup-node@v4
with:
node-version: 20.10.x
# install dependencies based on the package log
- run: npm ci
# run linting (ESlint and Prettier)
- run: npm run lint
# run build
- run: npm run build:ts
# run tests
- run: npm test
If you want to make sure that your code will run properly on a few Node.js versions and different operating systems (e.g. because developers may have different environment configurations), you can use matrix tests:
name: Node.js Continuous Integration
on:
# run it on push to the default repository branch
push:
branches: [main]
# run it during pull request
pull_request:
defaults:
run:
# relative path to the place where source code (with package.json) is located
working-directory: samples/simple
jobs:
build:
# use system defined below in the tests matrix
runs-on: ${{ matrix.os }}
strategy:
# define the test matrix
matrix:
# selected operation systems to run Continuous Integration
os: [windows-latest, ubuntu-latest, macos-latest]
# selected node version to run Continuous Integration
node-version: [18.x, 20.10.x]
steps:
- uses: actions/checkout@v4
- name: Use Node.js ${{ matrix.node-version }}
uses: actions/setup-node@v4
with:
# use the node version defined in matrix above
node-version: ${{ matrix.node-version }}
# install dependencies
- run: npm ci
# run linting (ESlint and Prettier)
- run: npm run lint
# run build
- run: npm run build:ts
# run tests
- run: npm test
Docker allows creating lightweight images with preconfigured services. Thanks to its immutable nature, it provides the same runtime experience independently of the operating system. It makes deployment and testing easier and more predictable. Most hosting platforms (both cloud and on-premise) support deploying Docker images.
The basis of the Docker configuration is an image definition. It's defined as a text file, usually named by convention Dockerfile. It starts with the information about which base image we'll be using, followed by customisation. Most technologies provide various types of base images. We'll use node:lts-alpine, which represents the latest Long-Term Support version. The Alpine flavour is recommended, as it's usually the smallest, with a minimal set of dependencies.
The best practice for building the Docker image is to use the multistage build feature. It allows using a first image with all the build dependencies to produce the build artefacts, and then copying them to the final, smaller image.
Each line in the Dockerfile creates a separate layer. Such a layer is immutable: if the line did not change (and, e.g., the files copied in that line did not change), it won't be rebuilt but reused. That's why it's essential to first copy the files that change rarely (e.g. the package.json file, then run the node modules installation) and only then copy the source code.
Sample Dockerfile looks like:
########################################
# First stage of multistage build
########################################
# Use build image with the label `builder`
########################################
# use the Node.js LTS Alpine image as the base
FROM node:lts-alpine AS builder
# Setup working directory for project
WORKDIR /app
COPY ./package.json ./
COPY ./package-lock.json ./
COPY ./tsconfig.json ./
# install node modules
# use `npm ci` instead of `npm install`
# to install exact version from `package-lock.json`
RUN npm ci
# Copy project files
COPY src ./src
# Build project
RUN npm run build:ts
# sets environment to production
# and removes packages from devDependencies
RUN npm prune --production
########################################
# Second stage of multistage build
########################################
# Use other build image as the final one
# that won't have source codes
########################################
FROM node:lts-alpine
# Setup working directory for project
WORKDIR /app
# Copy published in previous stage binaries
# from the `builder` image
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
# Document the port on which the app will be exposed
EXPOSE 5000
# sets entry point command to automatically
# run application on `docker run`
ENTRYPOINT ["node", "./dist/index.js"]
It's also worth adding a .dockerignore file to exclude locally built artefacts (e.g. the dist folder) and dependencies (e.g. the node_modules folder). It will speed up the build time and ensure that platform-specific files won't clash with each other.
**/dist/
**/node_modules/
Docker also allows orchestrating multiple Docker containers with a Docker Compose YAML file. By convention it's named docker-compose.yml. For our single service, a sample docker-compose.yml will look as such:
version: "3.5"
services:
app:
build:
# use local image
dockerfile: Dockerfile
context: .
container_name: eventsourcing_js
ports:
- "5555:5000"
Useful Docker commands:
- Build the image eventsourcing.js.simple based on the Dockerfile in the current directory:
$ docker build -t eventsourcing.js.simple .
- Run a container from the eventsourcing.js.simple image:
$ docker run -it eventsourcing.js.simple
- Pull the prebuilt image from the registry:
$ docker pull oskardudycz/eventsourcing.js.simple
- Build the images defined in the docker-compose.yml file in the current directory:
$ docker-compose build
- List running containers:
$ docker ps
- List all containers (including stopped ones):
$ docker ps -a
- Start the services defined in docker-compose.yml:
$ docker-compose up
- Stop the running services:
$ docker-compose kill
- Stop and remove the containers together with their volumes:
$ docker-compose down -v
As an example of continuous delivery, we'll use deployment to a Docker registry.
Docker Hub is the default, free registry that Docker provides, and it's commonly used for publicly available images. However, since November 2020, it has significant limits for free accounts.
GitHub introduced its own container registry. It allows both public and private hosting (which is crucial for commercial projects).
Go to your GitHub repository secrets settings (https://github.com/{your_username}/{your_repository_name}/settings/secrets/actions) and add two secrets:
- DOCKERHUB_USERNAME - with the name of your Docker Hub account (do not mistake it with your GitHub account),
- DOCKERHUB_TOKEN - with the pasted value of a token generated in point 3.
For GitHub Container Registry, generate a personal access token with the repo, read:packages and write:packages scopes. Then go again to the repository secrets settings (https://github.com/{your_username}/{your_repository_name}/settings/secrets/actions) and add a secret:
- GHCR_PAT - with the pasted value of a token generated in point 2.
Let's add a new job to the GitHub Action to publish to both registries. It should only run if the first job with build and tests passed. Updated workflow (samples_simple.yml):
name: Node.js Continuous Integration and Continuous Delivery
on:
# run it on push to the default repository branch
push:
branches: [main]
# run it during pull request
pull_request:
defaults:
run:
# relative path to the place where source code (with package.json) is located
working-directory: samples/simple
jobs:
build-and-test-code:
name: Build and test application code
# use system defined below in the tests matrix
runs-on: ${{ matrix.os }}
strategy:
# define the test matrix
matrix:
# selected operating systems to run CI
os: [windows-latest, ubuntu-latest, macos-latest]
# selected node version to run CI
node-version: [18.x, 20.10.x]
steps:
- uses: actions/checkout@v4
- name: Use Node.js ${{ matrix.node-version }}
uses: actions/setup-node@v4
with:
# use the node version defined in matrix above
node-version: ${{ matrix.node-version }}
# install dependencies
- run: npm ci
# run linting (ESlint and Prettier)
- run: npm run lint
# run build
- run: npm run build:ts
# run tests
- run: npm test
build-and-push-docker-image:
name: Build Docker image and push to repositories
# run only when code is compiling and tests are passing
needs: build-and-test-code
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
# setup Docker build action
- name: Set up Docker Buildx
id: buildx
uses: docker/setup-buildx-action@v1
- name: Login to DockerHub
uses: docker/login-action@v1
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Login to Github Packages
uses: docker/login-action@v1
with:
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ secrets.GHCR_PAT }}
- name: Build image and push to Docker Hub and GitHub Container Registry
uses: docker/build-push-action@v2
with:
# relative path to the place where source code (with package.json) is located
context: ./samples/simple
# Note: tags has to be all lower-case
tags: |
oskardudycz/eventsourcing.nodejs.simple:latest
ghcr.io/oskardudycz/eventsourcing.nodejs/simple:latest
# build on feature branches, push only on main branch
push: ${{ github.ref == 'refs/heads/main' }}
- name: Image digest
run: echo ${{ steps.docker_build.outputs.digest }}
Create React App
for creating EventStoreDB Node.js App