drizzle-orm

Headless TypeScript ORM with a head. Runs on Node, Bun and Deno. Lives on the Edge and yes, it's a JavaScript ORM too 😅

APACHE-2.0 License

drizzle-orm - 0.31.1 Latest Release

Published by github-actions[bot] 5 months ago

New Features

Live Queries 🎉

For a full explanation of Drizzle + Expo, see the discussions

As of v0.31.1 Drizzle ORM now has native support for Expo SQLite Live Queries!
We've implemented a native useLiveQuery React Hook which observes necessary database changes and automatically re-runs database queries. It works with both SQL-like and Drizzle Queries:

import { useLiveQuery, drizzle } from 'drizzle-orm/expo-sqlite';
import { openDatabaseSync } from 'expo-sqlite/next';
import { users } from './schema';
import { Text } from 'react-native';

const expo = openDatabaseSync('db.db');
const db = drizzle(expo);

const App = () => {
  // Re-renders automatically when data changes
  const { data } = useLiveQuery(db.select().from(users));

  // const { data, error, updatedAt } = useLiveQuery(db.query.users.findFirst());
  // const { data, error, updatedAt } = useLiveQuery(db.query.users.findMany());


  return <Text>{JSON.stringify(data)}</Text>;
};

export default App;

We've intentionally not changed the API of the ORM itself, so as to stay with the conventional React Hook API: you use useLiveQuery(databaseQuery) as opposed to db.select().from(users).useLive() or db.query.users.useFindMany()

We've also decided to return data, error and updatedAt fields from the hook for concise, explicit error handling, following the practices of React Query and Electric SQL
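
For example, a minimal sketch of consuming all three fields (reusing db and users from the snippet above; the UsersScreen component is hypothetical):

const UsersScreen = () => {
  const { data, error, updatedAt } = useLiveQuery(db.select().from(users));

  if (error) {
    return <Text>Error: {error.message}</Text>;
  }

  return <Text>{JSON.stringify(data)} (updated at {String(updatedAt)})</Text>;
};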

drizzle-orm - 0.31.0

Published by github-actions[bot] 5 months ago

Breaking changes

Note: drizzle-orm@0.31.0 can be used with drizzle-kit@0.22.0 or higher, and vice versa. If you run a Drizzle Kit command, it will check and prompt you for an upgrade (if needed). You can check the Drizzle Kit updates below

PostgreSQL indexes API was changed

The previous Drizzle+PostgreSQL indexes API was incorrect and was not aligned with the PostgreSQL documentation. The good thing is that it was not used in queries, and drizzle-kit didn't support all properties for indexes. This means we can now change the API to the correct one and provide full support for it in drizzle-kit

Previous API

  • No way to define SQL expressions inside .on.
  • .using and .on in our case are the same thing, so the API is incorrect here.
  • .asc(), .desc(), .nullsFirst(), and .nullsLast() should be specified for each column or expression on indexes, but not on an index itself.
// Index declaration reference
index('name')
  .on(table.column1, table.column2, ...) or .onOnly(table.column1, table.column2, ...)
  .concurrently()
  .using(sql``) // sql expression
  .asc() or .desc()
  .nullsFirst() or .nullsLast()
  .where(sql``) // sql expression

Current API

// First example, with `.on()`
index('name')
  .on(table.column1.asc(), table.column2.nullsFirst(), ...) or .onOnly(table.column1.desc().nullsLast(), table.column2, ...)
  .concurrently()
  .where(sql``)
  .with({ fillfactor: '70' })

// Second Example, with `.using()`
index('name')
  .using('btree', table.column1.asc(), sql`lower(${table.column2})`, table.column1.op('text_ops'))
  .where(sql``) // sql expression
  .with({ fillfactor: '70' })

New Features

🎉 "pg_vector" extension support

There is no specific code to create an extension inside the Drizzle schema. We assume that if you are using vector types, indexes, and queries, you have a PostgreSQL database with the pg_vector extension installed.

You can now specify indexes for pg_vector and utilize pg_vector functions for querying, ordering, etc.

Let's take a few examples of pg_vector indexes from the pg_vector docs and translate them to Drizzle

L2 distance, Inner product and Cosine distance

// CREATE INDEX ON items USING hnsw (embedding vector_l2_ops);
// CREATE INDEX ON items USING hnsw (embedding vector_ip_ops);
// CREATE INDEX ON items USING hnsw (embedding vector_cosine_ops);

const table = pgTable('items', {
    embedding: vector('embedding', { dimensions: 3 })
}, (table) => ({
    l2: index('l2_index').using('hnsw', table.embedding.op('vector_l2_ops')),
    ip: index('ip_index').using('hnsw', table.embedding.op('vector_ip_ops')),
    cosine: index('cosine_index').using('hnsw', table.embedding.op('vector_cosine_ops')),
}));

L1 distance, Hamming distance and Jaccard distance - added in pg_vector 0.7.0

// CREATE INDEX ON items USING hnsw (embedding vector_l1_ops);
// CREATE INDEX ON items USING hnsw (embedding bit_hamming_ops);
// CREATE INDEX ON items USING hnsw (embedding bit_jaccard_ops);

const table = pgTable('table', {
    embedding: vector('embedding', { dimensions: 3 })
}, (table) => ({
    l1: index('l1_index').using('hnsw', table.embedding.op('vector_l1_ops')),
    hamming: index('hamming_index').using('hnsw', table.embedding.op('bit_hamming_ops')),
    bit: index('bit_jaccard_index').using('hnsw', table.embedding.op('bit_jaccard_ops')),
}));

For queries, you can use predefined functions for vectors or create custom ones using the SQL template operator.

You can also use the following helpers:

import { l2Distance, l1Distance, innerProduct, 
          cosineDistance, hammingDistance, jaccardDistance } from 'drizzle-orm'

l2Distance(table.column, [3, 1, 2]) // table.column <-> '[3, 1, 2]'
l1Distance(table.column, [3, 1, 2]) // table.column <+> '[3, 1, 2]'

innerProduct(table.column, [3, 1, 2]) // table.column <#> '[3, 1, 2]'
cosineDistance(table.column, [3, 1, 2]) // table.column <=> '[3, 1, 2]'

hammingDistance(table.column, '101') // table.column <~> '101'
jaccardDistance(table.column, '101') // table.column <%> '101'

If pg_vector has other functions you'd like to use, you can replicate the implementation from the existing ones. Here is how it can be done:

export function l2Distance(
  column: SQLWrapper | AnyColumn,
  value: number[] | string[] | TypedQueryBuilder<any> | string,
): SQL {
  if (is(value, TypedQueryBuilder<any>) || typeof value === 'string') {
    return sql`${column} <-> ${value}`;
  }
  return sql`${column} <-> ${JSON.stringify(value)}`;
}

Name it as you wish and change the operator. This example allows for a numbers array, strings array, string, or even a select query. Feel free to create any other type you want or even contribute and submit a PR

Examples

Let's take a few examples of pg_vector queries from the pg_vector docs and translate them to Drizzle

import { eq, sql, l2Distance, innerProduct } from 'drizzle-orm';

// SELECT * FROM items ORDER BY embedding <-> '[3,1,2]' LIMIT 5;
db.select().from(items).orderBy(l2Distance(items.embedding, [3,1,2])).limit(5)

// SELECT embedding <-> '[3,1,2]' AS distance FROM items;
db.select({ distance: l2Distance(items.embedding, [3,1,2]) }).from(items)

// SELECT * FROM items ORDER BY embedding <-> (SELECT embedding FROM items WHERE id = 1) LIMIT 5;
const subquery = db.select({ embedding: items.embedding }).from(items).where(eq(items.id, 1));
db.select().from(items).orderBy(l2Distance(items.embedding, subquery)).limit(5)

// SELECT (embedding <#> '[3,1,2]') * -1 AS inner_product FROM items;
db.select({ innerProduct: sql`(${innerProduct(items.embedding, [3,1,2])}) * -1` }).from(items)

// and more!

🎉 New PostgreSQL types: point, line

You can now use point and line from PostgreSQL Geometric Types

Type point has 2 modes for mappings from the database: tuple and xy.

  • tuple will be accepted for insert and mapped on select to a tuple. So, the database Point(1,2) will be typed as [1,2] with drizzle.

  • xy will be accepted for insert and mapped on select to an object with x, y coordinates. So, the database Point(1,2) will be typed as { x: 1, y: 2 } with drizzle

const items = pgTable('items', {
 point: point('point'),
 pointObj: point('point_xy', { mode: 'xy' }),
});
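
A minimal usage sketch for the two modes (assuming a db instance and the items table above):

// Insert accepts the same shapes the modes describe
await db.insert(items).values({ point: [1, 2], pointObj: { x: 1, y: 2 } });

const rows = await db.select().from(items);
// rows[0].point    -> [1, 2]
// rows[0].pointObj -> { x: 1, y: 2 }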

Type line has 2 modes for mappings from the database: tuple and abc.

  • tuple will be accepted for insert and mapped on select to a tuple. So, the database Line{1,2,3} will be typed as [1,2,3] with drizzle.

  • abc will be accepted for insert and mapped on select to an object with a, b, and c constants from the equation Ax + By + C = 0. So, the database Line{1,2,3} will be typed as { a: 1, b: 2, c: 3 } with drizzle.

const items = pgTable('items', {
 line: line('line'),
 lineObj: line('line_abc', { mode: 'abc' }),
});

🎉 Basic "postgis" extension support

There is no specific code to create an extension inside the Drizzle schema. We assume that if you are using postgis types, indexes, and queries, you have a PostgreSQL database with the postgis extension installed.

geometry type from postgis extension:

const items = pgTable('items', {
  geo: geometry('geo', { type: 'point' }),
  geoObj: geometry('geo_obj', { type: 'point', mode: 'xy' }),
  geoSrid: geometry('geo_options', { type: 'point', mode: 'xy', srid: 4000 }),
});

mode
Type geometry has 2 modes for mappings from the database: tuple and xy.

  • tuple will be accepted for insert and mapped on select to a tuple. So, the database geometry will be typed as [1,2] with drizzle.
  • xy will be accepted for insert and mapped on select to an object with x, y coordinates. So, the database geometry will be typed as { x: 1, y: 2 } with drizzle

type

The current release has a predefined type: point, which is the geometry(Point) type in the PostgreSQL PostGIS extension. You can specify any string there if you want to use some other type
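
As with point, a minimal usage sketch for the geometry modes (assuming a db instance and the items table above; SRID handling is omitted):

await db.insert(items).values({
  geo: [1, 2],            // tuple mode
  geoObj: { x: 1, y: 2 }, // xy mode
});

const rows = await db.select().from(items);
// rows[0].geo    -> [1, 2]
// rows[0].geoObj -> { x: 1, y: 2 }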

Drizzle Kit updates: drizzle-kit@0.22.0

Release notes here are partially duplicated from drizzle-kit@0.22.0

New Features

🎉 Support for new types

Drizzle Kit can now handle:

  • point and line from PostgreSQL
  • vector from the PostgreSQL pg_vector extension
  • geometry from the PostgreSQL PostGIS extension

🎉 New param in drizzle.config - extensionsFilters

The PostGIS extension creates a few internal tables in the public schema. This means that if you have a database with the PostGIS extension and use push or introspect, all those tables will be included in diff operations. In this case, you would need to specify tablesFilter, find all tables created by the extension, and list them in this parameter.

We have addressed this issue so that you won't need to take all these steps. Simply specify extensionsFilters with the name of the extension used, and Drizzle will skip all the necessary tables.

Currently, we only support the postgis option, but we plan to add more extensions if they create tables in the public schema.

The postgis option will skip the geography_columns, geometry_columns, and spatial_ref_sys tables

import { defineConfig } from 'drizzle-kit'

export default defineConfig({
  dialect: "postgresql",
  extensionsFilters: ["postgis"],
})

Improvements

Updated Zod schemas for database credentials and added tests for all the positive/negative cases

  • Support for the full set of SSL params in the kit config, with types provided from the node:tls connection options

import { defineConfig } from 'drizzle-kit'

export default defineConfig({
  dialect: "postgresql",
  dbCredentials: {
    ssl: true, // "require" | "allow" | "prefer" | "verify-full" | options from node:tls
  }
})

import { defineConfig } from 'drizzle-kit'

export default defineConfig({
  dialect: "mysql",
  dbCredentials: {
    ssl: "", // string | SslOptions (ssl options from mysql2 package)
  }
})

Normalized SQLite URLs for the libsql and better-sqlite3 drivers

Those drivers have different file path patterns, and Drizzle Kit will accept both and create a proper file path format for each
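
For illustration, both URL styles below should now resolve to the same database file (paths are examples; the config shape follows the defineConfig examples above):

import { defineConfig } from 'drizzle-kit'

export default defineConfig({
  dialect: "sqlite",
  dbCredentials: {
    // libsql-style URL with a file: scheme
    url: "file:local.db",
    // or a plain better-sqlite3-style path, e.g. "./local.db"
  },
})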

Updated MySQL and SQLite index-as-expression behavior

In this release, MySQL and SQLite will properly map expressions into the SQL query. Expressions won't be escaped as strings, but columns will be

export const users = sqliteTable(
  'users',
  {
    id: integer('id').primaryKey(),
    email: text('email').notNull(),
  },
  (table) => ({
    emailUniqueIndex: uniqueIndex('emailUniqueIndex').on(sql`lower(${table.email})`),
  }),
);

-- before
CREATE UNIQUE INDEX `emailUniqueIndex` ON `users` (`lower("users"."email")`);

-- now
CREATE UNIQUE INDEX `emailUniqueIndex` ON `users` (lower("email"));

Bug Fixes

  • [BUG]: multiple constraints not added (only the first one is generated) - #2341
  • Drizzle Studio: Error: Connection terminated unexpectedly - #435
  • Unable to run sqlite migrations local - #432
  • error: unknown option '--config' - #423

How push and generate works for indexes

Limitations

You should specify a name for your index manually if you have an index on at least one expression

Example

index().on(table.id, table.email) // will work well and the name will be auto-generated
index('my_name').on(table.id, table.email) // will work well

// but

index().on(sql`lower(${table.email})`) // error
index('my_name').on(sql`lower(${table.email})`) // will work well

Push won't generate statements if any of these fields (listed below) were changed in an existing index:

  • expressions inside .on() and .using()
  • .where() statements
  • operator classes .op() on columns

If you are using push workflows and want to change these fields in the index, you would need to:

  • Comment out the index
  • Push
  • Uncomment the index and change those fields
  • Push again

For the generate command, drizzle-kit will be triggered by any changes in the index for any property in the new drizzle indexes API, so there are no limitations here.

drizzle-orm - v0.31.0-beta

Published by AndriiSherman 5 months ago

The changes in this beta are identical to the 0.31.0 release notes above: the PostgreSQL indexes API change and pg_vector extension support.

drizzle-orm - 0.30.10

Published by github-actions[bot] 6 months ago

New Features

🎉 .if() function added to all WHERE expressions

Apply each WHERE condition only if the corresponding value was provided:

async function someFunction(categories: string[] = [], views = 0) {
  await db
    .select()
    .from(posts)
    .where(
       and(
          gt(posts.views, views).if(views > 100),
          inArray(posts.category, categories).if(categories.length > 0),
       ),
    );
}

Bug Fixes

  • Fixed internal mappings for sessions .all, .values, .execute functions in AWS DataAPI
drizzle-orm - 0.30.9

Published by github-actions[bot] 6 months ago

  • 🐛 Fixed migrator in AWS Data API
  • Added setWhere and targetWhere fields to the .onConflictDoUpdate() config in SQLite instead of a single where field (see the sketch below)
  • 🛠️ Added schema information to Drizzle instances via db._.fullSchema
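
A minimal sketch of the two fields (assuming a users table with a unique email column; the filter conditions are just examples):

await db
  .insert(users)
  .values({ email: 'john@example.com', name: 'John' })
  .onConflictDoUpdate({
    target: users.email,
    targetWhere: sql`email like '%@example.com'`, // which conflicting rows the clause targets
    set: { name: 'John' },
    setWhere: sql`name <> 'admin'`,               // which rows actually get updated
  });
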
drizzle-orm - 0.30.8

Published by github-actions[bot] 6 months ago

  • 🎉 Added custom schema support to enums in Postgres (fixes #669 via #2048):

⚠️ Only available in drizzle-orm for now, drizzle-kit support will arrive soon

import { pgSchema } from 'drizzle-orm/pg-core';

const mySchema = pgSchema('mySchema');
const colors = mySchema.enum('colors', ['red', 'green', 'blue']);

  • 🎉 Changed D1 migrate() function to use batch API (#2137)
  • 🐛 Split where clause in Postgres .onConflictDoUpdate method into setWhere and targetWhere clauses, to support both where cases in on conflict ... clause (fixes #1628, #1302 via #2056)
  • 🐛 Fixed query generation for where clause in Postgres .onConflictDoNothing method, as it was placed in a wrong spot (fixes #1628 via #2056)
  • 🐛 Fixed multiple issues with AWS Data API driver (fixes #1931, #1932, #1934, #1936 via #2119)
  • 🐛 Fix inserting and updating array values in AWS Data API (fixes #1912 via #1911)

Thanks @hugo082 and @livingforjesus!

drizzle-orm - 0.30.7

Published by github-actions[bot] 7 months ago

Bug fixes

  • Add mappings for @vercel/postgres package
  • Fix interval mapping for neon drivers - #1542
drizzle-orm - 0.30.6

Published by github-actions[bot] 7 months ago

New Features

🎉 PGlite driver Support

PGlite is a WASM Postgres build packaged into a TypeScript client library that enables you to run Postgres in the browser, Node.js and Bun, with no need to install any other dependencies. It is only 2.6mb gzipped.

It can be used as an ephemeral in-memory database, or with persistence either to the file system (Node/Bun) or indexedDB (Browser).

Unlike previous "Postgres in the browser" projects, PGlite does not use a Linux virtual machine - it is simply Postgres in WASM.

Usage Example

import { PGlite } from '@electric-sql/pglite';
import { drizzle } from 'drizzle-orm/pglite';

// In-memory Postgres
const client = new PGlite();
const db = drizzle(client);

await db.select().from(users);
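
Beyond the in-memory usage above, a minimal persistence sketch (assuming PGlite accepts a data directory path in Node/Bun and an idb:// prefix in the browser):

// Persist to the file system (Node/Bun)
const fileClient = new PGlite('./pgdata');
const fileDb = drizzle(fileClient);

// Persist to IndexedDB in the browser
// const browserClient = new PGlite('idb://my-database');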

There are currently two limitations that should be fixed on the PGlite side.

drizzle-orm - 0.30.5

Published by github-actions[bot] 7 months ago

New Features

🎉 $onUpdate functionality for PostgreSQL, MySQL and SQLite

Adds a dynamic update value to the column.
The function will be called when the row is updated, and the returned value will be used as the column value if none is provided.
If no default (or $defaultFn) value is provided, the function will be called when the row is inserted as well, and the returned value will be used as the column value.

Note: This value does not affect the drizzle-kit behavior, it is only used at runtime in drizzle-orm.

const usersOnUpdate = pgTable('users_on_update', {
  id: serial('id').primaryKey(),
  name: text('name').notNull(),
  updateCounter: integer('update_counter').default(sql`1`).$onUpdateFn(() => sql`update_counter + 1`),
  updatedAt: timestamp('updated_at', { mode: 'date', precision: 3 }).$onUpdate(() => new Date()),
  alwaysNull: text('always_null').$type<string | null>().$onUpdate(() => null),
});

Fixes

  • [BUG]: insertions on columns with the smallserial datatype are not optional - #1848

Thanks @Angelelz and @gabrielDonnantuoni!

drizzle-orm - 0.30.4

Published by github-actions[bot] 7 months ago

New Features

🎉 xata-http driver support

According to their official website, Xata is a Postgres data platform with a focus on reliability, scalability, and developer experience. The Xata Postgres service is currently in beta, please see the Xata docs on how to enable it in your account.

Drizzle ORM natively supports both the xata driver with drizzle-orm/xata package and the postgres or pg drivers for accessing a Xata Postgres database.

The following example uses the Xata generated client, which you obtain by running the xata init CLI command.

pnpm add drizzle-orm @xata.io/client

import { drizzle } from 'drizzle-orm/xata-http';
import { getXataClient } from './xata'; // Generated client

const xata = getXataClient();
const db = drizzle(xata);

const result = await db.select().from(...);

You can also connect to Xata using pg or postgres.js drivers
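
For example, a minimal sketch with the postgres.js driver (assuming DATABASE_URL holds your Xata Postgres connection string):

import { drizzle } from 'drizzle-orm/postgres-js';
import postgres from 'postgres';

const client = postgres(process.env.DATABASE_URL!);
const db = drizzle(client);

const result = await db.select().from(...);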

drizzle-orm - 0.30.3

Published by github-actions[bot] 7 months ago

  • 🎉 Added raw query support (db.execute(...)) to batch API in Neon HTTP driver
  • 🐛 Fixed @neondatabase/serverless HTTP driver types issue (#1945, neondatabase/serverless#66)
  • 🐛 Fixed sqlite-proxy driver .run() result (https://github.com/drizzle-team/drizzle-orm/pull/2038)
drizzle-orm - 0.30.2

Published by github-actions[bot] 7 months ago

Improvements

LibSQL migrations have been updated to utilize batch execution instead of transactions. As stated in the documentation, LibSQL now supports batch operations

A batch consists of multiple SQL statements executed sequentially within an implicit transaction. The backend handles the transaction: success commits all changes, while any failure results in a full rollback with no modifications.

Bug fixed

  • [Sqlite] Fix findFirst query for bun:sqlite #1885 - thanks @shaileshaanand
drizzle-orm - 0.30.1

Published by github-actions[bot] 8 months ago

New Features

🎉 OP-SQLite driver Support

Usage Example

import { open } from '@op-engineering/op-sqlite';
import { drizzle } from 'drizzle-orm/op-sqlite';

const opsqlite = open({
	name: 'myDB',
});
const db = drizzle(opsqlite);

await db.select().from(users);

For more usage and setup details, please check our op-sqlite docs

Bug fixes

  • Migration hook fixed for Expo driver
drizzle-orm - 0.30.0

Published by github-actions[bot] 8 months ago

Breaking Changes

The Postgres timestamp mapping has been changed to align all drivers with the same behavior.

❗ We've modified the postgres.js driver instance to always return strings for dates, and Drizzle will then provide you with either strings or mapped dates, depending on the selected mode. The only issue you may encounter is that once you provide the `postgres.js` driver instance to Drizzle, that object's behavior for dates will change: dates will always be strings.

We've made this change as a minor release, just as a warning, that:

  • If you were using timestamps and expecting a specific response format, the behavior has now changed.
    When mapping to the driver, we will always use .toISOString for timestamps both with and without timezone.

  • If you were using the postgres.js driver outside of Drizzle, all postgres.js clients passed to Drizzle will have mutated behavior for dates. All dates will be strings in the response.

Parsers that were changed for postgres.js.

const transparentParser = (val: any) => val;

// Override postgres.js default date parsers: https://github.com/porsager/postgres/discussions/761
for (const type of ['1184', '1082', '1083', '1114']) {
	client.options.parsers[type as any] = transparentParser;
	client.options.serializers[type as any] = transparentParser;
}

Ideally, as is the case with almost all other drivers, we should have the possibility to mutate mappings on a per-query basis, which means that the driver client won't be mutated. We will be reaching out to the creator of the postgres.js library to inquire about the possibility of specifying per-query mapping interceptors and making this flow even better for all users.

If we've overlooked this capability and it is already available with `postgres.js`, please ping us in our Discord!

A few more references for timestamps without and with timezones can be found in our docs
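
As a quick reference for the mapping modes mentioned above, a minimal schema sketch (table and column names are just examples):

import { pgTable, serial, timestamp } from 'drizzle-orm/pg-core';

const events = pgTable('events', {
  id: serial('id').primaryKey(),
  createdAt: timestamp('created_at', { mode: 'date' }),    // selected as a JS Date
  loggedAt: timestamp('logged_at', { mode: 'string' }),    // selected as a string
  zonedAt: timestamp('zoned_at', { withTimezone: true }),  // timestamp with time zone
});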

Bugs fixed in this release

Big thanks to @Angelelz!

  • [BUG]: timestamp with mode string is returned as Date object instead of string - #806
  • [BUG]: Dates are always dates #971
  • [BUG]: Inconsistencies when working with timestamps and corresponding datetime objects in javascript. #1176
  • [BUG]: timestamp columns showing string type, however actually returning a Date object. #1185
  • [BUG]: Wrong data type for postgres date colum #1407
  • [BUG]: invalid timestamp conversion when using PostgreSQL with TimeZone set to UTC #1587
  • [BUG]: Postgres insert into timestamp with time zone removes milliseconds #1061
  • [BUG]: update timestamp field (using AWS Data API) #1164
  • [BUG]: Invalid date from relational queries #895
drizzle-orm - 0.29.5

Published by github-actions[bot] 8 months ago

New Features

🎉 WITH UPDATE, WITH DELETE, WITH INSERT - thanks @L-Mario564

You can now use WITH statements with INSERT, UPDATE and DELETE statements

Usage examples

const averageAmount = db.$with('average_amount').as(
	db.select({ value: sql`avg(${orders.amount})`.as('value') }).from(orders),
);
const result = await db
	.with(averageAmount)
	.delete(orders)
	.where(gt(orders.amount, sql`(select * from ${averageAmount})`))
	.returning({
		id: orders.id,
	});

Generated SQL:

with "average_amount" as (select avg("amount") as "value" from "orders") 
delete from "orders" 
where "orders"."amount" > (select * from "average_amount") 
returning "id"

For more examples for all statements, check the docs.

🎉 Possibility to specify custom schema and custom name for migrations table - thanks @g3r4n

  • Custom table for migrations

By default, all information about executed migrations will be stored in the database inside the __drizzle_migrations table,
and for PostgreSQL, inside the drizzle schema. However, you can configure where to store those records.

To add a custom table name for migrations stored inside your database, you should use the migrationsTable option

Usage example

await migrate(db, {
	migrationsFolder: './drizzle',
	migrationsTable: 'my_migrations',
});

  • Custom schema for migrations

Works only with PostgreSQL databases

To add a custom schema name for migrations stored inside your database, you should use the migrationsSchema option

Usage example

await migrate(db, {
	migrationsFolder: './drizzle',
	migrationsSchema: 'custom',
});

🎉 SQLite Proxy batch and Relational Queries support

  • You can now use .query.findFirst and .query.findMany syntax with sqlite proxy driver

  • SQLite Proxy supports batch requests, the same as all other drivers. Check the full docs

    You will need to specify a callback for batch queries and handle requests to the proxy server:

import { drizzle } from 'drizzle-orm/sqlite-proxy';

type ResponseType = { rows: any[][] | any[] }[];

const db = drizzle(
	async (sql, params, method) => {
		// single query logic
	},
	// new batch callback
	async (
		queries: {
			sql: string;
			params: any[];
			method: 'all' | 'run' | 'get' | 'values';
		}[],
	) => {
		try {
			const result: ResponseType = await axios.post(
				'http://localhost:3000/batch',
				{ queries },
			);

			return result;
		} catch (e: any) {
			console.error('Error from sqlite proxy server:', e);
			throw e;
		}
	},
);

You can then use the db.batch([]) method, which will proxy all queries.

The response from the batch should be an array of raw values (an array within an array), in the same order as they were sent to the proxy server.

drizzle-orm - 0.29.4

Published by github-actions[bot] 8 months ago

New Features

🎉 Neon HTTP Batch

For more info you can check Neon docs

Example

const batchResponse: BatchType = await db.batch([
	db.insert(usersTable).values({ id: 1, name: 'John' }).returning({
		id: usersTable.id,
	}),
	db.insert(usersTable).values({ id: 2, name: 'Dan' }),
	db.query.usersTable.findMany({}),
	db.query.usersTable.findFirst({}),
]);
type BatchType = [
	{
		id: number;
	}[],
	NeonHttpQueryResult<never>,
	{
		id: number;
		name: string;
		verified: number;
		invitedBy: number | null;
	}[],
	{
		id: number;
		name: string;
		verified: number;
		invitedBy: number | null;
	} | undefined,
];

Improvements

Thanks to the database-js and PlanetScale teams, we have updated the default behavior and instances of database-js.

As suggested by the database-js core team, you should use the Client instance instead of connect():

import { Client } from '@planetscale/database';
import { drizzle } from 'drizzle-orm/planetscale-serverless';

// create the connection
const client = new Client({
	host: process.env['DATABASE_HOST'],
	username: process.env['DATABASE_USERNAME'],
	password: process.env['DATABASE_PASSWORD'],
});

const db = drizzle(client);

Warning: In this version, there are no breaking changes, but starting from version 0.30.0, you will encounter an error if you attempt to use anything other than a Client instance.

We suggest starting to change connections to PlanetScale now to prevent any runtime errors in the future.

Previously, our docs stated to use connect(), and only the result of this function could be passed to drizzle. In this release, we are adding support for new Client() and deprecating connect(), as suggested by the database-js team. You will now see a warning when trying to pass the result of connect():

Warning text

Warning: You need to pass an instance of Client:

import { Client } from "@planetscale/database";

const client = new Client({
  host: process.env["DATABASE_HOST"],
  username: process.env["DATABASE_USERNAME"],
  password: process.env["DATABASE_PASSWORD"],
});

const db = drizzle(client);

Starting from version 0.30.0, you will encounter an error if you attempt to use anything other than a Client instance.

Please make the necessary changes now to prevent any runtime errors in the future
drizzle-orm - 0.29.3

Published by github-actions[bot] 10 months ago

  • fix: make expo peer dependencies optional #1714
drizzle-orm - 0.29.2

Published by github-actions[bot] 10 months ago

Fixes

  • Added improvements to the planetscale relational tests #1579 - thanks @Angelelz
  • [Pg] FIX: correct string escaping for empty PgArrays #1640 - thanks @Angelelz
  • Fix wrong syntax for exists fn in sqlite #1647 - thanks @Angelelz
  • Properly handle dates in AWS Data API
  • Fix Hermes mixins constructor issue

ESLint Drizzle Plugin, v0.2.3

npm i eslint-plugin-drizzle@0.2.3

🎉 [ESLint] Add support for functions and improve error messages #1586 - thanks @ngregrichardson

  • Allowed the Drizzle object to be, or to be retrieved from, a function (see the sketch after this list)
  • Added better context to the suggestion in the error message.
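
A rough sketch of what the function support enables, assuming the drizzle/enforce-delete-with-where rule is enabled (getDb is a hypothetical helper):

const getDb = () => drizzle(...);

// Previously only direct usage such as db.delete(users) was recognized;
// a Drizzle instance returned from a function is now checked as well:
getDb().delete(users); // flagged: missing .where(...)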

New Drivers

🎉 Expo SQLite Driver is available

To get started with the Expo SQLite driver, you need to install the expo-sqlite and drizzle-orm packages.

npm install drizzle-orm expo-sqlite@next

Then, you can use it like this:

import { drizzle } from "drizzle-orm/expo-sqlite";
import { openDatabaseSync } from "expo-sqlite/next";

const expoDb = openDatabaseSync("db.db");

const db = drizzle(expoDb);

await db.select().from(...)...

// or

db.select().from(...).then(...);

// or

db.select().from(...).all();

If you want to use Drizzle Migrations, you need to update babel and metro configuration files.

  1. Install babel-plugin-inline-import package.

npm install babel-plugin-inline-import

  2. Update babel.config.js and metro.config.js files.

babel.config.js

module.exports = function(api) {
  api.cache(true);

  return {
    presets: ['babel-preset-expo'],
+   plugins: [["inline-import", { "extensions": [".sql"] }]]
  };
};

metro.config.js

const { getDefaultConfig } = require('expo/metro-config');

/** @type {import('expo/metro-config').MetroConfig} */
const config = getDefaultConfig(__dirname);

+config.resolver.sourceExts.push('sql');

module.exports = config;

  3. Create drizzle.config.ts file in your project root folder.

import type { Config } from 'drizzle-kit';

export default {
	schema: './db/schema.ts',
	out: './drizzle',
	driver: 'expo',
} satisfies Config;

After creating the schema file and the drizzle.config.ts file, you can generate migrations like this:

npx drizzle-kit generate:sqlite

Then you need to import the migrations.js file from the ./drizzle folder in your App.tsx file and use the useMigrations hook or the migrate function.

import { drizzle } from "drizzle-orm/expo-sqlite";
import { openDatabaseSync } from "expo-sqlite/next";
import { useMigrations } from 'drizzle-orm/expo-sqlite/migrator';
import migrations from './drizzle/migrations';

const expoDb = openDatabaseSync("db.db");

const db = drizzle(expoDb);

export default function App() {
    const { success, error } = useMigrations(db, migrations);

    if (error) {
        return (
            <View>
                <Text>Migration error: {error.message}</Text>
            </View>
        );
    }

    if (!success) {
        return (
            <View>
                <Text>Migration is in progress...</Text>
            </View>
        );
    }

    return ...your application component;
}
drizzle-orm - 0.29.1

Published by github-actions[bot] 11 months ago

Fixes

  • Forward args correctly when using withReplica feature #1536. Thanks @Angelelz
  • Fix selectDistinctOn not working with multiple columns #1466. Thanks @L-Mario564

New Features/Helpers

🎉 Detailed JSDoc for all query builders in all dialects - thanks @realmikesolo

You can now access more information, hints, documentation links, etc. while developing and using JSDoc right in your IDE. Previously, we had them only for filter expressions, but now you can see them for all parts of the Drizzle query builder

🎉 New helpers for aggregate functions in SQL - thanks @L-Mario564

Remember, aggregate functions are often used with the GROUP BY clause of the SELECT statement. So if you are selecting aggregate functions and other columns in one query,
be sure to use the .groupBy clause
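
For example, a minimal count + .groupBy sketch (assuming a posts table with an authorId column):

await db
  .select({ authorId: posts.authorId, postCount: count(posts.id) })
  .from(posts)
  .groupBy(posts.authorId);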

Here is a list of functions and equivalent using sql template

count

await db.select({ value: count() }).from(users);
await db.select({ value: count(users.id) }).from(users);

// It's equivalent to writing
await db.select({ 
  value: sql`count(*)`.mapWith(Number) 
}).from(users);
await db.select({ 
  value: sql`count(${users.id})`.mapWith(Number) 
}).from(users);

countDistinct

await db.select({ value: countDistinct(users.id) }).from(users);

// It's equivalent to writing
await db.select({ 
  value: sql`count(${users.id})`.mapWith(Number) 
}).from(users);

avg

await db.select({ value: avg(users.id) }).from(users);

// It's equivalent to writing
await db.select({ 
  value: sql`avg(${users.id})`.mapWith(String) 
}).from(users);

avgDistinct

await db.select({ value: avgDistinct(users.id) }).from(users);

// It's equivalent to writing
await db.select({ 
  value: sql`avg(distinct ${users.id})`.mapWith(String) 
}).from(users);

sum

await db.select({ value: sum(users.id) }).from(users);

// It's equivalent to writing
await db.select({ 
  value: sql`sum(${users.id})`.mapWith(String) 
}).from(users);

sumDistinct

await db.select({ value: sumDistinct(users.id) }).from(users);

// It's equivalent to writing
await db.select({ 
  value: sql`sum(distinct ${users.id})`.mapWith(String) 
}).from(users);

max

await db.select({ value: max(users.id) }).from(users);

// It's equivalent to writing
await db.select({ 
  value: sql`max(${users.id})`.mapWith(users.id) 
}).from(users);

min

await db.select({ value: min(users.id) }).from(users);

// It's equivalent to writing
await db.select({ 
  value: sql`min(${users.id})`.mapWith(users.id) 
}).from(users);

New Packages

🎉 ESLint Drizzle Plugin

For cases where it's impossible to perform type checks for specific scenarios, or where it's possible but error messages would be challenging to understand, we've decided to create an ESLint package with recommended rules. This package aims to assist developers in handling crucial scenarios during development

Big thanks to @Angelelz for initiating the development of this package and transferring it to the Drizzle Team's npm

Install

[ npm | yarn | pnpm | bun ] install eslint eslint-plugin-drizzle

You can install those packages for typescript support in your IDE

[ npm | yarn | pnpm | bun ] install @typescript-eslint/eslint-plugin @typescript-eslint/parser

Usage

Create a .eslintrc.yml file, add drizzle to the plugins, and specify the rules you want to use. You can find a list of all existing rules below

root: true
parser: '@typescript-eslint/parser'
parserOptions:
  project: './tsconfig.json'
plugins:
  - drizzle
rules:
  'drizzle/enforce-delete-with-where': "error"
  'drizzle/enforce-update-with-where': "error"

All config

This plugin exports an all config that makes use of all rules (except for deprecated ones).

root: true
extends:
  - "plugin:drizzle/all"
parser: '@typescript-eslint/parser'
parserOptions:
  project: './tsconfig.json'
plugins:
  - drizzle

At the moment, all is equivalent to recommended

root: true
extends:
  - "plugin:drizzle/recommended"
parser: '@typescript-eslint/parser'
parserOptions:
  project: './tsconfig.json'
plugins:
  - drizzle

Rules

enforce-delete-with-where: Enforce using delete with the .where() clause in the .delete() statement. Most of the time, you don't need to delete all rows in the table, and some kind of WHERE statement is required.

Error Message:

Without `.where(...)` you will delete all the rows in a table. If you didn't want to do it, please use `db.delete(...).where(...)` instead. Otherwise you can ignore this rule here

Optionally, you can define a drizzleObjectName in the plugin options that accepts a string or string[]. This is useful when you have objects or classes with a delete method that's not from Drizzle. Such a delete method will trigger the ESLint rule. To avoid that, you can define the name of the Drizzle object that you use in your codebase (like db) so that the rule only triggers if the delete method comes from this object:

Example, config 1:

"rules": {
  "drizzle/enforce-delete-with-where": ["error"]
}

class MyClass {
  public delete() {
    return {}
  }
}

const myClassObj = new MyClass();

// ---> Will be triggered by ESLint Rule
myClassObj.delete()

const db = drizzle(...)
// ---> Will be triggered by ESLint Rule
db.delete()

Example, config 2:

"rules": {
  "drizzle/enforce-delete-with-where": ["error", { "drizzleObjectName": ["db"] }],
}

class MyClass {
  public delete() {
    return {}
  }
}

const myClassObj = new MyClass();

// ---> Will NOT be triggered by ESLint Rule
myClassObj.delete()

const db = drizzle(...)
// ---> Will be triggered by ESLint Rule
db.delete()

enforce-update-with-where: Enforce using update with the .where() clause in the .update() statement. Most of the time, you don't need to update all rows in the table, and some kind of WHERE statement is required.

Error Message:

Without `.where(...)` you will update all the rows in a table. If you didn't want to do it, please use `db.update(...).set(...).where(...)` instead. Otherwise you can ignore this rule here

Optionally, you can define a drizzleObjectName in the plugin options that accepts a string or string[]. This is useful when you have objects or classes with an update method that's not from Drizzle. Such an update method will trigger the ESLint rule. To avoid that, you can define the name of the Drizzle object that you use in your codebase (like db) so that the rule only triggers if the update method comes from this object:

Example, config 1:

"rules": {
  "drizzle/enforce-update-with-where": ["error"]
}

class MyClass {
  public update() {
    return {}
  }
}

const myClassObj = new MyClass();

// ---> Will be triggered by ESLint Rule
myClassObj.update()

const db = drizzle(...)
// ---> Will be triggered by ESLint Rule
db.update()

Example, config 2:

"rules": {
  "drizzle/enforce-update-with-where": ["error", { "drizzleObjectName": ["db"] }],
}

class MyClass {
  public update() {
    return {}
  }
}

const myClassObj = new MyClass();

// ---> Will NOT be triggered by ESLint Rule
myClassObj.update()

const db = drizzle(...)
// ---> Will be triggered by ESLint Rule
db.update()
drizzle-orm - 0.29.0

Published by github-actions[bot] 12 months ago

Drizzle ORM version 0.29.0 requires a minimum Drizzle Kit version of 0.20.0, and vice versa. Therefore, when upgrading to a newer version of Drizzle ORM, you will also need to upgrade Drizzle Kit. This may result in some breaking changes throughout the versions, especially if you need to upgrade Drizzle Kit and your Drizzle ORM version is older than 0.28.0.

New Features

🎉 MySQL unsigned option for bigint

You can now specify bigint unsigned type

const table = mysqlTable('table', {
  id: bigint('id', { mode: 'number', unsigned: true }),
});

Read more in docs

🎉 Improved query builder types

Starting from 0.29.0 by default, as all the query builders in Drizzle try to conform to SQL as much as possible, you can only invoke most of the methods once. For example, in a SELECT statement there might only be one WHERE clause, so you can only invoke .where() once:

const query = db
  .select()
  .from(users)
  .where(eq(users.id, 1))
  .where(eq(users.name, 'John')); // ❌ Type error - where() can only be invoked once

This behavior is useful for conventional query building, i.e. when you create the whole query at once. However, it becomes a problem when you want to build a query dynamically, i.e. if you have a shared function that takes a query builder and enhances it. To solve this problem, Drizzle provides a special 'dynamic' mode for query builders, which removes the restriction of invoking methods only once. To enable it, you need to call .$dynamic() on a query builder.

Let's see how it works by implementing a simple withPagination function that adds LIMIT and OFFSET clauses to a query based on the provided page number and an optional page size:

function withPagination<T extends PgSelect>(
  qb: T,
  page: number,
  pageSize: number = 10,
) {
  return qb.limit(pageSize).offset(page * pageSize);
}

const query = db.select().from(users).where(eq(users.id, 1));
withPagination(query, 1); // ❌ Type error - the query builder is not in dynamic mode

const dynamicQuery = query.$dynamic();
withPagination(dynamicQuery, 1); // ✅ OK

Note that the withPagination function is generic, which allows you to modify the result type of the query builder inside it, for example by adding a join:

function withFriends<T extends PgSelect>(qb: T) {
  return qb.leftJoin(friends, eq(friends.userId, users.id));
}

let query = db.select().from(users).where(eq(users.id, 1)).$dynamic();
query = withFriends(query);

Read more in docs

🎉 Possibility to specify name for primary keys and foreign keys

There is an issue when constraint names exceed the 64-character limit of the database. This causes the database engine to truncate the name, potentially leading to issues. Starting from 0.29.0, you have the option to specify custom names for both primaryKey() and foreignKey(). We have also deprecated the old primaryKey() syntax, which can still be used but will be removed in future releases

const table = pgTable('table', {
  id: integer('id'),
  name: text('name'),
}, (table) => ({
  cpk: primaryKey({ name: 'composite_key', columns: [table.id, table.name] }),
  cfk: foreignKey({
    name: 'fkName',
    columns: [table.id],
    foreignColumns: [table.name],
  }),
}));

Read more in docs

🎉 Read Replicas Support

You can now use the Drizzle withReplicas function to specify different database connections for read replicas and the main instance for write operations. By default, withReplicas will use a random read replica for read operations and the main instance for all other data modification operations. You can also specify custom logic for choosing which read replica connection to use. You have the freedom to make any weighted, custom decision for that. Here are some usage examples:

const primaryDb = drizzle(client);
const read1 = drizzle(client);
const read2 = drizzle(client);

const db = withReplicas(primaryDb, [read1, read2]);

// read from primary
db.$primary.select().from(usersTable);

// read from either read1 connection or read2 connection
db.select().from(usersTable)

// use primary database for delete operation
db.delete(usersTable).where(eq(usersTable.id, 1))

Implementation example of custom logic for selecting read replicas, where the first replica has a 70% chance of being chosen, and the second replica has a 30% chance of being chosen. Note that you can implement any type of random selection for read replicas

const db = withReplicas(primaryDb, [read1, read2], (replicas) => {
    const weight = [0.7, 0.3];
    let cumulativeProbability = 0;
    const rand = Math.random();

    for (const [i, replica] of replicas.entries()) {
      cumulativeProbability += weight[i]!;
      if (rand < cumulativeProbability) return replica;
    }
    return replicas[0]!
});

withReplicas function is available for all dialects in Drizzle ORM

Read more in docs

🎉 Set operators support (UNION, UNION ALL, INTERSECT, INTERSECT ALL, EXCEPT, EXCEPT ALL)

Huge thanks to @Angelelz for the significant contribution he made, from API discussions to proper type checks and runtime logic, along with an extensive set of tests. This greatly assisted us in delivering this feature in this release

Usage examples:

All set operators can be used in two ways: the import approach or the builder approach.

Import approach

import { union } from 'drizzle-orm/pg-core'

const allUsersQuery = db.select().from(users);
const allCustomersQuery = db.select().from(customers);

const result = await union(allUsersQuery, allCustomersQuery);

Builder approach

const result = await db.select().from(users).union(db.select().from(customers));

Read more in docs

🎉 New MySQL Proxy Driver

A new driver has been released, allowing you to create your own implementation for an HTTP driver using a MySQL database. You can find usage examples in the ./examples/mysql-proxy folder

You need to implement two endpoints on your server that will be used for queries and migrations (the migrate endpoint is optional and only needed if you want to use drizzle migrations). Both the server and driver implementation are up to you, so you are not restricted in any way. You can add custom mappings, logging, and much more

You can find both server and driver implementation examples in the ./examples/mysql-proxy folder

// Driver
import axios from 'axios';
import { eq } from 'drizzle-orm/expressions';
import { drizzle } from 'drizzle-orm/mysql-proxy';
import { migrate } from 'drizzle-orm/mysql-proxy/migrator';
import { cities, users } from './schema';

async function main() {
  const db = drizzle(async (sql, params, method) => {
    try {
      const rows = await axios.post(`${process.env.REMOTE_DRIVER}/query`, {
        sql,
        params,
        method,
      });

      return { rows: rows.data };
    } catch (e: any) {
      console.error('Error from mysql proxy server:', e.response.data);
      return { rows: [] };
    }
  });

  await migrate(db, async (queries) => {
    try {
      await axios.post(`${process.env.REMOTE_DRIVER}/migrate`, { queries });
    } catch (e) {
      console.log(e);
      throw new Error('Proxy server cannot run migrations');
    }
  }, { migrationsFolder: 'drizzle' });

  await db.insert(cities).values({ id: 1, name: 'name' });

  await db.insert(users).values({
    id: 1,
    name: 'name',
    email: 'email',
    cityId: 1,
  });

  const usersToCityResponse = await db.select().from(users).leftJoin(
    cities,
    eq(users.cityId, cities.id),
  );
}

🎉 New PostgreSQL Proxy Driver

Same as MySQL you can now implement your own http driver for PostgreSQL database. You can find usage examples in the ./examples/pg-proxy folder

You need to implement two endpoints on your server that will be used for queries and migrations (the migrate endpoint is optional and only needed if you want to use drizzle migrations). Both the server and driver implementation are up to you, so you are not restricted in any way. You can add custom mappings, logging, and much more

You can find both server and driver implementation examples in the ./examples/pg-proxy folder

import axios from 'axios';
import { eq } from 'drizzle-orm/expressions';
import { drizzle } from 'drizzle-orm/pg-proxy';
import { migrate } from 'drizzle-orm/pg-proxy/migrator';
import { cities, users } from './schema';

async function main() {
  const db = drizzle(async (sql, params, method) => {
    try {
      const rows = await axios.post(`${process.env.REMOTE_DRIVER}/query`, { sql, params, method });

      return { rows: rows.data };
    } catch (e: any) {
      console.error('Error from pg proxy server:', e.response.data);
      return { rows: [] };
    }
  });

  await migrate(db, async (queries) => {
    try {
      await axios.post(`${process.env.REMOTE_DRIVER}/migrate`, { queries });
    } catch (e) {
      console.log(e);
      throw new Error('Proxy server cannot run migrations');
    }
  }, { migrationsFolder: 'drizzle' });

  const insertedCity = await db.insert(cities).values({ id: 1, name: 'name' }).returning();
  const insertedUser = await db.insert(users).values({ id: 1, name: 'name', email: 'email', cityId: 1 });
  const usersToCityResponse = await db.select().from(users).leftJoin(cities, eq(users.cityId, cities.id));
}

🎉 D1 Batch API support

Reference: https://developers.cloudflare.com/d1/platform/client-api/#dbbatch

Batch API usage example:

const batchResponse = await db.batch([
  db.insert(usersTable).values({ id: 1, name: 'John' }).returning({
    id: usersTable.id,
  }),
  db.update(usersTable).set({ name: 'Dan' }).where(eq(usersTable.id, 1)),
  db.query.usersTable.findMany({}),
  db.select().from(usersTable).where(eq(usersTable.id, 1)),
  db.select({ id: usersTable.id, invitedBy: usersTable.invitedBy }).from(
    usersTable,
  ),
]);

Type for batchResponse in this example would be:

type BatchResponse = [
  {
    id: number;
  }[],
  D1Result,
  {
    id: number;
    name: string;
    verified: number;
    invitedBy: number | null;
  }[],
  {
    id: number;
    name: string;
    verified: number;
    invitedBy: number | null;
  }[],
  {
    id: number;
    invitedBy: number | null;
  }[],
];

All possible builders that can be used inside db.batch:

`db.all()`,
`db.get()`,
`db.values()`,
`db.run()`,
`db.query.<table>.findMany()`,
`db.query.<table>.findFirst()`,
`db.select()...`,
`db.update()...`,
`db.delete()...`,
`db.insert()...`,

More usage examples here: integration-tests/tests/d1-batch.test.ts and in docs


Drizzle Kit 0.20.0

  1. New way to define drizzle.config using the defineConfig function (see the sketch after this list)
  2. Possibility to access Cloudflare D1 with Drizzle Studio using wrangler.toml file
  3. Drizzle Studio is migrating to https://local.drizzle.studio/
  4. bigint unsigned support
  5. primaryKeys and foreignKeys now can have custom names
  6. Environment variables are now automatically fetched
  7. Some bug fixes and improvements
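
For item 1, a minimal drizzle.config.ts sketch using defineConfig (field names are illustrative, based on the config examples earlier in these notes, and may differ between drizzle-kit versions):

import { defineConfig } from 'drizzle-kit';

export default defineConfig({
  schema: './db/schema.ts',
  out: './drizzle',
  driver: 'pg', // assumption: the Postgres driver name used by this drizzle-kit version
  dbCredentials: {
    connectionString: process.env.DATABASE_URL!, // assumption: a standard Postgres connection string
  },
});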

You can read more about drizzle-kit updates here