atdatabases

TypeScript clients for databases that prevent SQL Injection

MIT License · Downloads: 4.4M · Stars: 588 · Committers: 23

atdatabases - @databases/[email protected]

Published by ForbesLindesay about 2 years ago

New Features

  • Initial release (#240)

atdatabases - @databases/[email protected]

Published by ForbesLindesay over 2 years ago

Breaking Changes

  • Update sqlite3 to v5 (#244)

atdatabases - @databases/[email protected]

Published by ForbesLindesay over 2 years ago

Bug Fixes

  • Transactions that were retried due to serialization failures were not handled correctly (#242)

atdatabases - @databases/[email protected]

Published by ForbesLindesay over 2 years ago

New Features

  • Added .bulkInsertOrIgnore(options) (#229)

    Like bulkInsert, except it ignores rows that would conflict with an existing record.
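
    For example, a minimal sketch (the options here are assumed to mirror the bulkInsert API shown below):

    // assumes `tables` and `db` come from your own database module
    await tables.users(db).bulkInsertOrIgnore({
      // rows that conflict with an existing record are silently skipped
      columnsToInsert: [`email`, `favorite_color`],
      records: users,
    });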

  • Added .bulkInsertOrUpdate(options) (#229)

    Like bulkInsert, except it updates the existing record wherever an insert would conflict.

    async function setUserFavoriteColors(
      users: {
        email: string;
        favorite_color: string;
      }[],
    ) {
      await tables.users(db).bulkInsertOrUpdate({
        columnsToInsert: [`email`, `favorite_color`],
        columnsThatConflict: [`email`],
        columnsToUpdate: [`favorite_color`],
        records: users,
      });
    }
    
atdatabases - @databases/[email protected]

Published by ForbesLindesay over 2 years ago

New Features

  • Add --schemaName parameter (#238)

atdatabases - @databases/[email protected]

Published by ForbesLindesay over 2 years ago

New Features

  • Added bulkInsertStatement utility (#229)

    This returns the SQL statement rather than immediately executing it, which is useful because it lets you append your own ON CONFLICT handlers.
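
    A sketch of how this might be composed, assuming the options mirror bulkInsert and that `db` and `tables` come from your own database module:

    import {sql} from '@databases/pg';
    import db, {tables} from './database'; // hypothetical module

    async function insertUsersIgnoringEmailConflicts(
      users: {email: string; favorite_color: string}[],
    ) {
      // bulkInsertStatement returns an SQLQuery instead of running it,
      // so we can append our own conflict handling before executing
      const insertStatement = tables.users(db).bulkInsertStatement({
        columnsToInsert: [`email`, `favorite_color`],
        records: users,
      });
      await db.query(sql`${insertStatement} ON CONFLICT (email) DO NOTHING`);
    }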

atdatabases - @databases/[email protected]

Published by ForbesLindesay over 2 years ago

New Features

  • Add --schemaName parameter (#238)

atdatabases - @databases/[email protected]

Published by ForbesLindesay over 2 years ago

Bug Fixes

  • An error was thrown when a query returned NULL in a column of type TIMESTAMP (#235)

atdatabases - @databases/[email protected]

Published by ForbesLindesay over 2 years ago

Bug Fixes

  • pg-test run some-command always exited with code 0, even if the command failed (#227)

  • Help text for sub-commands was not shown correctly (#227)

atdatabases - @databases/[email protected]

Published by ForbesLindesay over 2 years ago

Bug Fixes

  • Fix error message typo (#226)

atdatabases - @databases/[email protected]

Published by ForbesLindesay over 2 years ago

Bug Fixes

  • Fix error message typo (#226)

atdatabases - @databases/[email protected]

Published by ForbesLindesay over 2 years ago

Bug Fixes

  • mysql-test run some-command always exited with code 0, even if the command failed (#227)

  • Help text for sub-commands was not shown correctly (#227)

atdatabases - @databases/[email protected]

Published by ForbesLindesay over 2 years ago

Bug Fixes

  • Fix error message typo (#226)

atdatabases - @databases/[email protected]

Published by ForbesLindesay over 2 years ago

New Features

  • Add .andWhere to select queries (#225)

    This lets you add extra conditions to a .bulkFind query. For example:

    import {gt} from '@databases/pg-typed';
    import db, {posts} from './database';
    
    export async function getPostsSince(since: Date) {
      return await posts(db)
        .bulkFind({
          whereColumnNames: [`org_id`, `user_id`],
          whereConditions: [
            {org_id: 1, user_id: 10},
            {org_id: 2, user_id: 20},
          ],
        })
        .andWhere({created_at: gt(since)})
        .all();
    }
    
atdatabases - @databases/[email protected]

Published by ForbesLindesay over 2 years ago

Bug Fixes

  • If a connection was terminated unexpectedly while not fully idle (i.e. not sitting in the pool) but also not actively executing a query, it could crash the node.js process (#224)

    This happened because @databases/pg only attached the idle-connection error handler while it saw the connection as idle. The problem is that it's possible to do non-database async work within a transaction, such as reading and writing files. This can leave a connection briefly looking "active" without actually executing a query. If the connection was terminated unexpectedly at that point (e.g. because the Postgres server was rebooted), it would throw an unhandled exception.

    We now handle the error and remove the connection from the pool after the transaction.

  • We were recycling connections that had errored, provided that we could successfully execute a simple query on the connection. This is wasteful, but more importantly it may fail to detect some edge cases where the connection is in an unexpected state (such as being stuck in a read-only transaction) but can still execute the test query (#224)

    We now check the error that was thrown to determine whether the connection can still be recycled. If the error was anything other than a short list of common errors (various types of constraint violation) we close the connection and get a fresh connection for the pool.

atdatabases - @databases/[email protected]

Published by ForbesLindesay over 2 years ago

New Features

  • Allow using @databases/pg without a connection string (#222)

    This lets you use $PGHOST, $PGUSER, etc. to configure your connection, which works better in environments that expect you to connect over a unix socket and/or when you only want to store your database password in a secret manager.
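
    A minimal sketch, assuming the standard libpq environment variables are set:

    import createConnectionPool from '@databases/pg';

    // With no connection string, the connection is configured from the
    // environment: $PGHOST, $PGPORT, $PGUSER, $PGPASSWORD, $PGDATABASE, etc.
    const db = createConnectionPool();

    export default db;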

atdatabases - @databases/[email protected]

Published by ForbesLindesay over 2 years ago

Performance Improvements

  • Skip quote escaping for identifiers that do not contain quotes (#223)

    The vast majority of identifiers in most applications do not include any quote characters. Since pg-typed & mysql-typed have to pass every table/column name to escapeDatabaseIdentifier, this could have been causing a significant performance penalty for some applications.

    On my local machine, escaping 10,000,000 Postgres identifiers goes from approximately 2,400ms to approximately 600ms.
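
    As an illustration of the fast path (a sketch, not the library's actual code):

    function escapeIdentifier(name: string): string {
      // most identifiers contain no double quotes, so the relatively
      // expensive replace can usually be skipped entirely
      if (!name.includes(`"`)) return `"${name}"`;
      return `"${name.replace(/"/g, `""`)}"`;
    }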

atdatabases - @databases/[email protected]

Published by ForbesLindesay over 2 years ago

Refactors

  • Simplify usage of UNNEST (#218)

    You can use a single UNNEST call with multiple arrays. This is the recommended way to produce rows from multiple parallel arrays.
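
    For example, using the sql tagged template (the table and column names here are hypothetical):

    import {sql} from '@databases/pg';
    import db from './database'; // hypothetical connection module

    async function setFavoriteColors(emails: string[], colors: string[]) {
      // a single UNNEST call expands the parallel arrays into one row per element
      await db.query(sql`
        INSERT INTO users (email, favorite_color)
        SELECT * FROM UNNEST(${emails}::TEXT[], ${colors}::TEXT[])
      `);
    }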

atdatabases - @databases/[email protected]

Published by ForbesLindesay over 2 years ago

New Features

  • Added .orderByAscDistinct and .orderByDescDistinct to SelectQuery (#216)

    This lets you select only the first record for each distinct value of the ordered column, e.g. only the first record with a given id.
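
    For example, a sketch assuming an events table with a user_id column:

    async function getOneEventPerUser() {
      // keeps only the first row for each distinct user_id
      return await events(db)
        .find()
        .orderByAscDistinct(`user_id`)
        .all();
    }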

  • You can combine conditions on a field with allOf (#217)

    For example, you can do:

    import {allOf, anyOf, greaterThan, lessThan} from '@databases/pg-typed';
    
    /**
     * Get posts where:
     *
     *   timestamp >= start AND timestamp < end
     */
    async function getPostsBetween(start: Date, end: Date) {
      return await posts(db)
        .find({
          timestamp: allOf([anyOf([greaterThan(start), start]), lessThan(end)]),
        })
        .all();
    }
    
  • You can ignore case when comparing strings using caseInsensitive (#217)

    For example, you can do:

    import {caseInsensitive} from '@databases/pg-typed';
    
    /**
     * Return true if there is a user with this username, ignoring
     * the case of the username, so ForbesLindesay would be equivalent
     * to forbeslindesay
     */
    async function userExists(username: string) {
      return (
        0 !==
        (await users(db).count({
          username: caseInsensitive(username),
        }))
      );
    }
    
  • You can test fields in JSON and JSONB columns using jsonPath (#217)

    For example, you can do:

    import {jsonPath} from '@databases/pg-typed';
    
    /**
     * return events where:
     *
     *   event_data.type = 'feedback'
     */
    async function getFeedbackEvents() {
      return await events(db)
        .find({
          event_data: jsonPath(['type'], 'feedback'),
        })
        .all();
    }
    
  • You can combine entire queries using or and and (#217)

    For example, you can do:

    import {or, and, greaterThan} from '@databases/pg-typed';
    
    /**
     * return posts where:
     *
     *   user_id=${authorId}
     *   AND (
     *     (is_public IS TRUE AND view_count > 1000)
     *     OR (is_public IS FALSE AND view_count > 100)
     *   )
     */
    async function getPopularPostsByAuthor(authorId: User['id']) {
      return await posts(db)
        .find(
          and(
            {user_id: authorId},
            or(
              {
                is_public: true,
                view_count: greaterThan(1_000),
              },
              {
                is_public: false,
                view_count: greaterThan(100),
              },
            ),
          ),
        )
        .all();
    }
    

    The same query could be written without using and:

    import {or, greaterThan} from '@databases/pg-typed';
    
    /**
     * return posts where:
     *
     *   (user_id=${authorId} AND is_public IS TRUE AND view_count > 1000)
     *   OR (user_id=${authorId} AND is_public IS FALSE AND view_count > 100)
     */
    async function getPopularPostsByAuthor(authorId: User['id']) {
      return await posts(db)
        .find(
          or(
            {
              user_id: authorId,
              is_public: true,
              view_count: greaterThan(1_000),
            },
            {
              user_id: authorId,
              is_public: false,
              view_count: greaterThan(100),
            },
          ),
        )
        .all();
    }
    
  • Added error codes and the isNoResultFoundError and isMultipleResultsFoundError utilities to allow testing for common errors thrown by pg-typed (#211)
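
    For example (a sketch, assuming the utilities are exported from @databases/pg-typed):

    import {isNoResultFoundError} from '@databases/pg-typed';
    import db, {users} from './database';

    async function getUserOrNull(email: string) {
      try {
        return await users(db).findOneRequired({email});
      } catch (ex) {
        // distinguish "no matching row" from genuine database failures
        if (isNoResultFoundError(ex)) return null;
        throw ex;
      }
    }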

  • Added the table name to the "More than one row matched..." error (#211)

  • Added a new databaseSchema option (#211)

    If you pass the databaseSchema generated by pg-schema-cli into pg-typed, it enables the new "bulk" operations:

    • bulkFind
    • bulkInsert
    • bulkUpdate
    • bulkDelete

    Check the pg-typed documentation for more details.
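
    Wiring this up might look like the following sketch (the generated file paths are assumptions based on typical pg-schema-cli output):

    import tables from '@databases/pg-typed';
    import DatabaseSchema from './__generated__';
    // schema.json is generated by the schema CLI (see the notes further down)
    import databaseSchema from './__generated__/schema.json';

    export const {users, posts} = tables<DatabaseSchema>({databaseSchema});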

  • Added .findOneRequired to the table API (#211)

    This behaves exactly like .findOne except that it throws an error rather than returning null if the record is not found.

  • Added .one and .oneRequired to SelectQuery (#211)

    This can be helpful if you want to select only a subset of fields. For example:

    async function getPostWithAuthorName(id: DbPost['id']) {
      const post = await posts(database).findOne({id});
      if (!post) return null;
      const author = await users(database)
        .find({id: post.author_id})
        .select(`name`)
        .oneRequired();
      return {post, author: author.name};
    }
    
  • Added table.key utility for following foreign keys (#211)

    This is like a type-safe version of the inQueryResults field query:

    async function getPostAuthor(postId: DbPost['id']) {
      return await users(database).findOne({
        // find users where the id matches the
        // author_id field in posts where the post's
        // id is postId
        id: posts.key(`author_id`, {id: postId}),
      });
    }
    

    These can even be nested:

    async function getPostAuthorOrg(postId: DbPost['id']) {
      return await orgs(database).findOne({
        id: users.key(
          `org_id`,
          {id: posts.key(`author_id`, {id: postId})}
        ),
      });
    }
    

Performance Improvements

  • Skip running queries that are guaranteed to return no results (#217)

    If you had a query like:

    import {anyOf} from '@databases/pg-typed';
    
    async function getUsers(ids: number[]) {
      return users(db).find({id: anyOf(ids)}).all();
    }
    

    and you called getUsers([]), we used to run the query:

    SELECT * FROM users WHERE FALSE
    

    We now detect that the WHERE condition always evaluates to FALSE and skip sending the query to the database.

atdatabases - @databases/[email protected]

Published by ForbesLindesay over 2 years ago

New Features

  • Re-export table and insert record types from index.ts by default (#211)

    You can set types.tableReExportFileName and types.tableInsertParametersReExportFileName to null to disable this behaviour.

  • Generate a schema.json file containing an array of table names and some basic metadata about the columns (#211)

    This structure can be passed to pg-typed instead of serializeValue, enabling not only proper JSON serialisation but also the new bulk operation methods.

    If you don't want to generate this file, you can set schemaJsonFileName to null to disable it.