Run Datasette on AWS as a serverless application.

Apache-2.0 License
Sufficiently small databases (unzipped size up to ~250 MB, zipped size up to ~50 MB) will be inlined in the Lambda deployment package. Others will be published to S3 and fetched on Lambda startup.
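The packaging decision above can be sketched as follows (the size thresholds come from the text; the function name and exact logic are illustrative, not the tool's actual implementation):

```python
# Approximate AWS Lambda limits referenced above (illustrative constants;
# the real tool's thresholds may differ slightly).
MAX_UNZIPPED_BYTES = 250 * 1024 * 1024  # ~250 MB unzipped package limit
MAX_ZIPPED_BYTES = 50 * 1024 * 1024     # ~50 MB zipped package limit

def packaging_strategy(unzipped_size, zipped_size):
    """Decide whether a database can ship inside the Lambda deployment
    package or must be published to S3 and fetched at startup."""
    if unzipped_size <= MAX_UNZIPPED_BYTES and zipped_size <= MAX_ZIPPED_BYTES:
        return "inline"
    return "s3"

# A 10 MB database fits in the package; a 1 GB one goes to S3.
print(packaging_strategy(10 * 1024**2, 3 * 1024**2))   # inline
print(packaging_strategy(1024**3, 300 * 1024**2))      # s3
```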
You can see a demo using Datasette's fixtures database here: https://datasette-demo.code402.com/
Clone the repo and run `./update-stack <stack-name> [flags] <sqlite.db> [<sqlite.db> ...]`, e.g.:

```shell
git clone https://github.com/code402/datasette-lambda.git
cd datasette-lambda
./update-stack northwinds northwinds.db
```
Some Datasette flags are supported:

- `--config key:value`, to set config options
- `--cors`, to enable `Access-Control-Allow-Origin: *` headers on responses
- `--metadata <metadata.json>`, to provide metadata
And some non-Datasette flags are supported:

- `--domain example.com` or `--domain subdomain.example.com`, if `example.com` is a hosted zone in Route 53, to create a `CNAME` record that points to the CloudFront distribution
- `--prefix some/path`, to mount the Datasette app at a path other than the root

A CloudFormation stack will be created (or updated) with an S3 bucket.
The stub code and SQLite database(s) will be uploaded to the S3 bucket.
A second CloudFormation stack will then be created (or updated) with the necessary IAM roles, CloudFront, API Gateway and Lambda entities to expose your Datasette instance to the web.
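Putting the flags together, a full deployment command might look like this (the stack name, domain, prefix, and file names are placeholders, not values from the project):

```shell
# Hypothetical invocation combining the supported flags;
# substitute your own stack name, domain, and database file.
./update-stack northwinds \
  --cors \
  --metadata metadata.json \
  --domain data.example.com \
  --prefix northwinds \
  northwinds.db
```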
`./tail-logs <stack-name>` will watch the CloudWatch logs for the Lambda (NB: not the API Gateway) service; this can be useful for debugging runtime errors in Datasette itself.
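If you prefer the AWS CLI directly, v2's `aws logs tail` gives similar behaviour (the log group name here is an assumption; check the generated Lambda function's actual name in the CloudFormation stack):

```shell
# Follow the Lambda's CloudWatch log group in real time.
# The "/aws/lambda/<function-name>" pattern is the standard default,
# but the exact function name is generated by the stack.
aws logs tail "/aws/lambda/<function-name>" --follow
```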
Run `./delete-stack <stack-name>` to tear down the infrastructure.

Note: AWS has a rough edge with deleting Lambda@Edge functions. You will need to run `delete-stack`, then wait a period of time, and run it again for the entire stack to be successfully removed. Ref: AWS docs
Known issues:

- `base_url` not always being respected in generated URLs (maybe an issue in how we use Mangum?)

Maybe:

- `publish` command, fixing #236