Project built during Rocketseat's Next Level Week #13


Video Describer

Allows users to upload an .mp3 file, get a transcription of the audio, and then send a prompt to the OpenAI API to generate text that follows the prompt's rules.

Table of Contents

  • Installing
  • Configuring
    • Migrations
    • .env
  • Usage
    • Routes
    • Requests
  • Running the tests
    • Coverage report

Installing

Easy peasy lemon squeezy:

$ yarn

Or:

$ npm install

ESLint and Prettier are installed and configured to keep the code clean and consistent.

Configuring

The application uses a single database: SQLite. For the fastest setup, it is recommended to use docker-compose; you just need to bring all services up:

$ docker-compose up -d

Migrations

Remember to run the database migrations:

$ npx prisma migrate dev

See more information on Prisma Migrate.
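
If you want to double-check that the migrations were applied, a tiny Prisma script can do it. The sketch below is not part of the repository and assumes the schema declares a prompt model (the one backing the /prompts route); adjust the model name to whatever prisma/schema.prisma actually defines.

// check-db.ts — hypothetical helper, not part of the project
import { PrismaClient } from '@prisma/client'

const prisma = new PrismaClient()

async function main() {
  // Reads from the (assumed) prompt model to confirm that the SQLite file
  // and the migrated tables are in place.
  const prompts = await prisma.prompt.findMany()
  console.log(`Found ${prompts.length} prompt(s)`)
}

main()
  .catch((error) => {
    console.error(error)
    process.exit(1)
  })
  .finally(() => prisma.$disconnect())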

.env

In this file you configure the database connection and the OpenAI API key. Rename the .env.example file in the root directory to .env, then update it with your settings.

key            | description              | default
DATABASE_URL   | Database connection URL. | file:./dev.db
OPENAI_API_KEY | OpenAI API key.          | -

Refer to Where do I find my Secret API Key? to get/generate your OPENAI_API_KEY
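
With the defaults from the table above, a working .env would look roughly like this (the OPENAI_API_KEY value is just a placeholder for your own key):

DATABASE_URL="file:./dev.db"
OPENAI_API_KEY="your-openai-api-key"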

Usage

To start up the app run:

$ yarn dev:server

Or:

$ npm run dev:server

Routes

route | HTTP method | params | description
/prompts | GET | - | Returns the available prompts.
/upload | POST | Multipart payload with a file field containing an .mp3 file. | Uploads an .mp3 file.
/videos/:id/transcription | POST | id route parameter and a body with a prompt of keywords. | Requests the transcription of the video (.mp3 file).
/videos/:id/generate | POST | id route parameter and a body with the prompt and temperature. | Generates an output based on the prompt and temperature sent.

Requests

  • POST /upload

Request body: multipart form data with a file field containing the .mp3 file.

  • POST /videos/:id/transcription

Request body:

{
  "prompt": "skate, skateboarding, BASAC"
}

  • POST /videos/:id/generate

Request body:

{
  "prompt": "Generate a small summary for the following text: '''\n{transcription}\n'''",
  "temperature": 0.5
}

{transcription} is a placeholder for the transcription generated in the previous route.
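
To illustrate the whole flow end to end, the snippet below chains the three routes from a small Node script. It is only a sketch: the port (3333) and the response shapes are assumptions, so check the server code for the exact values.

// describe-video.ts — hypothetical client; port and response shapes are assumed
import { readFile } from 'node:fs/promises'

const API_URL = 'http://localhost:3333' // assumed port

async function run() {
  // 1. Upload the .mp3 file as a multipart payload with a `file` field
  const audio = await readFile('./audio.mp3')
  const form = new FormData()
  form.append('file', new Blob([audio], { type: 'audio/mpeg' }), 'audio.mp3')

  const uploadResponse = await fetch(`${API_URL}/upload`, { method: 'POST', body: form })
  const { video } = (await uploadResponse.json()) as { video: { id: string } } // assumed shape

  // 2. Request the transcription, passing a prompt of keywords
  await fetch(`${API_URL}/videos/${video.id}/transcription`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ prompt: 'skate, skateboarding, BASAC' }),
  })

  // 3. Generate the output based on the prompt and temperature
  const generateResponse = await fetch(`${API_URL}/videos/${video.id}/generate`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      prompt: "Generate a small summary for the following text: '''\n{transcription}\n'''",
      temperature: 0.5,
    }),
  })

  console.log(await generateResponse.json())
}

run().catch(console.error)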

Running the tests

Jest was the tool of choice to test the app. To run the tests:

$ yarn test

Or:

$ npm run test

Coverage report

You can see the coverage report inside tests/coverage. It is automatically created after the tests run.
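
That path is not Jest's default, so the project's config presumably points the coverage output there. A minimal jest.config.ts along these lines would produce the same layout (the options in the actual repository may differ):

// jest.config.ts — illustrative only; the repository's config may differ
import type { Config } from 'jest'

const config: Config = {
  testEnvironment: 'node',
  // Always collect coverage and write the report to tests/coverage
  collectCoverage: true,
  coverageDirectory: 'tests/coverage',
}

export default config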