MIT License
This repository contains the code to deploy a web application that detects and announces gestures in the Rock-Paper-Scissors-Lizard-Spock game created by Sam Kass and Karen Bryla.
It is configured for deployment with Azure Static Web Apps, and uses Neural Text-to-Speech and Custom Vision from Azure Cognitive Services.
See this blog post for more details about this app.
Fork this repository. In your fork, delete the `.github/workflows` folder.
You will need an Azure subscription. Get one here for free if you don't have one already.
Set up an Azure Static Web App resource in a region of your choice. During the setup process, name your Static Web App `rpsweb` (or something else if you'd like), sign into your GitHub account, and select your forked repository. When prompted to enter various application paths in the "Build" setup tab, leave the defaults for the "App location" and "Api location" fields unchanged, and enter "app" for the "App artifact location". Note the URL shown after your Static Web App is deployed; this is where your site will eventually be, but nothing will be live there yet.
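If you prefer the command line, the same Static Web App can be created with the Azure CLI. This is only a sketch, not the canonical setup: the resource group name, branch, and region below are assumptions, and older CLI versions name the artifact flag `--app-artifact-location` instead of `--output-location`.

```shell
# Sketch: create the Static Web App from your fork.
# "rps-rg" is a hypothetical resource group; adjust all names to your own setup.
az staticwebapp create \
  --name rpsweb \
  --resource-group rps-rg \
  --source "https://github.com/<your-username>/<your-fork>" \
  --branch main \
  --location eastus2 \
  --app-location "/" \
  --output-location "app" \
  --login-with-github   # opens a browser prompt so Azure can register the workflow
```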
Next, create an Azure Cognitive Services resource named `rpskey` in the East US region. (You can choose a different region if you wish, but these instructions assume `eastus`.) Feel free to choose the "Free F0" tier unless you expect your website to get lots of usage. (Note that neural text-to-speech is available in limited regions.) Copy "Key 1" from the "Keys and Endpoint" tab once the resource has been created.

In your forked GitHub repository, create two secrets. One should be named `COGNITIVE_SERVICES_SUBSCRIPTION_KEY`, and contain "Key 1" from your Azure Cognitive Services subscription keys (which you copied in the last step). The other should be named `COGNITIVE_SERVICES_REGION`, and should be set to `eastus` unless you created the key in another region.

In the GitHub Actions workflow file that was generated in your fork (`.github/workflows/some-filename.yml`), add the following lines between the `actions/checkout@v2` and "Build and Deploy" steps. (You can refer to the yml file in this repo for reference.)

```yaml
- name: Inject secret keys
  run: |
    sed -i.bak "s/\$(COGNITIVE_SERVICES_SUBSCRIPTION_KEY)/${{ secrets.COGNITIVE_SERVICES_SUBSCRIPTION_KEY }}/" .env
    sed -i.bak "s/\$(COGNITIVE_SERVICES_REGION)/${{ secrets.COGNITIVE_SERVICES_REGION }}/" .env
```
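To see what the injected step does, here is a standalone sketch of the same `sed` substitution run against a throwaway file. The variable name `SUBSCRIPTION_KEY` and the dummy value are illustrative only, not the app's real `.env` contents.

```shell
# A sample .env-style line containing the placeholder token the workflow replaces
printf 'SUBSCRIPTION_KEY=$(COGNITIVE_SERVICES_SUBSCRIPTION_KEY)\n' > demo.env

# The same substitution the workflow runs, with a dummy value standing in for the secret
sed -i.bak 's/\$(COGNITIVE_SERVICES_SUBSCRIPTION_KEY)/dummy-key-123/' demo.env

cat demo.env   # SUBSCRIPTION_KEY=dummy-key-123
```

The `-i.bak` flag edits the file in place while keeping a `demo.env.bak` backup, which works on both GNU and BSD sed.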
Once you have committed this change, GitHub Actions will automatically attempt to build your app again. If it succeeds (and it should!) then you should be able to see your app live on the Internet!
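The two secrets can also be created from the command line instead of the GitHub web UI. This is a sketch assuming the Azure CLI and GitHub CLI (`gh`) are installed and authenticated; the resource group name `rps-rg` is a placeholder.

```shell
# Read "Key 1" from the Cognitive Services resource created earlier
# ("rps-rg" is a hypothetical resource group name)
KEY=$(az cognitiveservices account keys list \
  --name rpskey --resource-group rps-rg --query key1 -o tsv)

# Store both values as GitHub Actions secrets in your fork
gh secret set COGNITIVE_SERVICES_SUBSCRIPTION_KEY --body "$KEY"
gh secret set COGNITIVE_SERVICES_REGION --body "eastus"
```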
If you want to be able to run the app locally, copy the `.env.local.sample` file to `.env.local` and add your keys there as well. Do not commit this file to git. After running `npm install`, you can run a local server to test the app with `npm run dev`.
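The local-run steps above condense to the following, assuming Node.js and npm are installed and you are in the root of your clone:

```shell
# Copy the sample env file, then edit it to add your own keys before running
cp .env.local.sample .env.local

npm install    # install dependencies
npm run dev    # start the local dev server
```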
You can tweak the code of this application to change the spoken phrases, and even what gestures or objects the camera detects. This is a great way to try out the capabilities of Neural Text to Speech and Custom Vision, and all you need to do is push changes to the GitHub repository and Static Web Apps will rebuild and redeploy the application for you. Check out the instructions in CUSTOMIZATION.md for details.