Substrate has a concept of an entity. You can think of an entity as a network-accessible object that accepts incoming messages (which we call commands) and has references to other entities (which we call links). Entities are identified by their URLs.
Any URL that provides a well-formed response to an HTTP REFLECT request can act as an entity. An entity replies to a REFLECT request with a JSON object containing a description and its commands:
{ "description": ..., "commands": { ... } }
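For illustration, a concrete REFLECT response might look like the following. The exact field contents here are hypothetical; only the overall shape (a description plus a commands map) comes from the description above:

```javascript
// A hypothetical REFLECT response body, matching the shape above.
// The command's description text is an illustrative assumption.
const reflectResponse = {
  description: "The root entity",
  commands: {
    "links:query": { description: "List the links available from this entity" }
  }
};

// An entity's advertised commands can be enumerated from the response.
const commandNames = Object.keys(reflectResponse.commands);
console.log(commandNames); // ["links:query"]
```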
Entity commands are flexible. They can have side effects in the real world or they can just return useful information.
At present there is only one command:
By default, Substrate provides a root entity at https://substrate.home.arpa/
For now the easiest way to see these commands is to open https://substrate.home.arpa/ in Chrome and use the JavaScript Console in DevTools to programmatically explore and run them.
Here's a bit of code you can use to do that.
let {ReflectCommands} = await import("/tool-call/js/commands.js")
await (await new ReflectCommands("/").reflect())
You can see it has commands:
To list the links available from the root entity, use the links:query command.
await (await new ReflectCommands("/").reflect())["links:query"].run()
This will return an object with a links field. Expanding it, you can see that the root entity has links to:
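As a sketch, if the links field maps link names to link objects carrying an entity URL (an assumption about the exact shape, as are the example link names), the result could be enumerated like this:

```javascript
// Hypothetical links:query result; the link names and the shape of each
// link object are assumptions for illustration.
const result = {
  links: {
    spaces: { url: "/spaces/" },
    services: { url: "/services/" }
  }
};

// Walk each link and collect its name and target URL.
const entries = Object.entries(result.links).map(
  ([name, link]) => `${name} -> ${link.url}`
);
console.log(entries); // ["spaces -> /spaces/", "services -> /services/"]
```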
A space is a filesystem that can serve files to the browser and can be accessed by service instances that are started with that space as a parameter.
Each space has links to:
A space's file tree links to:
and has commands to:
Each folder in a space links to:
and has commands to:
Each file in a space links to:
and has commands to:
A service instance can return whatever links it prefers. Service authors are encouraged to map their own data models onto links and entities.
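As a sketch of that idea, a service might map each record in its data model to an entity that answers REFLECT. Everything here (the handler shape, the notes data, the note:read command) is a hypothetical illustration, not Substrate's actual service API:

```javascript
// Hypothetical: a service maps each record of its data model to an entity.
// The handler signature and the "note:read" command are illustrative only.
const notes = { "1": "Buy milk" };

function handleRequest(method, path) {
  const id = path.replace("/notes/", "");
  if (!(id in notes)) return { status: 404, body: null };
  if (method === "REFLECT") {
    // Describe this entity and the commands it accepts.
    return {
      status: 200,
      body: {
        description: `Note ${id}`,
        commands: { "note:read": { description: "Return the note's text" } }
      }
    };
  }
  return { status: 405, body: null };
}

const reply = handleRequest("REFLECT", "/notes/1");
console.log(reply.body.description); // "Note 1"
```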
There are a lot of rough edges here, but hopefully this is a much better base for folks to start with.
Download the most recent ISO
Burn it to a USB drive
It should boot directly to a terminal. If not, you can open the UEFI shell and boot using that.
Shell> fs0:
fs0:> .\EFI\BOOT\bootx64.efi
Run the installer
# THIS WILL REFORMAT THE COMPUTER WITHOUT CONFIRMATION
sudo coreos-installer install /dev/nvme0n1
Then you can reboot (and remove the USB drive).
sudo reboot
Back on your development machine, add the NUC's IP address to your /etc/hosts file and ~/.ssh/config. Be sure to use the machine's actual IP address, which will not necessarily be 192.168.1.193.
# /etc/hosts
192.168.1.193 substrate.home.arpa
# ~/.ssh/config
Host substrate.home.arpa
User core
IdentityFile ~/.ssh/id_substrate
Then visit the root debug shell at: https://substrate:[email protected]/debug/shell.
Set a password for the core user (we are no longer using the substrate user), and add your authorized key with something like:
passwd core
# enter a new password
su core
mkdir -p ~/.ssh/authorized_keys.d
cat > ~/.ssh/authorized_keys.d/dev <<EOF
ssh-ed25519 ...
EOF
Build the container images, resourcedirs, and systemd units on the remote machine:
# HACK this is a workaround because we aren't properly mounting the oob files
./remote ssh sudo mkdir -p /run/media/oob/imagestore
./remote ./dev.sh systemd-reload
Under the hood, ./remote ... will:

- Sync the current git tree to the substrate.home.arpa device. This includes any staged or unstaged changes in tracked files, but not ignored or untracked files.
- Run the given command (./dev.sh systemd-reload) on the NUC itself.

Under the hood, ./dev.sh systemd-reload will:

- systemd daemon-reload
On your laptop, visit https://substrate.home.arpa/bridge/. Select your microphone, click "Unmute", and try speaking.
After the initial reload, you can limit your build to a specific image. For example:
./remote ./dev.sh systemd-reload bridge