Next-gen, Pusher-compatible, open-source WebSockets server. Simple, fast, and resilient. 📣
AGPL-3.0 License
Published by rennokki over 2 years ago
Fixed NPM release
Published by rennokki over 2 years ago
NATS is an alternative to Redis Pub/Sub for inter-process communication. Starting with this version, you may use the NATS adapter instead of the Redis Pub/Sub one. The driver is available for testing and you may use it with your own NATS server.

NATS is expected to improve performance and prepare soketi for the more robust distributed-systems features NATS offers, such as additional KV operations like `incr` or `add`, which will allow using the built-in Key-Value storage for rate limiters.
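To hedge a bit on why an atomic KV `incr` matters for rate limiting across nodes, here is a minimal TypeScript sketch; the `KeyValueStore` interface, `InMemoryKv`, and `allowRequest` are hypothetical names for illustration, not the real NATS client API:

```typescript
// Hedged sketch of why an atomic KV incr helps rate limiting across
// nodes: one increment per request, no read-modify-write race. The
// KeyValueStore interface is hypothetical, not the real NATS client API.
interface KeyValueStore {
  incr(key: string): Promise<number>; // atomic increment, returns new value
}

// In-memory stand-in for a NATS KV bucket, for demonstration only.
class InMemoryKv implements KeyValueStore {
  private counts = new Map<string, number>();

  async incr(key: string): Promise<number> {
    const next = (this.counts.get(key) ?? 0) + 1;
    this.counts.set(key, next);
    return next;
  }
}

async function allowRequest(kv: KeyValueStore, appId: string, limit: number): Promise<boolean> {
  const count = await kv.incr(`rate:${appId}`);
  return count <= limit;
}
```

Because the increment is a single atomic operation in the store, multiple soketi nodes can share one counter without coordinating among themselves.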
init() handlers for adapters, and closing connections after the server gets closed
With the introduction of NATS, there was a need to ensure the adapters initialize before the server bootstrapping runs. This way, the adapters now behave predictably.
As a result, the app got some polishing for the namespaces and plugins (adapters, rate limiters, queue managers), so that after the server closes, the handlers for their respective connections (like Redis or NATS connections) close without hanging. The sockets are also evicted from memory on each server close.
We tested it thoroughly and it's not expected to break anything.
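A minimal sketch of the lifecycle ordering described above, assuming a generic adapter interface; the names are illustrative, not soketi's actual internals:

```typescript
// Minimal sketch of the lifecycle ordering described above: adapters
// initialize before the server starts, and their connections close after
// the server stops. Interface and class names are illustrative.
interface Adapter {
  init(): Promise<void>;       // e.g. connect to Redis or NATS
  disconnect(): Promise<void>; // close the underlying connections
}

class Server {
  started = false;

  constructor(private adapter: Adapter) {}

  async start(): Promise<void> {
    await this.adapter.init(); // adapters come up before bootstrapping
    this.started = true;
  }

  async stop(): Promise<void> {
    this.started = false;            // stop accepting traffic first
    await this.adapter.disconnect(); // then release Redis/NATS handlers
  }
}
```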
Published by rennokki over 2 years ago
Hotfixed the previous patch
Published by rennokki over 2 years ago
Starting with this version, you can set the maximum presence member payload size, the maximum number of presence channel members, and other limits at the app level, with fallback to the server-declared defaults.
If you use MySQL, you need to add the following fields to your table:

```sql
`max_presence_members_per_channel` tinyint(1) NULL,
`max_presence_member_size_in_kb` tinyint(1) NULL,
`max_channel_name_length` tinyint(1) NULL,
`max_event_channels_at_once` tinyint(1) NULL,
`max_event_name_length` tinyint(1) NULL,
`max_event_payload_in_kb` tinyint(1) NULL,
`max_event_batch_size` tinyint(1) NULL
```
Setting any of them to `null` or `''` will ignore the setting and use the server-level declared defaults.

Existing apps running on <0.29.0 will still work even if you don't add these fields after migrating to 0.29.0. You should add them to keep your database up to date and to have the option to apply limits to your apps later on.
If you use PostgreSQL, you need to add the following fields to your table:

```sql
max_presence_members_per_channel integer DEFAULT NULL,
max_presence_member_size_in_kb integer DEFAULT NULL,
max_channel_name_length integer DEFAULT NULL,
max_event_channels_at_once integer DEFAULT NULL,
max_event_name_length integer DEFAULT NULL,
max_event_payload_in_kb integer DEFAULT NULL,
max_event_batch_size integer DEFAULT NULL
```
Setting any of them to `null` or `''` will ignore the setting and use the server-level declared defaults.

Existing apps running on <0.29.0 will still work even if you don't add these fields after migrating to 0.29.0. You should add them to keep your database up to date and to have the option to apply limits to your apps later on.
Your items in DynamoDB can have the following new fields:
```
MaxPresenceMembersPerChannel: { N: '-1' },
MaxPresenceMemberSizeInKb: { N: '-1' },
MaxChannelNameLength: { N: '-1' },
MaxEventChannelsAtOnce: { N: '-1' },
MaxEventNameLength: { N: '-1' },
MaxEventPayloadInKb: { N: '-1' },
MaxEventBatchSize: { N: '-1' },
```
Not setting any of the above fields will ignore the setting and fall back to the limits associated with the server-level declared defaults.
Existing apps running on <0.29.0 will still work even if you don't add these fields after migrating to 0.29.0. You should add them to keep your database up to date and to have the option to apply limits to your apps later on.
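The fallback behavior described for the SQL variants above can be sketched as follows; `resolveLimit` is a hypothetical helper for illustration, not soketi's code:

```typescript
// Sketch of the app-level limit fallback described in the release notes.
// resolveLimit and the names below are illustrative, not soketi's code.
type AppLimit = number | null | '' | undefined;

function resolveLimit(appValue: AppLimit, serverDefault: number): number {
  // null, '' or a missing field all mean "use the server default"
  if (appValue === null || appValue === '' || appValue === undefined) {
    return serverDefault;
  }
  return appValue;
}

// e.g. max_presence_members_per_channel
console.log(resolveLimit(null, 100)); // falls back to the server default
console.log(resolveLimit(50, 100));   // app-level override wins
```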
MODE variable for the running mode
The app itself is full-stack: it works as an HTTP API, a WebSocket server, and a queue processor, all at once. The MODE variable is now introduced to help you scale deployments that use external drivers (like Redis for the queue). You may want to run two fleets: one that actively serves HTTP/WS traffic for your user base, and one that scales independently to process queues for webhooks.
MODE=full
The default mode, and the mode in which apps <0.29.0 currently run. (No breaking changes are expected for this feature.)
MODE=server
It does not process queues. It will only serve HTTP/WS requests to your clients.
MODE=worker
You need to pair this mode with the PORT variable to choose a different port to run on. Servers running in this mode still run an HTTP server, so you can check /metrics, / (health checks), and /ready (readiness checks). There are NO WebSocket endpoints running.
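A sketch of how such a MODE switch could gate which subsystems start; the subsystem names below are assumptions for illustration, not soketi's internals:

```typescript
// Illustrative sketch of a MODE-style switch gating which subsystems
// start; subsystem names here are assumptions, not soketi's internals.
type Mode = 'full' | 'server' | 'worker';

function subsystemsFor(mode: Mode): string[] {
  // Every mode keeps an HTTP server for /metrics, / and /ready checks.
  const subsystems: string[] = ['http'];
  if (mode === 'full' || mode === 'server') {
    subsystems.push('websockets'); // serve WS clients
  }
  if (mode === 'full' || mode === 'worker') {
    subsystems.push('queue-worker'); // process webhook queues
  }
  return subsystems;
}

const mode = (process.env.MODE ?? 'full') as Mode;
console.log(subsystemsFor(mode));
```

This is why the two fleets can scale independently: a `server` fleet never touches the queue, and a `worker` fleet never accepts WebSocket connections.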
Published by rennokki over 2 years ago
Added an /accept-traffic endpoint to see if new traffic can be redirected to the current instance (https://github.com/soketi/soketi/commit/537d420ff70aaa560ce9de1b4a0952ad14b27e40)
Published by rennokki almost 3 years ago
Patched package-lock.json
Published by rennokki almost 3 years ago
Added the SSL_CA variable to specify the CA file path (https://github.com/soketi/soketi/pull/285)
pusher:connection_established is sent (https://github.com/soketi/soketi/pull/287)
Published by rennokki almost 3 years ago
Published by rennokki almost 3 years ago
Fixed handling of failed .send() calls that looked at backpressure in a wrong way or that tried to send messages to already closed sockets (https://github.com/soketi/soketi/pull/282)
Published by rennokki almost 3 years ago
Published by rennokki almost 3 years ago
Fixed the PresenceMember types within the app (no effect on servers) (https://github.com/soketi/soketi/pull/248, thanks @stayallive for pointing out)
Added the /batch_events endpoint (https://github.com/soketi/soketi/pull/243, thanks to @stayallive for review 👍)
For /events and /batch_events, in case the payload is too big, the app will return 413 instead of 400, according to the documentation
Published by rennokki almost 3 years ago
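The 413-versus-400 rule above can be sketched as a simple size check; `statusForEvent` is a hypothetical helper for illustration, not soketi's actual handler:

```typescript
// Illustrative check for the rule above: payloads over the configured
// size return 413 Payload Too Large instead of 400. statusForEvent is a
// hypothetical helper, not soketi's handler.
function statusForEvent(payloadBytes: number, maxPayloadKb: number): number {
  if (payloadBytes > maxPayloadKb * 1024) {
    return 413; // Payload Too Large, per the documentation
  }
  return 200; // payload accepted
}
```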
Published by rennokki almost 3 years ago