Bee is a Swarm client implemented in Go. It's the basic building block for the Swarm network: a private, decentralized, and self-sustaining network for permissionless publishing and access to your (application) data.
BSD-3-CLAUSE License
Published by istae 10 months ago
The Bee team is excited to announce release candidate v2.0.0-rc1! 🎉
In this release we introduce a brand new data redundancy mechanism for Swarm which, under the hood, makes use of Reed-Solomon erasure coding and dispersed replicas.
A new header, `Swarm-Redundancy-Level: n`, can be passed to upload requests to turn on erasure coding, where n is in [0, 4]. Refer to the table below for the different levels and their expected error rates.
Redundancy Level | Pseudonym | Expected Chunk Retrieval Error Rate |
---|---|---|
0 | None | 0% |
1 | Medium | 1% |
2 | Strong | 5% |
3 | Insane | 10% |
4 | Paranoid | 50% |
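As an illustrative sketch of how this header might be set from Go, the snippet below builds an upload request against a local node's HTTP API. The node address, batch ID, and helper function name are assumptions for the example, not part of this release.

```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
)

// newUploadRequest builds a /bzz upload request with erasure coding enabled
// via the Swarm-Redundancy-Level header. The node URL and batch ID are
// placeholders; substitute values for your own node.
func newUploadRequest(node, batchID string, level int, payload []byte) (*http.Request, error) {
	req, err := http.NewRequest(http.MethodPost, node+"/bzz", bytes.NewReader(payload))
	if err != nil {
		return nil, err
	}
	req.Header.Set("Content-Type", "application/octet-stream")
	req.Header.Set("Swarm-Postage-Batch-Id", batchID)
	// Levels 0-4 map to None, Medium, Strong, Insane, Paranoid.
	req.Header.Set("Swarm-Redundancy-Level", fmt.Sprintf("%d", level))
	return req, nil
}

func main() {
	req, err := newUploadRequest("http://127.0.0.1:1633", "<batch-id>", 2, []byte("hello swarm"))
	if err != nil {
		panic(err)
	}
	fmt.Println(req.Header.Get("Swarm-Redundancy-Level")) // prints "2"
	// To actually upload: resp, err := http.DefaultClient.Do(req)
}
```

Higher levels trade more storage overhead for a lower expected retrieval error rate, so choose the level per upload based on how critical the data is.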
A storage radius decrease is now triggered when the reserve size within the radius (chunks in the reserve with proximity order >= current radius), rather than the total reserve size, falls below 50% of the reserve capacity.
The btcd crypto library that Bee uses for things like signing chunks and maintaining keys has been updated.
Warning: this is an experimental release. Only users interested in trying out the features and changes in this release should update to this version.
For questions, comments, and feedback, reach out on Discord.
Published by istae 11 months ago
This is a patch release that properly resets the batchstore so that batches can be resynced from the new postage stamp contract.
For questions, comments, and feedback, reach out on Discord.
For a full PR rundown please consult the v1.18.1 diff.
Published by acha-bill 11 months ago
Full Changelog: https://github.com/ethersphere/bee/compare/v1.18.0-rc7...v1.18.0-rc8
Published by istae 12 months ago
With this release, many hardening issues were tackled. The team's focus has been mostly on improving connectivity of nodes across the network and bringing performance improvements to chunk caching operations.
Also featured is a new DB command that will perform a chunk validation of the chunkstore, similar to the optional step in the compaction command.
The retrieval protocol now has a multiplexing capability similar to pushsync's: multiple parallel requests are fired from a forwarder peer that can directly access the neighborhood of a chunk.
For questions, comments, and feedback, reach out on Discord.
The `POST /pins/{ref}` API endpoint now stores chunks in parallel. (#4427)
Published by istae about 1 year ago
In this small but important release, the Bee team introduces a new db compaction command to recover disk space. To prevent any data loss, operators should run the compaction on a copy of the localstore directory and, if successful, replace the original localstore with the compacted copy. The command is available as a sub-command under db as such:
bee db compact --data-dir=
The pushsync and retrieval protocols now feature a fallback mechanism that tries unreachable and unhealthy peers in case no reachable or healthy peers are left.
We've also added new logging guidelines for contributors in the readme.
For questions, comments, and feedback, reach out on Discord.
For a full PR rundown please consult the v1.17.5 diff.
Published by istae about 1 year ago
For the past few weeks, the Bee team's focus has been on improving network health, observability, and user experience.
Node operators can now mine an overlay address for a specific neighborhood for fresh nodes by using the new --target-neighborhood
option. The new Swarm Scanner neighborhoods page displays neighborhood sizes and is a great tool to use in tandem with this new feature.
Uploads are now by default deferred, as they were before the v1.17.0 release.
Additionally, the default postage stamp batch type is now immutable.
Another behavioral change is that swap-enable now defaults to false, and the bee start command without additional options starts the node in ultra-light mode. Full node operators must enable the option with swap-enable: true, if not already enabled, for their nodes to continue to operate as normal.
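For illustration, a full node's YAML configuration might carry the following keys; the file path is an example from packaged installs, and only the relevant keys are shown.

```yaml
# e.g. /etc/bee/bee.yaml -- illustrative fragment, not a complete config
full-node: true
swap-enable: true
```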
We have also improved logging across many different services and protocols.
Pushsync and retrieval protocols now report error messages back to the origin node instead of the generic "stream reset" errors. As a result, the protocol version has been bumped, making this a breaking change. It is imperative that operators update their nodes as soon as possible.
Previously, nuking a node could cause syncing problems because syncing intervals were never reset. This issue has now been tackled by having nodes detect that a peer's localstore has been nuked; they do this by comparing the peer's localstore epoch time across connections.
For questions, comments, and feedback, reach out on Discord.
The bee start command without additional options starts the node in ultra-light mode. (#4326)
For a full PR rundown please consult the v1.17.4 diff.
Published by istae over 1 year ago
The Bee team is excited to announce v1.16.0! 🎉
The team has been busy researching and testing ways to help Swarm remain a reliable and healthy network, and to that end, we are happy to announce a brand new health service, salud.
With salud, nodes periodically perform health checks on their connected peers using data acquired from the status protocol. As of this release, the checks are based on the response duration of the status protocol message, the number of connected peers, the storage radius, and the total batch commitment as computed by each peer.
For response duration and number of connected peers, each peer must be within the 80th percentile to be deemed healthy. Radius and batch commitment are measured in terms of the most common values reported by the connected peers. Measurements are created anew for each periodic health check.
A self-check is also in place: if the node's own storage radius does not match the rest of the network's, the node won't participate in the Schelling game.
With this release, only the pushsync protocol utilizes the filtering of peers for requests based on the status of health.
For questions, comments, and feedback, reach out on Discord.
For a full PR rundown please consult the v1.16.0 diff.
Published by istae over 1 year ago
The Bee team is excited to announce v1.15.0!
With this release, we introduce a new pushsync feature to improve chunk syncing and replication in the network during uploads. Peers that forward a chunk into its neighborhood will fire multiple requests to target multiple storer nodes. Forwarding also now terminates at the first peer within the neighborhood and no longer continues to the closest peer in the network.
As a result of the change in protocol logic, the protocol version has been bumped, so it's important that you upgrade your nodes to the latest version.
We've added two new fields to the status protocol response: total amount from the chainstate and mode of operation of the peer (light or full mode).
For questions, comments, and feedback, reach out on Discord.
For a full PR rundown please consult the v1.15.0 diff.
Published by istae over 1 year ago
This is a patch release to fix the inaccurate number of active nodes reported in the Swarm Scanner.
With this change, light nodes attempting to connect to a peer with a full topology bin won't get rejected.
For questions, comments, and feedback, reach out on Discord.
Published by istae over 1 year ago
The Bee team is excited to announce the latest release!
The main focus of the team for the past few weeks has been tightening loose ends around chunk syncing.
The release also features a new API endpoint that returns the node's storage-incentives status, such as the total reward won and the last round the node participated in and won.
For a full PR rundown please consult the v1.12.0 milestone.
Published by istae almost 2 years ago
The Bee team is excited to announce the latest release which mostly consists of minor fixes to chunk syncing and improved uploading experience.
The important news is that uploads with mutable batches (the most common type of batch) are no longer terminated when the batch's maximum utilization is reached; newer chunks belonging to the batch simply replace older chunks in the network.
For a full PR rundown please consult the v1.11.0 milestone.