Bee is a Swarm client implemented in Go. It’s the basic building block for the Swarm network: a private, decentralized, and self-sustaining network for permissionless publishing and access to your (application) data.
BSD-3-Clause License
The Bee team is elated to announce the official v2.0.0 release. 🎉
In this release we introduce a brand new mechanism of data redundancy in Swarm with erasure coding, which, under the hood, makes use of Reed-Solomon erasure coding and dispersed replicas. This brings a whole new level of protection against potential data loss.
A new header, `Swarm-Redundancy-Level: n`, can be passed to upload requests to turn on erasure coding, where n is in the range [0, 4]. Refer to the table below for the different levels of redundancy and chunk loss tolerance.
| Redundancy Level | Pseudonym | Chunk Retrieval Failure Tolerance |
|---|---|---|
| 0 | None | 0% |
| 1 | Medium | 1% |
| 2 | Strong | 5% |
| 3 | Insane | 10% |
| 4 | Paranoid | 50% |
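As a minimal sketch of how a client might use the new header, the Go snippet below builds a `/bzz` upload request with `Swarm-Redundancy-Level` set. The API address (`localhost:1633` is Bee's conventional default) and the helper function name are illustrative; a real upload would also need a valid `Swarm-Postage-Batch-Id` header.

```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
	"strconv"
)

// buildUploadRequest prepares a /bzz upload request with the
// Swarm-Redundancy-Level header set to the given level (0-4).
func buildUploadRequest(apiURL string, level int, payload []byte) (*http.Request, error) {
	req, err := http.NewRequest(http.MethodPost, apiURL+"/bzz", bytes.NewReader(payload))
	if err != nil {
		return nil, err
	}
	req.Header.Set("Content-Type", "application/octet-stream")
	// Level 4 ("Paranoid") tolerates up to 50% chunk retrieval failures.
	req.Header.Set("Swarm-Redundancy-Level", strconv.Itoa(level))
	return req, nil
}

func main() {
	req, err := buildUploadRequest("http://localhost:1633", 4, []byte("hello swarm"))
	if err != nil {
		panic(err)
	}
	fmt.Println(req.Method, req.URL.Path, req.Header.Get("Swarm-Redundancy-Level"))
}
```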
With this milestone release, the Swarm Testnet is now officially running on the Sepolia blockchain.
Apply the configuration changes below to a fresh node to be able to connect to the Sepolia Testnet.
```yaml
bootnode:
  - /dnsaddr/sepolia.testnet.ethswarm.org
blockchain-rpc-endpoint: {a-sepolia-rpc-endpoint}
```
For questions, comments, and feedback, reach out on Discord.
- Added support for the `HEAD` request type on the `/bzz` endpoint. (#4588)
- Added a new `pins/check` endpoint. (#4573)
- Added the `ReserveSizeWithRadius` field to the status protocol: the count of chunks in the reserve that fall under the responsibility of the node. (#4585)

Published by github-actions[bot] 7 months ago
Published by github-actions[bot] 8 months ago
Published by github-actions[bot] 9 months ago
Published by istae 10 months ago
The Bee team is excited to announce release candidate v2.0.0-rc1! 🎉
In this release we introduce a brand new mechanism of data redundancy in Swarm with erasure coding, which, under the hood, makes use of Reed-Solomon erasure coding and dispersed replicas.
A new header, `Swarm-Redundancy-Level: n`, can be passed to upload requests to turn on erasure coding, where n is in the range [0, 4]. Refer to the table below for the different levels and expected error rates.
| Redundancy Level | Pseudonym | Expected Chunk Retrieval Error Rate |
|---|---|---|
| 0 | None | 0% |
| 1 | Medium | 1% |
| 2 | Strong | 5% |
| 3 | Insane | 10% |
| 4 | Paranoid | 50% |
A storage radius decrease is now triggered when the reserve size within the radius (chunks in the reserve with proximity order >= the current radius) falls below 50% of the reserve capacity, rather than when the total reserve size does.
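The new trigger condition can be sketched in a few lines of Go; the function and variable names below are illustrative, not Bee's actual internals.

```go
package main

import "fmt"

// shouldDecreaseRadius sketches the new trigger: the storage radius
// shrinks only when the count of reserve chunks whose proximity order
// is >= the current radius falls below 50% of reserve capacity.
func shouldDecreaseRadius(reserveSizeWithinRadius, reserveCapacity uint64) bool {
	return reserveSizeWithinRadius < reserveCapacity/2
}

func main() {
	// Well under half of capacity: the radius should decrease.
	fmt.Println(shouldDecreaseRadius(1_000_000, 4_194_304))
}
```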
The btcd crypto library that Bee uses for things like signing chunks and maintaining keys has been updated.
Warning: this is an experimental release. Only users interested in trying out the features and changes in this release should update to this version.
For questions, comments, and feedback, reach out on Discord.
Published by github-actions[bot] 10 months ago
Building upon the previous release, the sync intervals are re-synced so that nodes may collect any potentially missing chunks from the network.
The initial syncing that a node performs to collect missing chunks from peers, aka historical syncing, is now rate-limited to lower and stabilize CPU usage.
For questions, comments, and feedback, reach out on Discord.
Published by istae 11 months ago
This is a patch release that properly resets the batchstore so that batches can be resynced from the new postage stamp contract.
For questions, comments, and feedback, reach out on Discord.
For a full PR rundown please consult the v1.18.1 diff.
Published by github-actions[bot] 11 months ago
The main theme of this release is the delivery of the last phase of storage incentives, the fourth phase, and thus the end of the storage incentive saga. For this reason, this is a breaking release, as the handshake version has been bumped. The release also includes one bug fix and minor improvements, which can be found below.
Published by acha-bill 11 months ago
Full Changelog: https://github.com/ethersphere/bee/compare/v1.18.0-rc7...v1.18.0-rc8
Published by github-actions[bot] 11 months ago
Published by github-actions[bot] 11 months ago
Published by github-actions[bot] 11 months ago
Published by istae 12 months ago
With this release, many hardening issues were tackled. The team's focus has been mostly on improving connectivity of nodes across the network and bringing performance improvements to chunk caching operations.
Also featured is a new DB command that will perform a chunk validation of the chunkstore, similar to the optional step in the compaction command.
The retrieval protocol now has a multiplexing capability similar to pushsync's: multiple requests are fired in parallel from a forwarder peer that can directly access the neighborhood of a chunk.
For questions, comments, and feedback, reach out on Discord.
The `POST /pins/{ref}` API endpoint now stores chunks in parallel. (#4427)

Published by github-actions[bot] 12 months ago
Published by github-actions[bot] 12 months ago