Lifecycle of a Blob


The HyveDA network consists of several components and layers maintained by Hyve. All of these components are completely open source, with the ultimate goal of enabling them to operate in a fully permissionless and decentralized manner. Together, they process blobs and ensure data availability.

Please note that these layers and components are actively being developed and will evolve over time.

Overview

Blob Lifecycle Diagram

Before delving into the architecture and technical aspects of HyveDA, let's briefly review the different components responsible for processing a blob. Detailed information about each component can be found in the rest of this documentation.

Blob

A blob (Binary Large Object) refers to any piece of data that needs to be validated and protected against data withholding attacks. The most common example is a batch of transactions for rollups. Other examples include the execution trace of an inference request on a decentralized AI network, cross-chain messages for communication protocols, and intents for intent-based protocols. Blobs have a time-to-live (TTL) that can be adjusted by the sender. As long as the TTL has not expired, the blob should remain available in the DA network.
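The TTL rule above can be sketched as a small record. This is a hypothetical shape, not HyveDA's actual wire format; the field names and the one-hour TTL are illustrative assumptions.

```python
import time
from dataclasses import dataclass, field

# Hypothetical blob record with a sender-adjustable TTL.
# Field names are illustrative, not HyveDA's actual format.
@dataclass
class Blob:
    data: bytes
    ttl_seconds: int                    # sender-adjustable time-to-live
    created_at: float = field(default_factory=time.time)

    def must_be_available(self, now=None):
        """The DA network must keep serving this blob until the TTL expires."""
        now = time.time() if now is None else now
        return now < self.created_at + self.ttl_seconds

batch = Blob(data=b"rollup transaction batch", ttl_seconds=3600)
```

As long as `must_be_available` returns `True`, the DA network is obligated to serve the blob's data on request.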

In HyveDA, blobs are sent, signed, committed, and shared by the Client.

Client

The client is a server developed by Hyve and operated by the network that integrates HyveDA for data availability. It is a highly optimized piece of software that can quickly and concurrently create a KZG polynomial commitment and erasure-encoded chunks according to the security parameters. Clients are not trusted entities, and anyone can run a client. If a client sends incorrect data to the network, that data is simply dropped.
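To see why erasure encoding protects availability, here is a deliberately simplified stand-in: the real client produces many Reed-Solomon-style chunks plus a KZG polynomial commitment, while this toy uses a single XOR parity chunk, which only tolerates one lost chunk, just to demonstrate the recovery property.

```python
# Toy stand-in for the client's erasure-encoding step. The real client
# produces Reed-Solomon-style chunks plus a KZG commitment; a single XOR
# parity chunk is used here only to show the recovery property.
def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def encode(data, k):
    """Split data into k equal chunks and append one XOR parity chunk."""
    size = -(-len(data) // k)                      # ceiling division
    padded = data.ljust(size * k, b"\x00")
    chunks = [padded[i * size:(i + 1) * size] for i in range(k)]
    parity = chunks[0]
    for c in chunks[1:]:
        parity = xor_bytes(parity, c)
    return chunks + [parity]

def recover(chunks):
    """Rebuild at most one missing chunk (marked None) from the others."""
    missing = [i for i, c in enumerate(chunks) if c is None]
    assert len(missing) <= 1, "XOR parity tolerates a single loss"
    if missing:
        present = [c for c in chunks if c is not None]
        rebuilt = present[0]
        for c in present[1:]:
            rebuilt = xor_bytes(rebuilt, c)
        chunks[missing[0]] = rebuilt
    return chunks
```

Real erasure codes generalize this: with the right parameters, any sufficiently large subset of chunks can reconstruct the blob, so withholding a few shards cannot make the data unavailable.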

Shard

A shard is an erasure-encoded chunk that is sent into the DA node network for attestation. The shard includes not only the erasure-encoded data but also the necessary information to verify the chunk's validity. This verifiable data includes the blob's polynomial commitment, a KZG proof that the shard is part of the commitment's polynomial, the security parameters, and signatures. Shards have a hash address derived from their content, and based on that address, they are assigned to a node in the DAC.
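The content-derived addressing can be illustrated in a few lines. The hash function and the modulo-based assignment below are assumptions for illustration; the actual assignment scheme in the DAC may differ.

```python
import hashlib

# Illustrative content addressing: a shard's address is the hash of its
# bytes, and the shard is assigned to a DAC node derived from that address.
# The modulo assignment is an assumption; the real scheme may differ.
def shard_address(shard):
    return hashlib.sha256(shard).hexdigest()

def assign_node(address, num_nodes):
    """Deterministically map a content address onto one of num_nodes nodes."""
    return int(address, 16) % num_nodes
```

Because the address is derived from the shard's content, every participant computes the same assignment without coordination.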

DA Nodes

DA nodes are responsible for making data availability attestations. They receive shards and process those assigned to them: a node verifies the shard, stores it, and then signs an attestation confirming that it has stored it. DA nodes are operated by Symbiotic operators, who have a stake behind their node that can be partially slashed in the event of misbehavior. They are also responsible for returning the data for as long as the TTL has not expired, and can be partially slashed if they fail to do so.
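The verify-store-attest loop can be sketched as follows. Both the verification (a plain hash check) and the "signature" are stand-ins: real nodes verify the KZG proof against the blob's commitment and sign with BLS12-381.

```python
import hashlib

# Sketch of a DA node's processing loop: verify the shard, store it, and
# return a signed attestation. The hash-based verification and the toy
# "signature" are stand-ins for KZG proof checks and BLS signing.
class DANode:
    def __init__(self, node_id):
        self.node_id = node_id
        self.store = {}

    def verify(self, shard, proof):
        # Stand-in for verifying the KZG proof against the commitment.
        return proof == hashlib.sha256(shard).digest()

    def process(self, shard, proof):
        if not self.verify(shard, proof):
            return None                          # invalid shards are dropped
        addr = hashlib.sha256(shard).hexdigest()
        self.store[addr] = shard                 # serve until the TTL expires
        # Toy attestation: hash over node id and shard address.
        return hashlib.sha256(f"{self.node_id}:{addr}".encode()).hexdigest()
```

The attestation only exists after the shard has been verified and stored, which is what makes it meaningful collateral for slashing.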

DA nodes do not run a consensus algorithm for their attestations, so new DA nodes can join the network without adding coordination overhead. This improves the network's throughput. DA nodes operate on a P2P network, quickly gossiping shards to each other without a single point of failure.

Attestation

An attestation is created by a DA Node after it has verified and stored a shard. Attestations are the backbone of the data availability network, providing the client with confidence and accountability for processed blobs. Attestations are signed using the BLS12-381 signature scheme, the same one used in Ethereum, allowing for easy aggregation.
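The aggregation property that makes BLS attractive can be demonstrated with a toy scheme. This is not BLS12-381 and is cryptographically insecure (discrete logs are trivial here); it only illustrates the homomorphic check that lets many signatures over a common message be verified as one.

```python
import hashlib

# Toy illustration of why BLS-style signatures aggregate. Real attestations
# use BLS12-381 pairings; this insecure additive scheme mod a prime shows
# the same homomorphic check: the sum of signatures verifies against the
# sum of public keys for a common message.
P = 2**61 - 1          # toy prime modulus
G = 5                  # toy "generator"

def h(msg):
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % P

def keygen(seed):
    sk = seed % P
    return sk, (sk * G) % P              # pk = sk * G

def sign(sk, msg):
    return (sk * h(msg)) % P             # sig = sk * H(m)

def verify(pk, msg, sig):
    return (sig * G) % P == (pk * h(msg)) % P

def aggregate(values):
    return sum(values) % P               # works for sigs and pks alike

msg = b"shard stored"
keys = [keygen(s) for s in (7, 11, 13)]
agg_sig = aggregate(sign(sk, msg) for sk, _ in keys)
agg_pk = aggregate(pk for _, pk in keys)
print(verify(agg_pk, msg, agg_sig))      # True: one check covers all signers
```

In real BLS the multiplication is replaced by elliptic-curve pairings, but the shape of the check is the same, which is why thousands of DA node attestations can collapse into a single verification.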

Collector

The collector is responsible for gathering DA attestations and grouping them by blobs. When a blob has received all or enough DA attestations, the collector will aggregate the attestations and include them in the next DAC Certificate. The collector can also provide early access to DA attestations using cryptographic inclusion proofs, enabling clients to minimize latency while maintaining maximum security. Currently, the collector is a trusted entity, but over time, it will become decentralized and subject to its own slashing rules.
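The collector's grouping step can be sketched as follows. The threshold semantics and field names here are illustrative assumptions; the real collector also aggregates signatures and produces inclusion proofs.

```python
from collections import defaultdict

# Sketch of the collector's grouping step: bucket attestations by blob ID
# and report which blobs have reached the attestation threshold for the
# next DAC Certificate. Threshold and names are illustrative.
class Collector:
    def __init__(self, threshold):
        self.threshold = threshold
        self.by_blob = defaultdict(set)    # blob_id -> attesting node ids

    def add(self, blob_id, node_id):
        self.by_blob[blob_id].add(node_id)

    def ready(self):
        """Blobs with enough attestations to enter the next certificate."""
        return [b for b, nodes in self.by_blob.items()
                if len(nodes) >= self.threshold]
```

Using a set per blob makes duplicate attestations from the same node idempotent, so replays cannot inflate the count toward the threshold.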

DAC Certificate

The DAC Certificate is the aggregation of all attestations made by DA nodes. It contains multiple blobs and is created approximately every 12 seconds to align with Ethereum's block production. The DAC Certificate includes all the information necessary for provers and clients to either prove data availability or initiate a security parameter breach. It is posted on Ethereum L1 to provide immutable security and facilitate the prover game.
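A possible shape for the certificate and its 12-second cadence is sketched below. The field names are assumptions, not the actual on-chain format; only the slot arithmetic follows directly from the text.

```python
from dataclasses import dataclass

# Illustrative shape of a DAC Certificate: aggregated attestations for many
# blobs, bundled per ~12-second slot to align with Ethereum block production.
# Field names are assumptions, not the actual on-chain format.
SLOT_SECONDS = 12

def slot_for(timestamp):
    """Map a Unix timestamp to its certificate slot."""
    return timestamp // SLOT_SECONDS

@dataclass(frozen=True)
class DACCertificate:
    slot: int                     # ~12 s cadence, aligned with Ethereum
    blob_ids: tuple               # all blobs this certificate covers
    aggregated_signature: bytes   # e.g. one aggregated BLS signature
    security_params: dict         # needed to prove DA or a parameter breach
```

Posting one certificate per slot amortizes the L1 cost across every blob the network processed in that window.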

Components outside of HyveDA

You may have noticed an L2 inbox in the diagram. It illustrates the interoperability between, for example, a rollup and HyveDA. The rollup sequencer posts the HyveDA Blob ID to its L2 inbox address on Ethereum L1. Verifiers can then use this Blob ID to verify data availability and retrieve the rollup's data from the HyveDA network.
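The interop flow above can be sketched end to end. All names here are illustrative stand-ins: `L2Inbox` for the inbox contract on L1, and the `fetch` callback for retrieving a blob from the HyveDA network.

```python
# Sketch of the rollup interop flow: the sequencer posts a HyveDA Blob ID
# to its L2 inbox contract on Ethereum L1, and a verifier later checks that
# every posted ID is still retrievable from HyveDA. All names are
# illustrative stand-ins for contract and network calls.
class L2Inbox:
    def __init__(self):
        self.blob_ids = []

    def post(self, blob_id):            # sequencer -> L1 inbox
        self.blob_ids.append(blob_id)

def verify_availability(inbox, fetch):
    """fetch(blob_id) stands in for retrieving the blob from HyveDA."""
    return all(fetch(bid) is not None for bid in inbox.blob_ids)
```

The rollup itself never stores the data on L1; it stores only the Blob ID, and availability is enforced by HyveDA's attestations and slashing.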