
HyveDA Client

The HyveDA client serves as the access point to the network. It can be run locally, in the cloud, or used through an infrastructure provider. Instructions on how to deploy a HyveDA client will be released soon.

A look inside the HyveDA Client

The HyveDA client must follow specific rules for its blobs to be accepted by the DA network. As long as you do not alter the code, the client handles this for you, preparing each blob correctly and efficiently. The client's code is divided into multiple pipelined stages to maximize throughput. Below, we'll explore these stages and how they work together.

Client Pipelines

Most of HyveDA's stages are multi-threaded, allowing multiple blobs to be processed in parallel. We won't delve too deeply into that here, but you're welcome to dig into the code once it is open-sourced.
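
To make the pipelining concrete, here is a minimal sketch of the pattern, assuming hypothetical stage names and a plain `Vec<u8>` blob type (the actual client code is not yet public): each stage runs on its own thread and hands blobs to the next stage over a channel, so several blobs can be in flight at once.

```rust
use std::sync::mpsc;
use std::thread;

// Hypothetical blob type; the real client's types are not public yet.
type Blob = Vec<u8>;

fn main() {
    let (to_verify, verify_rx) = mpsc::channel::<Blob>();
    let (to_encode, encode_rx) = mpsc::channel::<Blob>();
    let (to_broadcast, broadcast_rx) = mpsc::channel::<Blob>();

    // Verification stage: pulls blobs handed over by the receiver.
    let verifier = thread::spawn(move || {
        for blob in verify_rx {
            // ... signature / KZG-challenge checks would run here ...
            to_encode.send(blob).unwrap();
        }
    });

    // Encoding stage: turns verified blobs into shreds.
    let encoder = thread::spawn(move || {
        for blob in encode_rx {
            // ... erasure encoding + commitments would run here ...
            to_broadcast.send(blob).unwrap();
        }
    });

    // Broadcast stage: disperses shreds to the DA network.
    let broadcaster = thread::spawn(move || {
        for shred in broadcast_rx {
            println!("broadcasting {} bytes", shred.len());
        }
    });

    // The receiver feeds the pipeline; up to three blobs are now
    // being worked on concurrently, one per stage.
    for i in 0..3u8 {
        to_verify.send(vec![i; 1024]).unwrap();
    }
    drop(to_verify); // close the pipeline so all threads exit

    verifier.join().unwrap();
    encoder.join().unwrap();
    broadcaster.join().unwrap();
}
```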


Receiver

The first step in the HyveDA Client is receiving incoming DA requests. This is handled by the receiver, a simple QUIC server. QUIC lets blobs be received quickly while providing the reliability and loss recovery that raw UDP lacks.

When a new request is received, the raw blob data and the payload data are separated. The payload data contains information about the blob sender, which is necessary to verify the request's validity. The blob is placed in the Blob Bank so that it can be accessed by any thread without needing to be moved between threads.
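
A minimal sketch of the Blob Bank idea, assuming a hypothetical `BlobBank` type (the real implementation is not yet public): blobs are stored once behind reference-counted pointers, so any thread can read a blob by cloning a cheap handle rather than copying the bytes.

```rust
use std::collections::HashMap;
use std::sync::{Arc, RwLock};

/// Hypothetical blob bank: a shared, thread-safe map from a blob ID to
/// reference-counted blob bytes. Cloning the Arc copies a pointer, not the blob.
#[derive(Default, Clone)]
struct BlobBank {
    inner: Arc<RwLock<HashMap<[u8; 32], Arc<Vec<u8>>>>>,
}

impl BlobBank {
    fn insert(&self, id: [u8; 32], blob: Vec<u8>) {
        self.inner.write().unwrap().insert(id, Arc::new(blob));
    }

    fn get(&self, id: &[u8; 32]) -> Option<Arc<Vec<u8>>> {
        self.inner.read().unwrap().get(id).cloned()
    }
}

fn main() {
    let bank = BlobBank::default();
    bank.insert([0u8; 32], vec![1, 2, 3]);

    // Any stage, on any thread, can now borrow the same bytes.
    let handle = bank.get(&[0u8; 32]).expect("blob present");
    assert_eq!(handle.len(), 3);
}
```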

Verification Stage

The signature verification stage confirms that the payload data is correctly signed and properly constructed. It also checks that the KZG challenge computed over the entire blob is correct (similar to EIP-4844). If this stage were skipped, an invalid blob could be uploaded to the DA network, only to be dropped by the DA nodes.
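
A minimal sketch of the verification gate, with hypothetical payload fields and the two cryptographic checks passed in as closures so the example stays self-contained (the real stage would call its signature and KZG libraries directly):

```rust
/// Hypothetical payload shape; the real client's fields are not public yet.
struct Payload {
    sender: [u8; 32],
    signature: Vec<u8>,
    kzg_challenge: [u8; 32],
}

/// A blob only moves on to the encoding stage if every check passes.
fn verify_request(
    payload: &Payload,
    blob: &[u8],
    sig_is_valid: impl Fn(&[u8; 32], &[u8], &[u8]) -> bool,
    challenge_for: impl Fn(&[u8]) -> [u8; 32],
) -> Result<(), &'static str> {
    if !sig_is_valid(&payload.sender, blob, &payload.signature) {
        return Err("payload is not correctly signed");
    }
    if challenge_for(blob) != payload.kzg_challenge {
        return Err("KZG challenge does not match the blob");
    }
    Ok(())
}

fn main() {
    let payload = Payload {
        sender: [0; 32],
        signature: vec![0; 64],
        kzg_challenge: [0; 32],
    };
    // Dummy checks that always pass, just to exercise the gate.
    let ok = verify_request(&payload, b"blob bytes", |_, _, _| true, |_| [0; 32]);
    assert!(ok.is_ok());
}
```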

Encoding Stage

The encoding stage prepares the blob for distribution throughout the DA network. It is responsible for seven steps for each blob:

  1. Determining the number of erasure-encoded shards to create.
  2. Representing the blob as a polynomial.
  3. Creating a polynomial commitment for the blob.
  4. Evaluating the polynomial at several distinct indices.
  5. Grouping these evaluations to form erasure-encoded chunks.
  6. Computing a KZG Proof for each chunk from the encoder, ensuring that DA nodes can efficiently prove that the data they receive is part of the original blob.
  7. Hashing the blob's metadata together with the KZG Commitment, KZG Proof, and Erasure Encoded Chunk to calculate a Shred address, which is used to assign the Shred to a DA Node (a sketch of this step follows the list).
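
Here is a minimal sketch of step 7, assuming SHA-256 and a plain modulo assignment (the exact hash, byte layout, and assignment scheme HyveDA uses are not public): the shred address is a hash over the blob metadata, KZG commitment, KZG proof, and chunk, and that address deterministically selects a DA node.

```rust
use sha2::{Digest, Sha256}; // sha2 = "0.10"

/// Hypothetical shred fields; the real layout is not public yet.
struct Shred<'a> {
    blob_metadata: &'a [u8],
    kzg_commitment: &'a [u8],
    kzg_proof: &'a [u8],
    chunk: &'a [u8],
}

/// Hash the shred's contents into a 32-byte address.
fn shred_address(s: &Shred) -> [u8; 32] {
    let mut h = Sha256::new();
    h.update(s.blob_metadata);
    h.update(s.kzg_commitment);
    h.update(s.kzg_proof);
    h.update(s.chunk);
    h.finalize().into()
}

/// Map an address onto one of `num_nodes` DA nodes. A production scheme
/// could weight by stake or use consistent hashing; modulo keeps it short.
fn assign_node(addr: &[u8; 32], num_nodes: u64) -> u64 {
    u64::from_be_bytes(addr[..8].try_into().unwrap()) % num_nodes
}

fn main() {
    let shred = Shred {
        blob_metadata: b"blob-42",
        kzg_commitment: b"commitment",
        kzg_proof: b"proof",
        chunk: b"chunk-0",
    };
    let addr = shred_address(&shred);
    println!("shred assigned to DA node {}", assign_node(&addr, 16));
}
```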

Shreds play a crucial role in data availability. They ensure that blobs can be reconstructed even in the presence of adversarial nodes in the DA network:

  • As long as the number of available chunks for a blob (NumChunks) satisfies NumChunks × ChunkBytes ≥ BlobBytes, the blob can be reconstructed (a worked example follows this list).
  • Using KZG Commitments and KZG Proofs eliminates the need to rely on fraud proofs.
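
A worked example of the bound above, with illustrative numbers: a 512 KiB blob cut into 1 KiB chunks needs at least 512 chunks to survive, and any chunks encoded beyond that minimum are what buy tolerance to missing or adversarial nodes.

```rust
/// Minimum chunks that must survive: the smallest NumChunks with
/// NumChunks * ChunkBytes >= BlobBytes (a ceiling division).
fn min_chunks(blob_bytes: u64, chunk_bytes: u64) -> u64 {
    (blob_bytes + chunk_bytes - 1) / chunk_bytes
}

fn main() {
    let blob_bytes = 512 * 1024; // 512 KiB blob (illustrative)
    let chunk_bytes = 1024;      // 1 KiB chunks (illustrative)
    let needed = min_chunks(blob_bytes, chunk_bytes);
    assert_eq!(needed, 512);

    // Encoding 2x the minimum means up to half the chunks can be
    // withheld by adversarial nodes and the blob still reconstructs.
    let encoded = 2 * needed;
    println!("encode {encoded} chunks; any {needed} suffice");
}
```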

Broadcast Stage

The broadcast stage is responsible for dispersing the shreds across the P2P gossip protocol. Once shreds arrive at this stage, the DA Nodes they are assigned to pick them up and process them.
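
A minimal sketch of the dispersal idea, assuming a hypothetical `Shred` type and using per-node channels to stand in for gossip subscriptions (the real gossip layer is not public): each shred is routed to the node its address selects, and that node picks it up from its queue.

```rust
use std::sync::mpsc;

/// Hypothetical shred: its address hash plus the encoded bytes.
struct Shred {
    address: [u8; 32],
    bytes: Vec<u8>,
}

fn main() {
    // One channel per DA node, standing in for gossip subscriptions.
    let num_nodes = 4;
    let (senders, receivers): (Vec<_>, Vec<_>) =
        (0..num_nodes).map(|_| mpsc::channel::<Shred>()).unzip();

    // Broadcast stage: route each shred to the node its address selects.
    let shreds = vec![
        Shred { address: [1; 32], bytes: vec![0; 512] },
        Shred { address: [7; 32], bytes: vec![0; 512] },
    ];
    for shred in shreds {
        let node = (shred.address[0] as usize) % num_nodes;
        senders[node].send(shred).unwrap();
    }
    drop(senders);

    // DA nodes pick up whatever landed on their queue.
    for (node, rx) in receivers.into_iter().enumerate() {
        for shred in rx {
            println!("node {node} stored a {}-byte shred", shred.bytes.len());
        }
    }
}
```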

Code


Open-sourcing soon: we'll be releasing the HyveDA client's source code in the near future.