![DA Scalability: Avail’s Current State](https://img-cdn.gateio.im/webp-social/moments-7f230462a9-d4e2f4e2b4-dd1a6f-cd5cc0.webp)
As users begin to integrate Avail into their chain designs, the question often arises: “How many transactions can Avail process?” In this post, we will compare the throughput of Ethereum and Avail based on the current architecture of the two chains.
This is the first in a series on Avail scalability that will discuss Avail’s current performance and its ability to scale in the near and long term.
Avail vs Ethereum
Ethereum’s blocks can hold up to 1.875 MB of data and have a block time of about 13 seconds. However, Ethereum’s blocks are usually nowhere near full of data: almost every block hits the gas limit before it reaches the data ceiling, since execution and settlement also consume gas. As a result, the amount of data stored in each block is variable.
The need to combine execution, settlement, and data availability in the same block is a central problem of monolithic blockchain architecture. L2 rollups started the movement toward modular blockchains by moving execution onto its own chain, whose blocks are dedicated to execution. Avail takes this a step further with a modular design that decouples data availability as well, allowing a chain’s blocks to be dedicated to data availability.
![DA Scalability: Avail’s Current State](https://img-cdn.gateio.im/webp-social/moments-7f230462a9-60a64e927d-dd1a6f-cd5cc0.webp)
Currently, Avail’s block time is 20 seconds, and each block can hold about 2 MB of data. Assuming an average transaction size of 250 bytes, each Avail block can hold about 8,400 transactions today (420 transactions per second).
What’s more, Avail blocks can always be filled up to the data limit, and that limit can be increased as needed. We have a number of levers that can be quickly adjusted to push the number of transactions per block beyond 500,000 (25,000 transactions per second) when needed.
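To make these figures concrete, here is a minimal back-of-the-envelope calculation. The 250-byte average transaction size is the assumption stated above; real transaction sizes vary.

```python
# Back-of-the-envelope throughput for a data-availability block.
# Assumes the figures quoted above: 2 MB blocks, 20 s block time,
# and an average transaction size of 250 bytes.

def da_throughput(block_bytes: int, block_time_s: float, avg_tx_bytes: int = 250):
    txs_per_block = block_bytes // avg_tx_bytes
    tps = txs_per_block / block_time_s
    return txs_per_block, tps

# Today's parameters: ~8,400 transactions per block, ~420 TPS.
print(da_throughput(2 * 1024 * 1024, 20))

# A hypothetical 128 MB block at the same block time: ~536,000
# transactions per block, ~26,800 TPS, in line with the
# "more than 500,000" figure above.
print(da_throughput(128 * 1024 * 1024, 20))
```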
Can we increase throughput?
In order to increase throughput (especially transactions per second), the chain’s architects need to increase the block size or decrease the block time.
For each block added to the chain, commitments must be generated, proofs built, the block propagated, and those proofs verified by every other node. These steps always take time, which sets a natural floor on how quickly blocks can be generated and confirmed.
Therefore, we can’t simply reduce the block time to, say, one second. One second is simply not enough time to generate commitments, generate proofs, and propagate those pieces to all participants across the network. Even with a hypothetical one-second block time in which every participant runs the most powerful machine available, capable of producing commitments and proofs in an instant, the bottleneck remains data propagation: given real-world internet speeds, the network cannot deliver blocks to every full node fast enough. So the block time must be long enough for the data to be distributed across the network after consensus is reached.
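One way to see this floor: the block interval must cover generation, propagation, and verification in sequence. Below is a toy budget with illustrative placeholder numbers, not Avail measurements.

```python
# Toy block-time budget: the block interval must at least cover these
# sequential phases. All durations are illustrative placeholders.

phases_s = {
    "generate_commitments_and_proofs": 2.0,
    "propagate_block": 5.0,
    "verify_on_other_nodes": 2.0,
}

floor_s = sum(phases_s.values())
print(f"minimum viable block time ~ {floor_s:.1f} s")  # ~9 s with these numbers
assert floor_s > 1.0  # a one-second block time cannot fit these phases
```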
Conversely, it is also possible to increase throughput by increasing the block size, i.e., the amount of data each block can contain.
Current Architecture: Adding a Block to the Chain
First, let’s look at the steps required to add a block to the chain. There are three main steps, each of which takes time: generating the block, propagating it, and validating it.
![DA Scalability: Avail’s Current State](https://img-cdn.gateio.im/webp-social/moments-7f230462a9-8d735187bc-dd1a6f-cd5cc0.webp)
1. Block generation
This step includes the time it takes to collect and order Avail transactions, build commitments, and extend (erasure-code) the data matrix.
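As a rough illustration of the erasure-coding step, the toy sketch below extends a column of data chunks to twice its length over a prime field, so that any half of the extended points suffices to reconstruct the original. This is a simplified one-dimensional stand-in: Avail’s production scheme operates on a two-dimensional data matrix with polynomial commitments, which this sketch makes no attempt to model.

```python
# Toy 1D erasure-coding sketch (illustration only). Treat each data
# chunk as the evaluation of a polynomial at x = 0..n-1, then extend
# by evaluating the same polynomial at x = n..2n-1. Any n of the 2n
# points are enough to reconstruct the original data.

P = 2**31 - 1  # a small prime field, for illustration only

def lagrange_eval(ys, x, p=P):
    """Evaluate the degree-(n-1) polynomial through (i, ys[i]) at x."""
    n = len(ys)
    acc = 0
    for i in range(n):
        num, den = 1, 1
        for j in range(n):
            if j != i:
                num = num * (x - j) % p
                den = den * (i - j) % p
        acc = (acc + ys[i] * num * pow(den, p - 2, p)) % p  # den^-1 via Fermat
    return acc

def extend(chunks):
    """Double a column of field elements with Reed-Solomon-style parity."""
    n = len(chunks)
    return list(chunks) + [lagrange_eval(chunks, x) for x in range(n, 2 * n)]

column = [17, 42, 99, 7]   # four original data chunks
extended = extend(column)  # eight points; any four reconstruct all of them
print(extended)
```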
Block generation will always take at least some time, and that time varies by machine. We must therefore account not only for the best case, but also for the average case and the worst case across different hardware.
The weakest machine that can participate in producing new blocks is the one that, on average, just keeps pace. Any machine slower than that will eventually fall behind, because it can never catch up with the faster machines.
2. Propagation delay
Propagation delay measures the time it takes to propagate a block from the producer to the validators across the peer-to-peer network.
Currently, Avail’s block size is 2 MB. Within the current 20-second block time, a block of this size can be propagated comfortably. Larger blocks make propagation trickier.
For example, if we increased Avail to support 128 MB blocks, the computation could likely keep up (about 7 seconds). However, the bottleneck becomes the time it takes to send and download these blocks over the network.
Sending a 128 MB block around the globe over a peer-to-peer network in 5 seconds may be at the limit of what is currently achievable.
The 128 MB limit has nothing to do with data availability or our commitment scheme; it is purely a matter of communication bandwidth.
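As a sanity check on that claim, here is the sustained bandwidth a block implies if it must reach peers within a roughly 5-second budget. This is a simplified point-to-point view that ignores gossip fan-out, hops, and protocol overhead.

```python
# Rough bandwidth needed to move one block within a time budget.
# Simplified: ignores gossip fan-out, hops, and protocol overhead.

def required_mbit_per_s(block_mb: float, budget_s: float) -> float:
    return block_mb * 8 / budget_s  # megabytes -> megabits, per second

print(required_mbit_per_s(2, 5))    # today's 2 MB block: ~3 Mbit/s
print(required_mbit_per_s(128, 5))  # a 128 MB block: ~205 Mbit/s sustained
```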
This need to account for propagation latency gives us Avail’s current theoretical block size limit.
3. Block validation
Once propagated, participating validators don’t simply trust the blocks provided to them by the block proposer — they need to verify that the produced block actually contains the data claimed by the producer.
There is an inherent tension between these three steps. We could make every validator a powerful machine, tightly connected over an excellent network in the same data center; this would reduce production and validation time and allow us to propagate far more data. However, because we also want a decentralized, diverse network with different types of participants, this is not an acceptable approach.
Instead, throughput gains will come from understanding the steps required to add blocks to the Avail chain, and identifying which of those steps can be optimized.
![DA Scalability: Avail’s Current State](https://img-cdn.gateio.im/webp-social/moments-7f230462a9-1d534dd1ad-dd1a6f-cd5cc0.webp)
Currently, Avail validators download the entire block and reproduce all the commitments generated by the proposer in order to validate it. This means that block producers and all validators must each perform every step in the chart above.
In a monolithic blockchain, having each validator reconstruct the entire block is the default practice. However, on a chain like Avail, where transactions are not executed, this reconstruction is unnecessary. One way to optimize Avail, then, is to let validators obtain their own data availability guarantees through sampling rather than by reconstructing blocks. This is far less resource-intensive for validators than requiring them to reproduce every commitment. More on this in a future article.
How does data availability sampling work?
In Avail, light clients use three core tools to confirm the availability of data: samples, commitments, and proofs.
Currently, light clients perform sampling operations: they request the value of a particular cell, along with its associated validity proof, from the Avail network. The more samples they take, the more confident they become that all of the data is available.
Commitments are generated by the block proposer and summarize an entire row of data in an Avail block. (Hint: this is the step we’ll be optimizing later in this series.)
A proof is generated for each cell. Light clients use the proofs and commitments to verify that the cell values served to them are correct.
Using these tools, a light client then performs three steps:
Decision: The required availability confidence determines how many samples the light client takes. Not many are needed: 8–30 samples are enough to achieve an availability guarantee of more than 99.95% (see the sketch after this list).
Download: The light client then requests these samples and their associated proofs, downloading them from the network (from full nodes or other light clients).
Validation: The light client reads the commitments in the block header (which is always accessible to light clients) and verifies each cell’s proof against the corresponding commitment.
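As a rough sketch of why so few samples suffice: if a block is unrecoverable, some minimum fraction q of the erasure-coded cells must have been withheld, so each uniformly random sample independently lands on a missing cell with probability q. The exact value of q depends on Avail’s 2D erasure-coding parameters; q = 0.5 below is an illustrative assumption, not Avail’s precise figure.

```python
# Confidence that withheld data would have been detected after s samples,
# assuming a fraction q of cells must be missing for the block to be
# unrecoverable (q = 0.5 is an illustrative assumption).

def confidence(samples: int, q: float = 0.5) -> float:
    return 1 - (1 - q) ** samples

for s in (8, 15, 30):
    print(s, f"{confidence(s):.9f}")
# 8 samples  -> ~99.61%
# 15 samples -> ~99.997%
# 30 samples -> ~99.9999999%
```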
With this alone, light clients can confirm the availability of all of a block’s data without downloading most of the block’s contents. Light clients also take other steps that contribute to Avail’s security, which are not listed here; for example, they can share the samples and proofs they download with other light clients that need them. But that is the core procedure by which light clients confirm data availability!
In the second part of this series, we’ll explore ways to increase Avail throughput in the short term. We’ll explain why we believe Avail can meet the needs of any network in the coming year, and how we can improve the network to meet the challenges of the years ahead.