Interview with Celestia: Modularity, Ethereum, and the Scaling Future of the Crypto World

Blockchains today face the long-standing trilemma of scalability, security, and decentralization: trustless cross-chain communication is lacking, rollups hit scaling limits as transaction volumes grow, and it is hard to increase throughput while maintaining a high level of security and decentralization. At its core, the problem is how to make block data verifiably available without forcing every participant to run large, expensive storage and hardware.

Most blockchains today are monolithic: core functions such as execution and consensus happen on the same layer and are carried out by the same set of validators. Monolithic architectures are difficult to scale because every transaction must be executed by every full node, which creates bottlenecks. Modular blockchains, by contrast, outsource at least one of the four core components (consensus, data availability, execution, settlement) to an external chain.

Celestia is the first modular blockchain network and a cloud-computing-like network for Web3: a pluggable consensus and data availability layer that lets anyone quickly deploy a decentralized blockchain without the overhead of bootstrapping a new consensus network. Some in the industry regard Celestia as the most important underlying innovation in blockchain since Ethereum, and both Ethereum and Celestia are building a secure base layer. At TOKEN 2049, BlockBeats sat down with Nick White, co-founder and COO of Celestia, to explore the relationship between Celestia and Ethereum and the story behind Celestia.

Without Celestia, Ethereum can’t scale rollups

Of the three horns of the trilemma, the lack of scalability has the greatest impact: only by making blockchains more scalable can hundreds of millions of people get on-chain. This is also the biggest challenge facing mainstream blockchains, including Ethereum. Ethereum already has scaling solutions such as Optimism, ZKsync, and Starknet, but their data availability depends heavily on Ethereum itself, and Ethereum's gas fees remain expensive.

Previously, Ethereum founder Vitalik sketched what he saw as the endgame for the Ethereum blockchain, with much of the picture devoted to a new Ethereum built around rollups and data availability layers. To some extent, this points to Ethereum's path forward for the next decade: modularity.

Related Reading: Modularity: Ethereum’s Game Breaker for the Next Decade

BlockBeats: Can you tell us a little bit about yourself and your background?

Nick: Absolutely. I'm Nick White, Chief Operating Officer (COO) of Celestia Labs. We're building Celestia, the first modular blockchain network. Celestia embodies a new paradigm for building blockchains: instead of trying to do everything in one protocol, we split the protocol into different layers, each focusing on a specific function, and those layers can then be recombined to build blockchains and applications.

As a result, Celestia focuses on the consensus and data availability layers of the stack, without any execution. Execution happens through rollups, one of the Layer 2 approaches. People can deploy rollups on top of Celestia, and Celestia provides scalable, decentralized block space for them to build on. So you can think of us as a first layer designed from the ground up around rollups, built to scale rollups.

BlockBeats: When did you first start looking to adopt modular blockchains?

Nick: It all stems from two white papers that came out in 2018 and 2019. The first white paper, on data availability sampling and fraud proofs, was co-authored by Celestia's co-founder Mustafa Al-Bassam with Vitalik. In that paper, they address the scalability problem by showing that it is possible to build a blockchain whose block space expands with the number of nodes in the network.

He then wrote another white paper, "LazyLedger", building on that work. LazyLedger continues and expands the concept of data availability scaling: it proposes building a blockchain that is responsible only for data availability and does not execute any transactions. At the time, he referred to execution as "client-side smart contracts".

Related Reading: Understanding Celestia and Fuel, Modular Blockchains to Watch for in 2023

Blockchain clients would execute transactions independently of the first layer, which is what we now call rollups. Rollups are essentially off-chain execution of smart contracts and applications. So LazyLedger really did introduce the concept of the modular blockchain. Later, when rollups came along, they showed how the whole system could work, because rollups make the execution layer as scalable as data availability sampling makes the data layer.

MetaStone: Will the launch of Ethereum's sharding reduce the cost of Layer 2, and does it have an impact on Celestia?

Nick: Ethereum's sharding roadmap has actually shifted toward the approach Celestia is building. Before that, they were building ETH 2.0-style sharding, but in late 2020 they decided to change direction and follow the way Celestia is built. Over time, they gradually aligned their architecture more and more with Celestia's model. So Danksharding is basically a different implementation of ideas similar to Celestia's.

However, there are several differences, the first being timing. Celestia is going to launch in a few months, while Danksharding is still in the design and research phase; it's hard to know when it will launch, and I don't think they've even set a date yet. They do have Proto-Danksharding, which is EIP-4844, but that will only give Ethereum a one-time, modest increase in block space.

Based on the demand we're seeing for deploying Layer 2s, I don't think that's nearly enough to provide the required throughput. Celestia, by contrast, will launch right at the time when people want to deploy rollups in large numbers. I don't think Ethereum will be able to scale rollups without Celestia. And in the long run, when Danksharding launches, the problem is that it is essentially a data availability layer attached to a monolithic Layer 1, the original Ethereum chain.

As a result, Ethereum has a lot of technical debt and baggage that it has to build on top of, whereas Celestia had the opportunity to start from scratch, so there isn't so much state bloat. We don't need execution; our network is very lightweight and simplified. Ethereum doesn't have that luxury, since it still has to carry the burden of Ethereum Layer 1. Those are some of the differences I see.

DAS is more trustworthy than DACs

Allowing users to securely own their data and the assets it represents, and dispelling ordinary users' concerns about asset security, can help bring the next billion users into Web3. An independent data availability layer will therefore be an integral part of Web3. Data availability (DA) is essentially about guaranteeing that block data has actually been published, so that light nodes can verify the chain without storing all of its data or participating in consensus.

Data availability sampling (DAS) and data availability committees (DACs) are the two main ways to verify that data is available today. The former verifies that a block's data has been published by downloading a few randomly selected chunks of it, while the latter relies on a quorum of committee members signing each state update to attest that they have received the data.
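As a rough illustration of the sampling side, here is a minimal Python sketch (the `fetch_chunk` helper, chunk counts, and sample counts are hypothetical assumptions, not Celestia's actual client API): a light node requests a handful of randomly chosen chunks of an erasure-coded block, and the chance that a large withheld portion goes undetected shrinks exponentially with the number of samples.

```python
import random

def probability_withholding_undetected(hidden_fraction: float, samples: int) -> float:
    """Chance that `samples` uniform random draws all miss a withheld fraction of the block."""
    return (1.0 - hidden_fraction) ** samples

def sample_availability(fetch_chunk, total_chunks: int, samples: int = 16) -> bool:
    """Light-node style check: request a few random chunks of an erasure-coded block.

    `fetch_chunk(index)` stands in for a network call that returns chunk bytes,
    or None if the data is being withheld (illustrative only).
    """
    for index in random.sample(range(total_chunks), samples):
        if fetch_chunk(index) is None:
            return False  # a missing chunk is direct evidence of unavailability
    return True

# With 50% of the block withheld, 16 samples all miss it with probability ~1.5e-5.
print(probability_withholding_undetected(0.5, 16))
```

A committee, by contrast, gives the user nothing to sample: you simply trust the signatures.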

It is generally accepted in the industry that an independent data availability layer that is itself a public blockchain is superior to an availability committee made up of a group of known, trusted parties: if enough committee members' private keys are stolen, or the members collude, off-chain data can be withheld and the security of users' funds and data is seriously threatened. Nick pointed out that what Celestia is doing is making the data availability layer more decentralized, effectively providing an independent DA public chain with its own validators, block producers, and consensus mechanism to raise the level of security.

MetaStone: In the DA marketplace, DA layers primarily accept data from Layer 2s and Layer 3s, but we know that most Layer 3s are unable to send their data to the DA layer due to data staking, while Polygon will use a bridge to receive this data. What do you think about this, and how will Celestia receive data from Layer 3s?

Nick: What the bridge does is verify the availability of data on Celestia. So a third party can publish its data to Celestia while publishing its state updates to another chain, such as Ethereum Layer 1, Optimism, or Polygon. The rollup contracts on those chains can use this bridge to verify that the data is available on Celestia. So we're able to help scale that.

MetaStone: In the current DA market, EigenLabs has also launched EigenDA, which re-uses Ethereum's existing validator set through re-staking to secure other networks and reduce the cost of running nodes. What are your thoughts on this?

Nick: Re-staking is an interesting idea that lets you use existing staked capital as collateral to secure a new protocol. But it doesn't fundamentally scale the blockchain; it's just a way to launch a new protocol without having to issue a new token. The problem with EigenDA is that its design isn't really data availability in the sense you have in mind when you think of Ethereum, Danksharding, or Celestia. EigenDA is just a data availability committee, essentially a multi-signature setup: someone tells you the data is available, but you can't verify it yourself. So EigenDA can't really be compared with Celestia; they're not the same kind of product.

Another issue is that if they use re-staked ETH, or any token other than an EigenDA-native one, to secure EigenDA, there is no penalty for data withholding attacks. A data withholding attack is an unattributable fault, meaning you can't prove to a smart contract or any other entity on Ethereum Layer 1 that data has been withheld. As a result, if someone actually withholds data, the re-staked ETH can't be slashed. That means you could carry out attacks on EigenDA at essentially zero cost. So I think it's a deep problem in the design. That's how I think of EigenDA.

MetaStone: Some off-chain data availability layers choose DACs to protect their data when validating it, while others choose DAS. What are your thoughts on DACs versus DAS?

Nick: Blockchains are really verifiable computers, so you shouldn't need to trust someone else, such as a committee. The whole point of decentralization is that the end user can verify the chain themselves. A data availability committee isn't really a blockchain, because when you use a DAC, you by definition have to trust the committee. Data availability sampling, in contrast, is a way of directly verifying the chain by sampling it. So from a verifiability standpoint, it's a true blockchain: you don't need to trust Celestia's validators, you can verify them yourself, and even if they try to deceive you or collude, they can't fool you. That's a fundamental and very important difference, and people should be aware of it. It's also what I said earlier: EigenDA is not the same thing as Celestia because it's a DAC, so you can't really compare them.

BlockBeats: Does DAS also bring benefits when nodes join or leave the network?

Nick: Absolutely. One of the superpowers of a network like Celestia is data availability sampling, which means you can increase the block size as the number of nodes in the network increases, and that's very powerful. In a monolithic chain, the block size stays the same no matter how many people are running nodes, whereas in Celestia you can actually increase the block size as more nodes join and start sampling.

We want to create a culture where users run nodes in their wallets or browsers. That way, as more users join the network, the number of nodes increases, so blocks can become larger, providing more block space for new users and new applications. There's a positive feedback loop here: users themselves give the applications they use the room to scale.
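A back-of-the-envelope sketch of that scaling relationship, using made-up numbers and the simplifying assumption that the light-node population collectively needs to download roughly one full copy of the erasure-coded block for it to be recoverable (none of these constants are Celestia's actual parameters):

```python
def max_block_size_bytes(num_light_nodes: int,
                         samples_per_node: int = 16,
                         chunk_size_bytes: int = 512,
                         erasure_expansion: int = 4) -> int:
    """Rough upper bound on the block size supported by collective sampling.

    Assumes the nodes together should cover about one full copy of the
    erasure-coded block; all constants are illustrative assumptions.
    """
    total_sampled = num_light_nodes * samples_per_node * chunk_size_bytes
    return total_sampled // erasure_expansion

# Under these toy assumptions, doubling the light-node count doubles the
# block space the network can support.
print(max_block_size_bytes(1_000))   # ~2 MB
print(max_block_size_bytes(2_000))   # ~4 MB
```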

KZG commitments may be used in the future

The Quantum Gravity Bridge (QGB) is a data availability bridge between Celestia and Ethereum. Celestia deploys it on Ethereum; Ethereum Layer 2 operators can then publish their transaction data to the Celestia network, and Celestia's proof-of-stake (PoS) validators include it in a block. This data is then attested from Celestia to Ethereum in the form of data availability proofs. An attestation is the Merkle root of the L2 data, signed by Celestia's validators, proving that the data is available on Celestia.

The QGB contract verifies the signatures on the DA attestations from Celestia. So when a Layer 2 contract on Ethereum updates its state, it does not rely on the transaction data being published to Ethereum; instead, it checks that the correct data was made available on Celestia by querying the DA bridge contract. The contract responds positively to any valid attestation that was previously relayed to it, and negatively otherwise. Nick pointed out that Celestia will provide Ethereum Layer 2s with high-throughput data availability that is more secure than other off-chain data availability solutions, and at lower fees.
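To make the mechanism concrete, here is a minimal Python sketch of the role the bridge contract plays; the class and method names are hypothetical, signature checking is elided, and the real QGB contract lives on Ethereum and verifies actual Celestia validator signatures:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Validator:
    address: str
    power: int

class QGBBridgeSketch:
    """Illustrative model of the DA bridge contract's role (not the real contract)."""

    def __init__(self, validator_set: list[Validator], threshold: float = 2 / 3):
        self.validator_set = validator_set
        self.threshold = threshold
        self.attested_roots: set[bytes] = set()

    def submit_attestation(self, data_root: bytes, signers: set[str]) -> bool:
        """Record a data root if more than 2/3 of Celestia voting power signed it.

        `signers` stands in for a verified set of validator signatures; real
        signature verification is omitted in this sketch.
        """
        signed = sum(v.power for v in self.validator_set if v.address in signers)
        total = sum(v.power for v in self.validator_set)
        if signed / total > self.threshold:
            self.attested_roots.add(data_root)
            return True
        return False

    def is_data_available(self, data_root: bytes) -> bool:
        """What a rollup contract queries instead of reading the data from Ethereum."""
        return data_root in self.attested_roots

# Usage idea: a rollup's settlement contract accepts a state update only if
# bridge.is_data_available(merkle_root_of_its_batch) returns True.
```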

BlockBeats: Do you think the Quantum Gravity Bridge is more expensive or cheaper than EigenDA?

Nick: One of the problems with EigenDA is that they haven't released any information about how it's actually built, so without code it's hard to know what it will look like. Depending on how EigenDA is built, it could have expensive verification costs, because you have to generate KZG commitments (Kate-Zaverucha-Goldberg polynomial commitments) and verify signatures on Ethereum, for example verifying a whole bunch of signatures for each batch. That can consume a lot of gas. The good thing about QGB is that we designed it specifically to minimize gas costs.

First of all, we batch. Multiple Celestia blocks are batched together, a single commitment is generated over them, signed, and then published to Ethereum. So instead of relaying and validating each block, you only need to do it once per batch, which significantly reduces the gas cost of verifying commitments.
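A small sketch of the batching idea (the hashing and tree layout here are generic, not the exact QGB commitment format): several Celestia block data roots are merkleized into one commitment, so the Ethereum-side contract verifies one signed root per batch instead of one per block.

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def batch_commitment(block_data_roots: list[bytes]) -> bytes:
    """Fold many per-block data roots into a single root that gets signed once."""
    level = list(block_data_roots)
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])  # duplicate the last node on odd-sized levels
        level = [sha256(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# Ten Celestia block data roots become one commitment: one signature check on
# Ethereum per batch instead of ten.
roots = [sha256(f"celestia-block-{h}".encode()) for h in range(100, 110)]
print(batch_commitment(roots).hex())
```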

Second, we are also building a zero-knowledge QGB, which will further reduce the gas cost of verifying commitments on Ethereum Layer 1 by checking all of those signatures inside a zero-knowledge proof. The gas cost of verifying commitments on Ethereum Layer 1 is a big overhead for any off-chain DA. Then there's the actual DA cost, such as paying for data on Celestia or EigenDA, and it's hard to know how much that will be right now. I think it will be very, very low in either case, so low that I suspect it won't be a differentiating factor, unless Celestia suddenly becomes congested or something else drives the cost up.

BlockBeats: You mentioned KZG, but why hasn’t Celestia used KZG yet, and what’s the thinking behind it?

Nick: Yes. The problem with KZG commitments is that they're still fairly new and very slow to compute, so creating blocks would be more expensive if we used them. Also, as the block size increases you have to compute more and more opening proofs, which slows things down further. So Celestia made the very practical decision to use a plain Merkle tree (hash tree) with fraud proofs.
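For contrast with KZG, here is the kind of plain hash-based commitment Celestia opted for, in sketch form: committing is just hashing, and an inclusion proof is a short path of sibling hashes that anyone can re-hash up to the root. (This is a generic binary Merkle tree for illustration; Celestia actually uses namespaced Merkle trees.)

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_levels(leaves: list[bytes]) -> list[list[bytes]]:
    """All levels of a binary Merkle tree, leaf hashes first, root last."""
    levels = [[sha256(leaf) for leaf in leaves]]
    while len(levels[-1]) > 1:
        level = levels[-1]
        if len(level) % 2 == 1:
            level = level + [level[-1]]  # duplicate the last node on odd-sized levels
        levels.append([sha256(level[i] + level[i + 1]) for i in range(0, len(level), 2)])
    return levels

def prove(levels: list[list[bytes]], index: int) -> list[bytes]:
    """Sibling hashes from leaf to root for the leaf at `index`."""
    proof = []
    for level in levels[:-1]:
        if len(level) % 2 == 1:
            level = level + [level[-1]]
        proof.append(level[index ^ 1])
        index //= 2
    return proof

def verify(root: bytes, leaf: bytes, index: int, proof: list[bytes]) -> bool:
    """Re-hash the leaf up the path and compare against the committed root."""
    node = sha256(leaf)
    for sibling in proof:
        node = sha256(node + sibling) if index % 2 == 0 else sha256(sibling + node)
        index //= 2
    return node == root

# Commit to 8 data chunks, then prove and verify inclusion of chunk 3.
chunks = [f"chunk-{i}".encode() for i in range(8)]
levels = build_levels(chunks)
root = levels[-1][0]
assert verify(root, chunks[3], 3, prove(levels, 3))
```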

That said, if KZG becomes practical, we can easily swap it in. Excitingly, a few weeks ago at SBC (the Science of Blockchain Conference), Ethereum Foundation researcher Dankrad Feist shared some promising research on hardware acceleration for KZG. We're monitoring this and would absolutely consider switching if there are improvements. But KZG adds a lot of complexity, so it's a challenge.

BlockBeats: I'd like to ask some questions about Rollkit, the modular rollup framework. What role do you think Rollkit will play in the future?

Nick: The first thing people should know is that Celestia is completely neutral. In fact, we're currently working with almost every rollup SDK to integrate Celestia as a DA option. We started Rollkit when there was no open-source rollup framework: there were Layer 2s at the time, but they were all building their own individual systems rather than a software SDK anyone could use to build their own rollup. That's why we incubated Rollkit.

I think one of the unique things about Rollkit is that it was the first framework designed around the idea that a rollup doesn't have to be tied to Ethereum or settle to a smart contract, which makes it better suited to running sovereign rollups. Another important aspect is that Rollkit is compatible with ABCI (Application BlockChain Interface), so any Cosmos SDK application or execution environment that is ABCI-compatible can run on it. People have taken many different virtual machines, made them ABCI-compatible, and then been able to launch them on Rollkit. This matters because it opens up another ecosystem of projects for building rollups. Another great thing is that the Rollkit team has built a fraud-proof system for Cosmos SDK applications, so it's actually possible to build an optimistic rollup on top of Rollkit, which is very exciting.

BlockBeats: Is there anything you'd like to say to developers or practitioners in China?

Nick: We're very excited to have more of a presence in China. We know China has played an important role in the origins of blockchain and cryptocurrency from very early on. There are so many talented engineers and users in China, and the Chinese community is full of enthusiasm, so we're really looking forward to engaging with it. I lived in Hong Kong for a year and a half and have traveled around China many times. I love Chinese culture, and I really appreciate the Chinese mentality: people are full of drive, with a builder's and a striver's mindset, which I really like.
