TRON Industry Weekly: BTC May Continue to Bottom Out Below $80,000, While Decentralized Storage Protocol Walrus Raises $140 Million in Financing

Reprinted from chaincatcher
04/01/2025

1. Forecast
1. Macro-level summary and future forecast
Last week, the Trump administration announced it would impose a 25% tariff on all non-US-made cars, a decision that once again sparked panic in the market. The tariff policy may not only drive up prices of imported cars and parts sharply, but could also trigger retaliatory measures from trading partners, further exacerbating international trade tensions. Investors should continue to pay close attention to the progress of trade negotiations and changes in the global economic situation.
2. Market changes and early warnings in the crypto industry
Last week, the cryptocurrency market suffered a significant pullback driven by macro-level fear, with the gains accumulated during the rebound largely erased in just a few days, mainly due to renewed uncertainty in the global macroeconomic environment. Looking ahead to this week, the market's focus will be on whether Bitcoin and Ethereum break decisively below their previous lows. Those levels are not only important technical support but also a key psychological line of defense for the market. On April 2, the United States officially began imposing reciprocal tariffs. If this move does not further intensify market panic, the cryptocurrency market may see a phased opportunity to buy the bottom on the right side. Investors should nonetheless remain vigilant and pay close attention to market trends and changes in related indicators.
3. Industry and track hot spots
Particle, a modular L1 chain-abstraction platform led by Cobo and YZI with follow-on investment from HashKey, has simplified cross-chain operations and payments, greatly improving user experience and developer efficiency, though it still faces challenges around liquidity and centralized management. Skate, an application-layer protocol focused on seamlessly linking mainstream VMs and led by HashKey, offers an innovative and efficient solution: by providing a unified application state, simplifying cross-chain task execution, and ensuring security, it greatly reduces the complexity developers and users face in a multi-chain environment. Arcium is a fast, flexible, low-cost infrastructure designed to bring encrypted computing to blockchains. Walrus, an innovative decentralized storage solution, raised a record $140 million in financing.
2. Market hot tracks and weekly potential projects
1. Potential track performance
**1.1. A brief analysis of Skate, the mainstream-VM application-layer protocol led by HashKey**
Skate is an infrastructure layer focused on dApps, connecting all virtual machines (EVM, TonVM, SolanaVM) so that users can interact seamlessly from their native chains. For users, Skate delivers applications that run in their preferred environment. For developers, Skate manages cross-chain complexity and introduces a new application paradigm: applications built once across all chains and all virtual machines, serving every chain from a unified application state.
Architecture Overview
Skate's infrastructure consists of three basic layers:
- Skate's central chain: the central hub that processes all logical operations and stores application state.
- Pre-confirmation AVS: an AVS deployed on Eigenlayer that facilitates the secure delegation of restaked ETH to Skate's executor network. It acts as the primary source of truth, ensuring that executors perform the required actions on target chains.
- Executor network: a network of executors responsible for performing application-defined operations. Each application has its own set of executors.
As the central chain, Skate maintains and updates the shared state and provides instructions to connected peripheral chains, which respond only to calldata supplied by Skate. This is carried out by the executor network, where each executor is a registered AVS operator responsible for performing these tasks. If dishonest behavior occurs, the pre-confirmation AVS serves as the source of truth for slashing operators who violate the rules.
User Process
Skate is primarily intent-driven: each intent encapsulates the key information expressing the action the user wants to perform and defines the necessary parameters and boundaries. Users simply sign an intent with their own native wallet and interact only on that chain, creating a user-native environment.
The intent flow is as follows (a condensed sketch follows the list):
- **Source chain**: The user initiates the operation by signing an intent on the TON/Solana/EVM chain.
- **Skate**: An executor receives the intent and calls the processIntent function, which creates a task encapsulating the key information required for execution and triggers a TaskSubmitted event. AVS validators actively listen for TaskSubmitted events and verify the content of each task. Once consensus is reached in the pre-confirmation AVS, the forwarder issues the signature required for task execution.
- **Target chain**: The executor calls the executeTask function on the Gateway contract. The Gateway contract verifies that the task has passed AVS verification, i.e., that the forwarder's signature is valid, before the function defined in the task can be executed. The calldata of the function call is then executed, and the intent is marked as completed.
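To make the flow concrete, here is a minimal, self-contained sketch of the intent lifecycle. The class and function shapes (SkateHub, PreconfAVS, Gateway) mirror the terms above but are illustrative stand-ins, not Skate's actual contract interfaces.

```python
from dataclasses import dataclass

@dataclass
class Task:
    intent_id: int
    target_chain: str
    calldata: bytes
    avs_signature: bytes | None = None  # set once pre-confirmation AVS reaches consensus
    completed: bool = False

class SkateHub:
    """Skate's central chain: processes intents and stores application state."""
    def __init__(self) -> None:
        self.tasks: list[Task] = []

    def process_intent(self, intent_id: int, target_chain: str, calldata: bytes) -> Task:
        task = Task(intent_id, target_chain, calldata)
        self.tasks.append(task)  # the real system emits a TaskSubmitted event here
        return task

class PreconfAVS:
    """Eigenlayer AVS: validators verify the task, then the forwarder signs it."""
    def attest(self, task: Task) -> None:
        task.avs_signature = b"forwarder-quorum-sig"  # stand-in for real consensus

class Gateway:
    """Target-chain contract: executes only calldata approved by the AVS."""
    def execute_task(self, task: Task) -> None:
        if task.avs_signature is None:
            raise PermissionError("task lacks a valid forwarder signature")
        # the real contract would now execute task.calldata on the target chain
        task.completed = True

hub, avs, gateway = SkateHub(), PreconfAVS(), Gateway()
task = hub.process_intent(1, "base", b"swap(...)")  # user signs intent on source chain
avs.attest(task)
gateway.execute_task(task)
assert task.completed
```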
Comments
Skate provides an innovative and efficient solution for cross-chain operation of decentralized applications. By providing a unified application state, simplifying cross-chain task execution, and ensuring security, Skate greatly reduces the complexity developers and users face in multi-chain environments. Its flexible architecture and ease of integration give it broad application prospects in the multi-chain ecosystem. However, to achieve full adoption in high-concurrency, multi-chain settings, Skate still needs continued work on performance optimization and cross-chain compatibility.
**1.2. How Arcium, the decentralized encrypted-computing network backed by Coinbase, NGC and Long Hash, realizes its vision**
Arcium is a fast, flexible, low-cost infrastructure designed to make encrypted computing accessible through blockchain. Arcium is an encrypted supercomputer offering large-scale encrypted computation services that let developers, applications, and industries compute on fully encrypted data within a trustless, verifiable, and efficient framework. Through secure multi-party computation (MPC), Arcium provides scalable, secure encryption solutions for Web2 and Web3 projects and supports decentralized networks.
Brief description of the architecture
The Arcium network is designed to provide secure, distributed confidential computing for a wide range of applications, from artificial intelligence to decentralized finance (DeFi) and beyond. It is built on advanced cryptographic techniques, including multi-party computation (MPC), enabling trustless, verifiable computing without intervention by a central authority.
- Multi-party eXecution Environments (MXEs)
MXEs are specialized, isolated environments for defining and securely executing computational tasks. They support parallel processing (multiple clusters can run computations for different MXEs at the same time), improving throughput and security.
MXEs are highly configurable, allowing compute customers to define security requirements, encryption schemes, and performance parameters according to their needs. Although a single compute task runs on a specific cluster of Arx nodes, multiple clusters can be associated with a single MXE, so computing tasks can still be performed reliably even if some nodes in a cluster go offline or become overloaded. By predefining these configurations, customers can tailor the environment with great flexibility to their specific use cases.
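As a rough illustration of the configurability described above, the sketch below models an MXE configuration record. All field names and values are hypothetical; Arcium's actual configuration surface may look quite different.

```python
from dataclasses import dataclass

@dataclass
class MXEConfig:
    # hypothetical knobs mirroring the text: security, encryption, performance
    mpc_protocol: str        # e.g. "cerberus" (dishonest majority) or "manticore"
    encryption_scheme: str   # customer-chosen encryption scheme
    cluster_ids: list[str]   # several clusters can back one MXE for availability
    max_parallel_tasks: int  # performance parameter

mxe = MXEConfig(
    mpc_protocol="cerberus",
    encryption_scheme="aes-256-gcm",
    cluster_ids=["cluster-a", "cluster-b"],  # redundancy if some nodes go offline
    max_parallel_tasks=8,
)
```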
- arxOS
arxOS is a distributed execution engine in the Arcium network, responsible for coordinating the execution of computing tasks and driving Arx nodes and clusters. Each node (similar to the core in a computer) provides computing resources to perform computational tasks defined by MXEs.
- Arcis (Arcium's developer framework)
Arcis is a Rust-based developer framework that lets developers build applications on the Arcium infrastructure and supports all of Arcium's multi-party computation (MPC) protocols. It comprises the framework itself and a compiler.
- Arx node clusters (running arxOS)
Arx nodes are grouped into clusters that execute the computational tasks defined by MXEs. Clusters offer customizable trust models, supporting dishonest-majority protocols (initially Cerberus) and "honest but curious" protocols (such as Manticore). Additional protocols (including honest-majority protocols) will be added in the future to cover more use cases.
Chain-level enforcement
All state management and computing-task coordination is handled on-chain via the Solana blockchain, which acts as the consensus layer orchestrating Arx node operations. This ensures fair reward allocation, enforcement of network rules, and agreement among nodes on the current state of the network. Tasks are queued in a decentralized mempool architecture, where on-chain components help determine which compute tasks have the highest priority, identify misbehavior, and manage execution order.
Nodes post collateral to guarantee compliance with network rules. In the event of misconduct or deviation from the protocol, the system penalizes the offending nodes by slashing their collateral, maintaining the integrity of the network.
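A toy version of that stake-and-slash bookkeeping is sketched below; the class shape and penalty fraction are invented for illustration and are not Arcium's actual parameters.

```python
class ArxNode:
    """Minimal stake ledger for one Arx node."""
    def __init__(self, node_id: str, stake: float) -> None:
        self.node_id = node_id
        self.stake = stake

    def slash(self, fraction: float) -> float:
        """Burn a fraction of the node's collateral for a protocol deviation."""
        penalty = self.stake * fraction
        self.stake -= penalty
        return penalty

node = ArxNode("arx-7", stake=10_000.0)
burned = node.slash(0.05)   # 5% penalty (illustrative figure)
print(burned, node.stake)   # 500.0 9500.0
```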
Comments
Here are the key features that make the Arcium network a cutting-edge secure computing solution:
- Trustless, arbitrary encrypted computation: The Arcium network achieves trustless computing through its multi-party execution environments (MXEs), allowing arbitrary computation on encrypted data without exposing the underlying content.
- Guaranteed execution: Through its blockchain-based coordination system, the Arcium network ensures that all MXE tasks are executed reliably. The protocol enforces compliance through staking and penalty mechanisms: nodes must post collateral, and any deviation from the agreed execution rules results in that collateral being slashed, ensuring each computing task completes correctly.
- Verifiability and privacy protection: Arcium provides a verifiable computing system that lets participants audit the correctness of computation results, enhancing the transparency and reliability of data processing.
- On-chain coordination: The network uses the Solana blockchain to manage node scheduling, compensation, and performance incentives. Staking, slashing, and other incentive mechanisms are all implemented on-chain, ensuring the system's decentralization and fairness.
- Developer-friendly interfaces: Arcium provides dual interfaces: a web-based graphical interface for non-technical users and a Solana-compatible SDK for developers building customized applications. This design makes confidential computing convenient for ordinary users while also meeting the needs of highly technical developers.
- Multi-chain compatibility: Although initially built on Solana, the Arcium network was designed with multi-chain compatibility in mind and can support access from different blockchain platforms.
Through these features, the Arcium Network aims to redefine how sensitive data is processed and shared in a trustless environment, driving the wider application of secure multi-party computing (MPC).
**1.3. What are the characteristics of Particle, the modular L1 chain-abstraction platform led by Cobo and YZI with two follow-on investments from HashKey?**
Particle Network radically simplifies the Web3 user experience through wallet abstraction and chain abstraction. With its wallet-abstraction SDK, developers can onboard users into smart accounts with one-click social login.
In addition, Particle Network's chain-abstraction stack features Universal Accounts as its flagship product, giving users a unified account and balance across every chain.
Particle Network's live wallet-abstraction product suite consists of three key technologies:
- **User Onboarding**: A streamlined registration process makes it easier for users to enter the Web3 ecosystem, improving the user experience.
- **Account Abstraction**: Users' assets and operations no longer depend on a single chain, improving the flexibility and convenience of cross-chain operations.
- **Upcoming product: Chain Abstraction**: Chain abstraction will further strengthen cross-chain capabilities, letting users seamlessly operate and manage assets across multiple blockchains for a unified on-chain account experience.
Architecture analysis
Particle Network coordinates and completes cross-chain transactions in a high-performance EVM execution environment through its Universal Accounts and three core functions:
- **Universal Accounts**: Provide a unified account state and balance; users' assets and operations on all chains are managed through a single account.
- **Universal Liquidity**: Cross-chain liquidity pools ensure that funds can be transferred and used seamlessly across different chains.
- **Universal Gas**: Automatically manages the gas fees required for cross-chain transactions, simplifying the user experience.
These three core functions work together to enable Particle Network to unify the interactions on all chains and realize automated cross-chain transfer of funds through atomic cross-chain transactions, thereby helping users achieve their goals without manual intervention.
**Universal Accounts**

Particle Network's Universal Accounts aggregate token balances across all chains, allowing users to deploy all their on-chain assets in decentralized applications (dApps) on any chain as if using a single wallet.
Universal Accounts implement this through Universal Liquidity. They can be understood as dedicated smart-account implementations deployed and coordinated across all chains. Users create and manage a Universal Account simply by connecting a wallet, and the system automatically assigns them administrative permissions. The connected wallet can be generated through Particle Network's Modular Smart Wallet-as-a-Service, or it can be an ordinary Web3 wallet such as MetaMask, UniSat, or Keplr.
Developers can easily integrate Universal Account functionality into their dApps via Particle Network's universal SDK, enabling cross-chain asset management and operations.
**Universal Liquidity**

Universal Liquidity is the technical architecture that supports aggregating balances across all chains. Its core function is coordinated by Particle Network through atomic cross-chain transactions and swaps. These atomic transaction sequences are driven by Bundler nodes, which execute UserOperations and complete actions on the target chain.
Universal Liquidity relies on a network of liquidity providers (also known as fillers) that move intermediary tokens (such as USDC and USDT) between chains through token pools. These liquidity providers ensure that assets flow smoothly across chains.
For example, suppose a user wants to use USDC to purchase an NFT priced in ETH on the Base chain. In this scenario:
- Particle Network aggregates the user's USDC balances across multiple chains.
- The user purchases the NFT with their own assets.
- Once the transaction is confirmed, Particle Network automatically swaps the USDC for ETH and completes the NFT purchase.
These additional on-chain operations take only a few seconds of processing time and are transparent to the user without manual intervention. In this way, Particle Network simplifies the management of cross-chain assets, making cross-chain transactions and operations seamless and automated.
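The arithmetic behind that example can be sketched as follows. The balances, prices, and greedy sourcing order are all made up for illustration; the actual routing is internal to Particle Network.

```python
# unified balance seen by the Universal Account (per-chain USDC holdings)
balances_usdc = {"arbitrum": 60.0, "polygon": 45.0, "optimism": 30.0}
nft_price_eth = 0.05
eth_usdc_rate = 1800.0                         # assumed spot rate

unified_balance = sum(balances_usdc.values())  # 135.0 USDC, shown as one balance
cost_usdc = nft_price_eth * eth_usdc_rate      # 90.0 USDC needed on Base
assert unified_balance >= cost_usdc, "insufficient unified balance"

# liquidity providers source USDC from the user's chains (greedy, for illustration),
# then the protocol swaps USDC -> ETH on Base and buys the NFT atomically
drawn, remaining = {}, cost_usdc
for chain, bal in balances_usdc.items():
    take = min(bal, remaining)
    if take > 0:
        drawn[chain] = take
        remaining -= take
print(drawn)  # {'arbitrum': 60.0, 'polygon': 30.0}
```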
**Universal Gas**

Particle Network also solves gas-token fragmentation through Universal Liquidity.
In the past, users had to hold the gas tokens of multiple chains in different wallets to pay fees on each chain, a major barrier to entry. To solve this, Particle Network uses its native Paymaster, allowing users to pay gas fees with any token on any chain. These transactions are ultimately settled on Particle Network's L1 in the chain's native token (PARTI).
Users do not need to hold PARTI tokens to use a Universal Account, as their gas tokens are automatically swapped and used for settlement. This makes cross-chain operations and payments easier, sparing users from managing multiple gas tokens.
Comments
Advantages:
- Unified management of cross-chain assets: Universal Accounts and Universal Liquidity let users manage and use assets on different chains without worrying about fragmented balances or the complexity of cross-chain transfers.
- Simplified user experience: Social login and the modular Smart Wallet-as-a-Service let users enter Web3 easily, lowering the barrier to entry.
- Automated cross-chain transactions: Atomic cross-chain transactions and Universal Gas make the conversion of assets and gas tokens seamless, improving operational convenience.
- Developer-friendly: Developers can easily integrate cross-chain functionality into their dApps through Particle Network's universal SDK, reducing integration complexity.
Disadvantages:
- Reliance on liquidity providers: The liquidity providers that move intermediary tokens such as USDC and USDT across chains must participate at scale to keep liquidity stable. If pools run short or provider participation is low, transaction smoothness may suffer.
- Centralization risk: Particle Network relies to some extent on its native Paymaster to handle gas payment and settlement, which may introduce centralization risks and dependencies.
- Compatibility and adoption: Despite support for multiple wallets (such as MetaMask and Keplr), compatibility across different chains and wallets may remain a major user-experience challenge, especially for smaller chains or wallet providers.
Overall, Particle Network has greatly improved user experience and developer efficiency by simplifying cross-chain operations and payments, but it also faces the challenges of liquidity and centralized management.
2. Detailed explanation of this week's featured project
**2.1. A detailed look at Walrus, the innovative decentralized storage solution led by A16z that raised a record $140 million this month**
Introduction
Walrus is an innovative solution for decentralized big-data (blob) storage. It combines fast, linearly decodable erasure coding that can scale to hundreds of storage nodes, achieving very high resilience with low storage overhead, and it uses the new-generation public chain Sui as a control plane to manage everything from storage-node lifecycles to blob lifecycles to economics and incentive mechanisms, eliminating the need for a fully customized blockchain protocol.
At the heart of Walrus is a new coding protocol called Red Stuff, which adopts an innovative two-dimensional (2D) coding algorithm based on fountain codes. Unlike RS encoding, fountain codes rely mainly on XOR or other very fast operations over large data blocks, avoiding complex mathematical operations. This simplicity allows large files to be encoded in a single pass, significantly speeding up processing. Red Stuff's two-dimensional encoding enables recovery of lost fragments using bandwidth proportional to the amount of data actually lost. In addition, Red Stuff incorporates authenticated data structures to defend against malicious clients and to ensure consistency between stored and retrieved data.
Walrus operates in epochs, each managed by a committee of storage nodes. All operations within an epoch can be sharded by blob ID, enabling high scalability. The system handles blob writes by encoding the data into primary and secondary slivers, generating Merkle commitments, and distributing the slivers to storage nodes. Reads involve collecting and verifying slivers, with both a best-effort path and an incentivized path to handle potential failures. To keep reads and writes uninterrupted while handling the natural churn of participants, Walrus includes an efficient committee reconfiguration protocol.
Another key innovation of Walrus is its approach to proofs of storage, the mechanism for verifying that storage nodes actually hold the data they claim. Walrus addresses the scalability challenges of these proofs by incentivizing all storage nodes to hold slivers of all stored files. This complete replication enables a new proof-of-storage mechanism that challenges storage nodes as a whole rather than file by file. As a result, the cost of proving file storage grows logarithmically with the number of stored files, rather than linearly as in many existing systems.
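A back-of-the-envelope illustration of that scaling follows, assuming the stored files are committed under a single Merkle-style accumulator so one challenge-response covers all of them. The accumulator choice here is an assumption for illustration, not Walrus's published construction.

```python
import math

def proof_size_hashes(num_files: int) -> int:
    """Merkle-path length: proof cost grows with log2(m), not m."""
    return math.ceil(math.log2(max(num_files, 2)))

for m in (1_000, 1_000_000, 1_000_000_000):
    print(f"{m:>13,} files -> ~{proof_size_hashes(m)} hashes per challenge")
```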
Finally, Walrus introduces a staking-based economic model that combines rewards and penalties to align incentives and enforce long-term commitments. The system includes a pricing mechanism for storage resources and write operations, along with a token-governance model for parameter adjustment.
Technical analysis
Red Stuff Coding Protocol
Current industry coding protocols achieve low overhead factors with very strong guarantees, yet remain unsuitable for long-term deployment. The main challenge is that in a long-running, large-scale system, storage nodes regularly fail, lose their fragments, and must be replaced. Moreover, even when storage nodes are well incentivized to participate, membership naturally churns over time.
Both situations require transmitting an amount of data over the network equivalent to the total data stored just to restore the lost fragments for new storage nodes, which is prohibitively expensive. The team therefore wants the recovery cost, when nodes are replaced, to be proportional only to the amount of data that must be recovered, decreasing inversely as the number of storage nodes (n) grows.
To achieve this, Red Stuff encodes large data blocks in two dimensions (2D). The primary dimension is equivalent to the RS encoding used in earlier systems; to make fragment recovery efficient, Walrus also encodes along a secondary dimension. Red Stuff builds on linear erasure coding and the Twin-code framework, which provides efficient recovery of erasure-coded storage in fault-tolerant settings with trusted writers. The team reworked this framework for Byzantine fault-tolerant environments and optimized it for a single cluster of storage nodes, as detailed below.
- Encoding
The starting point is to split the blob into f+1 fragments. Rather than just encoding repair fragments directly, a second dimension is added during the split:
(a) Two-dimensional primary encoding. The file is split into 2f+1 columns and f+1 rows. Each column is encoded as an independent blob with 2f repair symbols; the extended portion of each row then forms the primary fragment of the corresponding node.
(b) Two-dimensional secondary encoding. The file is split into 2f+1 columns and f+1 rows. Each row is encoded as an independent blob with f repair symbols; the extended portion of each column then forms the secondary fragment of the corresponding node.
Figure 2: 2D encoding in Red Stuff
The original blob is split into f+1 primary fragments (vertical in the figure) and 2f+1 secondary fragments (horizontal in the figure); Figure 2 shows this process. The file is thus split into (f+1)(2f+1) symbols, which can be visualized as an [f+1, 2f+1] matrix.
Given this matrix, repair symbols are generated along both dimensions. Each of the 2f+1 columns (each of size f+1) is expanded to n symbols, so the matrix gains n rows; each row is assigned as the primary fragment of a node (see Figure 2a). This roughly triples the amount of data to send. To allow efficient recovery of each fragment, the initial [f+1, 2f+1] matrix is also extended along its rows, each row growing from 2f+1 symbols to n symbols (see Figure 2b) using the same encoding scheme. This creates n columns, each assigned as the secondary fragment of the corresponding node.
For each fragment (primary and secondary), the writer W also computes a commitment over its symbols. For each primary fragment, the commitment covers all symbols in the extended row; for each secondary fragment, all values in the extended column. In the final step, the client creates a list of these fragment commitments, which serves as the blob commitment.
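The sketch below reproduces the shape of this construction on toy data. It stands in a simple systematic polynomial (Reed-Solomon-style) code over a prime field for the fountain code Red Stuff actually uses, and integer "symbols" for real byte slices, so it illustrates the geometry of the [f+1, 2f+1] matrix rather than the production encoder.

```python
P = 2**31 - 1  # Mersenne prime modulus for the toy field

def extend(symbols, n):
    """Systematically extend k symbols to n code symbols (any k recover all)."""
    k = len(symbols)

    def eval_at(x):
        # Lagrange interpolation: treat `symbols` as evaluations at 0..k-1
        total = 0
        for i, y in enumerate(symbols):
            num, den = 1, 1
            for j in range(k):
                if j != i:
                    num = num * (x - j) % P
                    den = den * (i - j) % P
            total = (total + y * num * pow(den, P - 2, P)) % P
        return total

    return [eval_at(x) for x in range(n)]

def red_stuff_2d(blob_symbols, f):
    """Arrange a blob as an (f+1) x (2f+1) matrix and extend both dimensions."""
    n = 3 * f + 1
    rows, cols = f + 1, 2 * f + 1
    assert len(blob_symbols) == rows * cols
    M = [blob_symbols[r * cols:(r + 1) * cols] for r in range(rows)]
    # primary dimension: extend each column from f+1 to n symbols;
    # extended row i is node i's primary fragment
    ext_cols = [extend([M[r][c] for r in range(rows)], n) for c in range(cols)]
    primary = [[ext_cols[c][i] for c in range(cols)] for i in range(n)]
    # secondary dimension: extend each row from 2f+1 to n symbols;
    # extended column i is node i's secondary fragment
    ext_rows = [extend(M[r], n) for r in range(rows)]
    secondary = [[ext_rows[r][i] for r in range(rows)] for i in range(n)]
    return primary, secondary

f = 1                                              # n = 3f + 1 = 4 nodes
blob = list(range(1, (f + 1) * (2 * f + 1) + 1))   # six toy symbols
primary, secondary = red_stuff_2d(blob, f)
assert primary[0] == blob[:2 * f + 1]              # systematic: row 0 is original data
assert len(primary) == len(secondary) == 3 * f + 1
```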
- Write protocol
Red Stuff's write protocol follows the same pattern as an RS-encoded protocol. The writer W first encodes the blob and creates a fragment pair for each node, where fragment pair i consists of the i-th primary and i-th secondary fragments. There are n = 3f+1 fragment pairs in total, one per node.
Next, W sends the commitments of all fragments to each node along with its fragment pair. Each node checks that the fragments in its pair are consistent with the commitments, recomputes the blob commitment, and replies with a signed acknowledgement. Once 2f+1 signatures are collected, W assembles a certificate and publishes it on-chain to attest that the blob will be available.
In the theoretical asynchronous network model, reliable transmission is assumed, so every correct node eventually receives a fragment pair from an honest writer. In a practical protocol, however, the writer must at some point stop retransmitting. Once 2f+1 signatures are collected, retransmission can safely stop, since at least f+1 correct nodes (out of the 2f+1 responders) hold their fragment pair for the blob.
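The quorum arithmetic that lets the writer stop retransmitting is worth spelling out; here is a minimal check of it (the committee size is an illustrative choice):

```python
def can_stop_retransmitting(acks: int, f: int) -> bool:
    """Writer may stop once 2f+1 signed acknowledgements are in hand."""
    return acks >= 2 * f + 1

f = 33                          # tolerate up to f Byzantine nodes
n = 3 * f + 1                   # committee size: 100
quorum = 2 * f + 1              # 67 acknowledgements
honest_in_quorum = quorum - f   # even if f of the signers were Byzantine
assert can_stop_retransmitting(quorum, f)
assert honest_in_quorum == f + 1
print(f"n={n}, quorum={quorum}, guaranteed honest holders >= {honest_in_quorum}")
```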
(a) Node 1 and node 3 together hold two rows and two columns
Here, node 1 and node 3 each hold a row and a column of the file. The data fragments held by each node map to different rows and columns of the two-dimensional encoding, so the data is distributed and stored redundantly across multiple nodes for high availability and fault tolerance.
(b) Each node sends the intersection of its row/column with node 4's row/column to node 4 (red); node 3 must encode its row first
In this step, node 1 and node 3 send node 4 the symbols where their rows and columns intersect node 4's row and column. Node 3, specifically, must encode the row it holds to produce the symbol that intersects node 4's fragments. In this way node 4 receives the data it needs for recovery or verification. The process preserves data integrity and redundancy: even if some nodes fail, the remaining nodes can restore the data.
(c) Node 4 uses the f+1 symbols on its column to recover its complete secondary fragment (green), then sends the recovered intersection symbols to the rows of other recovering nodes
In this step, node 4 uses f+1 symbols on its column to recover the complete secondary fragment. Once node 4 has recovered its secondary fragment, it sends the recovered column-intersection symbols to the other nodes that are still recovering, helping them recover their row data. This cooperative exchange keeps the recovery moving, and collaboration between multiple nodes accelerates the process.
(d) Node 4 uses the f+1 symbols on its row, together with the recovered secondary symbols (green) sent by other honest recovering nodes (at least 2f of them, plus the one recovered in the previous step), to recover its primary fragment (dark blue)
At this stage, node 4 recovers its primary fragment using not only the f+1 symbols on its row but also the recovered secondary symbols sent by other honest recovering nodes. To guarantee a correct recovery, node 4 collects at least 2f+1 valid secondary symbols (including the one it recovered in the previous step). Integrating data from multiple sources in this way strengthens fault tolerance and data recovery.
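Following the symbol counts in steps (b)-(d), a recovering node pulls roughly 4f+2 symbols rather than a full blob copy. The sketch below does that back-of-the-envelope arithmetic; the blob size and committee sizes are made-up inputs.

```python
def recovery_bandwidth_bytes(blob_bytes: int, f: int) -> float:
    """Approximate bytes a recovering node downloads under Red Stuff."""
    symbols_per_blob = (f + 1) * (2 * f + 1)
    symbol_bytes = blob_bytes / symbols_per_blob
    # f+1 column symbols (step c) + f+1 row symbols + ~2f recovered
    # secondary symbols from peers (step d)
    symbols_pulled = (f + 1) + (f + 1) + 2 * f
    return symbols_pulled * symbol_bytes

blob = 1 << 30                          # 1 GiB blob (illustrative)
for f in (1, 10, 33):
    n = 3 * f + 1
    mb = recovery_bandwidth_bytes(blob, f) / 2**20
    print(f"n={n:>3}: ~{mb:,.1f} MiB vs {blob / 2**20:,.0f} MiB for a full copy")
```

As the committee grows, the per-node recovery cost shrinks roughly in proportion to 1/n, which is the property the team set out to achieve.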
- Read protocol
The read protocol is the same as in an RS-encoded system; nodes only need their primary fragments. The reader R first asks any node for the blob's commitment set and checks, via the commitment-opening protocol, that the returned set matches the requested blob commitment. R then requests the blob from all nodes, which respond with the primary fragments they hold (possibly progressively, to save bandwidth). Each response is checked against the corresponding commitment in the blob's commitment set.
Once R has collected f+1 correct primary fragments, R decodes the blob, re-encodes it, recomputes the blob commitment, and compares it with the requested one. If the two commitments match (i.e., the same commitment W posted on-chain), R outputs blob B; otherwise, R outputs an error indicating the blob is unrecoverable.
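A minimal sketch of that end-to-end read check follows, with `decode`/`encode` left as hypothetical hooks for the erasure code and a plain hash standing in for the blob commitment (Walrus uses a commitment list / Merkle-style structure):

```python
import hashlib
from typing import Callable

def blob_commitment(encoded: bytes) -> str:
    return hashlib.sha256(encoded).hexdigest()  # stand-in for the commitment list

def read_blob(fragments: list[bytes], f: int, onchain_commitment: str,
              decode: Callable[[list[bytes]], bytes],
              encode: Callable[[bytes], bytes]) -> bytes:
    """fragments: primary slivers already verified against their commitments."""
    if len(fragments) < f + 1:
        raise ValueError("need f+1 correct primary slivers to decode")
    blob = decode(fragments[: f + 1])   # reconstruct candidate blob
    # re-encode, recompute the commitment; mismatch => unrecoverable blob
    if blob_commitment(encode(blob)) != onchain_commitment:
        raise ValueError("commitment mismatch: blob is inconsistent/unrecoverable")
    return blob

# toy usage with identity hooks (a real deployment plugs Red Stuff in here)
identity = lambda x: x
frags = [b"hello ", b"world"]
commit = blob_commitment(b"".join(frags))
print(read_blob(frags, f=1, onchain_commitment=commit,
                decode=lambda fr: b"".join(fr), encode=identity))
```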
Walrus decentralized secure blob storage
- Writing a blob
The process of writing a blob to Walrus is illustrated in Figure 4.
At the start of the process, the writer (➊) encodes the blob using Red Stuff, as shown in Figure 2. This yields the sliver pairs, a set of sliver commitments, and the blob commitment. The writer hashes the blob commitment together with the file length, encoding type, and other metadata to derive a blobid.
The writer (➋) then submits a transaction to the blockchain to secure sufficient storage space for the blob over a span of epochs and to register the blob. The blob's size and commitment are included in the transaction, from which the blobid can be re-derived. The blockchain smart contract must ensure each node has enough space to store its encoded slivers along with all metadata related to the blob commitment. Payment may accompany the transaction to acquire free space, or already-owned free space can be attached as a resource to the request; the implementation allows both options.
Once the registration transaction is committed (➌), the writer informs the storage nodes that they are responsible for storing the blobid's slivers, sending each node the transaction, the commitments, and its assigned primary and secondary slivers, together with proofs that they are consistent with the published blobid. After verifying the commitments and successfully storing its sliver pair, each storage node returns a signed acknowledgement for the blobid.
Finally, the writer collects 2f+1 signed acknowledgements (➍), which together constitute a write certificate. This certificate is then posted on-chain (➎), marking the blob's Point of Availability (PoA) in Walrus. The PoA obliges storage nodes to keep the slivers available for reading during the specified epochs. From this point, the writer can delete the blob from local storage and go offline; it can also use the PoA as a credential to prove the blob's availability to third-party users and smart contracts.
Nodes listen for blockchain events to learn when a blob reaches its PoA. Any node not yet storing that blob's sliver pair runs the recovery process, obtaining all the commitments and its sliver pair as of the PoA point in time. This ensures that eventually every correct node holds its sliver pair for every blob.
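Step ➊'s blobid derivation can be sketched as below. The field layout and hash function are stand-ins; the text only says the blobid is derived by hashing the blob commitment together with the length, encoding type, and other metadata.

```python
import hashlib

def derive_blob_id(blob_commitment: bytes, length: int, encoding: str) -> str:
    """Hypothetical layout: commitment || length || encoding type."""
    preimage = blob_commitment + length.to_bytes(8, "big") + encoding.encode()
    return hashlib.sha256(preimage).hexdigest()

blob_id = derive_blob_id(b"\x01" * 32, 1 << 20, "red_stuff_v1")
print(blob_id[:16], "...")  # anyone with the metadata can re-derive and check it
```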
Summary
All in all, Walrus' contributions include:
- It defines the problem of asynchronous complete data sharing and proposes Red Stuff, the first protocol to solve it efficiently under Byzantine fault tolerance.
- It proposes Walrus, the first permissionless decentralized storage protocol designed for low replication cost that efficiently recovers data lost to failures or participant churn.
- It introduces a staking-based economic model that combines rewards and penalties to align incentives and enforce long-term commitments, and proposes the first asynchronous challenge protocol for efficient proofs of storage.
3. Industry data analysis
1. Overall market performance
1.1 Spot BTC & ETH ETFs
From March 24 to March 29, 2025, Bitcoin (BTC) and Ethereum (ETH) ETF flows showed diverging trends:
Bitcoin ETF:
- March 24, 2025: Bitcoin ETFs saw a net inflow of $84.2 million, the seventh consecutive day of positive inflows, bringing total inflows to $869.8 million.
- March 25, 2025: Bitcoin ETFs recorded another net inflow of $26.8 million, bringing the eight-day cumulative inflow to $896.6 million.
- March 26, 2025: Bitcoin ETF inflows continued to grow, with net inflows of $89.6 million marking the ninth consecutive day of inflows and lifting total inflows to $986.2 million.
- March 27, 2025: Bitcoin ETFs posted net inflows of $89 million, maintaining the positive trend.
- March 28, 2025: Bitcoin ETFs again recorded a net inflow of $89 million, extending the streak of positive inflows.
Ethereum ETF:
- March 24, 2025: Ethereum ETF net flows were zero, ending the previous streak of 13 consecutive days of outflows.
- March 25, 2025: Ethereum ETFs saw a net outflow of $3.3 million, the first since the streak was broken.
- March 26, 2025: Ethereum ETFs again faced a net outflow, of $5.9 million, with investor sentiment remaining cautious.
- March 27, 2025: Ethereum ETFs recorded a net outflow of $4.2 million, suggesting lingering market anxiety.
- March 28, 2025: Ethereum ETFs saw a further net outflow of $4.2 million, continuing the negative trend.
1.2. Spot BTC vs ETH Price Trend
BTC
Analysis
After BTC failed its test of the wedge's upper boundary ($89,000) last week, it began trending down as expected. This week, users need only watch three key support levels: first-line support at $81,400, second-line support at the $80,000 round number, and bottom support at the year-to-date low of $76,600. For users waiting for an entry opportunity, these three support levels can be treated as suitable points for entering the market in batches.
ETH
Analysis
After failing to hold above $2,000, ETH is now close to retesting the year-to-date low of $1,760. Its subsequent trend depends almost entirely on BTC's performance. If BTC can hold the $80,000 mark and start a rebound, ETH will most likely form a double bottom above $1,760 and can target first-line resistance at $2,300. Conversely, if BTC falls back below $80,000 and seeks support at $76,600 or lower, ETH will likely look down to first-line support at $1,700 or even second-line support at $1,500.
1.3. Fear & Greed Index
2. Public chain data
2.1. BTC Layer 2 Summary
Analysis
From March 24 to March 28, 2025, the Bitcoin Layer-2 (L2) ecosystem saw several important developments:
Stacks' sBTC deposit cap increased: Stacks announced it has completed the Cap-2 expansion of sBTC, raising the deposit cap by 2,000 BTC to a total capacity of 3,000 BTC (approximately $250 million). The upgrade aims to enhance liquidity and support growing demand for Bitcoin-backed DeFi applications on the Stacks platform.
Citrea's testnet milestone: Bitcoin L2 solution Citrea reported an important milestone: its testnet has processed more than 10 million transactions. The platform also updated its Clementine design, simplifying the zero-knowledge proof (ZKP) verifier and enhancing security, laying the groundwork for scalable Bitcoin transactions.
BOB's BitVM bridge enabled: BOB (Build on Bitcoin) successfully enabled its BitVM bridge on testnet, allowing users to mint BTC into Yield BTC under minimal trust assumptions. This progress strengthens interoperability between Bitcoin and other blockchain networks, enabling more complex transactions without compromising security.
Bitlayer's BitVM bridge released: Bitlayer launched its BitVM bridge, allowing users to mint BTC into Yield BTC under minimal trust assumptions. This innovation improves the scalability and flexibility of Bitcoin transactions, supporting the development of DeFi applications within the Bitcoin ecosystem.
2.2. EVM & non-EVM Layer 1 Summary
Analysis
EVM-compatible Layer 1 blockchains:
- BNB Chain's 2025 roadmap: BNB Chain unveiled its 2025 vision, planning to scale to 100 million transactions per day, improve security against miner extractable value (MEV), and introduce smart-wallet solutions similar to EIP-7702. The roadmap also emphasizes integrating artificial intelligence (AI) use cases, focusing on leveraging valuable private data and improving developer tooling.
- Polkadot's 2025 development: Polkadot released its 2025 roadmap, highlighting support for the EVM and Solidity to enhance interoperability and scalability. The plan includes a multi-core architecture to increase capacity and an upgrade of cross-chain messaging via XCM v5.
Non-EVM Layer 1 blockchains:
- W Chain mainnet soft launch: W Chain, a Singapore-based hybrid blockchain network, announced that its Layer 1 mainnet has entered a soft-launch phase. Following a successful testnet phase, W Chain introduced the W Chain Bridge to improve cross-platform compatibility and interoperability. The commercial mainnet is expected to go live in March 2025, with features such as a decentralized exchange (DEX) and an ambassador program planned.
- N1 blockchain investor backing confirmed: N1, an ultra-low-latency Layer 1 blockchain, confirmed that its original investors, including Multicoin Capital and Arthur Hayes, will continue to back the project, which is expected to launch ahead of its mainnet release. N1 aims to give developers unrestricted scalability and ultra-low-latency support for decentralized applications (DApps), supporting multiple programming languages to simplify development.
2.3. EVM Layer 2 Summary
Analysis
Between March 24 and March 29, 2025, the EVM Layer 2 ecosystem saw several important developments:
- Polygon zkEVM mainnet beta live: On March 27, 2025, Polygon successfully launched the beta of its zkEVM (zero-knowledge Ethereum Virtual Machine) mainnet. This Layer 2 scaling solution improves Ethereum's scalability by performing computation off-chain, enabling faster, lower-cost transactions. Developers can migrate their Ethereum applications to Polygon's zkEVM seamlessly, as it is fully compatible with Ethereum's codebase.
- Telos Foundation's ZK-EVM development roadmap: The Telos Foundation published a ZK-EVM development roadmap based on SNARKtor. The plan includes deploying a hardware-accelerated zkEVM on the Telos testnet in Q4 2024, followed by integration with the Ethereum mainnet in Q1 2025. Subsequent phases aim to integrate SNARKtor to improve verification efficiency on Layer 1, with full integration expected by Q4 2025.
4. Macro data review and key data releases next week
The February core PCE price index, released on March 28, came in at 2.7% year-over-year (expected 2.7%, prior 2.6%), above the Fed's target for a third consecutive month, driven mainly by tariff-induced increases in import costs.
Key macro data releases this week (March 31 - April 4) include:
April 1: US March ISM Manufacturing PMI
April 2: US March ADP employment
April 3: US initial jobless claims for the week ending March 29
April 4: US March unemployment rate; US March seasonally adjusted nonfarm payrolls
5. Regulatory policies
During the week, the US SEC closed its investigations into Crypto.com and Immutable, Trump pardoned the co-founders of BitMEX, and a dedicated stablecoin bill was formally placed on the legislative agenda; the loosening and compliance-oriented regulation of the crypto industry is accelerating.
United States: Oklahoma passes Strategic Bitcoin Reserve Act
The Oklahoma House of Representatives voted to pass the Strategic Bitcoin Reserve Act. The bill allows the state to invest 10% of public funds in Bitcoin or any digital asset with a market capitalization above $500 billion.
Separately, the US Department of Justice announced it had disrupted an ongoing terrorist-financing scheme, seizing approximately $201,400 (at current value) in cryptocurrency held in wallets and accounts intended to fund Hamas. The seized funds trace back to fundraising addresses allegedly controlled by Hamas, which since October 2024 had been used to launder more than $1.5 million in virtual currency.
Panama: proposed crypto bill published
Panama published a proposed crypto bill to regulate cryptocurrencies and promote the development of blockchain-based services. The bill establishes a legal framework for the use of digital assets, sets licensing requirements for service providers, and includes strict compliance measures aligned with international financial standards. Digital assets are recognized as a lawful means of payment, allowing individuals and businesses to freely agree to use them in commercial and civil contracts.
European Union: crypto assets may face a 100% capital-backing requirement
According to Cointelegraph, the EU's insurance regulator has proposed a 100% capital-backing requirement for insurers holding crypto assets, citing their "inherent risks and high volatility."
South Korea: access blocks planned for 17 offshore apps including KuCoin
South Korea's Financial Intelligence Unit (FIU) announced that, effective March 25, it will impose domestic access restrictions on the Google Play apps of 17 overseas virtual asset service providers (VASPs) not registered in Korea, including KuCoin and MEXC, meaning users can no longer install the apps and existing users cannot update them.