
AI Agent tokens keep falling: is MCP running too hot?


Reprinted from panewslab

03/18/2025

A friend asked whether the continuous decline of web3 AI Agent tokens such as #ai16z and arc was caused by the recently hyped MCP protocol. At first glance I was confused: what on earth do the two have to do with each other? But after thinking it through, there really is a certain logic here: the valuation and pricing logic of web3 AI Agents has changed, and the narrative direction and product roadmap urgently need to be adjusted. Here are my personal views:

  1. MCP (Model Context Protocol) is an open-source, standardized protocol designed to seamlessly connect AI LLMs/Agents to all kinds of data sources and tools. It works like a plug-and-play USB "universal" interface, replacing the bespoke end-to-end integrations of the past.

Simply put, there used to be obvious data silos between AI applications: for Agents/LLMs to interoperate, each side had to build its own API integrations. The process was complicated, two-way interaction was missing, and model access and permissions were usually quite limited.

The emergence of MCP provides a unified framework that lets AI applications escape those data silos and gain "dynamic" access to external data and tools, which significantly reduces development complexity and improves integration efficiency, especially for automated task execution, real-time data queries, and cross-platform collaboration.
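To make the "universal interface" idea concrete, here is a deliberately toy Python sketch of the pattern MCP standardizes: one generic surface for discovering and calling tools, instead of a bespoke client for every data source. This is not the real MCP SDK or wire protocol; every name in it is hypothetical.

```python
# Toy illustration of the pattern MCP standardizes (NOT the real MCP SDK):
# tools register behind one uniform discover/call surface, so an agent never
# needs a service-specific client, only the tool name and its arguments.
from dataclasses import dataclass
from typing import Any, Callable


@dataclass
class Tool:
    name: str
    description: str
    handler: Callable[..., Any]


class ToolServer:
    """Exposes heterogeneous tools behind one generic interface."""

    def __init__(self) -> None:
        self._tools: dict[str, Tool] = {}

    def register(self, name: str, description: str):
        def decorator(fn: Callable[..., Any]) -> Callable[..., Any]:
            self._tools[name] = Tool(name, description, fn)
            return fn
        return decorator

    def list_tools(self) -> list[dict]:
        # An agent can discover capabilities at runtime ("dynamic" access).
        return [{"name": t.name, "description": t.description}
                for t in self._tools.values()]

    def call(self, name: str, **kwargs: Any) -> Any:
        # Same invocation path for every tool, no bespoke integration code.
        return self._tools[name].handler(**kwargs)


server = ToolServer()


@server.register("get_price", "Fetch the latest price for a token symbol")
def get_price(symbol: str) -> float:
    return {"ETH": 1900.0, "SOL": 125.0}.get(symbol, 0.0)  # stubbed data


if __name__ == "__main__":
    print(server.list_tools())
    print(server.call("get_price", symbol="ETH"))
```

The point of the sketch is only the shape: discovery plus invocation through one standard surface is what removes the per-service "specific packaging" mentioned above.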

At this point, many people immediately thought: if Manus integrated the MCP open-source framework, which promotes multi-agent collaboration, wouldn't it be unstoppable?

That's right. Manus + MCP is the key to this round of impact on web3 AI Agents.

  2. However, the ironic thing is that both Manus and MCP are frameworks and protocol standards built for web2 LLMs/Agents. They solve data interaction and collaboration between centralized servers, and their permission and access control still depend on each server node "voluntarily" opening up. In other words, they are just open-source tooling.

In theory, this runs completely counter to the core ideas pursued by web3 AI Agents, such as "distributed servers, distributed collaboration, distributed incentives". How did a centralized Italian cannon blow up the decentralized bunker?

The reason is that the first phase of web3 AI Agents was too "web2". On the one hand, many teams came from web2 backgrounds and lacked a full understanding of web3-native needs. The ElizaOS framework, for example, was originally a packaging framework to help developers deploy AI Agent applications quickly: it integrates platforms such as Twitter and Discord with API interfaces such as OpenAI, Claude, and DeepSeek, and wraps general Memory and Character frameworks so developers can build and ship AI Agent applications fast. But to be blunt, what is the difference between this kind of service framework and web2 open-source tooling? Where is the differentiated advantage?
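To illustrate why I call this "packaging", here is a heavily simplified sketch of what such a framework boils down to: a character profile, a pluggable LLM client, and a memory buffer. All names are hypothetical; this is not ElizaOS's actual API, just the shape of the abstraction.

```python
# Simplified sketch of an "agent packaging framework": character + LLM client
# + memory, glued together. Hypothetical names, not ElizaOS's real interfaces.
from dataclasses import dataclass, field
from typing import Callable


@dataclass
class Character:
    name: str
    persona: str  # system-prompt style description of the agent


@dataclass
class Memory:
    turns: list[str] = field(default_factory=list)

    def remember(self, text: str) -> None:
        self.turns.append(text)

    def context(self, last_n: int = 10) -> str:
        return "\n".join(self.turns[-last_n:])


@dataclass
class Agent:
    character: Character
    llm: Callable[[str], str]  # any chat-completion endpoint plugs in here
    memory: Memory = field(default_factory=Memory)

    def reply(self, user_message: str) -> str:
        prompt = (f"{self.character.persona}\n{self.memory.context()}\n"
                  f"User: {user_message}")
        answer = self.llm(prompt)
        self.memory.remember(f"User: {user_message}")
        self.memory.remember(f"{self.character.name}: {answer}")
        return answer


def echo_llm(prompt: str) -> str:
    # Stand-in for an OpenAI/Claude/DeepSeek chat-completion call.
    return "gm, you said: " + prompt.splitlines()[-1]


bot = Agent(Character("DemoBot", "You are a terse crypto analyst."), echo_llm)
print(bot.reply("What is MCP?"))
```

Nothing in this loop touches a chain or a distributed system, which is exactly the author's complaint: swap the stub for a real API key and it is indistinguishable from web2 tooling.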

Well, is the advantage a set of Tokenomics incentives? Using a framework that web2 can fully replace to incentivize a crowd of AI Agents that exist mainly to issue new tokens? Scary. Following this logic, you can roughly see why Manus + MCP could hit web3 AI Agents so hard.

Since web3 AI Agent frameworks and services only addressed quick development and deployment needs similar to web2 AI Agents, while failing to keep up with web2's pace of innovation in technical services, standards, and differentiated advantages, the market and capital re-valued and re-priced the previous batch of web3 AI Agents.

  3. Having said this much, the crux of the problem should be clear. But how do we break the deadlock? One path: focus on building web3-native solutions, because the operation and incentive architecture of distributed systems is web3's absolute differentiated advantage.

Take distributed cloud compute, data, algorithm, and similar service platforms as an example. In the short term, compute and data aggregated from idle resources cannot meet the needs of engineering innovation; and while large numbers of AI LLMs are fighting over concentrated compute in an arms race for performance breakthroughs, a service model whose selling point is "idle resources, low cost" will naturally be dismissed by web2 developers and VCs.

However, once web2 AI Agents move past the stage of raw performance innovation, they will inevitably pursue vertical application scenarios and segmented, fine-tuned model optimization. Only then will the advantages of web3 AI resource services truly show.

In fact, when the web2 AI players that climbed to giant status through resource monopoly reach a certain stage, it will be hard for them to step back and adopt an "encircle the cities from the countryside" strategy, taking down vertical scenarios one by one. That is when surplus web2 AI developers and web3 AI resources will join forces.

In fact, beyond the web2-style quick deployment + multi-agent collaboration and communication framework + Tokenomics coin-issuance narrative, web3 AI Agents have many innovation directions worth exploring:

For example, an Agent equipped with a distributed consensus collaboration framework: given that LLM large models compute off-chain while state is stored on-chain, a number of adaptable components are needed.

1. A decentralized DID identity system, so that the Agent has a verifiable on-chain identity, much like the unique address an execution virtual machine generates for a smart contract, mainly for the continuous tracking and recording of its subsequent state (see the sketch after this list);

2. A decentralized Oracle system, mainly responsible for the trusted acquisition and verification of off-chain data. Unlike previous oracles, an oracle adapted to AI Agents may need a combined architecture of a data-acquisition layer, a decision-consensus layer, and an execution-feedback layer, so that the data the Agent needs on-chain and its off-chain computation and decisions can be delivered in near real time;

3. A decentralized storage (DA) system. Since the state of an AI Agent's knowledge base is uncertain at runtime and its reasoning process is ephemeral, the key state libraries and inference paths behind the LLM need to be recorded in a distributed storage system, with a cost-controllable data-proof mechanism to ensure data availability during public-chain verification;

4. A zero-knowledge proof (ZKP) privacy computing layer, which can link privacy computing solutions such as TEE and FHE to enable real-time private computation plus data-proof verification, giving the Agent access to a wider range of vertical data sources (medical, financial), on top of which more specialized, customized service Agents can emerge;

5. A cross-chain interoperability protocol, somewhat similar to the framework defined by the MCP open-source protocol. The difference is that this interoperability solution needs relay and communication-scheduling mechanisms adapted to Agent operation, delivery, and verification, so it can handle asset transfer and state synchronization for Agents across different chains, including Agent context and complex state such as Prompts, knowledge bases, and Memory;

...
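To give a feel for items 1 and 3 above, here is a rough Python sketch: a deterministic, verifiable identity derived for an Agent (loosely analogous to how an EVM derives a contract address) and a hash-chain of commitments over its evolving state that a DA layer could store cheaply. sha256 stands in for whatever hash or signature scheme a real chain and DID method would use; nothing here is a complete DID or DA implementation.

```python
# Rough sketch only: deterministic agent identity + hash-chained state
# commitments. sha256 is a stand-in; a real chain/DID method would use its own
# hashing, signatures, and registration flow.
import hashlib
import json
import time


def derive_agent_id(creator_pubkey: str, nonce: int) -> str:
    """Deterministic agent identifier from creator key + nonce,
    analogous to a contract address derived from deployer + nonce."""
    preimage = f"{creator_pubkey}:{nonce}".encode()
    return "agent:" + hashlib.sha256(preimage).hexdigest()[:40]


def commit_state(agent_id: str, state: dict, prev_commitment: str | None) -> dict:
    """Hash-chain commitments over the agent's evolving state (knowledge base,
    inference trace, ...), the kind of compact record a DA layer could hold
    while the full data lives in distributed storage."""
    payload = json.dumps(state, sort_keys=True).encode()
    state_hash = hashlib.sha256(payload).hexdigest()
    link = f"{prev_commitment or ''}:{state_hash}".encode()
    return {
        "agent_id": agent_id,
        "timestamp": int(time.time()),
        "state_hash": state_hash,
        "commitment": hashlib.sha256(link).hexdigest(),
    }


aid = derive_agent_id("0xCreatorPubKey...", nonce=0)  # placeholder key
c1 = commit_state(aid, {"memory": ["observed ETH price"]}, None)
c2 = commit_state(aid, {"memory": ["observed ETH price", "opened position"]},
                  c1["commitment"])
print(aid)
print(c2)
```

The design choice being illustrated is simply that identity and state commitments are verifiable artifacts a chain can track, which is the bridge between the Agent's off-chain reasoning and on-chain verification discussed next.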

In my view, the real focus of web3 AI Agents should be on making the AI Agent's "complex workflow" fit the blockchain's "trust verification flow" as closely as possible. These incremental solutions could come either from existing projects upgrading and iterating on their old narratives, or from projects newly built on the re-formed AI Agent narrative track.

This is the direction web3 AI Agents should work toward, and it is the fundamental basis of an innovation ecosystem that fits the macro narrative of AI + Crypto. Without such innovation and differentiated competitive moats, every tremor in the web2 AI track may turn web3 AI upside down.
