Manus heralds the dawn of AGI, and AI security deserves a closer look

Reprinted from panewslab
03/08/2025 · Author: 0xResearcher
Manus achieved state-of-the-art (SOTA) results on the GAIA benchmark, outperforming OpenAI models of the same tier. In other words, it can independently complete complex tasks such as cross-border business negotiations, which involve decomposing contract terms, predicting strategy, generating plans, and even coordinating legal and finance teams. Compared with traditional systems, Manus's advantages lie in dynamic task decomposition, cross-modal reasoning, and memory-augmented learning. It can break a large task into hundreds of executable subtasks, process multiple data types simultaneously, and use reinforcement learning to keep improving its decision-making efficiency and reducing its error rate.
Even as we marvel at the pace of the technology, Manus has reignited a split in the community over the evolutionary path of AI: will a single AGI dominate in the future, or will coordinated multi-agent systems (MAS) take the lead?
The debate starts with Manus's design philosophy, which implies two possibilities:
One is the AGI path: keep raising the intelligence of a single agent until it approaches human-level, all-round decision-making.
The other is the MAS path: act as a super-coordinator that directs thousands of domain-specific agents to work together.
On the surface this is a debate about paths, but underneath it is about the fundamental tension in AI development: how should efficiency and security be balanced? The closer a single intelligence gets to AGI, the greater the risk of opaque, black-box decisions; multi-agent collaboration can spread that risk, but communication delays may cause it to miss critical decision windows.
The evolution of Manus quietly amplifies the inherent risks of AI development. Consider the data-privacy black hole: in medical scenarios, Manus needs real-time access to patients' genomic data, and in financial negotiations it may touch a company's undisclosed financials. Or the algorithmic-bias trap: in salary negotiations, Manus might recommend below-average offers to candidates from particular ethnic groups, and in legal contract review its misjudgment rate on clauses from emerging industries approaches fifty percent. Or adversarial-attack vulnerabilities: hackers could implant specific audio frequencies that cause Manus to misjudge the counterparty's price range during a negotiation.
We have to face an uncomfortable truth about AI systems: the smarter the system, the wider its attack surface.
Security, however, is a word that comes up constantly in Web3. Under the framework of Vitalik Buterin's impossible triangle (a blockchain network cannot achieve security, decentralization, and scalability all at once), a variety of cryptographic approaches have emerged:
- Zero-trust security model: the core idea is "never trust, always verify": no device is trusted by default, whether or not it sits inside the internal network. Every access request must pass strict authentication and authorization before it is served.
- Decentralized identity (DID): a set of identifier standards that allow entities to be identified in a verifiable and persistent way without any centralized registry. It enables a new model of decentralized digital identity, is often discussed alongside self-sovereign identity, and is an important building block of Web3.
- Fully homomorphic encryption (FHE): an advanced cryptographic technique that allows arbitrary computation on encrypted data without decrypting it. A third party can operate directly on ciphertexts, and the decrypted result matches what the same operation would have produced on the plaintexts. This matters wherever computation is needed without exposing the raw data, such as cloud computing and data outsourcing; a toy sketch of the homomorphic property follows this list.
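Since the rest of the article leans on this property, here is a minimal toy sketch in Python. It deliberately uses the Paillier scheme, which is only additively homomorphic rather than fully homomorphic (real FHE libraries, such as ZAMA's, support arbitrary computation and are far more involved), and the hard-coded parameters are insecure; the point is only to show that a party can combine ciphertexts and obtain an encrypted result without ever seeing the plaintexts.

```python
# Toy demonstration of the homomorphic property: a minimal Paillier scheme.
# Additively homomorphic only; real FHE supports arbitrary circuits.
# Parameters are tiny, hard-coded, and insecure -- for illustration only.
from math import gcd
import secrets

def lcm(a, b):
    return a * b // gcd(a, b)

def keygen(p=1_000_000_007, q=1_000_000_009):
    # p, q: two well-known primes, hard-coded purely for illustration;
    # real keys use large random primes.
    n = p * q
    lam = lcm(p - 1, q - 1)
    g = n + 1                      # standard simplification for g
    mu = pow(lam, -1, n)           # modular inverse of lambda mod n (Python 3.8+)
    return (n, g), (lam, mu, n)

def encrypt(pub, m):
    n, g = pub
    r = secrets.randbelow(n - 1) + 1               # random blinding factor
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(priv, c):
    lam, mu, n = priv
    x = pow(c, lam, n * n)
    L = (x - 1) // n               # L(x) = (x - 1) / n
    return (L * mu) % n

pub, priv = keygen()
c1, c2 = encrypt(pub, 42), encrypt(pub, 58)

# Multiplying ciphertexts adds the underlying plaintexts. Whoever performs
# this step never sees 42 or 58, only their encryptions.
c_sum = (c1 * c2) % (pub[0] ** 2)
assert decrypt(priv, c_sum) == 100
print("decrypted sum:", decrypt(priv, c_sum))      # -> 100
```

Under full FHE the same principle extends from addition to arbitrary computation, which is what makes the ciphertext-only analysis described below possible.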
The zero-trust model and DID have each had a fair number of projects grinding away through multiple bull markets, some succeeding and others swallowed by the crypto tide. The youngest of these approaches, fully homomorphic encryption, is also the heavyweight answer to security problems in the AI era, precisely because it allows computation directly on encrypted data.
So how does FHE address these risks?
At the data level: everything the user enters (including biometrics, voice, and intonation) is processed in encrypted form, and even Manus itself cannot decrypt the raw data. In a medical diagnosis scenario, for example, the patient's genomic data would be analyzed entirely as ciphertext, preventing any leakage of biological information.
At the algorithm level: with model training carried out under FHE ("encrypted model training"), not even the developers can peek into the AI's decision path.
At the collaboration level: communication among multiple agents uses threshold encryption, so compromising a single node does not leak global data. Even in a supply-chain red-team drill, an attacker who infiltrates several agents still cannot reconstruct a complete view of the business.
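The "a single compromised node leaks nothing" claim rests on the threshold property. Below is a minimal, illustrative sketch of that property using Shamir secret sharing in Python; the parameters (a 3-of-5 split of a hypothetical shared decryption key) are assumptions for illustration, and real threshold-encryption deployments add verifiability and never reconstruct the key on any single machine.

```python
# Toy sketch of threshold secret sharing (Shamir, k-of-n): a secret is split
# into n shares, any k of which reconstruct it, while k-1 shares reveal
# nothing. Threshold encryption builds on the same idea by sharing a
# decryption key among agents; this sketch only shows the threshold property.
import secrets

P = 2**127 - 1  # a Mersenne prime, used as the field modulus

def split(secret, n, k):
    # Random polynomial of degree k-1 whose constant term is the secret.
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 recovers the constant term (the secret).
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % P
                den = (den * (xi - xj)) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

key = secrets.randbelow(P)             # e.g. a shared decryption key
shares = split(key, n=5, k=3)          # 5 agents, any 3 can cooperate

assert reconstruct(shares[:3]) == key  # 3 honest agents recover the key
assert reconstruct(shares[2:]) == key  # any 3 shares work
# 2 compromised agents (shares[:2]) hold points on a degree-2 polynomial
# whose constant term could still be anything: they learn nothing.
```

This is why infiltrating a minority of agents yields neither the shared key nor a complete business view.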
Because of the technical barriers, most users never touch Web3 security directly, yet their interests are tied to it all the same. In this dark forest, if you do not arm yourself to the teeth, you will never shed the role of "leek" (retail exit liquidity).
- uPort launched on the Ethereum mainnet in 2017 and was arguably the first decentralized identity (DID) project to go live on mainnet.
- On the zero-trust side, NKN launched its mainnet in 2019.
- Mind Network is the first FHE project to launch on mainnet, and has taken the lead in partnering with ZAMA, Google, DeepSeek, and others.
uPort and NKN are projects this author had never even heard of; security projects really do seem to escape speculators' attention. Whether Mind Network can break this curse and become the leader of the security sector remains to be seen.
The future is already here: the closer AI gets to human intelligence, the more it needs defense systems that are not human. The value of FHE lies not only in solving today's problems but in paving the way for an era of ever-stronger AI. On the steep road to AGI, FHE is not an option; it is a necessity for survival.