
Multi-model consensus + decentralized verification: How does Mira Network build an AI trust layer to fight hallucinations and biases?


Reprinted from panewslab

03/21/2025

Mira Network's public testnet launched yesterday. The project is trying to build a trust layer for AI. So why does AI need a trust layer, and how does Mira address the problem?

When people discuss AI, they focus on how capable it is. Interestingly, though, AI also suffers from hallucinations and bias, and this gets far less attention. What is an AI "hallucination"? Simply put, AI sometimes makes things up and states them with a straight face. For example, ask it why the moon is pink, and it may offer a series of explanations that sound perfectly reasonable but are entirely fabricated.

Hallucination and bias are tied to some of AI's current technical paths. Generative AI, for instance, produces output by predicting the "most likely" next token, which yields coherence and plausibility but cannot guarantee truth. Moreover, the training data itself contains errors, biases, and even fictional content, all of which seep into the output. In other words, what AI learns is the patterns of human language, not facts themselves.

In short, today's probabilistic generation mechanisms combined with data-driven training make AI hallucination all but inevitable.
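To make that mechanism concrete, here is a minimal, purely illustrative Python sketch of next-token sampling. The prompt, tokens, and probabilities are all invented for illustration, not taken from any real model:

```python
import random

# Toy next-token distribution for the prompt "The moon is ...".
# A model assigns probability by pattern-matching on its training
# corpus, not by checking facts, so a fluent-but-false continuation
# like "pink" still carries nonzero probability mass.
next_token_probs = {
    "bright": 0.40,
    "full": 0.30,
    "beautiful": 0.20,
    "pink": 0.07,              # plausible-sounding but factually wrong
    "made of cheese": 0.03,
}

def sample_next_token(probs: dict[str, float]) -> str:
    """Sample one continuation proportional to model probability."""
    tokens = list(probs)
    weights = list(probs.values())
    return random.choices(tokens, weights=weights, k=1)[0]

# Sample repeatedly: roughly 1 in 10 completions is a confident-sounding
# falsehood, with no internal signal separating it from the truths.
print([sample_next_token(next_token_probs) for _ in range(10)])
```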

If biased or hallucinated output stays within ordinary knowledge or entertainment content, there are no immediate consequences. But if it appears in highly rigorous fields such as medicine, law, aviation, or finance, it can cause serious harm. Solving AI hallucination and bias is therefore one of the core problems in AI's evolution. Some approaches use retrieval-augmented generation (combining real-time databases and prioritizing verified facts); others introduce human feedback, correcting model errors through manual labeling and human supervision.

The Mira project is also tackling AI bias and hallucination: it is trying to build a trust layer for AI that reduces bias and hallucination and improves AI's reliability. So, at the framework level, how does Mira reduce bias and hallucination and ultimately deliver trustworthy AI?

The core of Mira's approach is to verify AI output through the consensus of multiple AI models. Mira itself is a verification network: it checks the reliability of AI output by leveraging agreement across several models. The other essential ingredient is that this consensus verification is decentralized.

The key to the Mira network, then, is decentralized consensus verification. Decentralized consensus verification is exactly what the crypto field excels at, and Mira combines it with multi-model collaboration, reducing bias and hallucination through a collective verification pattern.
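As a rough illustration of what "multi-model consensus" can mean in code, here is a minimal Python sketch of a supermajority vote over independent model verdicts. The verdict labels and the 2/3 threshold are assumptions for illustration, not Mira's documented parameters:

```python
from collections import Counter

def consensus_verdict(verdicts: list[str], threshold: float = 2 / 3) -> str:
    """Aggregate per-model verdicts on one claim by supermajority vote.

    Independent models tend to make *different* mistakes, so a claim
    only passes when most of them agree on it.
    """
    tally = Counter(verdicts)
    top, count = tally.most_common(1)[0]
    return top if count / len(verdicts) >= threshold else "undetermined"

# Hypothetical verdicts from three independent validator models on the
# claim "the moon is pink" (the models and outputs are illustrative):
verdicts = ["false", "false", "true"]
print(consensus_verdict(verdicts))  # -> "false"
```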

In terms of verification architecture, the Mira protocol supports converting complex content into independently verifiable claims. Node operators participate in verifying these claims, and crypto-economic incentives and penalties keep them honest. Different AI models plus dispersed node operators together guarantee the reliability of the verification results.
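Here is a minimal sketch of what converting content into independently verifiable claims might look like. The `Claim` type and `decompose` function are hypothetical names, and the naive sentence splitting merely stands in for whatever transformation Mira actually performs:

```python
from dataclasses import dataclass

@dataclass
class Claim:
    """One independently verifiable statement extracted from content."""
    claim_id: int
    text: str

def decompose(content: str) -> list[Claim]:
    """Split candidate content into atomic claims.

    Real decomposition would need an LLM or parser; naive sentence
    splitting stands in here purely to show the shape of the step.
    """
    sentences = [s.strip() for s in content.split(".") if s.strip()]
    return [Claim(i, s) for i, s in enumerate(sentences)]

content = "The moon orbits the Earth. The moon is pink."
for claim in decompose(content):
    print(claim)
```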

Mira's network architecture comprises content transformation, distributed verification, and a consensus mechanism. Content transformation is a crucial step: the network first breaks candidate content (usually submitted by a customer) into distinct verifiable claims (ensuring the models interpret them in the same context). The system distributes these claims to nodes, which judge each claim's validity; the results are aggregated to reach consensus, and the outcome is returned to the client. In addition, to protect customer privacy, candidate content is transformed into claim pairs and handed to different nodes in random shards, preventing information leakage during the verification process.
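The privacy-preserving sharding step can be sketched as follows. The node names, the `shard_claims` function, and the replication factor are illustrative assumptions; the point is only that random, per-claim assignment keeps any single node from seeing the full content:

```python
import random

def shard_claims(claim_ids: list[int], node_ids: list[str],
                 nodes_per_claim: int = 3) -> dict[int, list[str]]:
    """Assign each claim to a random subset of nodes.

    Because assignment is random and per-claim, no single node sees
    the client's full content, only disconnected fragments of it.
    """
    return {cid: random.sample(node_ids, nodes_per_claim)
            for cid in claim_ids}

nodes = [f"node-{i}" for i in range(10)]      # hypothetical operators
assignment = shard_claims([0, 1, 2], nodes)
for cid, assigned in assignment.items():
    print(f"claim {cid} -> {assigned}")
```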

Node operators are responsible for running validator models, processing claims, and submitting verification results. Why would they participate? Because they can earn income. Where does that income come from? From the value created for customers. The Mira network's purpose is to lower AI's error rate (hallucination and bias); if it succeeds, it generates value. Cutting error rates in fields like healthcare, law, aviation, and finance would be enormously valuable, so customers are willing to pay. Of course, the sustainability and scale of that payment depend on whether the Mira network keeps delivering value to customers (a lower AI error rate). In addition, to prevent opportunistic behavior such as responding at random, nodes that persistently deviate from consensus have their staked tokens slashed. In short, an economic game ensures that node operators participate in verification honestly.
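Here is a minimal sketch of such an incentive game, with all reward and slashing parameters invented for illustration: nodes that match consensus earn a payout, while nodes that deviate lose part of their stake, making random guessing unprofitable in expectation:

```python
def settle_round(stakes: dict[str, float], verdicts: dict[str, str],
                 consensus: str, reward: float = 1.0,
                 slash_rate: float = 0.10) -> dict[str, float]:
    """Pay nodes that matched consensus; slash the stake of those that didn't.

    Once the expected slash outweighs the occasional lucky reward,
    honest verification becomes the profitable strategy. Every number
    here is illustrative, not a documented Mira parameter.
    """
    updated = dict(stakes)
    for node, verdict in verdicts.items():
        if verdict == consensus:
            updated[node] += reward
        else:
            updated[node] -= stakes[node] * slash_rate
    return updated

stakes = {"node-a": 100.0, "node-b": 100.0, "node-c": 100.0}
verdicts = {"node-a": "false", "node-b": "false", "node-c": "true"}
print(settle_round(stakes, verdicts, consensus="false"))
# node-a and node-b earn the reward; node-c loses 10% of its stake
```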

Overall, Mira offers a new approach to AI reliability: a decentralized consensus verification network built on multiple AI models that gives customers' AI services higher reliability, reduces AI bias and hallucination, and meets customers' demands for greater accuracy and precision. And on top of the value it delivers to customers, it generates income for participants in the Mira network. Summed up in one sentence: Mira is trying to build a trust layer for AI, which will push AI toward deeper applications.

Currently, the AI agent frameworks Mira works with include ai16z, ARC, and others. The Mira network's public testnet launched yesterday. Users can participate by using Klok, an LLM chat application based on Mira. With Klok you can experience verified AI output (and compare it against unverified output), and you can also earn Mira points. What those points will be used for in the future has not yet been disclosed.
