
LazAI Research: How the AI Economy Can Surpass DeFi's TVL Myth


Reprinted from chaincatcher

05/13/2025

Introduction

Decentralized Finance (DeFi) ignited a story of exponential growth through a handful of simple but powerful economic primitives, transforming blockchain networks into global, permissionless markets and upending traditional finance. In DeFi's rise, a few key indicators became the common language of value: Total Value Locked (TVL), annualized yield (APY/APR), and liquidity. These compact metrics inspired participation and trust. For example, DeFi's TVL (the dollar value of assets locked in protocols) soared 14-fold in 2020, then quadrupled again in 2021 to a peak of $112 billion. High yields (some platforms claimed APYs up to 3,000% during the liquidity-mining boom) attracted capital, while deep liquidity pools signaled lower slippage and more efficient markets. In short, TVL told us "how much money is involved," APR told us "how much can be earned," and liquidity indicated "how easily assets can be traded." Despite their flaws, these indicators built a multi-billion-dollar financial ecosystem from scratch. By converting user engagement into direct financial opportunity, DeFi created a self-reinforcing adoption flywheel that rapidly gained popularity and drove large-scale participation.

Today, AI stands at a similar crossroads. But unlike DeFi, the current AI narrative is dominated by large general-purpose models trained on massive internet datasets. These models often struggle to deliver effective results in niche domains, specialized tasks, or personalized scenarios. Their "one-size-fits-all" mode is powerful but brittle: broadly capable, yet poorly matched to specific needs. This paradigm urgently needs to change. The next era of AI should not be defined by model size or versatility; it should be built bottom-up from small, highly specialized models. This kind of customized AI demands an entirely new kind of data: high-quality, human-aligned, domain-specific data. Obtaining such data is not as simple as crawling the web; it requires proactive, deliberate contributions from individuals, domain experts, and communities.

To usher in this new era of specialized, human-aligned AI, we need to build an incentive flywheel like the one DeFi designed for finance. That means introducing new AI-native primitives to measure data quality, model performance, agent reliability, and alignment incentives: metrics that directly reflect the true value of data as an asset (rather than a mere input).

This article explores these new primitives, which can become the pillars of an AI-native economy. We will explain how AI can flourish once the right economic infrastructure is in place, i.e. one that generates high-quality data, properly incentivizes its creation and use, and centers on individuals. We will also take platforms such as LazAI as an example to analyze how they are pioneering these AI-native frameworks, leading new paradigms for pricing and rewarding data, and powering the next leap in AI innovation.

DeFi's incentive flywheel: TVL, yield, and liquidity (a quick review)

The rise of DeFi was no accident: its design made participation both profitable and transparent. Key indicators such as Total Value Locked (TVL), annualized yield (APY/APR), and liquidity were not just numbers but primitives that aligned user behavior with network growth. Together, these metrics formed a virtuous cycle that attracted users and capital, fueling further innovation.

  • Total Value Locked (TVL): TVL measures the total capital deposited into DeFi protocols (lending pools, liquidity pools, and so on) and became synonymous with a DeFi project's "market cap." Rapid TVL growth was read as a sign of user trust and protocol health. For example, during the DeFi boom of 2020-2021, TVL jumped from under $10 billion to over $100 billion, and surpassed $150 billion by 2023, demonstrating how much value participants were willing to lock into decentralized applications. High TVL exerts a gravitational pull: more capital means deeper liquidity and greater stability, drawing in more users seeking opportunity. Critics rightly note that blindly chasing TVL can push protocols into unsustainable incentives (essentially "buying" TVL) and mask inefficiency, but without TVL the early DeFi narrative would have lacked a concrete way to track growth.
  • Annualized yield (APY/APR): The promise of yield turned participation into a tangible opportunity. DeFi protocols began offering striking APRs to liquidity and capital providers. For example, Compound launched its COMP token in mid-2020, pioneering the liquidity mining model of rewarding liquidity providers with governance tokens. This innovation triggered a frenzy of activity: using a platform was no longer just consuming a service, it was an investment. High APYs attracted yield seekers and pushed TVL still higher. This reward mechanism drove network growth by directly rewarding early adopters with generous returns (the short sketch after this list shows how a nominal APR compounds into an effective APY).
  • Liquidity: In finance, liquidity is the ability to move assets without causing sharp price swings, and it is the cornerstone of a healthy market. In DeFi, liquidity was often bootstrapped through liquidity mining programs (users earn tokens for providing liquidity). Deep liquidity on decentralized exchanges and in lending pools means users can trade or borrow with low friction, improving the user experience. High liquidity brings higher trading volume and utility, which in turn attracts more liquidity: a classic positive feedback loop. It also underpins composability: developers can build new products (derivatives, aggregators, and more) on top of liquid markets, spurring innovation. In this way, liquidity became the lifeblood of the network, driving adoption and the emergence of new services.
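
To make the yield primitive concrete, here is a minimal sketch, assuming simple periodic compounding, of how a nominal APR translates into the effective APY that DeFi dashboards advertise. The figures are illustrative, not drawn from any particular protocol.

```python
def apr_to_apy(apr: float, periods_per_year: int) -> float:
    """Effective annual yield when rewards compound `periods_per_year` times."""
    return (1 + apr / periods_per_year) ** periods_per_year - 1

# A 50% APR compounded daily works out to an APY of roughly 64.8%.
print(f"{apr_to_apy(0.50, 365):.1%}")  # 64.8%
```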

Together, these primitives formed a powerful incentive flywheel. Participants who locked assets or provided liquidity created value and were rewarded immediately (through high yields and token incentives), which encouraged still more participation. Individual participation translated into broad opportunity (users earned profits and governance influence), and those opportunities in turn created network effects that drew in thousands of users. The results were remarkable: by 2024, DeFi had more than 10 million users, and its value had grown nearly 30-fold in just a few years. Clearly, incentive alignment at scale, turning users into stakeholders, was the key to DeFi's exponential rise.

What the current AI economy lacks

If DeFi showed how bottom-up engagement and incentive alignment can bootstrap a financial revolution, today's AI economy still lacks the basic primitives to support a similar transformation. AI is currently dominated by large general-purpose models trained on massive crawled datasets. These foundation models are staggering in scale, but because they are designed to solve every problem, they often serve no one particularly well. Their one-size-fits-all architecture struggles to adapt to niche sectors, cultural differences, or individual preferences, producing brittle outputs and blind spots increasingly disconnected from real needs.

The next generation of AI will be defined not by scale but by the ability to understand context: the ability to comprehend and serve specific fields, professional communities, and diverse human perspectives. This contextual intelligence, however, requires a different input: high-quality, human-aligned data. And that is exactly what is missing. There is currently no widely recognized mechanism to measure, verify, value, or prioritize such data, nor any open process by which individuals, communities, or domains can contribute their perspectives and improve the intelligent systems that increasingly shape their lives. As a result, value remains concentrated in the hands of a few infrastructure providers, while everyone else is cut off from the upside of the AI economy. Only by designing new primitives that discover, verify, and reward high-value contributions (data, feedback, alignment signals) can we unlock the participatory growth loop that DeFi thrived on.

In short, we must ask the same questions of AI:

How should we measure the value being created? And how do we build a self-reinforcing adoption flywheel that drives bottom-up participation around individual-centered data?

To unlock a DeFi-like "AI-native economy," we need to define new primitives that translate participation in AI into opportunity, catalyzing network effects this field has not yet seen.

The AI-native technology stack: new primitives for a new economy

We are no longer just moving tokens between wallets; we are feeding data into models, turning model outputs into decisions, and turning AI agents into actions. This requires new metrics and primitives to quantify intelligence and alignment, just as DeFi's metrics quantified capital. For example, LazAI is building a next-generation blockchain network to solve the AI data alignment problem by introducing new asset standards for AI data, model behavior, and agent interaction.

The following outlines several key primitives that define the economic value of AI on-chain:

  • Verifiable data (the new "liquidity"): Data is to AI what liquidity is to DeFi: the lifeblood of the system. In AI, and especially for large models, having the right data is critical. But raw data may be low quality or misleading; we need high-quality data that is verifiable on-chain. A possible primitive here is "Proof of Data (PoD) / Proof of Data Value (PoDV)". The idea is to measure the value of data contributions based not just on quantity but on quality and impact on AI performance. Think of it as a counterpart to liquidity mining: contributors who provide useful data (or labels/feedback) are rewarded according to the value their data brings. Early designs of such systems are already taking shape. For example, one blockchain project's Proof of Data (PoD) consensus treats data as the primary resource for validation (analogous to energy in proof-of-work or capital in proof-of-stake). In that system, nodes earn rewards based on the quantity, quality, and relevance of the data they contribute.

Generalizing this to the broader AI economy, we might track "Total Data Value Locked (TDVL)" as an indicator: an aggregate measure of all valuable data on the network, weighted by verifiability and usefulness. Pools of verified data could even trade like liquidity pools; for example, a pool of validated medical images for an on-chain diagnostic AI could have a quantifiable value and utilization rate. Data provenance (knowing a dataset's source and modification history) would be a key part of this metric, ensuring that data fed into AI models is trustworthy and traceable. In essence, if liquidity is about available capital, verifiable data is about available knowledge. Metrics like Proof of Data Value (PoDV) can capture the amount of useful knowledge locked in the network, while on-chain data anchoring via LazAI's Data Anchoring Tokens (DAT) turns data liquidity into a measurable, incentivized economic layer.
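
As a thought experiment, here is a minimal sketch of a PoDV-style reward weighting, assuming three scored components per contribution (raw quantity, audited quality, measured model uplift). The weights, scoring ranges, and function names are hypothetical illustrations, not LazAI's actual formula.

```python
from dataclasses import dataclass

@dataclass
class DataContribution:
    size_mb: float        # raw quantity of contributed data
    quality_score: float  # 0..1, e.g. from validator audits
    model_uplift: float   # 0..1, measured benchmark gain attributable to the data

def podv_reward(c: DataContribution, pool_reward: float) -> float:
    """Weight a contributor's share of an epoch's reward pool.

    Quality and measured impact dominate raw volume, so flooding the
    network with low-value data earns little.
    """
    weight = (0.1 * min(c.size_mb / 100, 1.0)   # quantity, capped
              + 0.4 * c.quality_score
              + 0.5 * c.model_uplift)
    return pool_reward * weight

print(round(podv_reward(DataContribution(50, 0.9, 0.7), pool_reward=1000), 2))  # 760.0
```

In a production system the weights would be normalized across all contributors in an epoch; the point is simply that payout tracks verified value, not volume.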

  • Model performance (a new asset class): In the AI economy, a trained model (or AI service) becomes an asset in itself, arguably a new asset class alongside tokens and NFTs. A well-trained AI model is valuable for the intelligence encapsulated in its weights. But how do we represent and measure that value on-chain? We may need on-chain performance benchmarks or model certification. For example, a model's accuracy on a standard dataset, or its win rate in competitive tasks, could be recorded on-chain as a performance score: an on-chain "credit rating" or KPI for the AI model, adjusted as the model is fine-tuned or its data is updated. Projects such as Oraichain have explored pairing AI model APIs with reliability scores (verifying via test cases whether AI outputs meet expectations). In AI-native DeFi ("AiFi"), one can imagine staking on model performance: a developer who believes in their model stakes tokens, earns a reward if independent on-chain audits confirm its performance, and loses the stake if the model underperforms (a minimal sketch of this settlement logic appears after this list). This would incentivize truthful reporting and continuous improvement. Another idea is tokenized model NFTs carrying performance metadata, where a model NFT's "floor price" might reflect its usefulness. Such practices are beginning to emerge: some AI marketplaces allow models to be bought and sold via access tokens, and protocols like LayerAI (formerly CryptoGPT) explicitly treat data and AI models as emerging asset classes in the global AI economy. In short, where DeFi asks "how much money is locked?", AI-DeFi will ask "how much intelligence is locked?", referring not only to compute (though that matters too) but to the effectiveness and value of the models running in the network. New metrics might include "proof of model quality" or time-series indexes of on-chain AI performance improvements.
  • Agent behavior and utility (on-chain AI agents): The most exciting and most challenging new element of AI-native blockchains is the autonomous AI agent operating on-chain. These could be trading bots, data curators, customer-service AIs, or sophisticated DAO governors: software entities that perceive, decide, and act on behalf of users on the network, or even on their own initiative. The DeFi world only had rudimentary "bots"; in an AI blockchain world, agents may become first-class economic actors. This creates a need for metrics around agent behavior, trustworthiness, and utility. We may see mechanisms like "agent utility scores" or reputation systems. Imagine each AI agent (perhaps represented by an NFT or semi-fungible token (SFT) identity) accumulating a reputation based on its actions (tasks completed, collaborations, and so on). Such ratings resemble credit scores or user ratings, but for AIs, and other contracts could use them to decide whether to trust or hire an agent's services. In LazAI's proposed iDAO (individual-centric DAO) concept, each agent or user entity has its own on-chain domain and AI assets, and one can imagine these iDAOs or agents building measurable track records.

Some platforms have already begun tokenizing AI agents and attaching on-chain metrics. For example, Rivalz's "Rome protocol" creates NFT-based AI agents (rAgents) whose latest reputation metrics are recorded on-chain. Users can stake on or lease these agents, with rewards depending on each agent's performance and impact within a collective AI "swarm." This is essentially DeFi for AI agents, and it demonstrates the importance of agent utility metrics. In the future, we may discuss "active AI agents" the way we discuss active addresses, or "agent economic impact" the way we discuss transaction volume.
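
To make the idea concrete, here is a minimal sketch of an agent utility score, assuming a simple exponential moving average over task outcomes. The update rule, neutral prior, and eligibility threshold are hypothetical, not a specification of Rivalz, LazAI, or any live protocol.

```python
class AgentReputation:
    def __init__(self, alpha: float = 0.1):
        self.score = 0.5    # neutral prior for a new agent
        self.alpha = alpha  # how strongly recent outcomes outweigh history

    def record_task(self, success: bool) -> None:
        """Exponential moving average: old results decay, recent ones dominate."""
        outcome = 1.0 if success else 0.0
        self.score = (1 - self.alpha) * self.score + self.alpha * outcome

    def eligible_for_treasury(self, threshold: float = 0.8) -> bool:
        """Gate high-value duties behind a minimum utility score."""
        return self.score >= threshold

rep = AgentReputation()
for _ in range(20):
    rep.record_task(success=True)
print(round(rep.score, 3), rep.eligible_for_treasury())  # 0.939 True
```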

  • Attention trajectories could become another primitive: a record of what an agent attended to (which data, which signals) during its decision process. This would make black-box agents more transparent and auditable, attributing an agent's successes and failures to specific inputs. In short, agent behavior metrics ensure accountability and alignment: for an autonomous agent to be trusted with large sums or critical tasks, its reliability must be quantifiable. A high agent utility score could become the prerequisite for an on-chain AI agent to manage significant funds (much as a strong credit score is the threshold for a large loan in traditional finance).
  • Usage incentives aligned with AI metrics: Finally, the AI economy must consider how to incentivize beneficial usage and alignment. DeFi encouraged growth through liquidity mining, early-user airdrops, and fee rebates; in AI, raw usage growth alone is not enough, and we need to incentivize usage that improves AI outcomes. Here, alignment-oriented metrics are crucial. For example, human feedback loops (users rating AI responses or submitting corrections through an iDAO, described in detail below) could be recorded on-chain, with feedback contributors earning "alignment yield." One can also imagine "proof of attention" or "proof of engagement" rewarding users who invest time improving AI (by contributing preference data, corrections, or new use cases). Metrics such as attention trajectories could capture the quality feedback and human attention invested in optimizing AI.
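
Returning to the performance-staking idea flagged in the model bullet above, here is a minimal sketch of how such a settlement might work: a developer stakes on a claimed benchmark score and is paid a bonus or slashed once an independent audit posts the verified score. The function name, tolerance, bonus rate, and slashing curve are hypothetical, not any deployed contract's interface.

```python
def settle_model_stake(stake: float, claimed: float, audited: float,
                       tolerance: float = 0.02, bonus_rate: float = 0.25):
    """Return (payout, slashed) once an on-chain audit reports model accuracy."""
    if audited + tolerance >= claimed:
        return stake * (1 + bonus_rate), 0.0     # honest claim: stake back plus bonus
    shortfall = claimed - audited
    slashed = min(stake, stake * shortfall * 5)  # slashing scales with the overclaim
    return stake - slashed, slashed

print(settle_model_stake(100, claimed=0.90, audited=0.91))  # (125.0, 0.0)
payout, slashed = settle_model_stake(100, claimed=0.90, audited=0.80)
print(round(payout, 2), round(slashed, 2))                  # 50.0 50.0
```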

Just as DeFi needed block explorers and dashboards (DeFi Pulse, DefiLlama) to track TVL and yields, the AI economy will need new explorers for these AI-centric indicators: imagine an "AI-llama" dashboard showing total aligned data value, the number of active AI agents, cumulative AI utility gains, and so on. It would rhyme with DeFi, but the content would be brand new.
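
As a hypothetical illustration of what such a dashboard might aggregate, the sketch below sums a TDVL figure across data pools the way DefiLlama sums TVL across protocols, counting only pools that have passed verification. The pool names and dollar values are invented for the example.

```python
verified_data_pools = {
    "medical-imaging":  {"value_usd": 12_000_000, "verified": True},
    "legal-corpora":    {"value_usd": 8_500_000,  "verified": True},
    "unaudited-scrape": {"value_usd": 30_000_000, "verified": False},
}

# TDVL counts only data that has passed on-chain verification;
# unverified volume, however large, adds nothing.
tdvl = sum(p["value_usd"] for p in verified_data_pools.values() if p["verified"])
print(f"Total Data Value Locked: ${tdvl:,}")  # Total Data Value Locked: $20,500,000
```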

Toward a DeFi-style AI flywheel

We need to build an incentive flywheel for AI, treating data as a first-class economic asset, and thereby transform AI development from a closed endeavor into an open, participatory economy, just as DeFi turned finance into an open, user-driven field of liquidity.

Early explorations in this direction have already appeared. For example, projects such as Vana have begun rewarding users for sharing data. The Vana network lets users contribute personal or community data to DataDAOs (decentralized data pools) and earn dataset-specific tokens (redeemable for the network's native token). This is an important step toward monetizing data contributions.

However, rewarding contribution alone is not enough to reproduce DeFi's explosive flywheel. In DeFi, liquidity providers are not merely paid for depositing assets; their assets carry transparent market value, and returns reflect real usage (trading fees and lending interest, plus incentive tokens). Likewise, the AI data economy must go beyond generic rewards and price data directly. Without economic pricing based on data quality, scarcity, or the degree to which it improves models, we risk shallow incentives. Simply handing out tokens for participation may reward quantity over quality, and stall entirely if the token lacks a peg to real AI utility. To truly unleash innovation, contributors need clear, market-driven signals of what their data is worth, and they need to earn when their data is actually used in AI systems.

We need infrastructure focused on directly valuing and rewarding data, creating a data-centric incentive cycle: the more high-quality data people contribute, the better the models become, attracting more usage and more demand for data, which in turn raises contributor returns. This would transform AI from a closed race for big data into an open market for trusted, high-quality data.

How are these ideas reflected in real projects? Take LazAI as an example: the project is building a next-generation blockchain network and the foundational primitives for a decentralized AI economy.

Introduction to LazAI: aligning AI with humans

LazAI is a next-generation blockchain network and protocol purpose-built to solve the AI data alignment problem. By introducing new asset standards for AI data, model behavior, and agent interaction, it builds the infrastructure of a decentralized AI economy.

LazAI offers one of the most forward-looking approaches to the AI alignment problem: making data verifiable, incentivized, and programmable on-chain. The following uses the LazAI framework as an example to illustrate how an AI-native blockchain puts the principles above into practice.

The core issue: data misalignment and the lack of fair incentives

AI alignment often comes down to the quality of training data, and the human-aligned data of the future needs to be trustworthy and well governed. As the AI industry shifts from centralized general-purpose models to contextualized, aligned intelligence, its infrastructure must evolve in step. The next AI era will be defined by alignment, accuracy, and traceability. LazAI tackles the data alignment and incentive challenge head-on with a fundamental proposition: align data at the source, and reward the data itself. In other words, ensure that training data faithfully represents human perspectives, is denoised and debiased, and is rewarded according to its quality, scarcity, or the degree to which it improves models. This is a paradigm shift from patching models to curating data.

LazAI does more than introduce primitives; it proposes a new paradigm for data acquisition, pricing, and governance. Its core concepts are Data Anchoring Tokens (DAT) and the individual-centric DAO (iDAO), which together enable data pricing, traceability, and programmable use.

Verifiable and programmable data: Data Anchoring Tokens (DAT)

To achieve this, LazAI introduces a new on-chain primitive, the Data Anchoring Token (DAT), a token standard designed specifically for the assetization of AI data. Each DAT represents a piece of data anchored on-chain along with its lineage: contributor identity, evolution over time, and usage scenarios. This creates a verifiable history for every piece of data, like a version-control system for datasets (think Git) secured by the blockchain. Because DATs live on-chain, they are programmable: smart contracts can govern their usage rules. For example, a data contributor can specify that their DAT (say, a set of medical images) may be accessed only by specific AI models, or only under certain conditions (enforcing privacy or ethical constraints in code). The incentive mechanism lies in DATs being tradable or stakeable: if data is valuable to a model, the model (or its owner) can pay for access to the DAT. In essence, LazAI builds a marketplace where data is tokenized and traceable. This directly echoes the "verifiable data" metric discussed earlier: inspecting a DAT reveals whether it has been validated, how many models use it, and what performance improvements it has produced, and such data commands a higher valuation. By anchoring data on-chain and tying economic incentives to quality, LazAI ensures that AI is trained on trusted, measurable data. The problem is solved through incentive alignment: quality data is rewarded and rises to the top.
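
Here is a minimal sketch of what a DAT record with one programmable usage rule might look like, modeled in plain Python for readability. The field names and the access check are illustrative assumptions; the actual DAT standard may differ.

```python
from dataclasses import dataclass, field

@dataclass
class DataAnchoringToken:
    token_id: int
    contributor: str                              # on-chain identity of the data owner
    content_hash: str                             # anchors the token to the dataset's contents
    lineage: list = field(default_factory=list)   # prior content hashes (edit history)
    allowed_models: set = field(default_factory=set)
    usage_count: int = 0

    def authorize(self, model_id: str) -> bool:
        """Enforce the contributor's usage rule before a model may read the data."""
        if model_id not in self.allowed_models:
            return False
        self.usage_count += 1                     # recorded usage can feed valuation and rewards
        return True

dat = DataAnchoringToken(1, "0xAlice", "c0ffee...", allowed_models={"med-dx-model"})
print(dat.authorize("med-dx-model"), dat.authorize("unknown-model"))  # True False
```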

The individual-centric DAO (iDAO) framework

The second key component is LazAI's iDAO (individual-centric DAO) concept, which redefines governance in the AI economy by placing individuals, rather than organizations, at the center of decision-making and data ownership. Traditional DAOs usually prioritize collective organizational goals, inadvertently diluting individual will; iDAOs invert this logic. They are personalized governance units that let individuals, communities, or domain-specific entities directly own, control, and validate the data and models they contribute to AI systems. iDAOs support customized, aligned AI: as a governance framework, they ensure models continue to follow the values and intentions of their contributors. Economically, iDAOs also make AI behavior community-programmable: rules can constrain how a model uses specific data, who may access the model, and how the proceeds of the model's outputs are distributed. For example, an iDAO may stipulate that whenever its AI model is invoked (an API request, a completed task), part of the revenue flows back to the DAT holders who contributed the relevant data. This establishes a direct feedback loop between agent behavior and contributor rewards, analogous to the DeFi mechanism that ties liquidity providers' income to platform usage. Moreover, iDAOs can interact composably via the protocol: one AI agent (iDAO) can invoke another iDAO's data or model under negotiated terms.
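
Below is a minimal sketch of that revenue rule, assuming a fixed contributor share and pro-rata payouts by DAT contribution weight; the 25% split and the holder weights are invented for the example.

```python
def distribute_inference_fee(fee: float, dat_weights: dict,
                             contributor_share: float = 0.25) -> dict:
    """Split one inference fee between the iDAO treasury and DAT holders."""
    pool = fee * contributor_share             # portion owed to data contributors
    total_weight = sum(dat_weights.values())
    payouts = {holder: pool * w / total_weight for holder, w in dat_weights.items()}
    payouts["iDAO-treasury"] = fee - pool      # remainder funds the iDAO itself
    return payouts

print(distribute_inference_fee(10.0, {"0xAlice": 3.0, "0xBob": 1.0}))
# {'0xAlice': 1.875, '0xBob': 0.625, 'iDAO-treasury': 7.5}
```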

With these primitives in place, LazAI's framework brings the vision of a decentralized AI economy within reach. Data becomes an asset users can own and profit from, models turn from private silos into collaborative products, and every participant, from individuals curating unique datasets to developers building small specialized models, can become a stakeholder in the AI value chain. This incentive alignment is poised to replicate DeFi's explosive growth: people engage far more actively when they see that participating in AI (contributing data or expertise) translates directly into opportunity. As participation grows, network effects take hold: more data yields better models, which attract more users, who generate more data and demand, in a virtuous cycle.

Building the AI trust base: the Verified Computing Framework

Within this ecosystem, LazAI's Verified Computing Framework is the core trust layer. It ensures that every DAT minted, every iDAO (individual-centric DAO) decision, and every incentive distribution carries a verifiable chain of provenance, making data ownership enforceable, governance accountable, and agent behavior auditable. By turning iDAOs and DATs from theoretical concepts into reliable, verifiable systems, the framework achieves a paradigm shift in trust: from reliance on assumptions to determinism grounded in mathematical verification.
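
One common way to realize such a traceability chain, shown here as a minimal sketch and not as LazAI's actual design, is a hash-linked log: each record commits to its predecessor, so tampering with any entry invalidates every later hash.

```python
import hashlib
import json

def append_event(chain: list, event: dict) -> list:
    """Append an event whose hash commits to the previous record."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps({"event": event, "prev": prev_hash}, sort_keys=True)
    chain.append({"event": event, "prev": prev_hash,
                  "hash": hashlib.sha256(payload.encode()).hexdigest()})
    return chain

def verify(chain: list) -> bool:
    """Recompute every link; any tampering breaks the chain."""
    for i, rec in enumerate(chain):
        prev = chain[i - 1]["hash"] if i else "0" * 64
        payload = json.dumps({"event": rec["event"], "prev": prev}, sort_keys=True)
        if rec["prev"] != prev or rec["hash"] != hashlib.sha256(payload.encode()).hexdigest():
            return False
    return True

log = []
append_event(log, {"type": "DAT_MINT", "token_id": 1})
append_event(log, {"type": "IDAO_DECISION", "proposal": 7})
print(verify(log))  # True
```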

The value realization of the decentralized AI economy

With these foundational elements in place, the vision of a decentralized AI economy becomes concretely achievable:

  • Data assetization: users can establish ownership of their data, hold it as an asset, and earn returns from it
  • Model collaboration: AI models evolve from closed silos into openly collaborative products
  • Participatory equity: from data contributors to vertical-model developers, every participant can become a stakeholder in the AI value chain

This incentive-compatible design is poised to replicate DeFi's growth momentum: enthusiasm ignites when users realize that participation (contributing data or expertise) translates directly into economic opportunity. As the participant base expands, network effects emerge: more high-quality data begets better models, attracting more users and generating more demand for data, forming a self-reinforcing growth flywheel.

Conclusion: toward an open AI economy

The history of DeFi shows that the right primitives can unleash unprecedented growth. In the coming AI-native economy, we stand at the threshold of a similar breakthrough. By defining and implementing new primitives that value data and alignment, we can transform AI development from centralized engineering into a decentralized, community-driven endeavor. The journey has its challenges: economic mechanisms must prioritize quality over quantity, and ethical pitfalls must be avoided so that data incentives do not undermine privacy or fairness. But the direction is clear. Practices such as LazAI's DAT and iDAO are blazing a trail from the abstract notion of "AI aligned with humans" to concrete mechanisms of ownership and governance.

Just as early DeFi iterated on TVL, liquidity mining, and governance through experimentation, the AI economy will iterate on its new primitives. Debates and innovations around measuring data value, distributing rewards fairly, and aligning and compensating AI agents are sure to follow. This article only scratches the surface of the incentive models that could democratize AI, and we hope it sparks open discussion and deeper research: how might we design other AI-native economic primitives? What unintended consequences or opportunities might arise? With a broad community participating, we are far more likely to build an AI future that is not only technologically advanced but economically inclusive and aligned with human values.

DeFi's exponential growth was not magic; it was driven by incentive alignment. Today we have the chance to spark an AI renaissance through a similar alignment of data and models. By turning participation into opportunity and opportunity into network effects, we can set in motion a flywheel that reshapes how value is created and distributed in the AI-driven digital era.

Let's build this future together, starting with one verifiable dataset, one aligned AI agent, one new primitive.
