Vitalik’s new article: How does decentralized accelerationism affect cryptocurrency and AI?

Reprinted from panewslab
01/06/2025 · Original title: "d/acc: one year later"
Written by: Vitalik Buterin, founder of Ethereum
Compiled by: Leek, Foresight News
Abstract: This article focuses on the concept of decentralized defensive acceleration (d/acc), discussing its application and challenges in technological development, including AI safety and regulation, its connection with cryptocurrency, and public goods funding. It emphasizes the importance of d/acc for building a safer, better world, along with the opportunities and challenges of future development. The author elaborates on what d/acc means, analyzes its role in addressing AI risks by comparing different strategies, discusses the value of cryptocurrency and explorations of public goods funding mechanisms, and finally looks ahead to the challenges of future technological development, arguing that humanity still has the opportunity to build a better world with existing tools and ideas.
Preface
Special thanks to Liraz Siri, Janine Leger, and the Balvi volunteers for their feedback and review.
About a year ago, I wrote an article about techno-optimism, laying out my overall enthusiasm for technology and the tremendous benefits it can bring, while also expressing caution on a few specific issues. These concerns centered primarily on superintelligent AI and the risk of destruction, or of irreversible loss of human power, if the technology is built the wrong way.
One of my core points in that article was a philosophy of decentralized, democratic, and differential defensive acceleration: accelerate technological development, but focus differentially on technologies that improve our ability to defend rather than to cause harm, and that decentralize power rather than concentrating it in the hands of a few elites who decide on everyone's behalf what is right and wrong, good and evil. The model of defense should be democratic Switzerland and the historically quasi-anarchist region of Zomia, not the lords and castles of medieval feudalism.
In the year since then, these concepts and ideas have matured significantly. I shared these ideas with 80,000 Hours, an organization focused on career choice, and received many responses, most of them positive, some critical.
The work itself continues to advance and yield tangible results: we are seeing progress on verifiable open-source vaccines; growing awareness of the value of healthy indoor air; Community Notes continuing to play a positive role; a breakthrough year for prediction markets as an information tool; zero-knowledge succinct non-interactive arguments of knowledge (ZK-SNARKs) used in government identification and social media (and securing Ethereum wallets through account abstraction); open-source imaging tools used in medicine and brain-computer interfaces (BCI); and more.
Last fall, we held the first major d/acc event: "d/acc Discovery Day" (d/aDDy) at Devcon. The event brought together people from all of d/acc's pillars (bio, physical, cyber, information defense, and neurotech) and lasted a full day. People who had worked on these technologies for years became more aware of each other's work, and outsiders became more aware of the larger vision: the same values that drive Ethereum and cryptocurrency can extend to the wider world.
What d/acc means and what it covers
The year is 2042. You see a news report about a possible new outbreak in your city. You're used to it: people tend to overreact to every animal disease mutation, and in the vast majority of cases nothing comes of it. Two previous potential outbreaks were detected early through wastewater monitoring and open-source analysis of social media, and were nipped in the bud. This time, though, is different: prediction markets show a 60% chance of at least 10,000 cases, and that worries you.
Just yesterday, the virus's genetic sequence was determined. A software update for the air tester in your pocket is rolling out soon, enabling it to detect the new virus (either from a single breath test or after 15 minutes of exposure to indoor air). Meanwhile, open-source instructions and code for generating the vaccine, using equipment accessible to any modern medical facility in the world, are expected within weeks. Most people are not yet taking any action, relying mainly on widespread air filtration and ventilation to keep themselves safe.
You're more cautious because of your own immune issues: the open-source, locally run personal assistant AI you use factors in real-time air-test and CO2 data alongside regular tasks like navigation and restaurant and activity recommendations, and only recommends the safest places to you. The data is provided by thousands of participants and devices, and thanks to ZK-SNARKs and differential privacy, the risk of it being leaked or misused for other purposes is minimized (and if you choose to contribute to these datasets, other personal assistant AIs can verify that these cryptographic tools actually work).
Two months later, the epidemic had dissipated: around 60% of people seemed to follow the basic protocols of wearing a mask when their air tester sounded an alarm and showed the virus present, and staying home if they personally tested positive. That alone was enough to push the transmission rate, already sharply reduced by passive heavy air filtration, below 1. A simulation suggests a disease five times worse than the coronavirus pandemic of two decades earlier would have no serious impact today.
d/acc day at Devcon
One extremely positive outcome of the d/acc event at Devcon was that the d/acc concept successfully brought people from different fields together and actually sparked a strong interest in each other's work.
It is not difficult to hold an event with "diversity," but it is hard to forge close connections between people with different backgrounds and interests. I still have vivid memories of being made to watch long operas in middle and high school that I personally found boring. I knew I was "supposed" to enjoy them, because otherwise I would be considered an uncultured computer science slob, but I failed to connect with the content on any deeper level. The atmosphere on d/acc day was completely different: it felt like people genuinely enjoyed learning about the very different kinds of work happening in other fields.
This kind of broad coalition-building is necessary if we aspire to a future brighter than domination, deceleration, and destruction. d/acc appears to have achieved notable results here, and that alone highlights the value of the concept.
The core idea of d/acc is simple: decentralized, democratic, differential defensive acceleration. Build technologies that tilt the offense-defense balance toward defense, and do so without relying on handing more power to a central authority. These two aspects are intrinsically linked: any decentralized, democratic, or free political structure tends to thrive when defense is easy, and struggles when defense is hard; in those cases, the more likely outcome is a period of everyone-against-everyone chaos that eventually settles into an equilibrium of rule by the strongest.
One way to understand the significance of trying to achieve decentralization, defensibility, and acceleration at the same time is to contrast it with the philosophy that arises from abandoning any one of these three aspects.
Chart from last year's "My Techno-Optimism"
Decentralized acceleration that ignores the "differential defense" part
In essence, this is like being an effective accelerationist (e/acc) while also pursuing decentralization. Many people take this approach; some call themselves d/acc but helpfully describe their focus as "offense." Many others express more moderate enthusiasm for "decentralized AI" and similar topics, but in my view pay distinctly little attention to the "defense" side.
In my view, this approach may avoid the risk of a specific group imposing a dictatorship on all of humanity, but it fails to solve the underlying structural problem: in an offense-favoring environment, there is a constant risk of disaster, or of someone positioning themselves as "the protector" and permanently entrenching their rule. In the case of AI, it also fails to properly address the risk of humanity as a whole being disempowered relative to AI.
Differential defensive acceleration that ignores "decentralization and democracy"
Accepting centralized control in order to achieve safety goals will always appeal to some, and readers will no doubt be familiar with many such examples and their drawbacks. Recently, some have worried that extreme centralized control may be the only way to handle extreme future technologies. Consider, for example, the hypothetical scenario in which everyone wears a "freedom tag," a sequel to today's more limited wearable surveillance devices, such as the ankle tags used as prison alternatives in several countries, in which "encrypted video and audio are continuously uploaded and interpreted by machines in real time." But centralized control comes in degrees. A relatively benign, often overlooked, yet still harmful form of centralized control shows up in the biotech sector's resistance to public scrutiny (e.g., food, vaccines) and the closed-source norms that allow such resistance to go unchallenged.
The risk of this approach is obvious: the center itself often becomes the source of risk. We saw this during the COVID-19 pandemic, where gain-of-function research funded by multiple major world governments may have been the root of the pandemic, where centralized epistemology led the World Health Organization to refuse for years to acknowledge that COVID-19 is airborne, and where mandatory social distancing and vaccine mandates triggered a political backlash that may last decades. Something similar is highly likely to happen again in any risk scenario involving AI or other dangerous technologies. A decentralized approach deals more effectively with risks from the center itself.
Decentralized defense that excludes acceleration
Essentially, this is an attempt to slow technological progress or drive economic decline.
The challenge with this strategy is twofold. First, technological and economic growth in general is extremely beneficial to humanity, and any delay carries incalculable costs. Second, in a non-totalitarian world, stagnation is unstable: whoever "cheats" the most, finding plausible-sounding ways to keep advancing, gains the upper hand. Decelerationist strategies can work to some extent in specific circumstances: European food being healthier than American food is one example, and the success so far of nuclear nonproliferation is another. But these strategies cannot work forever.
Through d/acc, we strive to achieve the following goals:
- Stay principled in an increasingly tribal world, rather than just building anything; instead, build specific things that make the world safer and better.
- Recognize that exponential technological progress means the world will become very strange, and that humanity's overall "footprint" in the universe will inevitably grow. Our ability to protect vulnerable animals, plants, and people from harm must keep improving, and the only way out is forward.
- Build technologies that actually protect us, rather than relying on the assumption that "the good guys (or the good AIs) are in charge." We do this by building tools that are naturally more effective for building and protecting than for destroying.
Another way to think about d/acc is to return to a framework that informed the European Pirate Party movement of the late 2000s: empowerment.
Our goal is to build a world that preserves human agency: achieving negative freedom, avoiding active interference by others (whether private citizens, governments, or superintelligent robots) with our ability to shape our own destiny, while also achieving positive freedom, ensuring we have the knowledge and resources to exercise that ability. This echoes a centuries-long classical liberal tradition, encompassing Stewart Brand's focus on "access to tools" and John Stuart Mill's emphasis on education alongside liberty as key elements of human progress; one might also add Buckminster Fuller's vision of global problem-solving processes that are participatory and widely distributed. Given the 21st-century technology landscape, we can think of d/acc as a way to achieve these same goals.
The third dimension: coordinated development of survival and prosperity
In my article last year, d/acc focused specifically on defensive technologies: physical defense, biodefense, cyber defense, and information defense. However, decentralized defense alone is not enough to build a great world: we also need a forward-looking and positive vision of what humanity can achieve with new decentralization and security.
Last year's article did contain a positive vision in two respects:
1. When focusing on the challenge of superintelligence, I proposed a path (not originally mine) for achieving superintelligence without losing our power:
- Today, build artificial intelligence as tools rather than as highly autonomous agents.
- In the future, use tools such as virtual reality, myoelectric interfaces, and brain-computer interfaces to establish ever-closer feedback loops between AI and humans.
- Over time, move gradually toward the eventual outcome in which superintelligence is the product of tight integration between machines and humans.
2. When discussing information defense, I also mentioned in passing that besides defensive social technologies, which aim to help communities stay cohesive and hold high-quality discussions in the face of attackers, there are also progressive social technologies that help communities make high-quality judgments more easily: Pol.is is one example, and prediction markets are another.
But at the time, those two points felt disconnected from d/acc's core argument: "Here are some ideas about building a more democratic, defense-favoring world at the base layer; and, by the way, here are some unrelated thoughts about how we might achieve superintelligence without losing our power."
However, I believe there are in fact crucial connections between the d/acc technologies labeled "defensive" above and the "progressive" ones. Let's expand last year's d/acc chart by adding this axis (and relabeling it "survive and thrive") and see what comes out:
There is a consistent pattern across fields: the science, ideas, and tools that help us "survive" in a given domain are closely related to those that help us "thrive." Some specific examples:
- Much recent anti-COVID research has focused on viral persistence in the body, seen as a key mechanism behind long COVID. Recently, there have also been suggestions that viral persistence may be a causal factor in Alzheimer's disease; if that holds, addressing viral persistence across all tissue types may prove key to tackling aging.
- Low-cost and tiny imaging tools, such as those being developed by Openwater, have strong potential to treat microthrombosis, viral persistence, cancer, and may also have applications in brain-computer interfaces.
- The concepts driving the construction of social tools suitable for highly adversarial environments (such as Community Notes) and social tools for reasonably cooperative environments (such as Pol.is) are very similar.
- Prediction markets are valuable in both high-cooperation and high-confrontation environments.
- Zero-knowledge proofs and similar technologies perform calculations on data while protecting privacy, which not only increases the amount of data available for useful work such as scientific research, but also enhances privacy protection.
- Solar energy and batteries are significant for driving the next wave of clean economic growth, while also excelling in decentralization and physical resilience.
Beyond this, there are important interdependencies between different subject areas:
- Brain-computer interfaces are crucial as an information-defense and collaboration technology, because they enable much finer-grained communication of our thoughts and intentions. A brain-computer interface is not just a machine-to-mind connection: it can also be mind-machine-mind interaction. This echoes the value of brain-computer interfaces in the idea of pluralism.
- Many biotechnologies rely on information sharing, and in many cases people are willing to share information only if they are confident that it will only be used for a specific application. This relies on privacy technology (such as zero-knowledge proof, fully homomorphic encryption, obfuscation technology, etc.).
- Collaboration technology can be used to coordinate funding for any other technology area.
Hard questions: AI safety, tight timelines, and regulatory dilemmas
Different people have very different AI timelines. Chart from Zuzalu, Montenegro in 2023.
The most persuasive objections I received to last year's article came from the AI safety community. The argument goes: "Sure, if we had half a century until strong AI, we could focus on building all these helpful things. But in reality, it looks like we may have only three years until general AI, and another three years until superintelligence. So if we don't want the world to plunge into destruction or otherwise fall into an irreversible trap, we can't just accelerate the good technologies; we also have to slow down the harmful ones, and that means strong regulatory measures that will anger powerful people." In last year's article, I indeed proposed no specific strategies for "slowing down harmful technologies" beyond a vague plea not to build risky forms of superintelligence. So it is worth addressing the question head-on: if we were in the least ideal world, where AI risk is extremely high and the timeline perhaps as short as five years, what regulations would I support?
Reasons to be cautious about new regulations
Last year, the major AI regulatory proposal was California's SB-1047. SB-1047 requires developers of the most powerful models (those costing more than $100 million to train, or more than $10 million to fine-tune) to take a series of safety-testing measures before release. It also holds AI model developers accountable if they fail to exercise sufficient care. Many critics argued the bill was "a threat to open source"; I disagreed, because the cost threshold means it affects only the most powerful models: even Llama3 was probably below the threshold. In retrospect, however, I think the bill had a more serious problem: like most regulation, it was overfitted to the current situation. The focus on training cost has already proven fragile in the face of new technology: the recent state-of-the-art DeepSeek v3 model cost only $6 million to train, and in newer models like o1, costs have often shifted from training to the inference stage.
Actors most likely to be responsible for an AI superintelligence destruction scenario
In fact, the actors most likely to bring about an AI superintelligence destruction scenario are militaries. As we have witnessed in biosecurity over the past half century (and earlier), militaries are willing to take horrific actions, and they are highly fallible. Military applications of AI are advancing rapidly today (as in Ukraine and Gaza). And any safety regulation a government passes will, by default, exempt its own military and the companies working closely with it.
Coping strategies
Nonetheless, these arguments are not a reason to be helpless. Instead, we can use them as a guide and try to craft rules that cause the least amount of these concerns.
Strategy 1: Responsibility
If someone's actions cause legally actionable harm, they can be sued. This doesn't solve the problem of risk from militaries and other "above the law" actors, but it is a very general approach that avoids overfitting, which is why libertarian-leaning economists generally support it.
The main accountability objectives considered so far are as follows:
- User: A person who uses artificial intelligence.
- Deployer: The middleman who provides artificial intelligence services to users.
- Developer: A person who builds artificial intelligence.
Placing responsibility on users seems to align best with incentives. While the connection between how a model is developed and how it ends up being used is often unclear, users decide exactly how an AI is used. Holding users accountable creates strong pressure to use AI in what I think is the right way: building mechanical suits for the human mind rather than creating new self-sustaining forms of intelligent life. The former responds regularly to the user's intentions and so does not cause catastrophic actions unless the user wants them. The latter carries the greatest risk: it may escape control and trigger the classic "runaway AI" scenario. Another benefit of placing liability as close to the end use as possible is that it minimizes the risk that liability pushes people toward otherwise harmful actions (e.g., closed-sourcing, know-your-customer (KYC) requirements and surveillance, state/corporate collusion to covertly restrict users, as when banks refuse to serve certain customers, or shutting out large regions of the world).
There is a classic objection to placing responsibility solely on users: a user might be an ordinary individual without much money, or even anonymous, leaving no one who can actually pay for catastrophic damage. This objection may be exaggerated: even if some users are too small to be liable, the typical customer of an AI developer is not, so AI developers will still be incentivized to build products that reassure users they will not face high liability risk. That said, it is still a valid point that needs addressing. Someone in the pipeline with the resources to take appropriate care needs an incentive to do so, and deployers and developers are both easy targets who still have a large influence on a model's safety.
Deployer liability seems reasonable. A common concern is that it doesn't work for open-source models, but this seems manageable, especially since the most powerful models will most likely be closed-source (and if they turn out to be open-source, then deployer liability, while perhaps not terribly useful, would not be too harmful either). The same concern applies to developer liability (although with open-source models there is some barrier to fine-tuning a model to do something it is not otherwise permitted to do), and the same counterargument applies. As a general principle, imposing a "tax" on control, essentially saying "you can build things you don't control, or you can build things you do control, but if you build things you do control, then 20% of that control must be used for our purposes," seems a reasonable position for the legal system to take.
One idea that seems underexplored is to attribute responsibility to other actors in the pipeline who are more likely to have adequate resources. One idea that fits well with the d/acc philosophy is to hold accountable the owner or operator of any device that an AI takes over (for example, through hacking) in the course of performing some catastrophically harmful action. This will create a very broad incentive to work towards making the world's infrastructure, particularly in computing and biology, as secure as possible.
Strategy 2: Global “soft pause” button on industrial-scale hardware
If I were convinced we needed something "stronger" than liability rules, this is the strategy I would choose. The goal is to have the capability to reduce worldwide available compute by roughly 90-99% for a critical period of 1-2 years, to buy humanity more time to prepare. The value of 1-2 years should not be understated: a year of "wartime mode" can easily be worth a hundred years of work done under complacency. Ways to implement a "pause" are already being explored, including concrete proposals such as requiring hardware registration and location verification.
A more advanced approach would use clever cryptography: industrial-scale (but not consumer-grade) AI hardware could be equipped with a trusted hardware chip that allows it to continue operating only if it receives 3/3 signatures each week from major international institutions, including at least one non-military affiliate. The signatures would be device-independent (if desired, we could even require a zero-knowledge proof that they were published on a blockchain), so it would be all-or-nothing: there would be no practical way to authorize one device to keep running without authorizing all the others.
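To make the all-or-nothing property concrete, here is a minimal sketch in Python of the weekly check such a chip might perform. Everything here is an illustrative assumption rather than a design from the article: the 3-of-3 threshold, the plain week-identifier message, and Ed25519 as the signature scheme.

```python
# Minimal sketch of a weekly "soft pause" authorization check
# (illustrative assumptions: 3-of-3 Ed25519 signatures over a plain
# ISO week identifier; a real chip would use attested hardware keys).
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

def authorized_for_week(week_id: str, signer_keys: list[bytes],
                        signatures: list[bytes]) -> bool:
    """Allow operation only if every institution signed this week's ID.

    The signed message is device-independent: every device checks the
    same (week_id, signatures) tuple, so authorization is all-or-nothing
    across the entire fleet of industrial-scale hardware.
    """
    if len(signatures) != len(signer_keys):
        return False
    message = week_id.encode()  # e.g. b"2042-W07"
    for key_bytes, sig in zip(signer_keys, signatures):
        try:
            Ed25519PublicKey.from_public_bytes(key_bytes).verify(sig, message)
        except InvalidSignature:
            return False
    return True

# Demo: simulate three institutions (at least one non-military) signing.
week = "2042-W07"
institutions = [Ed25519PrivateKey.generate() for _ in range(3)]
pubkeys = [k.public_key().public_bytes_raw() for k in institutions]
sigs = [k.sign(week.encode()) for k in institutions]
assert authorized_for_week(week, pubkeys, sigs)            # current week runs
assert not authorized_for_week("2042-W08", pubkeys, sigs)  # stale signatures fail
```

Because the check verifies a shared, device-independent message, withholding one week's signatures pauses the whole fleet at once; there is no per-device backdoor to carve out exemptions.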
This seems to "fit the bill" in terms of maximizing benefits and minimizing risks:
- It is a useful capability: if we see signs that near-superintelligent AI is beginning to do things that could cause catastrophic damage, we will want the transition to proceed more slowly.
- Until such critical moments arrive, simply having the ability to soft-pause will do little harm to developers.
- Focusing on industrial-scale hardware, and aiming for only 90-99%, avoids dystopian approaches such as planting spy chips or forced kill switches in consumer laptops, or coercing small countries into draconian measures against their will.
- Focusing on hardware appears highly robust to technological change. We have seen across generations of AI that quality depends heavily on available compute, especially in the early versions of a new paradigm. So reducing available compute by a factor of 10-100 could easily decide whether a runaway superintelligent AI wins or loses a rapid-fire battle against the humans trying to stop it.
- The inherent hassle of needing to go online to get signatures every week would strongly inhibit the idea of extending this scheme to consumer-grade hardware.
- It can be verified by spot checks, and doing it at the hardware level makes it hard to exempt specific users (approaches based on legally mandated shutdowns rather than technical means lack this all-or-nothing property, making them more likely to slide into exemptions for militaries and others).
Hardware regulation is already being seriously considered, though usually within the framework of export controls, which essentially embody a "we trust us but not them" philosophy. Leopold Aschenbrenner famously argued that the United States should race to gain a decisive advantage and then force China to sign an agreement limiting the number of devices it may operate. This approach seems risky to me: it may combine the pitfalls of multipolar competition with those of centralization. If we must restrict people, it seems better to restrict everyone equally and actually try to cooperate on implementation, rather than have one party try to dominate everyone else.
d/acc Technology’s role in AI risks
Both strategies (liability and the hardware pause button) have holes, and they are clearly only temporary stopgaps: if something can be done on a supercomputer at time T, it can probably be done on a laptop at time T + 5 years. So we need more durable measures to buy time. Many d/acc technologies are relevant here. We can see the role of d/acc technology this way: if AI were to take over the world, how would it do it?
- It invades our computers → Cyber Defense
- It creates super plague → biodefense
- It convinces us (either we trust it or we don’t trust each other) → Information Defense
As briefly mentioned above, liability rules are the regulatory approach that fits the d/acc philosophy most naturally, since they can very effectively incentivize the adoption and serious uptake of these defenses worldwide. Taiwan has recently been experimenting with liability for false advertising, which can be seen as an example of using liability to encourage information defense. We shouldn't be too eager to impose liability everywhere, and we should remember the benefits of plain freedom in letting the little guy participate in innovation without fear of lawsuits; but where we want a stronger push toward safety, liability can be quite flexible and effective.
The role of cryptocurrencies in d/acc
Many aspects of d/acc go far beyond typical blockchain topics: biosecurity, brain-computer interfaces, and collaborative discourse tools seem far removed from what crypto folks typically talk about. However, I think there are some important connections between cryptocurrencies and d/acc, specifically:
- d/acc is an extension of the fundamental values of cryptocurrency (decentralization, censorship resistance, an open global economy and society) to other technology areas.
- Because cryptocurrency users are natural early adopters and the values align, the cryptocurrency community is a natural early adopter of d/acc technology. The strong emphasis on community (both online and in person, through events and pop-up villages), and the fact that these communities actually do high-stakes things rather than just talk to each other, make them especially attractive incubators and proving grounds for d/acc technologies that fundamentally operate on groups rather than individuals (such as most information-defense and biodefense technologies). Crypto people just do things together.
- Many cryptocurrency technologies can be used in d/acc topic areas: blockchain for building more robust and decentralized financial, governance and social media infrastructure, zero-knowledge proofs for protecting privacy, etc. Today, many of the largest prediction markets are built on blockchains, and they are gradually becoming more complex, decentralized, and democratic.
- Win-win collaboration opportunities also exist in crypto-adjacent technologies that are useful to cryptocurrency projects and key to d/acc's goals: formal verification, computer software and hardware security, and adversarially robust governance technology. These make the Ethereum blockchain, wallets, and decentralized autonomous organizations (DAOs) more secure and robust, and they also achieve important civilizational-defense goals, such as reducing our vulnerability to cyberattacks (including those that may come from superintelligent AI).
Cursive is an application that uses fully homomorphic encryption (FHE) to let users identify areas of shared interest with other users while preserving privacy. Edge City in Chiang Mai (one of Zuzalu's many offshoots) used the app.
d/acc and public goods funding
One question that has always interested me is finding better mechanisms to fund public goods: projects that are valuable to a very large group of people but that lack a naturally accessible business model. My past work in this area includes my contributions to quadratic funding and its application in Gitcoin Grants, to retroactive public goods funding (retro PGF), and most recently to deep funding.
Many people are skeptical of the concept of public goods. This skepticism usually comes from two sources:
- Public goods have historically been used as a justification for heavy-handed government central planning and intervention in society and the economy.
- A common perception is that public goods funding lacks rigor and operates on the basis of social desirability bias—i.e., what sounds good, rather than what is actually good—and favors insiders who can play the social game.
These are important and valid criticisms. However, I believe strong decentralized public goods funding is critical to the d/acc vision, because a key d/acc goal (minimizing central points of control) itself frustrates many traditional business models. It is possible to build successful businesses on open source, and several Balvi grantees are doing just that, but in some cases it is hard enough that important projects need additional ongoing support. So we must do the hard thing: figure out how to fund public goods in a way that addresses both criticisms above.
The solution to the first problem is basically credible neutrality and decentralization. Central planning is problematic because it hands control to elites who can become abusive, and because it often overfits to current circumstances and grows ever less effective over time. Quadratic funding and similar mechanisms are precisely about funding public goods in the most credibly neutral and (architecturally and politically) decentralized way possible.
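As a refresher on how the mechanism works (this is the standard quadratic funding formula, not code from the article): a project receiving individual contributions c_1, ..., c_n is allocated (√c_1 + ... + √c_n)² in total, with the difference covered by a matching pool, so broad support counts for more than concentrated money. A minimal sketch in Python:

```python
import math

def quadratic_funding_total(contributions: list[float]) -> float:
    """Standard quadratic funding allocation:
    (sum of square roots of individual contributions), squared."""
    return sum(math.sqrt(c) for c in contributions) ** 2

def matching_subsidy(contributions: list[float]) -> float:
    """Amount the matching pool adds on top of raw contributions."""
    return quadratic_funding_total(contributions) - sum(contributions)

# Broad support beats concentrated money of the same raw size:
broad = [1.0] * 100   # 100 donors giving $1 each -> (100 * 1)^2 = $10,000
narrow = [100.0]      # 1 donor giving $100       -> (10)^2     = $100
print(quadratic_funding_total(broad), matching_subsidy(broad))    # 10000.0 9900.0
print(quadratic_funding_total(narrow), matching_subsidy(narrow))  # 100.0 0.0
```

The square root is what makes the mechanism architecturally decentralized: the marginal influence of any single wallet shrinks as it gives more, so many small contributors outweigh one large one.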
The second question is more challenging. A common criticism of quadratic funding is that it quickly turns into a popularity contest, forcing project funders to spend considerable effort on publicity. Moreover, projects that are "in front of people's eyes" (such as end-user applications) get funded, while more behind-the-scenes projects (the proverbial "dependency maintained by some guy in Nebraska") get nothing. Optimism's retroactive funding relies on a smaller number of expert badge holders; there, the popularity-contest effect is reduced, but the social effect of having a close personal relationship with badge holders is amplified.
Deep Funding is my own latest effort to address this issue. There are two main innovations in deep funding:
- Dependency graphs. Instead of asking each juror a global question ("How valuable is project A to humanity?"), we ask a local question ("Which is more valuable to outcome C, project A or project B? And by how much?"). Humans are notoriously bad at global questions: in one famous study, when people were asked how much they would pay to save N birds, the answers for N = 2,000, N = 20,000, and N = 200,000 were all roughly $80. Local questions are far more tractable. We then combine the local answers into a global answer by maintaining a "dependency graph": for each project, which other projects contribute to its success, and by how much? (A sketch of this local-to-global aggregation step follows this list.)
- AI as distilled human judgment. Each juror is assigned only a small random sample of all the questions. There is an open competition in which anyone can submit an AI model that attempts to fill in all the edges of the graph efficiently. The final answer is a weighted sum of the models most compatible with the jury's answers. See here for code examples. This approach lets the mechanism scale to very large sizes while requiring the jury to submit only a small number of "bits" of information. That reduces opportunities for corruption and ensures each bit is high quality: jurors can think about each question for a long time instead of quickly clicking through hundreds. And by holding an open competition among AIs, we reduce the bias of any single AI training and curation process. The open market of AIs is the engine; humans are the steering wheel.
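Here is a minimal sketch of the "local answers to a global answer" step described above, under assumptions of my own: jurors (or the AI models standing in for them) provide pairwise ratio judgments, and we recover one relative weight per project by least squares in log space. The project names and data are hypothetical, and the actual deep funding code linked above differs in its details.

```python
import numpy as np

# Hypothetical juror judgments: (project_a, project_b, ratio) means
# "a contributed ratio-times as much as b" to the target outcome.
judgments = [
    ("compiler", "stdlib", 2.0),
    ("stdlib", "crypto_lib", 1.5),
    ("compiler", "crypto_lib", 2.5),  # slightly inconsistent on purpose
]

projects = sorted({p for a, b, _ in judgments for p in (a, b)})
index = {p: i for i, p in enumerate(projects)}

# Solve for log-values v with v[a] - v[b] ~= log(ratio) for each
# judgment, i.e. an ordinary least-squares problem. The solution is
# only determined up to an additive constant, so normalize afterwards.
A = np.zeros((len(judgments), len(projects)))
y = np.zeros(len(judgments))
for row, (a, b, ratio) in enumerate(judgments):
    A[row, index[a]] = 1.0
    A[row, index[b]] = -1.0
    y[row] = np.log(ratio)

v, *_ = np.linalg.lstsq(A, y, rcond=None)
weights = np.exp(v)
weights /= weights.sum()  # normalize into funding shares

for p in projects:
    print(f"{p}: {weights[index[p]]:.2%}")
```

Least squares reconciles inconsistent local judgments into a single consistent global ranking, which is the point of the dependency-graph design: no juror ever has to answer the global question directly.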
But deep funding is just the latest example; there have been other ideas for public goods funding mechanisms before, and there will be more. allo.expert does a good job of cataloging them. The underlying goal is to create a social tool that can fund public goods with accuracy, fairness, and open access at least close to how markets fund private goods. It doesn't have to be perfect; the market itself is far from perfect. But it should work well enough that developers of high-quality open-source projects that benefit everyone can keep working on them without feeling forced into unacceptable compromises.
Today, most leading projects in d/acc topic areas (vaccines, brain-computer interfaces, "edge BCI" like wrist electromyography and eye tracking, anti-aging drugs, hardware, etc.) are proprietary. This has significant downsides for public trust, as we have seen in many of the areas above. It also shifts attention toward competitive dynamics ("our team must win this critical industry!") and away from the larger race to ensure these technologies arrive fast enough to protect us in a world of superintelligent AI. For these reasons, strong public goods funding can be a powerful promoter of openness and freedom. This is another way the cryptocurrency community can help d/acc: by seriously exploring these funding mechanisms and making them work well in its own context, preparing them for wider application in open-source science and technology.
The future
The coming decades bring important challenges. I've been thinking about two challenges lately:
- A wave of powerful new technologies, especially strong AI, is arriving quickly, and with it come important pitfalls we need to avoid. "Artificial superintelligence" may be five years away or fifty. Either way, it is not clear that the default outcome is automatically positive, and as described in this post and the previous one, there are multiple traps to avoid.
- The world is becoming less and less cooperative. Many powerful actors who previously seemed to act, at least sometimes, on the basis of high principles (cosmopolitanism, freedom, common humanity...etc.) now pursue individual or tribal self-interest more openly and aggressively.
However, each of these challenges has a silver lining. First, we now have very powerful tools to do the rest of our work much faster:
- Current and near-term AI can be used to build other technologies and can be used as an element in governance (such as in deep funding or information finance). It is also very relevant to brain-computer interfaces, which themselves can provide further productivity gains.
- Mass coordination is now possible on a much larger scale than before. The Internet and social media have expanded the scope of coordination, global finance (including cryptocurrencies) has increased its power, now information defense and collaboration tools can increase its quality, and perhaps soon brain-computer interfaces in the form of human-computer-human can increase its depth.
- Formal verification, sandboxing technology (web browsers, Docker, Qubes, GrapheneOS, etc.), secure hardware modules, and other technologies are improving, making better network security possible.
- Writing any kind of software is much easier than it was two years ago.
- Recent fundamental research into understanding how viruses work, particularly the simple understanding that the most important form of transmission is airborne, has shown a clearer path to how to improve biodefenses.
- Recent advances in biotechnology (e.g., CRISPR, progress in bioimaging) are making biotechnology of all kinds easier to use, whether for defense, longevity, super-well-being, exploring multiple novel biological hypotheses, or just doing really cool things.
- Together, advances in computing and biotechnology are making possible synthetic biological tools that you can use to adapt, monitor, and improve your health. Cyber defense technologies, such as cryptography, make this dimension of personalization more feasible.
Second, now that many of the principles we hold dear are no longer claimed by a narrow old guard, they can be reclaimed by a broad coalition open to anyone in the world. This may be the biggest upside of the recent political "realignments" around the world, and it is worth taking advantage of. Cryptocurrency has done an excellent job of exploiting this and finding global appeal; d/acc can do the same.
Access to tools means we can adapt and improve our biology and environment, and the "defense" part of d/acc means we can do so without infringing on others' freedom to do the same. Liberal pluralist principles mean there can be great diversity in how we do this, and our commitment to shared human goals means it should be achieved.
We humans remain the brightest star. The task before us, to build a 21st century that protects human survival, freedom, and agency as we reach for the stars, is a challenging one. But I believe we can do it.