
Can AI bots steal your cryptocurrency? Learn about the rise of digital thieves in one article


Reprinted from ChainCatcher

03/18/2025

Original title: Can AI bots steal your crypto? The rise of digital thieves

Original author: Callum Reid

Compiled by: 0xdeepseek, ChainCatcher

In an era where cryptocurrency and AI are advancing in parallel, digital asset security faces unprecedented challenges. This article reveals how AI bots, with their automated attacks, deep learning, and large-scale penetration capabilities, are turning the crypto space into a new kind of criminal battlefield: from precision phishing to smart contract vulnerability harvesting, from deepfake scams to adaptive malware, attack methods have outgrown the limits of traditional human defenses. Faced with this contest of algorithm against algorithm, users must not only stay wary of AI-empowered "digital thieves" but also make good use of AI-driven defense tools. Only by combining technical vigilance with sound security practice can we defend our wealth amid the storms of the crypto world.

TL;DR

  1. AI bots can evolve on their own and execute crypto attacks at massive scale, with an efficiency far beyond human hackers.
  2. In 2024, a single AI phishing campaign caused roughly $65 million in losses, and fake airdrop sites can automatically drain users' wallets.
  3. GPT-3-level AI can directly analyze smart contracts for vulnerabilities; a similar flaw cost Fei Protocol $80 million.
  4. AI builds predictive models from leaked password data, cutting the time a weakly protected wallet holds out by as much as 90%.
  5. Fake CEO videos and audio created with deepfake technology are becoming a new social engineering weapon for inducing transfers.
  6. AI-as-a-service tools such as WormGPT have appeared on the black market, letting even non-technical criminals generate customized phishing attacks.
  7. The BlackMamba proof-of-concept malware uses AI to rewrite its code in real time and went completely undetected by mainstream security systems.
  8. Hardware wallets store private keys offline, effectively defending against 99% of remote AI attacks (as the 2022 FTX incident demonstrated).
  9. AI social botnets can operate millions of accounts at once; deepfake videos of Musk have featured in fraud involving more than $46 million.

1. What is an AI bot?

AI bots are self-learning software programs that automate and continuously refine cyberattacks, making them far more dangerous than traditional hacking methods.

At the core of today's AI-powered cybercrime are AI bots: self-learning programs designed to process massive amounts of data, make independent decisions, and execute complex tasks without human intervention. While these bots have become disruptive forces in industries such as finance, healthcare, and customer service, they have also become weapons for cybercriminals, particularly in the cryptocurrency space.

Unlike traditional hacking methods that rely on manual effort and technical expertise, AI bots can fully automate attacks, adapt to new cryptocurrency security measures, and even refine their strategies over time. This puts them far beyond human hackers, who are limited by time, resources, and error-prone processes.

2. Why are AI bots so dangerous?

The biggest threat of AI-driven cybercrime is scale. A lone hacker can only attempt so many exchange breaches or private-key scams, but AI bots can launch thousands of attacks simultaneously and refine their methods in real time.

  • Speed: AI bots can scan millions of blockchain transactions, smart contracts, and websites within minutes, identifying wallet vulnerabilities (the kind that lead to wallet hacks), weak DeFi protocols, and exchange weaknesses.
  • Scalability: A human scammer might send hundreds of phishing emails; an AI bot can send personalized, carefully crafted phishing emails to millions of people in the same amount of time.
  • Adaptability: Machine learning lets these bots learn from every failure, making them progressively harder to detect and block.

This combination of automation, adaptability, and large-scale attack capability has driven a surge in AI-powered crypto scams, making fraud prevention more critical than ever.

In October 2024, Andy Ayrey, developer of the AI bot Truth Terminal, was hacked. The attacker used his account to promote a fraudulent memecoin called Infinite Backrooms (IB), driving its market cap to $25 million. Within 45 minutes, the criminals sold their position for a profit of more than $600,000.

3. How do AI bots steal crypto assets?

AI bots are not just automating fraud; they are becoming smarter, more precise, and harder to detect. Here are the most dangerous types of AI scams currently used to steal crypto assets:

  1. AI-powered phishing bots

Traditional phishing attacks are nothing new in the crypto space, but AI multiplies the threat. Today's AI bots can craft messages nearly identical to official communications from platforms such as Coinbase or MetaMask, and they harvest personal details from leaked databases, social media, and even blockchain records, making the scams extremely convincing.

For example, in early 2024 an AI phishing campaign against Coinbase users netted nearly $65 million through fake security-alert emails. And after the release of GPT-4, scammers set up a fake OpenAI token airdrop site that automatically drained the wallets of users who connected them.

These AI-enhanced phishing messages typically contain no typos or clumsy wording, and some even deploy AI customer-service bots that coax private keys or 2FA codes out of users under the guise of "verification." In 2022, the Mars Stealer malware, capable of stealing private keys from more than 40 wallet browser extensions and 2FA applications, spread largely through phishing links and pirated software.
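Because these scams hinge on lookalike links, even a crude automated check helps. Below is a minimal sketch (my own illustration, not tooling from the article; the allowlist and distance threshold are assumptions) that flags punycode hosts and near-miss spellings of trusted crypto domains:

```python
from urllib.parse import urlparse

TRUSTED = {"coinbase.com", "metamask.io", "opensea.io"}  # hypothetical allowlist

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def check_link(url: str) -> str:
    host = (urlparse(url).hostname or "").lower().removeprefix("www.")
    if host in TRUSTED:
        return f"{host}: exact match with a trusted domain"
    if any(label.startswith("xn--") for label in host.split(".")):
        return f"{host}: WARNING - punycode host, often used for homoglyph spoofing"
    for good in TRUSTED:
        if edit_distance(host, good) <= 2:   # near-miss spelling, e.g. c0inbase.com
            return f"{host}: WARNING - looks like a spoof of {good}"
    return f"{host}: unknown domain, verify manually"

for u in ["https://www.coinbase.com/login",
          "https://c0inbase.com/verify",
          "https://xn--cinbase-odb.com/airdrop"]:
    print(check_link(u))
```

A real mail filter would combine this with sender authentication (SPF/DKIM) and reputation data, but the point stands: the lookalike tricks that fool a rushed human are mechanical enough for software to catch.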

  2. AI vulnerability-scanning bots

Smart contract vulnerabilities are a gold mine for hackers, and AI bots are exploiting them at unprecedented speed. These bots constantly scan platforms such as Ethereum and BNB Smart Chain for flaws in newly deployed DeFi projects; once a problem is detected, it is exploited automatically, usually within minutes.

Researchers have shown that AI chatbots, such as those powered by GPT-3, can analyze smart contract code to identify exploitable weaknesses. For example, Zellic co-founder Stephen Tong demonstrated an AI chatbot that detected a vulnerability in a smart contract's withdraw function, similar to the flaw exploited in the Fei Protocol attack that caused $80 million in losses.
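To make the withdraw-function example concrete, here is a toy heuristic of the kind such scanners automate (my own sketch, not Zellic's tool or anything described in the article): it flags the classic reentrancy shape, an external call made before the caller's balance is updated.

```python
import re

# Vulnerable withdraw-style Solidity snippet (illustrative, not Fei Protocol's code)
SOLIDITY_SNIPPET = """
function withdraw(uint amount) public {
    require(balances[msg.sender] >= amount);
    (bool ok, ) = msg.sender.call{value: amount}("");
    require(ok);
    balances[msg.sender] -= amount;  // state written AFTER the external call
}
"""

def flag_reentrancy(source: str) -> list[str]:
    findings = []
    call_pos = source.find(".call{value:")                       # external value transfer
    state_write = re.search(r"balances\[[^\]]+\]\s*-=", source)  # balance deduction
    if call_pos != -1 and state_write and call_pos < state_write.start():
        findings.append("external call precedes state update: possible reentrancy")
    return findings

print(flag_reentrancy(SOLIDITY_SNIPPET))
```

Real tools (and the LLM-based analysis described above) reason over the contract's control flow rather than raw text, but the speed advantage is the same: a bot can run checks like this against every newly deployed contract within seconds.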

  3. AI-enhanced brute-force attacks

Brute-force attacks once took ages; AI bots have made them frighteningly efficient. By analyzing previous password leaks, these bots learn the patterns people actually use and crack passwords and seed phrases in record time. A 2024 study of desktop cryptocurrency wallets, including Sparrow, Etherwall, and Bither, found that weak passwords sharply reduce resistance to brute-force attacks, underscoring the importance of strong, complex passwords for protecting digital assets.
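The arithmetic behind that finding is simple. The sketch below (my own illustration; the guess rate is an assumed figure for an offline GPU rig, and pattern-trained models need far fewer guesses than this upper bound) estimates worst-case cracking time from keyspace size:

```python
import string

GUESSES_PER_SECOND = 1e10  # assumed offline GPU cracking rate

def charset_size(pw: str) -> int:
    """Size of the smallest standard character set covering the password."""
    size = 0
    if any(c in string.ascii_lowercase for c in pw): size += 26
    if any(c in string.ascii_uppercase for c in pw): size += 26
    if any(c in string.digits for c in pw):          size += 10
    if any(c in string.punctuation for c in pw):     size += len(string.punctuation)
    return max(size, 1)

def naive_crack_seconds(pw: str) -> float:
    """Time to exhaust the full keyspace -- an upper bound assuming no shortcuts."""
    return charset_size(pw) ** len(pw) / GUESSES_PER_SECOND

for pw in ["sunshine1", "Tr0ub4dor&3", "correct-horse-battery-staple"]:
    print(f"{pw!r}: <= {naive_crack_seconds(pw):.3g} s of exhaustive search")
```

The catch is that models trained on leaked data collapse that keyspace: a password built from dictionary words and predictable substitutions sits in the tiny region the model tries first, which is why pattern-aware attacks cut cracking time so drastically.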

  4. Deepfake impersonation bots

Imagine watching a video of a trusted crypto influencer or CEO urging you to invest, except it is entirely fake. That is the reality of AI-driven deepfake scams. These bots produce ultra-realistic videos and voice recordings that trick even savvy cryptocurrency holders into transferring funds.

  5. Social media botnets

On platforms such as X and Telegram, swarms of AI bots spread cryptocurrency scams at scale. Botnets such as "Fox8" used ChatGPT to generate hundreds of persuasive posts hyping scam tokens and replying to users in real time.

In one case, scammers abused the names of Elon Musk and ChatGPT to promote a fake cryptocurrency giveaway, complete with a deepfake video of Musk, to trick people into sending money.

In 2023, Sophos researchers found that crypto romance scammers were using ChatGPT to chat with multiple victims at once, making their affectionate messages more persuasive and scalable.

Similarly, Meta has reported a sharp rise in malware and phishing links disguised as ChatGPT or other AI tools, often tied to cryptocurrency fraud schemes. In the romance-scam arena, AI is powering so-called pig-butchering operations: long-con schemes in which fraudsters cultivate a relationship and then lure the victim into fake crypto investments. A striking case came in 2024 in Hong Kong, where police broke up a ring that had scammed $46 million from men across Asia using an AI-assisted romance scam.

4. How AI malware fuels cybercrime against crypto users

Artificial intelligence is teaching cybercriminals how to breach crypto platforms, enabling a wave of less-skilled attackers to mount credible attacks. This helps explain why crypto phishing and malware campaigns have grown so large: AI tools let bad actors automate their scams and continuously refine them based on what works.

AI is also supercharging the malware threats and hacking strategies aimed at cryptocurrency users. One concern is AI-generated malware: malicious programs that use AI to adapt and evade detection.

In 2023, researchers demonstrated a proof-of-concept program called BlackMamba, a polymorphic keylogger that uses an AI language model (the kind of technology behind ChatGPT) to rewrite its code on every execution. Each time BlackMamba runs, it generates a fresh variant of itself in memory, helping it evade antivirus and endpoint security tools.

In testing, an industry-leading endpoint detection and response system failed to flag this AI-built malware. Once active, it could silently capture everything the user typed, including exchange passwords and wallet seed phrases, and send the data to the attacker.

While BlackMamba was only a lab demonstration, it highlights a real threat: criminals can harness AI to create shape-shifting malware that targets cryptocurrency accounts and is far harder to catch than traditional viruses.

Even without exotic AI malware, threat actors exploit the popularity of AI to spread classic trojans. Scammers frequently distribute fake "ChatGPT" or AI-themed apps laced with malware, knowing users may drop their guard around an AI brand. Security analysts, for example, observed fraudulent sites impersonating the ChatGPT website with a "Download for Windows" button; clicking it silently installed a crypto-stealing trojan on the victim's machine.

Beyond the malware itself, AI has lowered the technical bar for hackers. Previously, a criminal needed some coding skill to build a phishing page or virus; now, underground "AI-as-a-service" tools do most of the work.

Illicit AI chatbots such as WormGPT and FraudGPT have surfaced on dark-web forums, generating phishing emails, malware code, and hacking tips on demand. For a fee, even non-technical criminals can use these bots to spin up convincing scam sites, produce new malware variants, and scan for software vulnerabilities.

5. How to protect your cryptocurrency from AI bots

AI-driven threats are growing more advanced, so strong security measures are essential for protecting digital assets from automated scams and hacks.

Here are the most effective ways to protect your cryptocurrency from hackers and to defend against AI phishing, deepfake scams, and exploit bots:

  • Use a hardware wallet: AI-powered malware and phishing attacks primarily target online (hot) wallets. Hardware wallets such as Ledger or Trezor keep your private keys entirely offline, making them nearly impossible for hackers or malicious AI bots to reach remotely. During the 2022 FTX collapse, for example, users with hardware wallets avoided the heavy losses suffered by those who kept funds on the exchange.
  • Enable multi-factor authentication (MFA) and strong passwords: AI bots crack weak passwords using machine-learning models trained on leaked breach data to predict and exploit vulnerable credentials. Always enable MFA through an authenticator app such as Google Authenticator or Authy rather than SMS codes, since hackers are known to exploit SIM-swap attacks that make SMS authentication far less secure (see the TOTP sketch after this list for why app-based codes are harder to intercept).
  • Beware of AI-powered phishing scams: AI-generated phishing emails, messages, and fake support requests are almost indistinguishable from the real thing. Avoid clicking links in emails or direct messages, always verify website URLs manually, and never share private keys or seed phrases, no matter how convincing the request seems.
  • Verify identities carefully to avoid deepfake scams: AI deepfake videos and recordings can convincingly impersonate crypto influencers, executives, and even people you know. If someone requests funds or pushes an urgent investment opportunity via video or audio, confirm their identity through a separate channel before acting.
  • Stay informed about the latest blockchain security threats: Regularly follow trusted blockchain security sources such as Chainalysis or SlowMist.
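For the MFA point above, here is a minimal RFC 6238 (TOTP) sketch using only the Python standard library (the base32 secret is a well-known documentation example, not a real credential). It shows why authenticator apps resist the attacks that break SMS: the shared secret never leaves the device, so there is no SIM to swap and no message to intercept.

```python
import base64, hmac, struct, time

def totp(secret_b32: str, digits: int = 6, period: int = 30) -> str:
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period                # 30-second time step
    msg = struct.pack(">Q", counter)                    # 8-byte big-endian counter
    digest = hmac.new(key, msg, "sha1").digest()
    offset = digest[-1] & 0x0F                          # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Example secret commonly used in TOTP documentation; never reuse a real one.
print(totp("JBSWY3DPEHPK3PXP"))
```

Even then, a phished six-digit code is valid for at most one 30-second window, which is exactly why AI phishing bots increasingly try to relay codes in real time; never enter a code on a page you did not navigate to yourself.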
