AI Agents Need Crypto, Not the Other Way Around

Author: Scarlett Zhang

I increasingly feel that the crypto world is a bit too eager to be recognized by the AI community.

Over the past six months, you’ll notice that the entire crypto scene is trying very hard to associate itself with AI—talking about AI, shifting towards AI, hosting AI events, creating AI demos, changing AI narratives—almost every project seems to be looking for ways to prove some connection to AI.

It feels a lot like:

Kids desperately trying to climb onto the adult table.

But on the other side?

Many genuine AI practitioners have a very nuanced attitude toward crypto. They don’t openly criticize you, nor do they outright reject you; instead, they maintain a very polite distance:

“We’re not opposed to chains, but we don’t want to be too closely tied to crypto for now.”

“Technically interesting, but our clients and investors might mind.”

Translated, it’s basically:

You’re okay, but I don’t really want to get involved in your scene.

Right now, there’s indeed a subtle hierarchy of disdain between these two circles.

And it’s not without reason.

In many AI builders’ minds, AI is a real productivity revolution—technological progress that’s changing work methods, product forms, and information flows.

And crypto? To them, it’s more like an over-financialized industry, narrative-driven, always looking for the next story to prove its importance, while conveniently issuing tokens and harvesting retail investors.

So when the crypto scene suddenly starts talking about AI at scale, the first reaction of many in the AI community is:

Are you seriously building products, or just riding another narrative?

Honestly, I completely understand this reaction.

Because over the years, crypto has been very good at packaging “the next big thing.”

DeFi / NFT / GameFi / SocialFi / DePIN / inscriptions—now it’s AI x Crypto.

Each wave, someone sews the latest buzzword onto themselves and tells you the future has arrived.

Over time, this has led to a hard-to-reverse impression from outsiders:

You’re always talking about the future, but people can’t help but wonder: are you really creating value, or just creating hype?

This is also why many in the AI community today naturally feel they stand on higher ground.

They believe:

AI is solving real problems.

Crypto is still searching for its new legitimacy.

This bias is very real. The hierarchy of disdain does exist.

But lately, I’ve been thinking more and more that the interesting part isn’t why crypto wants to get close to AI.

It’s a more counterintuitive question:

Could it be that, in the end, the side that truly needs the other more is actually AI?

More precisely:

It’s not that crypto needs AI.

It’s that AI agents need crypto.


This isn’t a question of “AI being smarter,” but rather “can AI move money?”

I’m increasingly convinced of this because many AI agent demos tend to get stuck at the same point.

Recently, everyone has probably seen quite a few demos:

Code-writing, tool-calling, web browsing, multi-step task execution, and even some that can do trading, payments, and automated chain operations.

At first glance, it’s very cool.

But after seeing more, I care about one question more and more:

Is it just “knowing how,” or can it really “do”?

Because the difference between “knowing how” and “doing” isn’t just a matter of product details.

What’s in between is:

Permissions, funds, responsibility, boundaries.

Having an agent help you generate a report is one thing.

Having an agent execute a real transaction is another.

If it messes up the former, you might just think it’s a bit stupid.

If it messes up the latter, money is gone.

So I increasingly feel that AI demos tend to create an illusion:

It looks like everything is almost ready.

But the hardest part to truly connect is often the last layer.

That is:

Execution.

The real bottleneck for AI agents isn’t in thinking, but in executing actions involving money.

Once an AI agent really starts doing things for you, it will quickly need to buy APIs, rent computing power, call paid services, execute trades, manage budgets, transfer assets, and complete payments across different systems.

In other words, it’s not just about “understanding your intent.”

It needs to participate in economic activities.

And once you reach this layer, the problem changes.


Traditional finance can support automation, but it’s not designed for an “agent world”

Many might ask here:

“Traditional finance can also do all these things.”

I’ve thought about this, and honestly, in many aspects, traditional finance is indeed more mature than crypto.

Risk control, auditing, permission management, responsibility chains, recoverability—these are areas where traditional finance is stronger today.

So, the point of this article isn’t that:

crypto is better than traditional finance.

Nor that without crypto, AI agents can’t work at all.

For internal enterprise agents or platform agents, many tasks can still be handled via:

  • Bank APIs
  • Corporate payment systems
  • Virtual cards
  • Approval workflows
  • Sub-accounts
  • Platform credits
  • Centralized custody accounts

All of these can work, and likely will remain mainstream in the short term.

But the core issue is:

These systems are fundamentally built on the premise that:

Agents are not native executors.

They are just automation tools attached to a user, a company, or a platform.

This works fine in many scenarios.

But as agents become more autonomous, cross-platform, cross-border, and need to natively invoke resources and funds across different systems, traditional systems will start to feel increasingly awkward.

So the real question isn’t:

“Can traditional finance support this?”

But rather:

Is it the most natural, scalable, and native structure for an agent-centric world?

An agent’s world needs not just accounts, but a complete execution framework.

Those are two very different questions.

The key for AI agents isn’t whether they are “legal entities,” but whether they increasingly resemble “execution units.”

You might then think:

“But agents aren’t third-party entities either. They’re not people, nor companies; they’re just software proxies.”

That’s correct.

Strictly speaking, AI agents probably won’t become independent legal entities. Most of the time, they’re more like proxies for users, companies, or platforms.

But even so, they will increasingly resemble execution units that can be assigned budgets, permissions, tasks, and boundaries.

That’s the core.

The reason this hasn’t fully exploded yet is that agents aren’t yet at that level; many things still depend on “people watching over them.”

But if large-scale agents emerge in the future:

  • Helping you trade,
  • Managing procurement,
  • Running operations,
  • Overseeing budgets,
  • Automatically invoking resources across systems,

you’ll encounter a very awkward question:

How should these entities have permissions?

Who owns their accounts?

Who authorizes their payments?

How much can they spend?

Who is responsible if they overreach?

How are they settled when calling services globally?

Traditional finance isn’t incapable of supporting this.

But it will become increasingly awkward.

Because it was never designed with the premise that:

Software execution units will participate massively in economic activities.

Traditional finance isn’t incapable of support; it’s just increasingly unnatural.


When the main actor becomes an agent, the previously vague “self-speaking” crypto concepts become concrete

In the past, many viewed crypto as full of vague buzzwords:

Programmable funds

Programmable identity

Permissionless

Global settlement

Trustless execution

Out of context, these often sound like empty jargon.

But if you replace the main actor with an AI agent, these concepts suddenly become much less abstract.

Because what an agent truly needs might precisely be:

  • A native, callable form of funds.
  • An execution identity that doesn’t have to be a “company account.”
  • Budget and permissions that can be programmatically constrained.
  • Low-friction global settlement.
  • Native connections between calling actions and asset behaviors.

At this point, looking at wallets from a new perspective:

A wallet isn’t just a “place to store tokens.”

It’s more like:

An execution container with permission boundaries.

Wallets aren’t just “asset storage,” but execution containers for agents.

They hold not only assets.

They can also hold rules:

  • What actions are permitted
  • How much can be spent
  • Which actions can be automated
  • Thresholds requiring manual confirmation
  • Read-only or writable modes
  • On-chain policies vs. off-chain controls
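To make “execution container” less abstract, here is a minimal sketch in Python of what a policy-bearing wallet wrapper could look like. Everything in it (the class name, the rule fields, the return values) is a hypothetical illustration of the idea above, not any existing wallet API.

```python
from dataclasses import dataclass

@dataclass
class WalletPolicy:
    """Hypothetical rule set attached to an agent's wallet."""
    allowed_actions: set     # e.g. {"swap", "pay_api"}
    per_tx_limit: float      # maximum spend per single action
    daily_limit: float       # maximum total spend per day
    manual_threshold: float  # above this amount, require a human
    spent_today: float = 0.0

    def check(self, action: str, amount: float) -> str:
        """Decide whether the agent's requested action may proceed."""
        if action not in self.allowed_actions:
            return "reject"        # action type not permitted at all
        if amount > self.per_tx_limit or self.spent_today + amount > self.daily_limit:
            return "reject"        # budget boundary exceeded
        if amount > self.manual_threshold:
            return "needs_human"   # fall back to manual confirmation
        self.spent_today += amount # record spend inside the daily window
        return "allow"

policy = WalletPolicy(allowed_actions={"swap", "pay_api"},
                      per_tx_limit=100.0, daily_limit=500.0,
                      manual_threshold=50.0)

print(policy.check("swap", 20.0))      # allow
print(policy.check("swap", 80.0))      # needs_human
print(policy.check("transfer", 10.0))  # reject
```

The point isn’t this particular rule set; it’s that the constraint lives next to the assets, so the model’s output can only ever act through it.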

From this perspective, the relationship between AI and wallets becomes very interesting:

AI is responsible for understanding.

Wallets are responsible for constraining.

Agents are responsible for acting.

This forms a complete system.


The real irony: AI’s core issue is trust, while crypto’s biggest deficiency is also trust

If I were to argue from an opponent’s perspective, I might say:

“You just said AI fundamentally lacks trust, so how come your answer points to crypto?”

That’s a fair criticism.

Because, in most people’s minds, crypto isn’t a “naturally trustworthy” system.

Private key management is complex.

On-chain transactions are irreversible.

Phishing and signature theft are common.

Smart contract risks are high.

Responsibility boundaries are often fuzzy.

And after issues occur, there’s often no one to backstop.

So, I don’t mean to say:

crypto has solved trust.

Quite the opposite.

My view is:

AI will force crypto to confront trust head-on.

In the past, crypto could stay at the level of “transfer, use, run.”

But if it truly wants to be the execution layer for AI agents, it must address the hardest lessons:

  • Permission models
  • Security boundaries
  • Responsibility attribution
  • Risk control systems
  • Recoverability
  • Human-in-the-loop confirmation mechanisms
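The last item on that list is worth making concrete. A human-in-the-loop mechanism can be as small as a gate that executes low-risk actions immediately and parks anything above a threshold until a person signs off. The sketch below is a hypothetical illustration in plain Python; none of its names correspond to a real wallet or chain API.

```python
import uuid

class ApprovalGate:
    """Holds risky agent actions until a human approves them."""

    def __init__(self, threshold: float):
        self.threshold = threshold
        self.pending = {}  # ticket id -> (action, amount)

    def submit(self, action: str, amount: float):
        if amount <= self.threshold:
            return ("executed", None)    # low-risk: run immediately
        ticket = uuid.uuid4().hex        # high-risk: park for review
        self.pending[ticket] = (action, amount)
        return ("pending", ticket)

    def approve(self, ticket: str):
        action, amount = self.pending.pop(ticket)
        return ("executed", action, amount)  # human signed off; now run

gate = ApprovalGate(threshold=50.0)
print(gate.submit("pay_api", 10.0))  # executed immediately
status, ticket = gate.submit("swap", 200.0)
print(status)                        # pending, waiting for a human
print(gate.approve(ticket))          # executed after sign-off
```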

In other words, AI won’t automatically make crypto better.

Instead, AI will expose and force crypto to confront its most vague, lazy, narrative-driven weaknesses.

So I’m not claiming crypto is already the answer.

I’m saying:

If a truly agent-native execution infrastructure emerges in the future, it will likely look more like crypto than today’s traditional accounts.


So the question has probably never been “Will crypto leverage AI to go viral?”

This is also the most annoying recent misconception I see.

Many people, when talking about AI × crypto, automatically think:

Crypto is just riding AI again.

Crypto is trying to tell a new story with AI.

Crypto needs AI to extend its life.

I don’t deny that many projects are doing exactly this; quite a few are.

But if we only stay at this level, we miss a more fundamental layer:

Once AI truly moves into execution, it will inevitably bump into issues of funds, permissions, responsibility, identity, and settlement.

And these issues can’t be solved just by “more powerful models.”

They are fundamentally infrastructure problems.

In other words, as AI develops further, it will increasingly approach the problem domains that crypto is good at handling.

Not because crypto is more advanced than AI.

But because, as AI reaches into the real world, it must confront:

  • How money moves
  • How permissions are granted
  • How responsibility is assigned

And these aren’t problems that prompt engineering alone can solve.

The real missing piece for AI isn’t smarter models, but more trustworthy infrastructure.

I increasingly believe that the hardest part of AI × crypto isn’t intelligence.

It’s trust.

You can create a stunning demo:

  • Swap with a single sentence
  • Bridge with a single sentence
  • Configure assets automatically
  • Execute on-chain actions with a single command

It all sounds very futuristic.

But would users really dare to use it?

Would they dare to try once? Or use it long-term?

And if they do, how is responsibility handled if something goes wrong?

Would products dare to promise it?

Would platforms dare to back it?

Would developers dare to open higher permissions?

In the end, you realize that what truly limits AI agents from entering finance and asset worlds isn’t their intelligence.

It’s:

Whether they can be constrained.

Who defines their boundaries?

Who verifies their actions?

Who can stop them before risks materialize?

Who clarifies responsibility after issues occur?

And how are settlements handled when calling services globally?

Traditional finance isn’t incapable of supporting this.

But it will become increasingly awkward.

Because it was never designed with the premise that:

Software execution units will participate massively in economic activities.

Traditional finance isn’t incapable, but it’s increasingly unnatural.


When the main actor becomes an agent, the vague “self-speaking” crypto concepts become concrete

In the past, many saw crypto as full of vague buzzwords:

Programmable funds

Programmable identity

Permissionless

Global settlement

Trustless execution

Often, these sound like “nonsense.”

But if you replace the main actor with an AI agent, these concepts suddenly become much clearer.

Because what an agent truly needs might be:

  • A native, callable form of funds
  • An execution identity that doesn’t have to be a “company account”
  • Programmatically constrained budgets and permissions
  • Low-friction global settlement
  • Native connections between actions and assets

At this point, looking at wallets from a new perspective:

A wallet isn’t just a “place to store tokens.”

It’s more like:

An execution container with permission boundaries.

Wallets aren’t just “asset storage,” but execution containers for agents.

They hold not only assets.

They can also hold rules:

  • What actions are permitted
  • How much can be spent
  • Which actions can be automated
  • Thresholds requiring manual approval
  • Read-only or writable modes
  • On-chain policies vs. off-chain controls

From this perspective, the relationship between AI and wallets becomes very interesting:

AI is responsible for understanding.

Wallets are responsible for constraining.

Agents are responsible for acting.

This forms a complete system.


The real irony: AI’s core issue is trust, while crypto’s biggest deficiency is also trust

If I argue from an opponent’s perspective, I might say:

“You just said AI fundamentally lacks trust, so how come your answer points to crypto?”

That’s a fair point.

Because, in most people’s minds, crypto isn’t a “naturally trustworthy” system.

Private key management is complex.

On-chain transactions are irreversible.

Phishing and signature theft are common.

Smart contract risks are high.

Responsibility boundaries are often fuzzy.

And after issues occur, there’s often no one to backstop.

So, I don’t mean to say:

crypto has solved trust.

Quite the opposite.

My view is:

AI will force crypto to confront trust directly.

In the past, crypto could stay at the level of “transfer, use, run.”

But if it truly wants to be the execution layer for AI agents, it must address the hardest lessons:

  • Permission models
  • Security boundaries
  • Responsibility attribution
  • Risk management systems
  • Recoverability
  • Human-in-the-loop confirmation mechanisms

In other words, AI won’t automatically improve crypto.

Instead, AI will expose and force crypto to confront its most vague, lazy, narrative-driven weaknesses.

So I’m not claiming crypto is already the answer.

I’m saying:

If a truly agent-native execution infrastructure emerges in the future, it will likely look more like crypto than today’s traditional accounts.


So the question has probably never been “Will crypto leverage AI to go viral?”

This is also the most annoying recent misconception I see.

Many people, when talking about AI x crypto, automatically think:

Crypto is just riding AI again.

Crypto is trying to tell a new story with AI.

Crypto needs AI to extend its life.

I don’t deny that many projects are doing this, and quite a few indeed.

But if we only stay at this level, we miss a more fundamental layer:

Once AI truly moves into execution, it will inevitably bump into issues of funds, permissions, responsibility, identity, and settlement.

And these issues can’t be solved just by “more powerful models.”

They are fundamentally infrastructure problems.

In other words, as AI develops further, it will increasingly approach the problem domains that crypto is good at handling.

Not because crypto is more advanced than AI.

But because, as AI reaches into the real world, it must confront:

  • How money moves
  • How permissions are granted
  • How responsibility is assigned

And these aren’t problems that prompt engineering alone can solve.

The real missing piece for AI isn’t smarter models, but more trustworthy infrastructure.

I increasingly believe that the hardest part of AI × crypto isn’t intelligence.

It’s trust.

You can create a stunning demo:

  • Swap with a single sentence
  • Bridge with a single sentence
  • Configure assets automatically
  • Execute on-chain actions with a single command

It all sounds very futuristic.

But would users really dare to use it?

Would they dare to try once? Or use it long-term?

And if they do, how is responsibility handled if something goes wrong?

Would products dare to promise it?

Would platforms dare to back it?

Would developers dare to open higher permissions?

In the end, you realize that what truly limits AI agents from entering finance and asset worlds isn’t their intelligence.

It’s:

Whether they can be constrained.

Who defines their boundaries?

Who verifies their actions?

Who can stop them before risks materialize?

Who clarifies responsibility after issues occur?

And how are settlements handled when calling services globally?

Traditional finance isn’t incapable of supporting this.

But it will become increasingly awkward.

Because it was never designed with the premise that:

Software execution units will participate massively in economic activities.

Traditional finance isn’t incapable, but it’s increasingly unnatural.


When the main actor becomes an agent, the vague “self-speaking” crypto concepts become concrete

In the past, many saw crypto as full of vague buzzwords:

Programmable funds

Programmable identity

Permissionless

Global settlement

Trustless execution

Often, these sound like “nonsense.”

But if you replace the main actor with an AI agent, these concepts suddenly become much clearer.

Because what an agent truly needs might be:

  • A native, callable form of funds
  • An execution identity that doesn’t have to be a “company account”
  • Programmatically constrained budgets and permissions
  • Low-friction global settlement
  • Native connections between actions and assets

At this point, looking at wallets from a new perspective:

A wallet isn’t just a “place to store tokens.”

It’s more like:

An execution container with permission boundaries.

Wallets aren’t just “asset storage,” but execution containers for agents.

They hold not only assets.

They can also hold rules:

  • What actions are permitted
  • How much can be spent
  • Which actions can be automated
  • Thresholds requiring manual approval
  • Read-only or writable modes
  • On-chain policies vs. off-chain controls

From this perspective, the relationship between AI and wallets becomes very interesting:

AI is responsible for understanding.

Wallets are responsible for constraining.

Agents are responsible for acting.

This forms a complete system.


The real irony: AI’s core issue is trust, while crypto’s biggest deficiency is also trust

If I argue from an opponent’s perspective, I might say:

“You just said AI fundamentally lacks trust, so how come your answer points to crypto?”

That’s a fair point.

Because, in most people’s minds, crypto isn’t a “naturally trustworthy” system.

Private key management is complex.

On-chain transactions are irreversible.

Phishing and signature theft are common.

Smart contract risks are high.

Responsibility boundaries are often fuzzy.

And after issues occur, there’s often no one to backstop.

So, I don’t mean to say:

crypto has solved trust.

Quite the opposite.

My view is:

AI will force crypto to confront trust directly.

In the past, crypto could stay at the level of “transfer, use, run.”

But if it truly wants to be the execution layer for AI agents, it must address the hardest lessons:

  • Permission models
  • Security boundaries
  • Responsibility attribution
  • Risk management systems
  • Recoverability
  • Human-in-the-loop confirmation mechanisms

In other words, AI won’t automatically improve crypto.

Instead, AI will expose and force crypto to confront its most vague, lazy, narrative-driven weaknesses.

So I’m not claiming crypto is already the answer.

I’m saying:

If a truly agent-native execution infrastructure emerges in the future, it will likely look more like crypto than today’s traditional accounts.


So the question has probably never been “Will crypto leverage AI to go viral?”

This is also the most annoying recent misconception I see.

Many people, when talking about AI x crypto, automatically think:

Crypto is just riding AI again.

Crypto is trying to tell a new story with AI.

Crypto needs AI to extend its life.

I don’t deny that many projects are doing this, and quite a few indeed.

But if we only stay at this level, we miss a more fundamental layer:

Once AI truly moves into execution, it will inevitably bump into issues of funds, permissions, responsibility, identity, and settlement.

And these issues can’t be solved just by “more powerful models.”

They are fundamentally infrastructure problems.

In other words, as AI develops further, it will increasingly approach the problem domains that crypto is good at handling.

Not because crypto is more advanced than AI.

But because, as AI reaches into the real world, it must confront:

  • How money moves
  • How permissions are granted
  • How responsibility is assigned

And these aren’t problems that prompt engineering alone can solve.

The real missing piece for AI isn’t smarter models, but more trustworthy infrastructure.

I increasingly believe that the hardest part of AI × crypto isn’t intelligence.

It’s trust.

You can create a stunning demo:

  • Swap with a single sentence
  • Bridge with a single sentence
  • Configure assets automatically
  • Execute on-chain actions with a single command

It all sounds very futuristic.

But would users really dare to use it?

Would they dare to try once? Or use it long-term?

And if they do, how is responsibility handled if something goes wrong?

Would products dare to promise it?

Would platforms dare to back it?

Would developers dare to open higher permissions?

In the end, you realize that what truly limits AI agents from entering finance and asset worlds isn’t their intelligence.

It’s:

Whether they can be constrained.

Who defines their boundaries?

Who verifies their actions?

Who can stop them before risks materialize?

Who clarifies responsibility after issues occur?

And how are settlements handled when calling services globally?

Traditional finance isn’t incapable of supporting this.

But it will become increasingly awkward.

Because it was never designed with the premise that:

Software execution units will participate massively in economic activities.

Traditional finance isn’t incapable, but it’s increasingly unnatural.


When the main actor becomes an agent, the vague “self-speaking” crypto concepts become concrete

In the past, many saw crypto as full of vague buzzwords:

Programmable funds

Programmable identity

Permissionless

Global settlement

Trustless execution

Often, these sound like “nonsense.”

But if you replace the main actor with an AI agent, these concepts suddenly become much clearer.

Because what an agent truly needs might be:

  • A native, callable form of funds
  • An execution identity that doesn’t have to be a “company account”
  • Programmatically constrained budgets and permissions
  • Low-friction global settlement
  • Native connections between actions and assets

At this point, looking at wallets from a new perspective:

A wallet isn’t just a “place to store tokens.”

It’s more like:

An execution container with permission boundaries.

Wallets aren’t just “asset storage,” but execution containers for agents.

They hold not only assets.

They can also hold rules:

  • What actions are permitted
  • How much can be spent
  • Which actions can be automated
  • Thresholds requiring manual approval
  • Read-only or writable modes
  • On-chain policies vs. off-chain controls

From this perspective, the relationship between AI and wallets becomes very interesting:

AI is responsible for understanding.

Wallets are responsible for constraining.

Agents are responsible for acting.

This forms a complete system.


The real irony: AI’s core issue is trust, while crypto’s biggest deficiency is also trust

If I argue from an opponent’s perspective, I might say:

“You just said AI fundamentally lacks trust, so how come your answer points to crypto?”

That’s a fair point.

Because, in most people’s minds, crypto isn’t a “naturally trustworthy” system.

Private key management is complex.

On-chain transactions are irreversible.

Phishing and signature theft are common.

Smart contract risks are high.

Responsibility boundaries are often fuzzy.

And after issues occur, there’s often no one to backstop.

So, I don’t mean to say:

crypto has solved trust.

Quite the opposite.

My view is:

AI will force crypto to confront trust directly.

In the past, crypto could stay at the level of “transfer, use, run.”

But if it truly wants to be the execution layer for AI agents, it must address the hardest lessons:

  • Permission models
  • Security boundaries
  • Responsibility attribution
  • Risk management systems
  • Recoverability
  • Human-in-the-loop confirmation mechanisms

In other words, AI won’t automatically improve crypto.

Instead, AI will expose and force crypto to confront its most vague, lazy, narrative-driven weaknesses.

So I’m not claiming crypto is already the answer.

I’m saying:

If a truly agent-native execution infrastructure emerges in the future, it will likely look more like crypto than today’s traditional accounts.


So the question has probably never been “Will crypto leverage AI to go viral?”

This is also the most annoying recent misconception I see.

Many people, when talking about AI x crypto, automatically think:

Crypto is just riding AI again.

Crypto is trying to tell a new story with AI.

Crypto needs AI to extend its life.

I don’t deny that many projects are doing this, and quite a few indeed.

But if we only stay at this level, we miss a more fundamental layer:

Once AI truly moves into execution, it will inevitably bump into issues of funds, permissions, responsibility, identity, and settlement.

And these issues can’t be solved just by “more powerful models.”

They are fundamentally infrastructure problems.

In other words, as AI develops further, it will increasingly approach the problem domains that crypto is good at handling.

Not because crypto is more advanced than AI.

But because, as AI reaches into the real world, it must confront:

  • How money moves
  • How permissions are granted
  • How responsibility is assigned

And these aren’t problems that prompt engineering alone can solve.

The real missing piece for AI isn’t smarter models, but more trustworthy infrastructure.

I increasingly believe that the hardest part of AI × crypto isn’t intelligence.

It’s trust.

You can create a stunning demo:

  • Swap with a single sentence
  • Bridge with a single sentence
  • Configure assets automatically
  • Execute on-chain actions with a single command

It all sounds very futuristic.

But would users really dare to use it?

Would they dare to try once? Or use it long-term?

And if they do, how is responsibility handled if something goes wrong?

Would products dare to promise it?

Would platforms dare to back it?

Would developers dare to open higher permissions?

In the end, you realize that what truly limits AI agents from entering finance and asset worlds isn’t their intelligence.

It’s:

Whether they can be constrained.

Who defines their boundaries?

Who verifies their actions?

Who can stop them before risks materialize?

Who clarifies responsibility after issues occur?

And how are settlements handled when calling services globally?

Traditional finance isn’t incapable of supporting this.

But it will become increasingly awkward.

Because it was never designed with the premise that:

Software execution units will participate massively in economic activities.

Traditional finance isn’t incapable, but it’s increasingly unnatural.


When the main actor becomes an agent, the vague “self-speaking” crypto concepts become concrete

In the past, many saw crypto as full of vague buzzwords:

Programmable funds

Programmable identity

Permissionless

Global settlement

Trustless execution

Often, these sound like “nonsense.”

But if you replace the main actor with an AI agent, these concepts suddenly become much clearer.

Because what an agent truly needs might be:

  • A native, callable form of funds
  • An execution identity that doesn’t have to be a “company account”
  • Programmatically constrained budgets and permissions
  • Low-friction global settlement
  • Native connections between actions and assets

At this point, looking at wallets from a new perspective:

A wallet isn’t just a “place to store tokens.”

It’s more like:

An execution container with permission boundaries.

Wallets aren’t just “asset storage,” but execution containers for agents.

They hold not only assets.

They can also hold rules:

  • What actions are permitted
  • How much can be spent
  • Which actions can be automated
  • Thresholds requiring manual approval
  • Read-only or writable modes
  • On-chain policies vs. off-chain controls

From this perspective, the relationship between AI and wallets becomes very interesting:

AI is responsible for understanding.

Wallets are responsible for constraining.

Agents are responsible for acting.

This forms a complete system.


The real irony: AI’s core issue is trust, while crypto’s biggest deficiency is also trust

If I argue from an opponent’s perspective, I might say:

“You just said AI fundamentally lacks trust, so how come your answer points to crypto?”

That’s a fair point.

Because, in most people’s minds, crypto isn’t a “naturally trustworthy” system.

Private key management is complex.

On-chain transactions are irreversible.

Phishing and signature theft are common.

Smart contract risks are high.

Responsibility boundaries are often fuzzy.

And after issues occur, there’s often no one to backstop.

So, I don’t mean to say:

crypto has solved trust.

Quite the opposite.

My view is:

AI will force crypto to confront trust directly.

In the past, crypto could stay at the level of “transfer, use, run.”

But if it truly wants to be the execution layer for AI agents, it must address the hardest lessons:

  • Permission models
  • Security boundaries
  • Responsibility attribution
  • Risk management systems
  • Recoverability
  • Human-in-the-loop confirmation mechanisms

In other words, AI won’t automatically improve crypto.

Instead, AI will expose and force crypto to confront its most vague, lazy, narrative-driven weaknesses.

So I’m not claiming crypto is already the answer.

I’m saying:

If a truly agent-native execution infrastructure emerges in the future, it will likely look more like crypto than today’s traditional accounts.


So the question has probably never been “Will crypto leverage AI to go viral?”

This is also the misconception I've found most annoying lately.

Many people, when talking about AI x crypto, automatically think:

Crypto is just riding AI again.

Crypto is trying to tell a new story with AI.

Crypto needs AI to extend its life.

I don’t deny that many projects are doing exactly this; quite a few, in fact.

But if we only stay at this level, we miss a more fundamental layer:

Once AI truly moves into execution, it will inevitably bump into issues of funds, permissions, responsibility, identity, and settlement.

And these issues can’t be solved just by “more powerful models.”

They are fundamentally infrastructure problems.

In other words, as AI develops further, it will increasingly approach the problem domains that crypto is good at handling.

Not because crypto is more advanced than AI.

But because, as AI reaches into the real world, it must confront:

  • How money moves
  • How permissions are granted
  • How responsibility is assigned

And these aren’t problems that prompt engineering alone can solve.

The real missing piece for AI isn’t smarter models, but more trustworthy infrastructure.

I increasingly believe that the hardest part of AI × crypto isn’t intelligence.

It’s trust.

You can create a stunning demo:

  • Swap with a single sentence
  • Bridge with a single sentence
  • Configure assets automatically
  • Execute on-chain actions with a single command

It all sounds very futuristic.

But would users really dare to use it?

Would they dare to try once? Or use it long-term?

And if they do, how is responsibility handled if something goes wrong?

Would products dare to promise it?

Would platforms dare to back it?

Would developers dare to open higher permissions?

In the end, you realize that what truly limits AI agents from entering finance and asset worlds isn’t their intelligence.

It’s:

Whether they can be constrained.

Who defines their boundaries?

Who verifies their actions?

Who can stop them before risks materialize?

Who clarifies responsibility after issues occur?

And how are settlements handled when calling services globally?

Traditional finance isn’t incapable of supporting this.

But it will become increasingly awkward.

Because it was never designed with the premise that:

Software execution units will participate massively in economic activities.

Traditional finance isn’t incapable, but it’s increasingly unnatural.


When the main actor becomes an agent, crypto’s vague, self-referential concepts become concrete

In the past, many saw crypto as full of vague buzzwords:

  • Programmable funds
  • Programmable identity
  • Permissionless
  • Global settlement
  • Trustless execution

Too often, they sound like empty slogans.

But if you replace the main actor with an AI agent, these concepts suddenly become much clearer.

Because what an agent truly needs might be:

  • A native, callable form of funds
  • An execution identity that doesn’t have to be a “company account”
  • Programmatically constrained budgets and permissions
  • Low-friction global settlement
  • Native connections between actions and assets
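The first and fourth items, funds as a natively callable object settled with low friction, can be illustrated with a toy in-memory ledger. This is an assumption-laden sketch with no real chain behind it: two agent identities hold balances and settle a micro-payment in a single state update.

```python
# Toy in-memory ledger; no real chain, purely illustrative.
# "agent-a" and "agent-b" are execution identities, not company accounts.
balances = {"agent-a": 10.0, "agent-b": 0.0}

def settle(payer: str, payee: str, amount: float) -> bool:
    """Debit and credit in one step, or refuse outright."""
    if balances.get(payer, 0.0) < amount:
        return False  # the budget constraint lives in the funds themselves
    balances[payer] -= amount
    balances[payee] += amount
    return True

settle("agent-a", "agent-b", 0.25)  # a micro-payment between two agents
```

Notice there is no bank, no invoice, no business-day delay: the transfer is just a function call, which is the shape of settlement an agent can actually use.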

At this point, looking at wallets from a new perspective:

A wallet isn’t just a “place to store tokens.”

It’s more like:

An execution container with permission boundaries.

Wallets aren’t just “asset storage,” but execution containers for agents.

They hold not only assets.
