The core technology of AI agents has a fatal flaw... "LangGrinch" alert issued for LangChain


A serious security vulnerability has been discovered in “LangChain Core,” a library that plays a central role in how AI agents operate. Dubbed “LangGrinch,” the flaw lets attackers steal sensitive information from inside AI systems. It poses a long-term threat to the security foundations of numerous AI applications and has put the entire industry on alert.

AI security startup Cyata Security disclosed the vulnerability, which has been assigned the identifier CVE-2025-68664 and rated high risk with a CVSS score of 9.3. The root of the problem lies in internal helper functions within LangChain Core that can mistake user input for trusted objects during serialization and deserialization. By exploiting “prompt injection” techniques, attackers can manipulate the structured output generated by an agent, inserting internal marker keys that are subsequently treated as trusted objects.
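To make the mechanism concrete, here is a minimal, hypothetical sketch of this class of flaw. The marker key, registry, and loader below are invented for illustration and are not LangChain's actual internals: a loader that treats any dict carrying an internal marker key as trusted will instantiate objects described by attacker-shaped model output.

```python
# Hypothetical illustration of the vulnerability class (not LangChain's
# actual code): a loader that trusts any dict carrying an internal
# marker key will instantiate attacker-chosen objects.
import json

TRUSTED_MARKER = "__internal_obj__"  # hypothetical internal marker key

# Registry of "internal" constructors the loader is willing to build.
REGISTRY = {"SecretStore": lambda **kwargs: {"secret_store": kwargs}}

def naive_load(node):
    """Recursively restore data; dicts carrying the marker are 'trusted'."""
    if isinstance(node, dict):
        if node.get(TRUSTED_MARKER):  # flaw: no check of where the dict came from
            return REGISTRY[node["type"]](**naive_load(node.get("kwargs", {})))
        return {k: naive_load(v) for k, v in node.items()}
    if isinstance(node, list):
        return [naive_load(v) for v in node]
    return node

# Structured model output shaped by a prompt injection: ordinary-looking
# JSON that smuggles in the internal marker key.
malicious = json.loads(
    '{"answer": {"__internal_obj__": true, "type": "SecretStore", "kwargs": {}}}'
)
print(naive_load(malicious))  # the loader instantiates an internal object
```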

LangChain Core serves as a hub for many AI agent frameworks, recording tens of millions of downloads in the past 30 days and more than 847 million downloads overall. Given the applications connected throughout the LangChain ecosystem, experts expect the vulnerability's impact to be widespread.

Cyata security researcher Yarden Porat explained, “What makes this vulnerability particularly unusual is that it is not a simple deserialization issue but occurs within the serialization pathway itself. Storing, streaming, or later restoring structured data generated from AI prompts exposes a new attack surface.” Cyata confirmed that a single prompt can trigger 12 distinct attack paths, each leading to a different scenario.
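The serialization-side exposure Porat describes can be sketched with the same hypothetical marker key as above (again, not LangChain's real wire format): a dumper that fails to escape attacker-supplied marker keys writes them out verbatim, so the payload only “detonates” when the stored data is restored later.

```python
# Hedged sketch of why the serialization side matters: the injected
# marker key passes through the dumper unescaped, so any later restore
# step will mistake the attacker's data for a trusted internal object.
import json

MARKER = "__internal_obj__"  # hypothetical marker key, as above

def naive_dump(node) -> str:
    # Flaw: dicts originating from model output that contain MARKER should
    # be escaped so they round-trip as plain data, but are written as-is.
    return json.dumps(node)

agent_output = {"answer": {MARKER: True, "type": "SecretStore", "kwargs": {}}}
stored = naive_dump(agent_output)    # persisted to a DB, cache, or stream
restored = json.loads(stored)        # restored later in the pipeline
assert MARKER in restored["answer"]  # marker survived: it now looks "internal"
```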

Once exploited, the flaw can trigger remote HTTP requests that leak environment variables containing high-value secrets such as cloud credentials, database access URLs, vector database details, and LLM API keys. Of particular concern is that this is a structural flaw in LangChain Core itself, requiring no third-party tools or external integrations; Cyata warns that it represents a “threat existing within the ecosystem's pipeline layer.”
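Until patched versions are rolled out everywhere, one mitigation pattern is to sanitize model output before it reaches any serializer. The sketch below is our illustration, not vendor guidance, and reuses the hypothetical marker key from the earlier examples.

```python
# Illustrative mitigation sketch (our suggestion, not vendor guidance):
# strip suspicious internal marker keys from model output before it is
# ever serialized or persisted.
SUSPICIOUS_KEYS = {"__internal_obj__"}  # hypothetical marker key from above

def sanitize(node):
    """Recursively drop keys that could be mistaken for internal markers."""
    if isinstance(node, dict):
        return {k: sanitize(v) for k, v in node.items()
                if k not in SUSPICIOUS_KEYS}
    if isinstance(node, list):
        return [sanitize(v) for v in node]
    return node

clean = sanitize({"answer": {"__internal_obj__": True, "type": "SecretStore"}})
print(clean)  # {'answer': {'type': 'SecretStore'}} - marker removed
```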

Security patches fixing the issue have been released in LangChain Core versions 1.2.5 and 0.3.81. Cyata notified the LangChain team ahead of public disclosure, and the team is reported to have responded immediately and taken steps to harden long-term security.
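Operators can quickly check whether a deployment is on a patched release; the sketch below uses the standard importlib.metadata and packaging libraries, with the patched version numbers taken from the advisory above.

```python
# Quick local check that the installed langchain-core includes the fixes;
# the version thresholds come from the advisory, the check itself is ours.
from importlib.metadata import version
from packaging.version import Version

v = Version(version("langchain-core"))
# The 0.3.x line was patched in 0.3.81; the 1.x line in 1.2.5.
if v >= Version("1.0.0"):
    patched = v >= Version("1.2.5")
else:
    patched = v >= Version("0.3.81")
print(f"langchain-core {v}: {'patched' if patched else 'vulnerable - upgrade'}")
```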

Shahar Tal, co-founder and CEO of Cyata, emphasized, “As AI systems are deployed into production environments, the question of what permissions a system will ultimately consume has become a security issue more critical than code execution itself. In architectures built around agent identity, least privilege and minimizing the blast radius must be fundamental design principles.”

The incident is expected to prompt the AI industry, whose focus is steadily shifting from manual intervention to agent-based automation, to revisit its fundamental security design principles.
