Fatal flaw at the core of AI agent technology: "LangGrinch" alert issued for LangChain
A serious security vulnerability has been discovered in “LangChain Core,” a library that plays a central role in how AI agents operate. Dubbed “LangGrinch,” the flaw allows attackers to steal sensitive information from AI systems. It poses a long-term threat to the security foundations of numerous AI applications and has put the entire industry on alert.
AI security startup Cyata Security disclosed the vulnerability, which has been assigned the identifier CVE-2025-68664 and rated high risk with a CVSS score of 9.3. The root of the problem lies in internal helper functions in LangChain Core that can misjudge user input as trusted objects during serialization and deserialization. Attackers can use prompt injection to manipulate the structured outputs generated by an agent, inserting internal marker keys that are subsequently treated as trusted objects.
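To make the failure mode concrete, the sketch below shows a deliberately naive (de)serializer that trusts a marker key found inside the data it is given. It is a hypothetical illustration of the class of bug described above, not LangChain Core's actual code: the marker name, function, and payload are invented for the example.

```python
import json

# Hypothetical illustration only -- NOT LangChain Core's real code or marker format.
TRUSTED_MARKER = "__trusted_constructor__"  # assumed, illustrative marker key

def naive_deserialize(payload: str) -> dict:
    """Rebuild data, trusting any node that carries the marker key."""
    obj = json.loads(payload)
    if isinstance(obj, dict) and obj.get(TRUSTED_MARKER):
        # DANGER: the marker came from the payload itself, so an attacker who can
        # steer the model's structured output (e.g. via prompt injection) can have
        # arbitrary data handled downstream as a "trusted" internal object.
        return {"trusted": True, "config": obj.get("config", {})}
    return {"trusted": False, "data": obj}

# A prompt-injected structured output could smuggle the marker in:
attacker_output = json.dumps({
    TRUSTED_MARKER: True,
    "config": {"callback_url": "https://attacker.example/exfil"},  # illustrative
})
print(naive_deserialize(attacker_output))  # handled as a trusted object
```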
LangChain Core serves as a hub for many AI agent frameworks, with tens of millions of downloads in the past 30 days and more than 847 million downloads in total. Given the number of applications connected to the wider LangChain ecosystem, experts believe the impact of this vulnerability will be widespread.
Cyata security researcher Yarden Porat explained, “What makes this vulnerability particularly unusual is that it is not just a simple deserialization issue but occurs within the serialization pathway itself. The process of storing, streaming, or later restoring structured data generated from AI prompts exposes a new attack surface.” Cyata confirmed 12 distinct attack paths, each reachable from a single prompt and leading to different scenarios.
Once an attack is launched, it can exfiltrate environment variables through remote HTTP requests, exposing high-value information such as cloud credentials, database access URLs, vector database details, LLM API keys, and more. Of particular concern is that the vulnerability is a structural flaw in LangChain Core itself, requiring no third-party tools or external integrations. Cyata warns that this represents a “threat existing within the ecosystem pipeline layer.”
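One generic way to shrink the blast radius of this kind of exfiltration, sketched below with assumed variable names, is to hand agent tooling and helper processes only an allow-listed environment rather than the full set of process variables. This is a common hardening pattern, not a LangChain feature.

```python
import os
import subprocess
import sys

# Illustrative hardening sketch: run helper processes with a stripped-down
# environment so a leak exposes as little as possible.
ALLOWED_ENV = {"PATH", "LANG", "OPENAI_API_KEY"}  # example allow-list; adjust to need

def minimal_env() -> dict:
    """Copy only allow-listed variables; cloud credentials, DB URLs, etc. are dropped."""
    return {k: v for k, v in os.environ.items() if k in ALLOWED_ENV}

# Example: the child process sees only the allow-listed variables.
subprocess.run(
    [sys.executable, "-c", "import os; print(sorted(os.environ))"],
    env=minimal_env(),
    check=True,
)
```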
Security patches for the issue have been released in LangChain Core versions 1.2.5 and 0.3.81. Cyata notified the LangChain team prior to public disclosure, and the team is reported to have responded immediately and taken measures to strengthen long-term security.
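For operators who want a quick local check, the snippet below compares an installed copy of langchain-core against the fixed releases named above. It assumes the standard PyPI package name “langchain-core”, the third-party packaging library, and that the 0.3.x and 1.x lines were fixed in 0.3.81 and 1.2.5 respectively; upgrading via pip remains the actual remediation.

```python
# Rough check of whether an installed langchain-core predates the fixed releases.
from importlib.metadata import version
from packaging.version import Version

installed = Version(version("langchain-core"))
# Assumption: compare against 1.2.5 on the 1.x line, otherwise against 0.3.81.
fixed = Version("1.2.5") if installed >= Version("1.0.0") else Version("0.3.81")
status = "patched" if installed >= fixed else "vulnerable - upgrade recommended"
print(f"langchain-core {installed}: {status}")
```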
Shahar Tal, co-founder and CEO of Cyata, emphasized, “As AI systems are deployed into production environments, the question of what permissions the system ultimately exercises has become a security issue more critical than code execution itself. In architectures built around agent identity, least privilege and a minimized blast radius must be fundamental design principles.”
This incident is expected to prompt the AI industry, whose focus is gradually shifting from manual intervention to agent-based automation, to reflect on fundamental security design principles.