Critical LangChain Core Vulnerability Exposes Secrets Through Serialization Injection

A critical vulnerability has been identified in LangChain Core that an attacker could abuse to exfiltrate sensitive secrets and influence large language model (LLM) behavior via prompt injection.

LangChain Core (i.e., langchain-core) is a core Python package in the LangChain ecosystem, exposing the fundamental interfaces and model-agnostic abstractions that developers use to build LLM-powered applications.

The issue, tracked as CVE-2025-68664, has a CVSS score of 9.3 out of 10.0. Security researcher Yarden Porat reported the problem on December 4, 2025. The vulnerability has been nicknamed LangGrinch.

“A serialization injection vulnerability exists in LangChain’s dumps() and dumpd() functions,” the project maintainers said in an advisory. “The functions do not escape dictionaries with ‘lc’ keys when serializing free-form dictionaries.”

“The ‘lc’ key is used internally by LangChain to mark serialized objects. When user-controlled data contains this key structure, it is treated as a legitimate LangChain object during deserialization rather than plain user data.”

According to Cyata researcher Porat, the core issue is that the two functions fail to escape user-controlled dictionaries that include “lc” keys. The “lc” marker denotes LangChain objects in the framework’s internal serialization format.
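To make the format concrete, the sketch below contrasts a dictionary that langchain-core emits for one of its own objects with an attacker-shaped free-form dictionary that reuses the same “lc”/“type”/“id”/“kwargs” structure; the payload contents are illustrative assumptions, not the researcher’s proof of concept.

```python
# Minimal sketch of the ambiguity, assuming LangChain's documented
# serialization shape ({"lc": ..., "type": ..., "id": ..., "kwargs": ...}).
# Illustrative only; not a reproduction of the reported exploit.

# What langchain-core itself emits when serializing one of its own objects:
legitimate = {
    "lc": 1,
    "type": "constructor",
    "id": ["langchain", "schema", "messages", "HumanMessage"],
    "kwargs": {"content": "hello"},
}

# What an attacker can place inside a free-form field (metadata,
# additional_kwargs, etc.). Because dumps()/dumpd() did not escape the
# "lc" key, the deserializer cannot tell this apart from the dict above.
attacker_supplied = {
    "lc": 1,
    "type": "constructor",
    "id": ["langchain", "some", "TrustedClass"],  # hypothetical class path
    "kwargs": {"attacker": "controlled"},
}
```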

“So once an attacker is able to make a LangChain orchestration loop serialize and later deserialize content including an ‘lc’ key, they would instantiate an unsafe arbitrary object, potentially triggering many attacker-friendly paths,” Porat said.

The impact can include secret extraction from environment variables when deserialization runs with “secrets_from_env=True” (historically enabled by default), instantiation of classes from pre-approved trusted namespaces such as langchain_core, langchain, and langchain_community, and in some cases escalation to arbitrary code execution through Jinja2 templates.
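For context, the same serialization format also defines a “secret” marker that the loader resolves from a secrets map or, when “secrets_from_env” is enabled, from environment variables; the snippet below is a hedged illustration of that payload shape (the environment variable name is only a placeholder), not a working exploit.

```python
# Illustration of the "secret" marker in LangChain's serialization format.
# When deserialization runs with secrets_from_env=True (the historic
# default), markers like this are resolved from the process environment,
# which is what turns injected "lc" structures into a secret-exfiltration
# vector. The env var name below is a placeholder.
secret_marker = {
    "lc": 1,
    "type": "secret",
    "id": ["OPENAI_API_KEY"],  # resolved from os.environ on load
}
```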

In addition, the escaping flaw allows attackers to inject LangChain object structures through user-controllable fields such as metadata, additional_kwargs, or response_metadata via prompt injection.
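A minimal sketch of that surface, assuming a vulnerable langchain-core release and a hypothetical injected class path, is shown below; it only demonstrates that the nested “lc” structure passes through serialization unescaped and does not perform any exploitation.

```python
# Sketch of the reported attack surface, assuming a vulnerable
# langchain-core release (< 0.3.81 / < 1.2.5). The nested payload is a
# hypothetical placeholder, not a working exploit.
from langchain_core.load import dumps
from langchain_core.messages import AIMessage

# An LLM response whose additional_kwargs the model (and therefore a
# prompt injector) can influence. The nested dict mimics LangChain's own
# "lc" object marker.
msg = AIMessage(
    content="benign-looking answer",
    additional_kwargs={
        "injected": {
            "lc": 1,
            "type": "constructor",
            "id": ["langchain", "some", "TrustedClass"],  # hypothetical
            "kwargs": {},
        }
    },
)

serialized = dumps(msg)
# On vulnerable versions the nested "lc" dict is written out unescaped, so a
# later loads()/load() call in a streaming or persistence path would treat it
# as a real LangChain object to instantiate rather than as plain user data.
```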

The LangChain fix ships stricter defaults for load() and loads() using an allowlist parameter “allowed_objects” that lets users define which classes are permitted for serialization/deserialization. Jinja2 templates are now blocked by default, and the “secrets_from_env” option defaults to “False,” disabling automatic secret loading from environment variables.
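Based on the parameter names given in the advisory, hardened deserialization might look like the sketch below; the exact value format accepted by “allowed_objects” is version-specific, so the class list shown is an assumption for illustration.

```python
# Sketch of post-patch usage, based on the parameters named in the advisory
# (an "allowed_objects" allowlist and "secrets_from_env" defaulting to False).
# The accepted value format for allowed_objects may differ by release; the
# class list here is an assumption for illustration.
from langchain_core.load import dumps, loads
from langchain_core.messages import AIMessage

payload = dumps(AIMessage(content="hello"))  # produced by trusted code

restored = loads(
    payload,
    allowed_objects=[AIMessage],  # only explicitly allowlisted classes load
    secrets_from_env=False,       # no implicit secret loading from os.environ
)
```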

The following versions of langchain-core are affected by CVE-2025-68664 –

  • >= 1.0.0, < 1.2.5 (Fixed in 1.2.5)
  • < 0.3.81 (Fixed in 0.3.81)

There is also a related serialization injection issue in LangChain.js arising from the same failure to correctly escape objects with “lc” keys, which similarly allows secret theft and prompt injection. This bug is cataloged as CVE-2025-68665 (CVSS score: 8.6).

It affects the following npm packages –

  • @langchain/core >= 1.0.0, < 1.1.8 (Fixed in 1.1.8)
  • @langchain/core < 0.3.80 (Fixed in 0.3.80)
  • langchain >= 1.0.0, < 1.2.3 (Fixed in 1.2.3)
  • langchain < 0.3.37 (Fixed in 0.3.37)

Given the severity and exploitability, teams operating LangChain-based workloads should upgrade to the patched releases without delay and update their security playbooks and baselines accordingly.

“The most common attack vector is through LLM response fields like additional_kwargs or response_metadata, which can be controlled via prompt injection and then serialized/deserialized in streaming operations,” Porat said. “This is exactly the kind of ‘AI meets classic security’ intersection where organizations get caught off guard. LLM output is an untrusted input.”
