Commissioned, Curated and Published by Russ. Researched and written with AI.


What’s New

27 Mar 2026: Three CVEs disclosed today by Cyera researcher Vladimir Tokarev hit LangChain and LangGraph simultaneously: a path traversal flaw reading arbitrary files (CVE-2026-34070, CVSS 7.5), a serialization injection bug that leaks environment variable sec…

Changelog

27 Mar 2026: Initial publication covering CVE-2026-34070, CVE-2025-68664, and CVE-2025-67644 with remediation guidance.

LangChain, LangChain-Core, and LangGraph were downloaded a combined 84 million times last week. That number matters because today three security vulnerabilities were disclosed across those packages - flaws that, in combination, could expose files from your server, pull secrets from your environment, and dump conversation history from your database. The attack surfaces are different, the severity varies, but the common thread is that these frameworks sit deep inside production AI stacks that often run with elevated access to everything the application touches.

The researcher is Vladimir Tokarev from Cyera. The three CVEs are CVE-2026-34070, CVE-2025-68664, and CVE-2025-67644. Patches are available. Here is what each one actually does.

CVE-2026-34070: Arbitrary File Read via Prompt Loading

The first flaw sits in langchain_core/prompts/loading.py and carries a CVSS score of 7.5. The prompt-loading API accepts a prompt template and, in the affected versions, performs no path validation on that input. Supply a crafted path and the framework will read arbitrary files from the host filesystem and return their contents.

What makes this worth taking seriously is the deployment pattern. LangChain applications frequently run in environments with broad file access - containers mounted to host paths, systems where .env files, API keys, and private certificates sit alongside application code. A path traversal flaw in a prompt-loading function that accepts user-influenced input is not a theoretical risk in that environment; it is a direct read path to credentials.

The attack chain is straightforward: if any part of your application passes user-controlled input into LangChain’s prompt-loading API, an attacker can direct that input to read files they should not have access to. That covers chatbot interfaces, document processing pipelines, and any agent that builds prompts from user-supplied content.
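The fix for this class of bug is to refuse any resolved path that escapes an allowed root before handing it to the loader. A minimal defensive sketch, assuming a hypothetical `/app/prompts` directory as the only legitimate prompt location (this is illustrative hardening, not LangChain's actual patch):

```python
from pathlib import Path

PROMPT_DIR = Path("/app/prompts")  # hypothetical allowed root for prompt templates

def safe_prompt_path(user_path: str) -> Path:
    """Resolve a user-supplied prompt path and refuse anything that
    escapes the allowed prompt directory (e.g. via ../ traversal)."""
    candidate = (PROMPT_DIR / user_path).resolve()
    # Path.is_relative_to is available from Python 3.9
    if not candidate.is_relative_to(PROMPT_DIR.resolve()):
        raise ValueError(f"prompt path escapes {PROMPT_DIR}: {user_path!r}")
    return candidate
```

Calling this with `"greeting.yaml"` returns a path inside the root; calling it with `"../../etc/passwd"` raises, because the resolved path lands outside `/app/prompts`.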

CVE-2025-68664: Serialization Injection and Secret Extraction

This one is more complex and has been circulating under the name “LangGrinch” since its initial Python disclosure. The vulnerability lives in LangChain’s serialization layer - specifically in the Serializable.toJSON() method and its counterpart in LangChain.js.

The core issue: LangChain uses an internal lc key structure to mark serialized objects. The serializer failed to escape user-controlled data containing those same key patterns. When kwargs fields like additional_kwargs, metadata, or response_metadata contained a structure like {"lc": 1, "type": "secret", "id": ["OPENAI_API_KEY"]}, the deserializer treated it as a legitimate LangChain secret reference rather than inert user data.

With secretsFromEnv enabled (which had no explicit default, effectively making it true), that injected structure would cause the framework to load the named environment variable and return its value during deserialization. An attacker who controls the serialized data can extract any environment variable accessible to the process.
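A toy model makes the failure mode concrete. The sketch below is not LangChain's code; it imitates the vulnerable behaviour: a deserializer that treats any secret-shaped `lc` dict as a genuine secret reference, even when it arrived inside a user-controlled field:

```python
import os

def naive_resolve(node):
    """Toy model of the vulnerable deserialization: any dict shaped like
    an internal secret reference is resolved from the environment,
    regardless of where in the payload it appears."""
    if isinstance(node, dict):
        if node.get("lc") == 1 and node.get("type") == "secret":
            # The deserializer trusts the reference and loads the variable.
            return os.environ.get(node["id"][0])
        return {k: naive_resolve(v) for k, v in node.items()}
    if isinstance(node, list):
        return [naive_resolve(v) for v in node]
    return node

os.environ["OPENAI_API_KEY"] = "sk-demo-000"  # stand-in value for the demo
payload = {"additional_kwargs": {"note": {"lc": 1, "type": "secret", "id": ["OPENAI_API_KEY"]}}}
print(naive_resolve(payload))
# → {'additional_kwargs': {'note': 'sk-demo-000'}}
```

The payload never had to be a secret; it only had to look like one after a round trip through serialization.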

The attack path that makes this especially dangerous: LLM response fields - specifically additional_kwargs - can be influenced via prompt injection. That creates a two-step chain. First, an attacker uses prompt injection to plant a crafted lc payload into an LLM response. Second, when that response is serialized and then deserialized (common in streaming pipelines and LangGraph checkpointing), the injected structure resolves the target environment variable and returns it.

The fix changes secretsFromEnv to default to false and introduces an escape mechanism in toJSON() that wraps user-controlled lc-keyed objects before they can be misinterpreted. It also adds a maxDepth parameter to guard against denial-of-service via deeply nested structures.
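The shape of that fix can be sketched in a few lines. The wrapper key and the depth limit below are hypothetical (the real patch uses its own marker and parameters); the point is the technique: escape user dicts that collide with the internal `lc` key, and bound recursion depth:

```python
def escape_user_data(node, depth=0, max_depth=100):
    """Wrap user-controlled dicts that collide with the internal 'lc'
    key so a later deserializer treats them as inert data, and refuse
    pathologically deep structures (denial-of-service guard)."""
    if depth > max_depth:
        raise ValueError("serialized structure exceeds max_depth")
    if isinstance(node, dict):
        escaped = {k: escape_user_data(v, depth + 1, max_depth) for k, v in node.items()}
        if "lc" in escaped:
            # Hypothetical escape wrapper; anything under it is never
            # interpreted as a framework-internal object.
            return {"__escaped__": escaped}
        return escaped
    if isinstance(node, list):
        return [escape_user_data(v, depth + 1, max_depth) for v in node]
    return node
```

After escaping, the injected secret reference from the previous example survives the round trip as plain data rather than resolving to an environment variable.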

Any application that serializes LLM responses and later deserializes them is in scope. That covers a significant portion of production LangChain deployments.

CVE-2025-67644: SQL Injection in LangGraph SQLite Checkpointer

The third flaw is SQL injection, and it affects langgraph-checkpoint-sqlite versions 3.0.0 and below. LangGraph’s SQLite checkpoint backend - used to persist graph state across agent steps - exposes list() and alist() methods that accept metadata filter parameters. The filter keys were passed directly into SQL queries without sanitisation.

An attacker who can supply metadata filter keys to checkpoint search operations can inject arbitrary SQL and read any data in the checkpoint database. That database contains conversation history, agent state, intermediate reasoning steps, and potentially session context from every conversation the application has handled. The CVSS score is 7.3.

The scope condition matters here: the vulnerability requires the ability to influence metadata filter keys, not just filter values. Applications that accept untrusted filter keys in checkpoint queries are directly exposed. Applications that use LangGraph with fixed, internal filter keys are at reduced risk.
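The defensive pattern here is standard SQL hygiene with one twist: filter values can be bound as parameters, but filter keys become identifiers in the query text, so they must be allowlisted. A minimal sketch with a hypothetical schema and key set (not the actual langgraph-checkpoint-sqlite internals):

```python
import sqlite3

ALLOWED_KEYS = {"thread_id", "run_id", "source"}  # hypothetical metadata keys

def search_checkpoints(conn, filters):
    """Build a metadata filter query safely: keys are allowlisted
    before entering the SQL text, values are bound parameters."""
    clauses, params = [], []
    for key, value in filters.items():
        if key not in ALLOWED_KEYS:
            raise ValueError(f"unexpected filter key: {key!r}")
        clauses.append(f"json_extract(metadata, '$.{key}') = ?")
        params.append(value)
    sql = "SELECT checkpoint FROM checkpoints"
    if clauses:
        sql += " WHERE " + " AND ".join(clauses)
    return conn.execute(sql, params).fetchall()
```

A crafted key like `x') OR 1=1 --` never reaches the query builder; it fails the allowlist check first.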

The fix is in langgraph-checkpoint-sqlite 3.0.1. The affected package is separate from langgraph itself, which means a targeted upgrade is sufficient.

The JsonPlusSerializer RCE

Separate from the three CVEs above, a remote code execution vulnerability (GHSA-wwqv-p2pp-99h5) was patched in langgraph-checkpoint 3.0.0 and deserves a mention. The default serializer uses msgpack, but falls back to a JSON mode when Unicode surrogate values cause serialization to fail. In that JSON fallback mode, a constructor-style format (lc == 2, type == "constructor") was supported for deserializing custom objects. The deserializer would call arbitrary functions specified by the attacker via the id field - including os.system.

If an attacker can cause your application to persist a payload containing Unicode surrogates and inject a constructor-format object, arbitrary Python code executes the next time that checkpoint is loaded. The published proof of concept demonstrates this with a two-call sequence: first call writes the payload, second call triggers execution.

This is fixed by an allowlist for constructor deserialization in langgraph-checkpoint 3.0.0, alongside deprecation of the JSON fallback serialization path.
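An allowlist gate for constructor-style deserialization can be sketched as follows. The allowlist contents and the payload shape handling here are illustrative, not the shipped implementation; the point is that the function named by attacker-controlled data is checked before it is ever imported or called:

```python
import importlib

# Hypothetical allowlist: the only constructors that may be invoked
# while deserializing an lc == 2 "constructor" object.
ALLOWED_CONSTRUCTORS = {("collections", "OrderedDict")}

def load_constructor(obj):
    """Deserialize a constructor-format object only if its target is
    allowlisted; anything else (e.g. os.system) is refused outright."""
    if obj.get("lc") == 2 and obj.get("type") == "constructor":
        module, name = ".".join(obj["id"][:-1]), obj["id"][-1]
        if (module, name) not in ALLOWED_CONSTRUCTORS:
            raise ValueError(f"constructor not allowlisted: {module}.{name}")
        fn = getattr(importlib.import_module(module), name)
        return fn(*obj.get("args", []))
    return obj
```

With this gate, the published `os.system` payload raises instead of executing, while legitimate allowlisted objects still round-trip.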

Why the Blast Radius Is Different Here

Most library vulnerabilities require the attacker to reach the vulnerable function. With LangChain and LangGraph, the vulnerable functions are often directly in the path of user input by design. Prompt loading, checkpoint serialization, and conversation history are core functionality - not edge cases. And these frameworks typically run inside applications that have been given broad access to databases, file systems, and API credentials specifically so they can operate as agents.

The 84 million weekly download figure is not just a scale number. It reflects how deeply embedded these packages are. Many applications depend on LangChain through intermediate libraries without the application author knowing. That is the same dynamic that made the Log4Shell blast radius so difficult to bound: the vulnerable component is a transitive dependency in a large fraction of the affected surface.

The prompt injection angle on CVE-2025-68664 adds another layer. It means the attack vector is not just “supply bad data to the API” - it is “convince the LLM to put bad data in its response.” That is a vector that does not require direct API access or authenticated access. Any endpoint that processes user input with a LangChain pipeline and then serializes the result is potentially reachable.

What Engineers Need to Do

Immediate checks:

  • Identify which versions of langchain-core, langchain, langgraph-checkpoint, and langgraph-checkpoint-sqlite your applications are running. Run pip show langchain-core langchain langgraph-checkpoint langgraph-checkpoint-sqlite or check your lock files.

  • Patch to the fixed versions: langgraph-checkpoint >= 3.0.0 (addresses the JsonPlusSerializer RCE), langgraph-checkpoint-sqlite >= 3.0.1 (addresses CVE-2025-67644). The LangChain-Core patch for CVE-2025-68664 changes secretsFromEnv to default to false - check the release notes for your target version.

  • For CVE-2026-34070, review any code path that passes user-controlled input to LangChain’s prompt-loading API. Update to the patched LangChain version and audit for untrusted input reaching that function.
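The version check above can be scripted. A small audit sketch using the standard library, with the minimums taken from the advisories in this article (the langchain-core fixed version depends on your release line, so it is omitted here; the parser is deliberately crude and assumes plain X.Y.Z release strings):

```python
from importlib.metadata import version, PackageNotFoundError

# Minimum safe versions per the advisories discussed above.
MINIMUMS = {
    "langgraph-checkpoint": (3, 0, 0),
    "langgraph-checkpoint-sqlite": (3, 0, 1),
}

def parse(v: str) -> tuple:
    """Crude numeric-prefix parse; enough for X.Y.Z release strings."""
    return tuple(int(p) for p in v.split(".")[:3] if p.isdigit())

def audit():
    """Print an OK/UPGRADE status for each target package that is installed."""
    for pkg, minimum in MINIMUMS.items():
        try:
            installed = version(pkg)
        except PackageNotFoundError:
            continue  # not installed, nothing to patch
        status = "OK" if parse(installed) >= minimum else "UPGRADE"
        print(f"{pkg} {installed}: {status}")
```

Running `audit()` in each deployment environment (not just on a developer machine) surfaces the instances that lag behind the patched releases.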

Structural changes worth making regardless:

Never store secrets as environment variables accessible to AI agent processes if those processes handle user input. Use a secrets manager and inject credentials only to the specific functions that need them. The attack surface for CVE-2025-68664 only exists because the targeted environment variables were reachable.

For LangGraph applications using SQLite checkpointing: if you accept user-supplied filter keys for checkpoint queries, validate and allowlist them before they reach the database layer. The fix in 3.0.1 patches the library, but the principle - treat metadata keys as untrusted input - applies beyond this specific CVE.

Review your checkpoint data retention policy. Conversation history and agent state stored in LangGraph checkpoints may contain sensitive data. If that data is being held indefinitely in a SQLite file with broad process access, the exposure from CVE-2025-67644 is larger than just the query injection.

Supply chain hygiene:

If you are not directly using LangChain but your stack does - through an agent framework, a vector database client, or an orchestration tool - you may still be exposed. Run a dependency audit and surface transitive LangChain dependencies. The relevant packages to look for are langchain-core, langgraph-checkpoint, and langgraph-checkpoint-sqlite.

These are not theoretical flaws being disclosed years after discovery. The Cyera research was published today. The patches exist, but the window between patch and deployment across 84 million weekly download instances will be measured in weeks, not hours. Prioritise accordingly.