Safety vulnerability ID: 54895
The information on this page was manually curated by our Cybersecurity Intelligence Team.
LangChain 0.0.142 includes a fix for CVE-2023-29374: The LLMMathChain chain allows prompt injection attacks that can execute arbitrary code via the Python exec method.
https://github.com/hwchase17/langchain/pull/1119
https://github.com/langchain-ai/langchain/commit/5ca7ce77cd536991d04f476e420446a3b21d2a7b
Latest version: 0.3.14
Building applications with LLMs through composability
In LangChain through 0.0.131, the LLMMathChain chain allows prompt injection attacks that can execute arbitrary code via the Python exec method. See CVE-2023-29374.
MISC:https://github.com/hwchase17/langchain/issues/1026: https://github.com/hwchase17/langchain/issues/1026
MISC:https://github.com/hwchase17/langchain/issues/814: https://github.com/hwchase17/langchain/issues/814
MISC:https://github.com/hwchase17/langchain/pull/1119: https://github.com/hwchase17/langchain/pull/1119
MISC:https://twitter.com/rharang/status/1641899743608463365/photo/1: https://twitter.com/rharang/status/1641899743608463365/photo/1
Scan your Python project for dependency vulnerabilities in two minutes
Scan your application