LangSmith lets you create dataset examples with file attachments—like images, audio files, or documents—and use them in your prompts and evaluators when running evaluations with multimodal content. While you can include multimodal data in your examples by base64 encoding it, this approach is inefficient—the encoded data takes up more space than the original binary files, resulting in slower transfers to and from LangSmith. Using attachments instead provides two key benefits:
  • Faster upload and download speeds due to more efficient binary file transfers.
  • Enhanced visualization of different file types in the LangSmith UI.
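To see why base64 encoding is inefficient, note that it represents every 3 bytes of binary data as 4 ASCII characters, expanding the payload by roughly a third:

```python
import base64

# Base64 encodes every 3 bytes of binary data as 4 ASCII characters,
# so the encoded payload is about 33% larger than the original file.
raw = bytes(range(256)) * 4096          # ~1 MB of sample binary data
encoded = base64.b64encode(raw)

overhead = len(encoded) / len(raw) - 1
print(f"original: {len(raw)} bytes")
print(f"encoded:  {len(encoded)} bytes ({overhead:.0%} larger)")
```

Uploading the raw bytes as an attachment avoids this overhead entirely.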
This guide covers how to create examples with attachments, build multimodal prompts and evaluators that use those attachments, and run evaluations with multimodal content. Select the UI or SDK tab to get started.

1. Create examples with attachments

You can add examples with attachments to a dataset in a few different ways.

From existing runs

When adding runs to a LangSmith dataset, attachments can be selectively propagated from the source run to the destination example. To learn more, see this guide.

From scratch

You can create examples with attachments directly from the LangSmith UI. Click the + Example button in the Examples tab of the dataset UI, then upload attachments using the “Upload Files” button. Once uploaded, you can view examples with attachments in the LangSmith UI. Each attachment is rendered with a preview for easy inspection.
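If you prefer working programmatically, the same payload can be assembled in Python. The sketch below builds an example with an attachment as a mapping from attachment name to a `(mime_type, bytes)` pair; the exact upload call is shown only as a comment and assumes the `langsmith` Python SDK is installed and configured:

```python
# Each attachment is keyed by a name and holds a (mime_type, raw_bytes) pair.
# Placeholder bytes are used here; in practice you would read a real file,
# e.g. open("photo.png", "rb").read().
image_bytes = b"\x89PNG\r\n\x1a\n" + b"\x00" * 16

example = {
    "inputs": {"question": "What is shown in the image?"},
    "outputs": {"answer": "A sample image."},
    "attachments": {"image": ("image/png", image_bytes)},
}

# With the langsmith SDK you would then upload it, roughly:
#   from langsmith import Client
#   client = Client()
#   client.create_examples(dataset_id=dataset.id, examples=[example])

mime_type, data = example["attachments"]["image"]
print(mime_type, len(data))
```

Check the SDK reference for the exact signature your installed version supports; this is a sketch of the payload shape, not a definitive API call.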

2. Create a multimodal prompt

The LangSmith UI allows you to include attachments in your prompts when evaluating multimodal models. First, click the file icon in the message where you want to add multimodal content. Next, add a template variable for the attachment(s) you want to include for each example:
  • To include a specific attachment, use the suggested variable name, such as {{attachment.file_name}}. This maps the file named file_name in the attachment list so it is passed to the model.
  • To include all attachments, use the {{attachments}} variable.
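To make the two variable forms concrete, here is a purely illustrative sketch (not LangSmith's internal logic) of how each form could resolve against an example's attachment list:

```python
# Hypothetical illustration of how the two template variables resolve.
# An example's attachments, keyed by file name:
attachments = {
    "photo.png": ("image/png", b"..."),
    "clip.wav": ("audio/wav", b"..."),
}

def resolve(variable: str, attachments: dict) -> list[str]:
    """Return the attachment names a template variable selects."""
    if variable == "attachments":
        # {{attachments}} includes every attachment on the example.
        return list(attachments)
    if variable.startswith("attachment."):
        # {{attachment.<file_name>}} selects the single matching file.
        name = variable.removeprefix("attachment.")
        return [name] if name in attachments else []
    return []

print(resolve("attachments", attachments))
print(resolve("attachment.photo.png", attachments))
```

The first call selects both files; the second selects only photo.png.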

3. Define custom evaluators

You can create evaluators that use multimodal content from your dataset examples. Since your dataset already has examples with attachments (added in step 1), you can reference them directly in your evaluator. To do so:
  1. Select + Evaluator from the dataset page.
  2. In the Template variables editor, add a variable for the attachment(s) to include:
    • To include a specific attachment, use the suggested variable name, such as {{attachment.file_name}}. This maps the file named file_name in the attachment list so it is passed to the evaluator.
    • To include all attachments, use the {{attachments}} variable.
The evaluator can then use these attachments along with the model’s outputs to judge quality. For example, you could create an evaluator that:
  • Checks if an image description matches the actual image content.
  • Verifies if a transcription accurately reflects the audio.
  • Validates if extracted text from a PDF is correct.
You can also create text-only evaluators that don’t use attachments but evaluate the model’s text output:
  • OCR → text correction: Use a vision model to extract text from a document, then evaluate the accuracy of the extracted output.
  • Speech-to-text → transcription quality: Use a voice model to transcribe audio to text, then evaluate the transcription against your reference.
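As an illustration of the transcription-quality idea, a reference-based score can be sketched with nothing more than the standard library. A real LLM-as-judge evaluator would use a model rather than string similarity; this is only a minimal stand-in:

```python
from difflib import SequenceMatcher

def transcription_score(transcription: str, reference: str) -> float:
    """Similarity between a model transcription and a reference, in [0, 1]."""
    return SequenceMatcher(
        None, transcription.lower(), reference.lower()
    ).ratio()

score = transcription_score(
    "the quick brown fox jumps over the lazy dog",
    "The quick brown fox jumped over the lazy dog",
)
print(round(score, 2))
```

A score near 1.0 indicates the transcription closely matches the reference; the same shape (output plus reference in, score out) applies to the OCR case.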
If your traces contain base64-encoded multimodal content in their inputs or outputs (for example, if you followed the log multimodal traces guide), you don’t need attachments to evaluate them. Use standard variable mapping—such as {{input}} or {{output}}—in your evaluator prompt, and the base64 content will be passed correctly to the LLM evaluator for visualization and evaluation.
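For instance, base64 image content in a trace's inputs often takes the form of a data URL inside an OpenAI-style chat message, which standard variable mapping passes through as-is (placeholder bytes shown here):

```python
import base64

image_bytes = b"\x89PNG\r\n\x1a\n"          # placeholder image bytes
b64 = base64.b64encode(image_bytes).decode()

# A chat message whose content mixes text and base64-encoded image data.
message = {
    "role": "user",
    "content": [
        {"type": "text", "text": "Describe this image."},
        {
            "type": "image_url",
            "image_url": {"url": f"data:image/png;base64,{b64}"},
        },
    ],
}
print(message["content"][1]["image_url"]["url"][:22])
```

Mapping {{input}} to this message hands the full data URL to the LLM evaluator, so no attachment variables are needed.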
For more information on defining custom evaluators, see the LLM as Judge guide.

4. Update examples with attachments

Attachments are limited to 20MB in size in the UI.
When editing an example in the UI, you can:
  • Upload new attachments
  • Rename and delete attachments
  • Reset attachments to their previous state using the quick reset button
Changes are not saved until you click Submit.