LangChain's LangSmith now offers ephemeral, isolated code execution environments for agents via a single SDK call, currently in Private Preview.
LangChain launched LangSmith Sandboxes in Private Preview — secure, ephemeral environments that let AI agents execute untrusted code without risk to host infrastructure. Developers can spin up a sandbox with one line of Python or JavaScript using the existing LangSmith SDK. The feature supports custom Docker images, sandbox templates with configurable CPU/memory, and full execution tracing integrated into LangSmith's existing observability stack. It's already powering LangChain's own Open SWE project and integrates natively with LangChain's Deep Agents framework.
This eliminates one of the most painful infrastructure problems in agentic systems: safe code execution. Previously, you'd hand-roll container orchestration, network lockdown, resource limits, and output piping yourself — now it's a single SDK call if you're already on LangSmith. The tracing integration is the real differentiator: every process and network call inside the VM gets logged alongside your agent runs, giving you debugging visibility that custom setups rarely provide.
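For a sense of what that hand-rolled baseline looks like, here is a minimal sketch of the simplest piece — running agent-generated code in a separate interpreter process with a hard timeout and captured output. This is an illustration of the DIY approach the paragraph describes, not LangSmith code; real setups would layer container isolation, network lockdown, and CPU/memory limits on top:

```python
import subprocess
import sys

def run_untrusted(code: str, timeout: float = 5.0) -> tuple[str, str, int]:
    """Execute untrusted Python code in a fresh interpreter process.

    Returns (stdout, stderr, returncode). A timeout kills runaway code;
    a return code of -1 here signals that the timeout fired.
    """
    try:
        proc = subprocess.run(
            [sys.executable, "-c", code],  # fresh process, no shared state
            capture_output=True,
            text=True,
            timeout=timeout,
        )
        return proc.stdout, proc.stderr, proc.returncode
    except subprocess.TimeoutExpired:
        return "", "execution timed out", -1

out, err, rc = run_untrusted("print(2 + 2)")
```

Even this toy version needs timeout handling and output piping; add Docker orchestration, per-run image builds, and log shipping, and the infrastructure burden the sandbox feature removes becomes clear.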
If you're building a coding or data analysis agent this week, join the waitlist and test LangSmith Sandboxes against your current container-based execution setup — measure setup time and lines of infrastructure code eliminated as your baseline comparison metric.
Open the sandbox section of the LangSmith docs and, in a Python environment with the SDK installed, try the documented creation call — conceptually along the lines of `from langsmith import Client; client = Client(); sandbox = client.create_sandbox()` (the exact method name may differ in the Private Preview API; follow the docs) — then check whether your API key grants waitlist access and inspect the sandbox object returned.