Practice
1 - Quick Start
Reference: https://e2b.dev/docs/quickstart
Preparation
I tested on a local Linux machine, running Linux Mint 22, which is based on Ubuntu 22.04.
Account and key
Log in to https://e2b.dev/ with a GitHub account.
Create a new key, save it, and set it as an environment variable:
export E2B_API_KEY=e2b_d65e814d15c37858xxxxxxxx
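Alternatively, since the quickstart code below calls load_dotenv(), the key can also live in a .env file next to the script instead of the shell environment. A minimal sketch (the key value is a placeholder):
E2B_API_KEY=e2b_xxxxxxxx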
Install the Python SDK
Reference: https://pypi.org/project/e2b/
pip install e2b-code-interpreter python-dotenv
Install the Node.js SDK
Reference: https://www.npmjs.com/package/e2b
npm i @e2b/code-interpreter dotenv
Simple runs
Running with Python
mkdir -p ~/work/code/e2b/quickstart-python
cd ~/work/code/e2b/quickstart-python
Create a new Python file:
vi e2b-quickstart.py
with the following content:
from dotenv import load_dotenv
load_dotenv()
from e2b_code_interpreter import Sandbox
sbx = Sandbox() # By default the sandbox is alive for 5 minutes
execution = sbx.run_code("print('hello world')") # Execute Python inside the sandbox
print(execution.logs)
files = sbx.files.list("/")
print(files)
Run it:
python e2b-quickstart.py
The output is:
Logs(stdout: ['hello world\n'], stderr: [])
[EntryInfo(name='.e2b', type=<FileType.FILE: 'file'>, path='/.e2b'), EntryInfo(name='bin', type=<FileType.FILE: 'file'>, path='/bin'), EntryInfo(name='boot', type=<FileType.DIR: 'dir'>, path='/boot'), EntryInfo(name='code', type=<FileType.DIR: 'dir'>, path='/code'), EntryInfo(name='dev', type=<FileType.DIR: 'dir'>, path='/dev'), EntryInfo(name='etc', type=<FileType.DIR: 'dir'>, path='/etc'), EntryInfo(name='home', type=<FileType.DIR: 'dir'>, path='/home'), EntryInfo(name='ijava-1.3.0.zip', type=<FileType.FILE: 'file'>, path='/ijava-1.3.0.zip'), EntryInfo(name='install.py', type=<FileType.FILE: 'file'>, path='/install.py'), EntryInfo(name='java', type=<FileType.DIR: 'dir'>, path='/java'), EntryInfo(name='lib', type=<FileType.FILE: 'file'>, path='/lib'), EntryInfo(name='lib64', type=<FileType.FILE: 'file'>, path='/lib64'), EntryInfo(name='lost+found', type=<FileType.DIR: 'dir'>, path='/lost+found'), EntryInfo(name='media', type=<FileType.DIR: 'dir'>, path='/media'), EntryInfo(name='mnt', type=<FileType.DIR: 'dir'>, path='/mnt'), EntryInfo(name='opt', type=<FileType.DIR: 'dir'>, path='/opt'), EntryInfo(name='proc', type=<FileType.DIR: 'dir'>, path='/proc'), EntryInfo(name='r-4.4.2_1_amd64.deb', type=<FileType.FILE: 'file'>, path='/r-4.4.2_1_amd64.deb'), EntryInfo(name='requirements.txt', type=<FileType.FILE: 'file'>, path='/requirements.txt'), EntryInfo(name='root', type=<FileType.DIR: 'dir'>, path='/root'), EntryInfo(name='run', type=<FileType.DIR: 'dir'>, path='/run'), EntryInfo(name='sbin', type=<FileType.FILE: 'file'>, path='/sbin'), EntryInfo(name='srv', type=<FileType.DIR: 'dir'>, path='/srv'), EntryInfo(name='swap', type=<FileType.DIR: 'dir'>, path='/swap'), EntryInfo(name='sys', type=<FileType.DIR: 'dir'>, path='/sys'), EntryInfo(name='tmp', type=<FileType.DIR: 'dir'>, path='/tmp'), EntryInfo(name='usr', type=<FileType.DIR: 'dir'>, path='/usr'), EntryInfo(name='var', type=<FileType.DIR: 'dir'>, path='/var')]
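The comment in the code notes that a sandbox lives for 5 minutes by default. A small sketch of controlling that lifetime, based on the timeout parameter (in seconds) and set_timeout method documented for the e2b-code-interpreter SDK (exact defaults may differ by SDK version):
from e2b_code_interpreter import Sandbox

# Ask for a 10-minute lifetime instead of the 5-minute default
sbx = Sandbox(timeout=600)

# The lifetime can also be extended while the sandbox is running
sbx.set_timeout(300)

sbx.kill()  # shut the sandbox down explicitly when done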
Running with Node.js
mkdir -p ~/work/code/e2b/quickstart-nodejs
cd ~/work/code/e2b/quickstart-nodejs
Create a new TypeScript file:
vi e2b-quickstart.ts
with the following content:
import 'dotenv/config'
import { Sandbox } from '@e2b/code-interpreter'
const sbx = await Sandbox.create() // By default the sandbox is alive for 5 minutes
const execution = await sbx.runCode('print("hello world")') // Execute Python inside the sandbox
console.log(execution.logs)
const files = await sbx.files.list('/')
console.log(files)
Run it:
npx tsx ./e2b-quickstart.ts
It fails with an error:
npx tsx ./index.ts
node:internal/modules/run_main:123
triggerUncaughtException(
^
Error: Transform failed with 3 errors:
/home/sky/work/code/e2b/quickstart-nodejs/index.ts:4:12: ERROR: Top-level await is currently not supported with the "cjs" output format
/home/sky/work/code/e2b/quickstart-nodejs/index.ts:5:18: ERROR: Top-level await is currently not supported with the "cjs" output format
/home/sky/work/code/e2b/quickstart-nodejs/index.ts:8:14: ERROR: Top-level await is currently not supported with the "cjs" output format
at failureErrorWithLog (/home/sky/.npm/_npx/fd45a72a545557e9/node_modules/esbuild/lib/main.js:1463:15)
at /home/sky/.npm/_npx/fd45a72a545557e9/node_modules/esbuild/lib/main.js:734:50
The error occurs because the file is executed as a CommonJS (cjs) module, and CommonJS does not support top-level await.
Edit package.json in the directory and add "type": "module":
{
"type" : "module",
"dependencies": {
}
}
Run again:
npx tsx ./e2b-quickstart.ts
The output is:
{ stdout: [ 'hello world\n' ], stderr: [] }
[
{ name: '.e2b', type: 'file', path: '/.e2b' },
{ name: 'bin', type: 'file', path: '/bin' },
{ name: 'boot', type: 'dir', path: '/boot' },
{ name: 'code', type: 'dir', path: '/code' },
{ name: 'dev', type: 'dir', path: '/dev' },
{ name: 'etc', type: 'dir', path: '/etc' },
{ name: 'home', type: 'dir', path: '/home' },
{ name: 'ijava-1.3.0.zip', type: 'file', path: '/ijava-1.3.0.zip' },
{ name: 'install.py', type: 'file', path: '/install.py' },
{ name: 'java', type: 'dir', path: '/java' },
{ name: 'lib', type: 'file', path: '/lib' },
{ name: 'lib64', type: 'file', path: '/lib64' },
{ name: 'lost+found', type: 'dir', path: '/lost+found' },
{ name: 'media', type: 'dir', path: '/media' },
{ name: 'mnt', type: 'dir', path: '/mnt' },
{ name: 'opt', type: 'dir', path: '/opt' },
{ name: 'proc', type: 'dir', path: '/proc' },
{
name: 'r-4.4.2_1_amd64.deb',
type: 'file',
path: '/r-4.4.2_1_amd64.deb'
},
{ name: 'requirements.txt', type: 'file', path: '/requirements.txt' },
{ name: 'root', type: 'dir', path: '/root' },
{ name: 'run', type: 'dir', path: '/run' },
{ name: 'sbin', type: 'file', path: '/sbin' },
{ name: 'srv', type: 'dir', path: '/srv' },
{ name: 'swap', type: 'dir', path: '/swap' },
{ name: 'sys', type: 'dir', path: '/sys' },
{ name: 'tmp', type: 'dir', path: '/tmp' },
{ name: 'usr', type: 'dir', path: '/usr' },
{ name: 'var', type: 'dir', path: '/var' }
]
2.1 - OpenAI
Preparation
pip install openai e2b-code-interpreter
Get an OpenAI key and set the environment variables:
export OPENROUTER_API_KEY="sk-or-v1-066c495243xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
export OPENROUTER_BASE_URL="https://openrouter.ai/api/v1"
export OPENAI_API_KEY=$OPENROUTER_API_KEY
export OPENAI_BASE_URL=$OPENROUTER_BASE_URL
Since it is inconvenient to use OpenAI directly from mainland China, I usually call the OpenAI API through OpenRouter; the key above is an OpenRouter key.
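The OpenAI Python SDK reads OPENAI_API_KEY and OPENAI_BASE_URL from the environment automatically, which is why the code below can simply call OpenAI(). If you prefer explicit configuration, a sketch (the environment variable names match the exports above):
import os
from openai import OpenAI

# Point the OpenAI client at OpenRouter explicitly instead of
# relying on the OPENAI_* environment variables
client = OpenAI(
    api_key=os.environ["OPENROUTER_API_KEY"],
    base_url=os.environ["OPENROUTER_BASE_URL"],
)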
A simple call
mkdir -p ~/work/code/e2b/connect-llm/openai-python
cd ~/work/code/e2b/connect-llm/openai-python
Create a new Python file:
vi connect-llm-openai.py
with the following content:
# pip install openai e2b-code-interpreter
from openai import OpenAI
from e2b_code_interpreter import Sandbox

# Create OpenAI client
client = OpenAI()

system = "You are a helpful assistant that can execute python code in a Jupyter notebook. Only respond with the code to be executed and nothing else. Strip backticks in code blocks."
prompt = "Calculate how many r's are in the word 'strawberry'"

# Send messages to OpenAI API
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": system},
        {"role": "user", "content": prompt}
    ]
)

# Extract the code from the response
code = response.choices[0].message.content

# Execute code in E2B Sandbox
if code:
    # Print the code content for debugging
    print(code)
    with Sandbox() as sandbox:
        execution = sandbox.run_code(code)
        result = execution.text
        print(result)
Run:
python connect-llm-openai.py
The output is:
3
Analysis:
- The call to OpenAI itself is simple: a plain call to the chat API. The key point is that the system prompt tells OpenAI to return executable code that computes the result, instead of returning the result directly:
Only respond with the code to be executed and nothing else. Strip backticks in code blocks.
To verify this, I added the print of the code content for easier debugging. The returned code looks like this:
word = 'strawberry'
count_r = word.count('r')
count_r
As you can see, it is pure Python code with no backticks. What about those backticks? Let's modify the prompt, changing "Strip backticks in code blocks." to "Keep backticks in code blocks.", and run again:
This time the returned content contains the three backticks that mark a code block. That is Markdown's code-fence syntax, but it makes the e2b execution fail: the last line printed is None, meaning the execution failed. (A defensive client-side workaround is sketched right after this analysis.)
- The e2b sandbox part is just three lines, extremely concise, with not a single wasted word:
# 1. Create the sandbox
with Sandbox() as sandbox:
    # 2. Execute the code
    execution = sandbox.run_code(code)
    # 3. Get the execution result
    result = execution.text
The e2b sandbox simply executes the code that the OpenAI chat API generates and returns, and produces the result.
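As promised above, a minimal defensive sketch that strips Markdown fences from a model reply before handing it to run_code, so a fenced reply still executes; strip_code_fences is a hypothetical helper using only the standard library:
import re

def strip_code_fences(reply: str) -> str:
    """If the reply is wrapped in a Markdown code fence, keep only the fenced body."""
    match = re.search(r"```(?:\w+)?\n(.*?)```", reply, re.DOTALL)
    return match.group(1) if match else reply

code = strip_code_fences(response.choices[0].message.content or "")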
Compare this with an ordinary call where OpenAI returns the result directly:
@startuml
hide footbox
participant client
participant OpenAI
client -> OpenAI: chat()
note right
User: Calculate how many r's are in the word 'strawberry'
end note
OpenAI --> client: response result
note right: 3
@enduml
In the example above, OpenAI does not return the result directly; it returns a piece of executable code. We then execute that code in an e2b sandbox to compute the result:
@startuml
hide footbox
participant client
participant OpenAI
client -> OpenAI: chat()
note right
System: Only respond with the code to be executed and nothing else.
User: Calculate how many r's are in the word 'strawberry'
end note
OpenAI --> client: response with code
note right
word = 'strawberry'
count_r = word.count('r')
count_r
end note
create control sandbox
client -> sandbox: create_sandbox()
sandbox -> sandbox: run_code(code)
note right
word = 'strawberry'
count_r = word.count('r')
count_r
end note
client <-- sandbox: code_execution_result
note right: 3
@enduml
Function calling
A slightly more complex example uses the AI's function calling capability.
mkdir -p ~/work/code/e2b/connect-llm/openai-python
cd ~/work/code/e2b/connect-llm/openai-python
Create a new Python file:
vi connect-llm-openai-function-call.py
with the following content:
# pip install openai e2b-code-interpreter
import json
from openai import OpenAI
from e2b_code_interpreter import Sandbox

# Create OpenAI client
client = OpenAI()
model = "gpt-4o"

# Define the messages
messages = [
    {
        "role": "user",
        "content": "Calculate how many r's are in the word 'strawberry'"
    }
]

# Define the tools
tools = [{
    "type": "function",
    "function": {
        "name": "execute_python",
        "description": "Execute python code in a Jupyter notebook cell and return result",
        "parameters": {
            "type": "object",
            "properties": {
                "code": {
                    "type": "string",
                    "description": "The python code to execute in a single cell"
                }
            },
            "required": ["code"]
        }
    }
}]

# Generate text with OpenAI
response = client.chat.completions.create(
    model=model,
    messages=messages,
    tools=tools,
)

# Append the response message to the messages list
response_message = response.choices[0].message
messages.append(response_message)

# Execute the tool if it's called by the model
if response_message.tool_calls:
    for tool_call in response_message.tool_calls:
        if tool_call.function.name == "execute_python":
            # Create a sandbox and execute the code
            with Sandbox() as sandbox:
                code = json.loads(tool_call.function.arguments)['code']
                execution = sandbox.run_code(code)
                result = execution.text

            # Send the result back to the model
            messages.append({
                "role": "tool",
                "name": "execute_python",
                "content": result,
                "tool_call_id": tool_call.id,
            })

# Generate the final response
final_response = client.chat.completions.create(
    model=model,
    messages=messages
)
print(final_response.choices[0].message.content)
Run:
python connect-llm-openai-function-call.py
The output is:
The word "strawberry" contains 3 'r's.
Analysis:
- This call to OpenAI is more involved: it uses the tools feature of the chat API.
# Define a tool
tools = [{
    # The tool type is "function"
    "type": "function",
    "function": {
        # The tool's name
        "name": "execute_python",
        # The tool's description
        "description": "Execute python code in a Jupyter notebook cell and return result",
        # The tool's parameters
        "parameters": {
            "type": "object",
            "properties": {
                "code": {
                    "type": "string",
                    "description": "The python code to execute in a single cell"
                }
            },
            "required": ["code"]
        }
    }
}]
This tells OpenAI: I have a function named execute_python that you can treat as a tool; whenever you need to run Python code, call it.
- We call OpenAI with the same user message, counting the r's in "strawberry", but offer it the tool we defined, the execute_python function that can run Python code.
messages = [
    {
        "role": "user",
        "content": "Calculate how many r's are in the word 'strawberry'"
    }
]
......
response = client.chat.completions.create(
    model=model,
    messages=messages,
    tools=tools,
)
With extra logging, the content of response is:
ChatCompletion(id='gen-1749302174-f823d4hLS2G40WQwXec9', choices=[Choice(finish_reason='tool_calls', index=0, logprobs=None, message=ChatCompletionMessage(content='', refusal=None, role='assistant', audio=None, function_call=None, tool_calls=[ChatCompletionMessageToolCall(id='call_ulBxOTZOSx5uaWGXv1ZpMIJs', function=Function(arguments='{"code":"word = \'strawberry\'\\nr_count = word.count(\'r\')\\nr_count"}', name='execute_python'), type='function', index=0)], reasoning=None), native_finish_reason='tool_calls')], created=1749302174, model='openai/gpt-4o', object='chat.completion', service_tier=None, system_fingerprint='fp_5d58a6052a', usage=CompletionUsage(completion_tokens=32, prompt_tokens=75, total_tokens=107, completion_tokens_details=CompletionTokensDetails(accepted_prediction_tokens=None, audio_tokens=None, reasoning_tokens=0, rejected_prediction_tokens=None), prompt_tokens_details=PromptTokensDetails(audio_tokens=None, cached_tokens=0)), provider='Azure')
And the content of response_message is:
ChatCompletionMessage(content='', refusal=None, role='assistant', audio=None, function_call=None, tool_calls=[ChatCompletionMessageToolCall(id='call_lSfGvAFu230WAm0eQIdkHfbi', function=Function(arguments='{"code":"word = \'strawberry\'\\ncount_r = word.count(\'r\')\\ncount_r"}', name='execute_python'), type='function', index=0)], reasoning=None)
The tool_calls array contains one ChatCompletionMessageToolCall object specifying a function call: the name is execute_python, and the "code" argument is "word = 'strawberry'\ncount_r = word.count('r')\ncount_r", a piece of executable Python code.
- From the response we can see that, guided by the tool we defined, OpenAI generates a piece of executable Python code and returns it to us.
Here we parse the tool_calls array in the response to get the executable code:
if response_message.tool_calls:
    for tool_call in response_message.tool_calls:
        if tool_call.function.name == "execute_python":
            ......
            code = json.loads(tool_call.function.arguments)['code']
- Then we execute the returned code in an e2b sandbox to compute the result:
with Sandbox() as sandbox:
    execution = sandbox.run_code(code)
    result = execution.text
The result printed here is "3", similar to the previous example.
- The difference is that this time the result is not returned to the user directly; it is sent back to the OpenAI model as a tool message:
# Send the result back to the model
messages.append({
    "role": "tool",
    "name": "execute_python",
    "content": result,
    "tool_call_id": tool_call.id,
})
With extra logging, the content of messages is:
[{'role': 'user', 'content': "Calculate how many r's are in the word 'strawberry'"}, ChatCompletionMessage(content='', refusal=None, role='assistant', audio=None, function_call=None, tool_calls=[ChatCompletionMessageToolCall(id='call_phNLl8JrmZOmjj1WE4hhYAMK', function=Function(arguments='{"code":"word = \'strawberry\'\\ncount_r = word.count(\'r\')\\ncount_r"}', name='execute_python'), type='function', index=0)], reasoning=None), {'role': 'tool', 'name': 'execute_python', 'content': '3', 'tool_call_id': 'call_phNLl8JrmZOmjj1WE4hhYAMK'}]
- We call the chat API one more time; this time OpenAI uses the tool result we returned to generate the final response:
final_response = client.chat.completions.create(
    model=model,
    messages=messages
)
In the function calling example, we tell OpenAI that we have a tool, execute_python, which can run Python code and return the result. OpenAI then generates a piece of executable Python code for our request and returns it, indicating that it wants to invoke our execute_python tool via function calling. Once we have the code, we run it in an e2b sandbox and send the result back to the OpenAI model. Finally, the model uses the result we returned to generate the final response. The flow is as follows:
@startuml
hide footbox
participant client
participant OpenAI
client -> OpenAI: chat()
note right
User: Calculate how many r's are in the word 'strawberry'
Tools: execute_python(code)
end note
group Function_Calling
OpenAI --> client: execute_python(code)
note right
tool_calls:
function name: execute_python
arguments code:
word = 'strawberry'
count_r = word.count('r')
count_r
end note
client -> client: parse to get code
create control sandbox
client -> sandbox: create_sandbox()
sandbox -> sandbox: run_code(code)
note right
word = 'strawberry'
count_r = word.count('r')
count_r
end note
client <-- sandbox: code_execution_result
note right: 3
client -> client: append result to messages
client -> OpenAI: chat()
note right
User: Calculate how many r's are in the word 'strawberry'
Tool: execute_python() with result=3
end note
end
OpenAI --> client:
note right
The word "strawberry" contains 3 'r's.
end note
@enduml
If we focus only on the function calling mechanics and ignore the e2b sandbox execution, OpenAI's function calling sub-flow can be simplified to:
@startuml
hide footbox
participant client
participant OpenAI
group Function_Calling
OpenAI --> client: execute_python(code)
note right
tool_calls:
function name: execute_python
arguments code:
word = 'strawberry'
count_r = word.count('r')
count_r
end note
client -> OpenAI: chat()
note right
User: Calculate how many r's are in the word 'strawberry'
Tool: execute_python() with result=3
end note
end
@enduml
Limited by how the OpenAI model works, it cannot initiate calls on its own; it can only indicate, in the response to the client's chat call, that it wants to invoke our tool via function calling. The client then cooperates by issuing a new chat request that carries the function calling result back to the model.
In effect, OpenAI issues a function calling request in the form of a chat response, and the client answers that function calling request in the form of a second chat request.
The logical interaction flow is as follows:
@startuml
hide footbox
participant client
participant OpenAI
client -> OpenAI: chat request
group Function_Calling
OpenAI -> client: function calling request
note right
chat response with function calling
end note
client --> OpenAI: function calling response
note right
new chat request with function calling result
end note
end
OpenAI --> client: chat response
@enduml
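The example above handles a single round of function calling, but a model may ask for several tool calls before it is ready to answer, so the client-side cooperation is usually run in a loop. A minimal sketch reusing the client, model, tools and sandbox usage from the example (the loop structure itself is my addition, not part of the original code):
import json
from e2b_code_interpreter import Sandbox

def chat_with_tools(client, model, messages, tools):
    # Keep chatting until the model stops requesting tool calls
    while True:
        response = client.chat.completions.create(
            model=model, messages=messages, tools=tools
        )
        message = response.choices[0].message
        messages.append(message)

        if not message.tool_calls:
            # No tool calls left: this is the final answer
            return message.content

        for tool_call in message.tool_calls:
            if tool_call.function.name == "execute_python":
                code = json.loads(tool_call.function.arguments)["code"]
                with Sandbox() as sandbox:
                    result = sandbox.run_code(code).text
                # Feed the tool result back for the next round
                messages.append({
                    "role": "tool",
                    "content": result,
                    "tool_call_id": tool_call.id,
                })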
3 - Uploading files
Reference: https://e2b.dev/docs/quickstart/upload-download-files
Python implementation
mkdir -p ~/work/code/e2b/upload-files
cd ~/work/code/e2b/upload-files
touch ./upload.txt
# write some content to upload.txt
cat <<EOF > ./upload.txt
first line
second line
EOF
vi upload-files.py
with the following content:
from e2b_code_interpreter import Sandbox

sbx = Sandbox()

# Read local file relative to the current working directory
with open("./upload.txt", "rb") as file:
    # Upload file to the sandbox to absolute path '/home/user/upload.txt'
    sbx.files.write("/home/user/upload.txt", file)

# Download the file back from the sandbox path '/home/user/upload.txt'
content = sbx.files.read('/home/user/upload.txt')
print(content)

# Write file to local path relative to the current working directory
with open('./download.txt', 'w') as file:
    file.write(content)
print("done, check ./download.txt")
Run:
python upload-files.py
Summary
The API only reads and writes single files: multiple files have to be transferred one by one, and there are no directory operations.
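Given that limitation, uploading a directory means walking it and calling files.write once per file. A minimal sketch built only on the single-file API shown above (upload_dir is a hypothetical helper; the remote base path is an assumption):
import os
from e2b_code_interpreter import Sandbox

def upload_dir(sbx: Sandbox, local_dir: str, remote_dir: str = "/home/user"):
    # Walk the local directory and upload each file individually
    for root, _dirs, files in os.walk(local_dir):
        for name in files:
            local_path = os.path.join(root, name)
            rel_path = os.path.relpath(local_path, local_dir)
            with open(local_path, "rb") as f:
                # Assumes files.write creates missing parent directories;
                # if not, run sbx.commands.run("mkdir -p ...") first
                sbx.files.write(f"{remote_dir}/{rel_path}", f)

sbx = Sandbox()
upload_dir(sbx, "./data")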
4 - Installing custom packages
Reference: https://e2b.dev/docs/quickstart/install-custom-packages
Preparation
Install Docker
Building the template later requires Docker to build the image.
Install the CLI
Install the e2b CLI:
npm i -g @e2b/cli
Check that the installation succeeded:
$ e2b --version
1.4.3
$ e2b -h
Usage: e2b [options] [command]
Create sandbox templates from Dockerfiles by running e2b
template build then use our SDKs to create sandboxes from these templates.
Visit E2B docs (https://e2b.dev/docs) to learn how to create sandbox templates and start sandboxes.
Options:
-V, --version display E2B CLI version
-h, --help display help for command
Commands:
auth authentication commands
template|tpl manage sandbox templates
sandbox|sbx work with sandboxes
help [command] display help for command
Log in to e2b
e2b auth login
A browser opens automatically and asks you to log in. Since I had logged in before, it skips straight through, and the page shows:
Successfully linked
You can close this page and start using CLI.
The terminal shows:
$ e2b auth login
Attempting to log in...
Logged in as aoxiaojian@gmail.com with selected team aoxiaojian@gmail.com
Install the custom packages
Initialize a sandbox template
mkdir -p ~/work/code/e2b/install-custom-packages
cd ~/work/code/e2b/install-custom-packages
e2b template init
The output is:
Created ./e2b.Dockerfile
Open the file to see the default template content:
# You can use most Debian-based base images
FROM ubuntu:22.04
# Install dependencies and customize sandbox
It is just an Ubuntu 22.04 image, with nothing else in it.
Specify the packages you need
vi ./e2b.Dockerfile
Edit the content to specify the packages you need:
FROM e2bdev/code-interpreter:latest
RUN pip install cowsay
Note: the base image must be e2bdev/code-interpreter:latest.
Build the template
# cd ~/work/code/e2b/install-custom-packages
e2b template build -c "/root/.jupyter/start-up.sh"
The output is:
Found sandbox template j5zqgb7g5aeyugxrtfz1 <-> ./e2b.toml
Found ./e2b.Dockerfile that will be used to build the sandbox template.
Requested build for the sandbox template j5zqgb7g5aeyugxrtfz1
Login Succeeded
Building docker image with the following command:
docker build -f e2b.Dockerfile --pull --platform linux/amd64 -t docker.e2b.app/e2b/custom-envs/j5zqgb7g5aeyugxrtfz1:884051f3-0c77-4d45-8de7-fd1eff03203f .
[+] Building 9.8s (6/6) FINISHED docker:default
=> [internal] load build definition from e2b.Dockerfile 0.0s
=> => transferring dockerfile: 103B 0.0s
=> [internal] load metadata for docker.io/e2bdev/code-interpreter:latest 9.7s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [1/2] FROM docker.io/e2bdev/code-interpreter:latest@sha256:b8d0cdae882fb0f3a76c71de0f6 0.0s
=> CACHED [2/2] RUN pip install cowsay 0.0s
=> exporting to image 0.0s
=> => exporting layers 0.0s
=> => writing image sha256:b1802722693655ebe55a3332ea5e6b2862c40e0a788afce3b9047be0362554 0.0s
=> => naming to docker.e2b.app/e2b/custom-envs/j5zqgb7g5aeyugxrtfz1:884051f3-0c77-4d45-8d 0.0s
> Docker image built.
Pushing docker image with the following command:
docker push docker.e2b.app/e2b/custom-envs/j5zqgb7g5aeyugxrtfz1:884051f3-0c77-4d45-8de7-fd1eff03203f
The push refers to repository [docker.e2b.app/e2b/custom-envs/j5zqgb7g5aeyugxrtfz1]
bdf722f13640: Preparing
......
884051f3-0c77-4d45-8de7-fd1eff03203f: digest: sha256:12c6337ace9ec9003cd5d844478c9dfd5fd0abd27370df00b3c60f835802bd5b size: 8493
> Docker image pushed.
Triggering build...
> Triggered build for the sandbox template j5zqgb7g5aeyugxrtfz1 with build ID: 884051f3-0c77-4d45-8de7-fd1eff03203f
Waiting for build to finish...
[2025-06-09T02:55:10Z] Starting postprocessing
[2025-06-09T02:55:10Z] Requesting Docker Image
[2025-06-09T02:55:10Z] Docker image size: 1.5 GB
[2025-06-09T02:55:10Z] Setting up system files
[2025-06-09T02:55:11Z] Creating file system and pulling Docker image
[2025-06-09T02:56:11Z] ...
[2025-06-09T02:56:11Z] Filesystem cleanup
[2025-06-09T02:56:11Z] Provisioning sandbox template
[2025-06-09T02:56:12Z] Starting provisioning script
[2025-06-09T02:56:12Z] Making configuration immutable
[2025-06-09T02:56:12Z] Checking presence of the following packages: systemd systemd-sysv openssh-server sudo chrony linuxptp
[2025-06-09T02:56:12Z] Package systemd is missing, will install it.
[2025-06-09T02:56:12Z] Package systemd-sysv is missing, will install it.
[2025-06-09T02:56:12Z] Package openssh-server is missing, will install it.
[2025-06-09T02:56:12Z] Package chrony is missing, will install it.
[2025-06-09T02:56:12Z] Package linuxptp is missing, will install it.
[2025-06-09T02:56:12Z] Missing packages detected, installing: systemd systemd-sysv openssh-server chrony linuxptp
[2025-06-09T02:56:15Z] Selecting previously unselected package libargon2-1:amd64.
......
[2025-06-09T02:56:48Z] Start command is running
[2025-06-09T02:56:48Z] Pausing sandbox template
[2025-06-09T02:56:53Z] ...
[2025-06-09T02:56:58Z] ...
[2025-06-09T02:57:01Z] Uploading template
[2025-06-09T02:57:06Z] ...
[2025-06-09T02:57:09Z] ...
[2025-06-09T02:57:09Z] Postprocessing finished. Took 1m59s. Cleaning up...
✅ Building sandbox template j5zqgb7g5aeyugxrtfz1 finished.
┌───────────────────────────────────────────────────────────────────────────────────────────────┐
│ │
│ You can now use the template to create custom sandboxes. │
│ Learn more on https://e2b.dev/docs │
│ │
└───────────────────────────────────────────────────────────────────────────────────────────────┘
───────────────────────────────────────── Python SDK ──────────────────────────────────────────
from e2b import Sandbox, AsyncSandbox
# Create sync sandbox
sandbox = Sandbox("j5zqgb7g5aeyugxrtfz1")
# Create async sandbox
sandbox = await AsyncSandbox.create("j5zqgb7g5aeyugxrtfz1")
─────────────────────────────────────────── JS SDK ────────────────────────────────────────────
import { Sandbox } from 'e2b'
// Create sandbox
const sandbox = await Sandbox.create('j5zqgb7g5aeyugxrtfz1')
───────────────────────────────────────────────────────────────────────────────────────────────
Use the custom template
vi use-custom-template.py
with the following content:
from e2b_code_interpreter import Sandbox
sbx = Sandbox(template='j5zqgb7g5aeyugxrtfz1')
print(sbx.run_code("""
import cowsay
cowsay.cow("Hello from Python!")
"""))
Run:
python use-custom-template.py
The output is:
Execution(Results: [], Logs: Logs(stdout: [' __________________\n| Hello from Python! |\n ==================\n \\\n \\\n ^__^\n (oo)\\_______\n (__)\\ )\\/\\\n ||----w |\n || ||\n'], stderr: []), Error: None)
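The Execution object printed above bundles Results, Logs and Error together. To print just the cow, a small sketch that reads the stdout log entries (the field names match the printed output above):
from e2b_code_interpreter import Sandbox

sbx = Sandbox(template='j5zqgb7g5aeyugxrtfz1')
execution = sbx.run_code("""
import cowsay
cowsay.cow("Hello from Python!")
""")

# execution.logs.stdout is a list of strings, as the repr above shows
print("".join(execution.logs.stdout))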
Installing packages at runtime
If a package cannot be installed ahead of time in the template, it can be installed at runtime.
mkdir -p ~/work/code/e2b/install-custom-packages-on-runtime
cd ~/work/code/e2b/install-custom-packages-on-runtime
vi install-custom-packages-on-runtime.py
with the following content:
from e2b_code_interpreter import Sandbox
sbx = Sandbox()
sbx.commands.run("pip install cowsay") # This will install the cowsay package
sbx.commands.run("cowsay -t aaa")
sbx.run_code("""
import cowsay
cowsay.cow("Hello, world!")
""")
Run:
python install-custom-packages-on-runtime.py
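The script runs, but discards the command output. A sketch that surfaces it, assuming commands.run returns a result object exposing stdout as described in the e2b docs:
from e2b_code_interpreter import Sandbox

sbx = Sandbox()
sbx.commands.run("pip install cowsay")

# Capture and print what the command wrote to stdout
result = sbx.commands.run("cowsay -t aaa")
print(result.stdout)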
5 - Analyzing data with AI
Reference: https://e2b.dev/docs/code-interpreting/analyze-data-with-ai
Preparation
mkdir -p ~/work/code/e2b/analyze-data-with-ai
cd ~/work/code/e2b/analyze-data-with-ai
Download:
https://www.kaggle.com/datasets/muqarrishzaib/tmdb-10000-movies-dataset
Unzip it, rename the file to dataset.csv, and move it into the current directory.
Install the Python dependencies:
pip install e2b-code-interpreter anthropic python-dotenv
Analyze the data with AI
vi analyze-data-with-ai.py
with the following content:
import sys
import os
import base64
from dotenv import load_dotenv
load_dotenv()
from e2b_code_interpreter import Sandbox
from anthropic import Anthropic

# Create sandbox
sbx = Sandbox()

# Upload the dataset to the sandbox
with open("./dataset.csv", "rb") as f:
    dataset_path_in_sandbox = sbx.files.write("dataset.csv", f)

def run_ai_generated_code(ai_generated_code: str):
    print('Running the code in the sandbox....')
    execution = sbx.run_code(ai_generated_code)
    print('Code execution finished!')

    # First let's check if the code ran successfully.
    if execution.error:
        print('AI-generated code had an error.')
        print(execution.error.name)
        print(execution.error.value)
        print(execution.error.traceback)
        sys.exit(1)

    # Iterate over all the results and specifically check for png files that will represent the chart.
    result_idx = 0
    for result in execution.results:
        if result.png:
            # Save the png to a file
            # The png is in base64 format.
            with open(f'chart-{result_idx}.png', 'wb') as f:
                f.write(base64.b64decode(result.png))
            print(f'Chart saved to chart-{result_idx}.png')
            result_idx += 1

prompt = f"""
I have a CSV file about movies. It has about 10k rows. It's saved in the sandbox at {dataset_path_in_sandbox.path}.
These are the columns:
- 'id': number, id of the movie
- 'original_language': string like "eng", "es", "ko", etc
- 'original_title': string that's name of the movie in the original language
- 'overview': string about the movie
- 'popularity': float, from 0 to 9137.939. It's not normalized at all and there are outliers
- 'release_date': date in the format yyyy-mm-dd
- 'title': string that's the name of the movie in english
- 'vote_average': float number between 0 and 10 that's representing viewers voting average
- 'vote_count': int for how many viewers voted
I want to better understand how the vote average has changed over the years.
Write Python code that analyzes the dataset based on my request and produces right chart accordingly"""

# Create Anthropic client (the code below uses Anthropic's messages/tool-use API,
# matching the anthropic package installed above; requires ANTHROPIC_API_KEY
# in the environment or .env)
client = Anthropic()
print("Waiting for model response...")
msg = client.messages.create(
    model="claude-3-5-sonnet-20240620",
    max_tokens=1024,
    messages=[
        {"role": "user", "content": prompt}
    ],
    tools=[
        {
            "name": "run_python_code",
            "description": "Run Python code",
            "input_schema": {
                "type": "object",
                "properties": {
                    "code": {"type": "string", "description": "The Python code to run"},
                },
                "required": ["code"]
            }
        }
    ]
)

for content_block in msg.content:
    if content_block.type == "tool_use":
        if content_block.name == "run_python_code":
            code = content_block.input["code"]
            print("Will run following code in the sandbox", code)
            # Execute the code in the sandbox
            run_ai_generated_code(code)
Run:
python analyze-data-with-ai.py
The output is: