GenScrap

#python


2 public scraps tagged #python

E2B example in Python
chatgpt

A step-by-step E2B example in Python that shows stateful execution, installing packages, uploading a file, and running a quick SQLite query — all inside a sandbox.

---

## 0) Install & set your key

```bash
pip install e2b-code-interpreter python-dotenv
export E2B_API_KEY="e2b_***"
```

E2B's Python package is `e2b-code-interpreter`, and the SDK reads your `E2B_API_KEY` from the environment. ([PyPI][1])

---

## 1) Minimal stateful sandbox script

```python
# e2b_step_by_step.py
from e2b_code_interpreter import Sandbox

def main():
    # Spins up an isolated VM ("sandbox"); auto-shuts down when the block exits
    with Sandbox() as sbx:
        # --- A) Stateful Python: variables persist across calls ---
        sbx.run_code("x = 41")
        out = sbx.run_code("x += 1; x")  # reuses x
        print("x =", out.text)  # -> 42

        # --- B) Shell: install a package inside the sandbox ---
        sbx.commands.run("pip install --quiet pandas")  # ok to pip-install at runtime

        # --- C) Upload a CSV into the sandbox filesystem ---
        csv = "name,age\nTaro,30\nHanako,28\n"
        sbx.files.write("/home/user/people.csv", csv)

        # --- D) Analyze the CSV in Python (pandas) ---
        out = sbx.run_code(r'''
import pandas as pd
df = pd.read_csv("/home/user/people.csv")
df["age"].mean()
''')
        print("mean age:", out.text)

        # --- E) Quick SQLite session (persists objects across cells) ---
        sbx.run_code(r'''
import sqlite3
conn = sqlite3.connect("/home/user/demo.db")
cur = conn.cursor()
cur.execute("CREATE TABLE IF NOT EXISTS t(a INT)")
cur.executemany("INSERT INTO t(a) VALUES (?)", [(1,), (2,), (3,)])
conn.commit()
''')
        out = sbx.run_code(r'''
cur.execute("SELECT sum(a) FROM t")
cur.fetchone()[0]
''')
        print("sum =", out.text)

        # --- F) Peek at files (via Python, no shell needed) ---
        out = sbx.run_code('import os; print(os.listdir("/home/user"))')
        print("files in /home/user:\n", "".join(out.logs.stdout))

if __name__ == "__main__":
    main()
```

**Why this works / what to know**

* `Sandbox()` starts an isolated cloud VM where you can **run code repeatedly and reuse variables** (`run_code` shares state across calls). The returned `Execution` has `.text` (last expression) and `.logs.stdout` for prints. ([e2b.dev][2])
* You can **run shell commands** like `pip install …` via `sandbox.commands.run(...)`. ([Hugging Face][3])
* You can **upload files** into the sandbox with `sandbox.files.write(path, data)` (string/bytes/IO). ([e2b.dev][4])
* By default, a sandbox has a short idle timeout (~5 minutes) unless you keep using it. ([e2b.dev][5])

---

## 2) (Optional) Pause & resume the same sandbox later

If you want the *exact* process memory and filesystem to persist (even running kernels), E2B has **persistence** (public beta):

```python
from e2b_code_interpreter import Sandbox

sbx = Sandbox()                          # create
sbx.beta_pause()                         # save full state (ID: sbx.sandbox_id)

same = Sandbox.connect(sbx.sandbox_id)   # resume later and continue
```

This preserves files **and memory** (variables, processes) between sessions. ([e2b.dev][6])

---

## 3) Useful docs you'll likely reference

* **Quickstart** (API key, first sandbox). ([e2b.dev][7])
* **Python SDK: `Sandbox.run_code`** (stateful cells; result fields). ([e2b.dev][2])
* **Filesystem read/write** (Python `files.write`, `files.read`, etc.). ([e2b.dev][4])
* **Install custom packages / runtime installs** (`pip install` in sandbox). ([e2b.dev][8])
* **Commands API (shell in sandbox)**. ([e2b.dev][9])
* **Cookbook examples** (more end-to-end Python demos). ([GitHub][10])

If you want, I can tailor this to your exact use case (e.g., connecting to Postgres/MySQL from inside the sandbox, or wiring this into your LLM/tool-calling flow).
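The "stateful cells" behavior in step (A) is the same idea as notebook cells: every `run_code` call executes in one long-lived kernel namespace, so names defined earlier stay visible. As a local, E2B-free sketch of that idea, here is a hypothetical `MiniKernel` class (the name and design are illustrative, not part of any SDK) that keeps one shared dict across calls and returns the value of a trailing expression, loosely mirroring `Execution.text`:

```python
# Local sketch (not E2B): a tiny "stateful interpreter" that mimics how
# run_code reuses one namespace across calls, like notebook cells.
class MiniKernel:
    def __init__(self):
        self.ns = {}  # shared namespace; survives across run() calls

    def run(self, code: str):
        lines = code.strip().splitlines()
        *body, last = lines
        # Execute everything but the final line as statements
        exec("\n".join(body), self.ns)
        try:
            return eval(last, self.ns)   # final expression -> like out.text
        except SyntaxError:
            exec(last, self.ns)          # final line was a statement
            return None

k = MiniKernel()
k.run("x = 41")
print(k.run("x += 1\nx"))  # -> 42, because x persisted in k.ns
```

A real sandboxed interpreter adds isolation, timeouts, and rich result capture on top, but the state-sharing mechanism is conceptually this simple: one namespace, many executions.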
[1]: https://pypi.org/project/e2b-code-interpreter/?utm_source=chatgpt.com "e2b-code-interpreter"
[2]: https://e2b.dev/docs/sdk-reference/code-interpreter-python-sdk/v1.2.1/sandbox "E2B - Code Interpreting for AI apps"
[3]: https://huggingface.co/docs/smolagents/en/tutorials/secure_code_execution?utm_source=chatgpt.com "Secure code execution"
[4]: https://e2b.dev/docs/sdk-reference/python-sdk/v1.5.2/sandbox_sync?utm_source=chatgpt.com "E2B - Code Interpreting for AI apps"
[5]: https://e2b.dev/docs/quickstart "E2B - Code Interpreting for AI apps"
[6]: https://e2b.dev/docs/sandbox/persistence "E2B - Code Interpreting for AI apps"
[7]: https://e2b.dev/docs/quickstart?utm_source=chatgpt.com "Running your first Sandbox"
[8]: https://e2b.dev/docs/quickstart/install-custom-packages?utm_source=chatgpt.com "Install custom packages"
[9]: https://e2b.dev/docs/commands?utm_source=chatgpt.com "Running commands in sandbox"
[10]: https://github.com/e2b-dev/e2b-cookbook?utm_source=chatgpt.com "e2b-dev/e2b-cookbook: Examples of using E2B"

2025/8/30
What is dill?
manual

Got it 👍 — let's go step by step and make `dill` clear and practical.

---

# 🔎 What is `dill`?

* `dill` is a **Python library** that extends the standard `pickle` module.
* `pickle` can serialize ("save") Python objects into a file and later **deserialize** ("load") them back.
* But `pickle` has **limitations**: because it serializes functions by reference (module + name) rather than by value, it cannot handle things like:
  * functions defined interactively
  * lambdas
  * classes defined inside functions
  * some iterators and closures

👉 `dill` overcomes many of these limitations. It can serialize almost anything in Python—including entire **interactive sessions**.

---

# 🚀 How to Install

```bash
pip install dill
```

---

# 🧪 Basic Usage

```python
import dill

x = 42
y = [1, 2, 3]

def greet(name):
    return f"Hello {name}!"

# ---- Save session ----
dill.dump_session("session.pkl")

# (Restart Python...)

# ---- Load session ----
dill.load_session("session.pkl")
print(x)            # 42
print(y)            # [1, 2, 3]
print(greet("AI"))  # Hello AI!
```

⚡ Notice: even the function `greet` is restored — this wouldn't work with `pickle`.

---

# 🧩 What Can `dill` Save?

✅ Numbers, strings, lists, dicts (like pickle)
✅ Functions (including lambdas and closures)
✅ Classes, methods, and instances
✅ Many iterator objects
✅ Entire interactive sessions

❌ What it may still struggle with:

* Open file handles, sockets, database connections (not serializable in principle)
* Generator objects (their live execution frames can't be pickled)
* OS-level resources (threads, processes)

---

# 🧰 Core APIs

1. **Save one object**

   ```python
   with open("file.pkl", "wb") as f:
       dill.dump(obj, f)
   ```

2. **Load one object**

   ```python
   with open("file.pkl", "rb") as f:
       obj = dill.load(f)
   ```

3. **Save entire session (all variables)**

   ```python
   dill.dump_session("session.pkl")
   ```

4. **Load entire session**

   ```python
   dill.load_session("session.pkl")
   ```

---

# 🆚 `pickle` vs `dill`

| Feature             | `pickle` | `dill` |
| ------------------- | -------- | ------ |
| Save basic types    | ✅        | ✅      |
| Save custom classes | ✅        | ✅      |
| Save functions      | ❌        | ✅      |
| Save lambdas        | ❌        | ✅      |
| Save closures       | ❌        | ✅      |
| Save entire session | ❌        | ✅      |

So:

* If you just need **simple data** (numbers, arrays, dicts), `pickle` is fine.
* If you want to checkpoint your **whole working environment** (like Jupyter does), `dill` is the tool.

---

# ⚡ Practical Tip

You can make a quick utility file `session_utils.py`:

```python
import dill

def save_session(filename="session.pkl"):
    dill.dump_session(filename)

def load_session(filename="session.pkl"):
    dill.load_session(filename)
```

Then in any script:

```python
from session_utils import save_session, load_session

x = 123
save_session("my.pkl")

# Later...
load_session("my.pkl")
print(x)  # 123
```

---

👉 Would you like me to show you how to use `dill` specifically for a **step-by-step experiment workflow** (like Jupyter Notebook, where you checkpoint after each cell), or just for a one-shot "save everything at once" style?
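To make the "Save lambdas" row of the comparison concrete, here is a stdlib-only sketch (no `dill` install required, so it runs anywhere) of *why* `pickle` refuses a lambda: it serializes a function as a reference to an importable module-level name, and a lambda has none. The contrasting `dill` behavior in the final comment is stated, not executed, since `dill` may not be installed:

```python
import pickle

# Plain data pickles by value: the resulting bytes contain the data itself.
blob = pickle.dumps({"x": 42, "y": [1, 2, 3]})
assert pickle.loads(blob) == {"x": 42, "y": [1, 2, 3]}

# Functions pickle by *reference* only (module name + qualified name).
# A lambda has no importable name, so pickle always rejects it:
try:
    pickle.dumps(lambda n: n + 1)
except (pickle.PicklingError, AttributeError, TypeError) as e:
    print("pickle refused the lambda:", type(e).__name__)

# dill instead serializes the function's code object itself, which is
# why the equivalent dill.dumps(lambda n: n + 1) succeeds.
```

This by-reference design is also why loading a pickled custom-class instance fails unless the class is importable on the loading side — another gap that `dill`'s by-value serialization closes.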

2025/8/25
