Welcome to GenScrap

Your personal knowledge management platform for AI conversations. Organize, share, and discover insights like never before.


Create articles instantly, even without an account

Organize Knowledge
Create scrapbooks to organize your AI conversations and insights
AI Integration
Import conversations from ChatGPT, Claude, Gemini, and more
Visual Discovery
Pinterest-style masonry layout for beautiful content browsing
Share & Discover
Share your knowledge publicly or keep it private

Recent Posts

diskcache

about 1 month ago

Nice choice 👍 `diskcache` is a **library that stores its cache on disk**, so you can cache large amounts of data without pressuring memory. Its API is simple, and it is commonly used in web apps and for caching machine-learning preprocessing results.

---

## Installation

```bash
pip install diskcache
```

---

## Basic usage

### Key-value caching

```python
import diskcache as dc

cache = dc.Cache("./mycache")

# Store a value
cache["x"] = 42

# Retrieve a value
print(cache["x"])  # 42

# Retrieve with a default
print(cache.get("y", "default"))  # default
```

---

### Caching function results (decorator)

```python
import time
import diskcache as dc

cache = dc.Cache("./mycache")

@cache.memoize()
def slow_function(x):
    time.sleep(2)
    return x * x

print(slow_function(5))  # first call takes 2 seconds
print(slow_function(5))  # second call returns instantly from the cache
```

---

### TTL (cache entries with an expiry)

```python
cache.set("key", "value", expire=10)  # disappears after 10 seconds
print(cache.get("key"))  # value
```

---

### Example in a web app (Flask)

```python
from flask import Flask
import diskcache as dc

app = Flask(__name__)
cache = dc.Cache("./flask_cache")

@app.route("/square/<int:n>")
@cache.memoize(expire=60)  # cache for 60 seconds
def square(n):
    return str(n * n)

if __name__ == "__main__":
    app.run(debug=True)
```

---

## Feature summary

* **Fast** (much of it is implemented in C).
* **Persistent**: the cache survives process restarts.
* **Flexible**: TTL, size limits, and a memoize decorator.
* **Thread- and process-safe**, so it holds up under parallel workloads.

---

👉 What do you want to use `diskcache` for? (e.g., caching API responses, storing preprocessed data, speeding up web app responses.) I can suggest best practices to match.

Read more
Best mcp server development sdk?

about 1 month ago

If by “MCP server” you mean a server implementing the Model Context Protocol (MCP) to allow LLMs / AI agents to interact with external tools and data sources, here are some of the best SDKs & frameworks — with trade-offs, strengths, and caveats — to help you choose one. If you meant something else by “MCP server,” happy to adjust.

---

## What is MCP

The Model Context Protocol (MCP) is an open protocol by Anthropic that standardizes how large language models (LLMs) can integrate with external tools, data sources, file systems, prompts, etc. ([Wikipedia][1]) There are official SDKs in many languages, reference server implementations, and a growing ecosystem of tools. ([Model Context Protocol][2])

---

## What to Look for in a Good MCP Server SDK

Before picking one, consider:

| Criterion | Why it matters |
| --- | --- |
| **Language ecosystem / community** | You’ll want one in a language you and your team are comfortable with, with good availability of examples, integrations, and debugging support. |
| **Feature completeness** | Tools, prompt and resource exposure, transports (stdio, HTTP, SSE, etc.), good docs. |
| **Security / sandboxing / permission control** | MCP servers often give access to file systems, external APIs, etc. You need to control what an agent can do. |
| **Performance & latency** | Some tasks (web automation, file ops) need low latency; transport overheads matter. |
| **Ease of deployment** | How easy is it to host, package, and maintain (Docker, cloud, etc.)? |
| **Interoperability** | Ability to connect to existing tools, integrate with LLM agents / clients, and interface cleanly with other services. |

---

## Official SDKs & Languages

Anthropic maintains official SDKs that support both server and client building. Languages include:

* **TypeScript** ([Model Context Protocol][2])
* **Python** ([Model Context Protocol][2])
* **Go** ([Model Context Protocol][2])
* **Kotlin / Swift / Java / C# / Ruby / Rust / PHP** ([Model Context Protocol][2])

These SDKs implement the core MCP protocol features, such as:

* Exposing tools, resources, and prompts via MCP servers. ([Model Context Protocol][2])
* Building MCP clients to connect to servers. ([Model Context Protocol][2])
* Supporting different transports (local, remote) and ensuring protocol compliance. ([Model Context Protocol][2])

So using one of these official SDKs is usually the safest bet for compatibility and future support.

---

## Popular / Recommended Implementations & Servers

Depending on what your MCP server needs to do, some reference / community servers are more mature or better suited. Some examples:

| Use-Case | Good MCP Server / Implementation |
| --- | --- |
| **Filesystem operations (read/write, project file context, etc.)** | Filesystem MCP servers (often the official reference ones) are widely used. ([GitHub][3]) |
| **Git / GitHub integration** | Git / GitHub MCP server tools are well supported. Useful for code review, CI, and repo introspection. ([Digma][4]) |
| **Browser / web automation (UI testing / scraping)** | Puppeteer MCP and Playwright MCP are good choices. ([Digma][4]) |
| **Memory / context preservation across sessions** | Memory Bank MCP, Knowledge Graph Memory MCP. ([Digma][4]) |
| **Domain-specific / API / data integrations** | If you need database access (SQL, NoSQL), file storage, cloud services, etc., look for MCP servers that already support those tools, or build one using the SDKs. ([LogRocket Blog][5]) |

---

## Pros & Cons

**Pros:**

* Easy extensibility: you can expose new tools or resources as MCP endpoints.
* Standardization: once clients and servers follow MCP, tools become interoperable.
* Rapid prototyping: you can spin up servers for local file access, APIs, etc. fairly quickly.

**Cons / risks:**

* **Security risks** — giving AI agents access to powerful tools (filesystem, OS commands, network) can be dangerous if permissions are too open; sandbox and control access carefully.
* **Complexity** as the number of tools and tool APIs grows: tool discovery, versioning, schema drift, etc.
* **Latency / overhead** in some scenarios (especially remote transports).
* **Maintenance burden**, especially if you build custom servers.

---

## My Take: Best SDKs

If I were to pick the “best” SDKs / approach, here is what I’d recommend depending on scale, team size, and urgency:

* For a small team or prototype: **Python SDK** — very mature, many libraries and tools, quick to experiment with.
* For integration into existing large codebases or production, or if you need high performance and type safety: **TypeScript** / **Go**, especially if you have a Node.js or cloud-native stack.
* If your target is mobile (iOS / Android), the Kotlin or Swift SDKs may make sense.

---

If you tell me your constraints (language(s), deployment environment, what tools you need, security/privacy concerns), I can recommend the *best specific MCP SDK + server implementation* for *your use-case*. Do you want me to do that?

[1]: https://en.wikipedia.org/wiki/Model_Context_Protocol?utm_source=chatgpt.com "Model Context Protocol"
[2]: https://modelcontextprotocol.io/docs/sdk?utm_source=chatgpt.com "SDKs"
[3]: https://github.com/wong2/awesome-mcp-servers?utm_source=chatgpt.com "wong2/awesome-mcp-servers"
[4]: https://digma.ai/15-best-mcp-servers/?utm_source=chatgpt.com "15 Best MCP servers for developers in May 2025"
[5]: https://blog.logrocket.com/top-15-mcp-servers-ai-projects/?utm_source=chatgpt.com "The top 15 MCP servers for your AI projects"

Read more
[2508.20722] rStar2-Agent: Agentic Reasoning Technical Report

about 2 months ago

# [2508.20722] rStar2-Agent: Agentic Reasoning Technical Report

**URL:** https://www.arxiv.org/abs/2508.20722
**Captured:** 2025/9/6 17:39:22

---

Computer Science > Computation and Language

[Submitted on 28 Aug 2025]

rStar2-Agent: Agentic Reasoning Technical Report

Ning Shang, Yifei Liu, Yi Zhu, Li Lyna Zhang, Weijiang Xu, Xinyu Guan, Buze Zhang, Bingcheng Dong, Xudong Zhou, Bowen Zhang, Ying Xin, Ziming Miao, Scarlett Li, Fan Yang, Mao Yang

We introduce rStar2-Agent, a 14B math reasoning model trained with agentic reinforcement learning to achieve frontier-level performance. Beyond current long CoT, the model demonstrates advanced cognitive behaviors, such as thinking carefully before using Python coding tools and reflecting on code execution feedback to autonomously explore, verify, and refine intermediate steps in complex problem-solving. This capability is enabled through three key innovations that make agentic RL effective at scale: (i) an efficient RL infrastructure with a reliable Python code environment that supports high-throughput execution and mitigates the high rollout costs, enabling training on limited GPU resources (64 MI300X GPUs); (ii) GRPO-RoC, an agentic RL algorithm with a Resample-on-Correct rollout strategy that addresses the inherent environment noise from coding tools, allowing the model to reason more effectively in a code environment; (iii) an efficient agent training recipe that starts with non-reasoning SFT and progresses through multiple RL stages, yielding advanced cognitive abilities with minimal compute cost. To this end, rStar2-Agent boosts a pre-trained 14B model to state of the art in only 510 RL steps within one week, achieving average pass@1 scores of 80.6% on AIME24 and 69.8% on AIME25, surpassing DeepSeek-R1 (671B) with significantly shorter responses. Beyond mathematics, rStar2-Agent-14B also demonstrates strong generalization to alignment, scientific reasoning, and agentic tool-use tasks. Code and training recipes are available at this https URL.

Subjects: Computation and Language (cs.CL)
Cite as: arXiv:2508.20722 [cs.CL] (or arXiv:2508.20722v1 [cs.CL] for this version)
https://doi.org/10.48550/arXiv.2508.20722

Submission history
From: Li Lyna Zhang [view email]
[v1] Thu, 28 Aug 2025 12:45:25 UTC (1,217 KB)

Read more
Daytona Sandbox: New Possibilities for Development Environments

about 2 months ago

# Daytona Sandbox: New Possibilities for Development Environments

## What is Daytona Sandbox?

Daytona Sandbox is an innovative platform that lets developers build and share development environments in the cloud instantly. It removes the constraints of traditional local setups and delivers a unified development experience accessible from anywhere.

## Key Features

### 1. Instant environment creation

You can spin up a complete development environment from a Git repository in seconds. Dependency resolution and tool setup are automated, freeing you from the stress of "it doesn't run."

### 2. Sharing across teams

Environments can be shared with a simple URL, making code review, pair programming, and troubleshooting dramatically more efficient.

### 3. Guaranteed consistency

Because every developer works in the same environment, the "it works on my machine" problem is solved at the root.

## Impact on the Development Workflow

Daytona Sandbox is especially powerful in scenarios such as:

- **Starting new projects**: drastically shortens environment setup time
- **Onboarding**: new members can start contributing immediately
- **Remote development**: a device-independent development experience

## Summary

Daytona Sandbox is an important tool driving the democratization of development environments. A cloud-native development experience lets us focus on writing code and invest our time in more creative work instead of fighting environment problems.

For modern development teams, a standardized cloud development environment like Daytona Sandbox is no longer an option but a necessity.

Read more
E2B example in Python

about 2 months ago

A step-by-step E2B example in Python that shows stateful execution, installing packages, uploading a file, and doing a quick SQLite query — all inside a sandbox.

---

## 0) Install & set your key

```bash
pip install e2b-code-interpreter python-dotenv
export E2B_API_KEY="e2b_***"
```

E2B’s Python package is `e2b-code-interpreter`, and the SDK reads your `E2B_API_KEY` from the environment. ([PyPI][1])

---

## 1) Minimal stateful sandbox script

```python
# e2b_step_by_step.py
from e2b_code_interpreter import Sandbox

def main():
    # Spins up an isolated VM ("sandbox"); auto-shuts down when the block exits
    with Sandbox() as sbx:
        # --- A) Stateful Python: variables persist across calls ---
        sbx.run_code("x = 41")
        out = sbx.run_code("x += 1; x")  # reuses x
        print("x =", out.text)  # -> 42

        # --- B) Shell: install a package inside the sandbox ---
        sbx.commands.run("pip install --quiet pandas")  # ok to pip-install at runtime

        # --- C) Upload a CSV into the sandbox filesystem ---
        csv = "name,age\nTaro,30\nHanako,28\n"
        sbx.files.write("/home/user/people.csv", csv)

        # --- D) Analyze the CSV in Python (pandas) ---
        out = sbx.run_code(r'''
import pandas as pd
df = pd.read_csv("/home/user/people.csv")
df["age"].mean()
''')
        print("mean age:", out.text)

        # --- E) Quick SQLite session (persists objects across cells) ---
        sbx.run_code(r'''
import sqlite3
conn = sqlite3.connect("/home/user/demo.db")
cur = conn.cursor()
cur.execute("CREATE TABLE IF NOT EXISTS t(a INT)")
cur.executemany("INSERT INTO t(a) VALUES (?)", [(1,), (2,), (3,)])
conn.commit()
''')
        out = sbx.run_code(r'''
cur.execute("SELECT sum(a) FROM t")
cur.fetchone()[0]
''')
        print("sum =", out.text)

        # --- F) Peek at files (via Python, no shell needed) ---
        out = sbx.run_code('import os; print(os.listdir("/home/user"))')
        print("files in /home/user:\n", "".join(out.logs.stdout))

if __name__ == "__main__":
    main()
```

**Why this works / what to know**

* `Sandbox()` starts an isolated cloud VM where you can **run code repeatedly and reuse variables** (`run_code` shares state across calls). The returned `Execution` has `.text` (last expression) and `.logs.stdout` for prints. ([e2b.dev][2])
* You can **run shell commands** like `pip install …` via `sandbox.commands.run(...)`. ([Hugging Face][3])
* You can **upload files** into the sandbox with `sandbox.files.write(path, data)` (string/bytes/IO). ([e2b.dev][4])
* By default, a sandbox has a short idle timeout (~5 minutes) unless you keep using it. ([e2b.dev][5])

---

## 2) (Optional) Pause & resume the same sandbox later

If you want the *exact* process memory and filesystem to persist (even running kernels), E2B has **persistence** (public beta):

```python
from e2b_code_interpreter import Sandbox

sbx = Sandbox()   # create
sbx.beta_pause()  # save full state (ID: sbx.sandbox_id)

same = Sandbox.connect(sbx.sandbox_id)  # resume later and continue
```

This preserves files **and memory** (variables, processes) between sessions. ([e2b.dev][6])

---

## 3) Useful docs you’ll likely reference

* **Quickstart** (API key, first sandbox). ([e2b.dev][7])
* **Python SDK: `Sandbox.run_code`** (stateful cells; result fields). ([e2b.dev][2])
* **Filesystem read/write** (Python `files.write`, `files.read`, etc.). ([e2b.dev][4])
* **Install custom packages / runtime installs** (`pip install` in sandbox). ([e2b.dev][8])
* **Commands API (shell in sandbox)**. ([e2b.dev][9])
* **Cookbook examples** (more end-to-end Python demos). ([GitHub][10])

If you want, I can tailor this to your exact use case (e.g., connecting to Postgres/MySQL from inside the sandbox, or wiring this into your LLM/tool-calling flow).

[1]: https://pypi.org/project/e2b-code-interpreter/?utm_source=chatgpt.com "e2b-code-interpreter"
[2]: https://e2b.dev/docs/sdk-reference/code-interpreter-python-sdk/v1.2.1/sandbox "E2B - Code Interpreting for AI apps"
[3]: https://huggingface.co/docs/smolagents/en/tutorials/secure_code_execution?utm_source=chatgpt.com "Secure code execution"
[4]: https://e2b.dev/docs/sdk-reference/python-sdk/v1.5.2/sandbox_sync?utm_source=chatgpt.com "E2B - Code Interpreting for AI apps"
[5]: https://e2b.dev/docs/quickstart "E2B - Code Interpreting for AI apps"
[6]: https://e2b.dev/docs/sandbox/persistence "E2B - Code Interpreting for AI apps"
[7]: https://e2b.dev/docs/quickstart?utm_source=chatgpt.com "Running your first Sandbox"
[8]: https://e2b.dev/docs/quickstart/install-custom-packages?utm_source=chatgpt.com "Install custom packages"
[9]: https://e2b.dev/docs/commands?utm_source=chatgpt.com "Running commands in sandbox"
[10]: https://github.com/e2b-dev/e2b-cookbook?utm_source=chatgpt.com "e2b-dev/e2b-cookbook: Examples of using E2B"

Read more
Agentic workflow patterns - AWS Prescriptive Guidance

2 months ago

Agentic workflow patterns integrate modular software agents with structured large language model (LLM) workflows, enabling autonomous reasoning and action. While inspired by traditional serverless and event-driven architectures, these patterns shift core logic from static code to LLM-augmented agents, providing enhanced adaptability and contextual decision-making. This evolution transforms conventional cloud architectures from deterministic systems to ones capable of dynamic interpretation and intelligent augmentation, while maintaining fundamental principles of scalability and responsiveness.

###### In this section

- [From event-driven to cognition-augmented systems](https://docs.aws.amazon.com/prescriptive-guidance/latest/agentic-ai-patterns/from-event-driven-to-cognition-augmented-systems.html)
- [Prompt chaining saga patterns](https://docs.aws.amazon.com/prescriptive-guidance/latest/agentic-ai-patterns/prompt-chaining-saga-patterns.html)
- [Routing dynamic dispatch patterns](https://docs.aws.amazon.com/prescriptive-guidance/latest/agentic-ai-patterns/routing-dynamic-dispatch-patterns.html)
- [Parallelization and scatter-gather patterns](https://docs.aws.amazon.com/prescriptive-guidance/latest/agentic-ai-patterns/parallelization-and-scatter-gather-patterns.html)
- [Saga orchestration patterns](https://docs.aws.amazon.com/prescriptive-guidance/latest/agentic-ai-patterns/saga-orchestration-patterns.html)
- [Evaluator reflect-refine loop patterns](https://docs.aws.amazon.com/prescriptive-guidance/latest/agentic-ai-patterns/evaluator-reflect-refine-loop-patterns.html)
- [Designing agentic workflows on AWS](https://docs.aws.amazon.com/prescriptive-guidance/latest/agentic-ai-patterns/designing-agentic-workflows-on-aws.html)
- [Conclusion](https://docs.aws.amazon.com/prescriptive-guidance/latest/agentic-ai-patterns/conclusion.html)
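To make the prompt-chaining idea above concrete, here is a minimal, hypothetical sketch in Python: each step feeds its output into the next prompt. The `call_llm` stub stands in for a real model invocation (e.g., via Amazon Bedrock) and is not part of the AWS guidance itself:

```python
# Hypothetical prompt-chaining sketch. `call_llm` is a stub standing in
# for a hosted-model call; real code would invoke an LLM API here.
from typing import Callable, List

def call_llm(prompt: str) -> str:
    # Stub: echoes the prompt so the chaining structure is visible.
    return f"[response to: {prompt}]"

def run_chain(initial_input: str, steps: List[Callable[[str], str]]) -> str:
    """Run each step in order, passing the previous output forward."""
    output = initial_input
    for step in steps:
        output = step(output)
    return output

# Each step wraps the previous output in a new prompt.
steps = [
    lambda text: call_llm(f"Summarize: {text}"),
    lambda text: call_llm(f"Extract action items from: {text}"),
]

result = run_chain("raw meeting transcript", steps)
print(result)
```

In a production saga-style chain, each step would also record its output so a failed step can be retried or compensated without re-running the whole chain.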

Read more

Ready to organize your AI knowledge?

Join GenScrap today and transform how you manage your digital insights.

Create Your Account