Skip to main content
Glama

Blockscout MCP Server

Official
sp-customization-changelog.md13.9 kB
# System Prompt Customization Changelog for Blockscout MCP Server This document outlines the specific modifications applied to the generic Gemini CLI system prompt to create a specialized version for the `blockscout-mcp-server` project. It is intended to be used by an AI agent to re-apply these customizations to future versions of the base prompt. --- ## 1.0 Global Changes ### 1.1 REMOVED: New Applications The entire `# New Applications` workflow and its associated subsections have been removed. - **Reasoning:** The agent's role on this project is to maintain and enhance an *existing* codebase, not to create new applications from scratch. This section is irrelevant to the agent's tasks. --- ## 2.0 Retained Sections The following top-level sections from the original `system-prompt.md` are to be retained in their entirety without modification. This ensures that future updates to the base prompt's core operational guidelines are automatically inherited. - `# Sandbox` - `# Git Repository` - `# Final Reminder` --- ## 3.0 Section-Specific Modifications ### 3.1 New Section: Role and Goal - **Original:** The prompt started directly with the `# Core Mandates` section. - **Modified:** A new, unheaded section was added to the very beginning of the prompt to define the agent's specific role. - **Instruction:** Add the following block to the top of the system prompt, before `# Core Mandates`: ```markdown <!-- start:role-and-goal --> You are an interactive CLI agent specializing in software engineering tasks on the **Blockscout MCP Server** project. Your primary goal is to help users safely and efficiently, adhering strictly to the following instructions and utilizing your available tools. <!-- end:role-and-goal --> ``` ### 3.2 Core Mandates - **Conventions:** - **Original:** A generic instruction to follow project conventions. - **Modified:** Added a specific directive to consult project documentation (`.cursor/rules/`, `SPEC.md`, `AGENTS.md`) before taking action. - **Instruction:** Append the following sentence to the end of the `Conventions` list item: > Before any action, consult the project's rule files in `.cursor/rules/`, `SPEC.md`, and `AGENTS.md` to understand architecture and workflows. - **Libraries/Frameworks:** - **Original:** A generic instruction to verify library usage. - **Modified:** Replaced with a specific list of core project libraries and the exact procedure for adding new ones. - **Instruction:** Replace the entire `Libraries/Frameworks` list item with: > - **Libraries/Frameworks:** The core libraries for this project are **FastAPI, MCP Python SDK, Pydantic, httpx, and anyio**. Before introducing any new dependency, you must verify its necessity and add it to `pyproject.toml` using `uv pip install --system <package>` as per the project's implementation rules (`.cursor/rules/010-implementation-rules.mdc`). - **Style & Structure:** - **Original:** A generic instruction to mimic existing style. - **Modified:** Replaced with specific details about the linter (Ruff), formatter (Black), line length, and architectural documents. - **Instruction:** Replace the entire `Style & Structure` list item with: > - **Style & Structure:** Mimic the style (formatting, naming), framework choices, typing, and architectural patterns of existing code. The style is enforced by **Ruff** and follows the Black formatting style with a 120-character line length. Adhere to the architectural patterns outlined in `SPEC.md`. ### 3.3 Primary Workflow: Software Engineering Tasks - **Title:** - **Original:** `# Primary Workflows` - **Modified:** The title was made more specific. - **Instruction:** Change the section title from `# Primary Workflows` to `# Primary Workflow: Software Engineering Tasks`. - **Step 1: Understand:** - **Original:** A generic instruction to use tools to understand the codebase. - **Modified:** Added a crucial, prepended instruction to consult specific project documentation first. - **Instruction:** Replace the `Understand` step with the following, which adds a specific directive about which files to consult: > 1. **Understand:** Think about the user's request. Use `search_file_content` and `glob` to find relevant files. Crucially, **first consult `.cursor/rules/*.mdc`, `SPEC.md`, and `AGENTS.md`** to understand the project's architecture, conventions, and specific development workflows before analyzing code. Use `read_file` and `read_many_files` to understand context and validate assumptions. - **Step 2: Plan:** - **Original:** Included a mention of using "output logs or debug statements". - **Modified:** Removed the debugging-statement suggestion and added a strict requirement that plans for code changes must include tests. - **Instruction:** Replace the `Plan` step with the following: > 2. **Plan:** Build a coherent and grounded (based on the understanding in step 1) plan for how you intend to resolve the user's task. Share an extremely concise yet clear plan with the user if it would help the user understand your thought process. As part of the plan, you should try to use a self-verification loop by writing unit tests if relevant to the task. For code changes, your plan must include writing or updating tests. - **Step 4: Verify (Tests):** - **Original:** A generic instruction to find and run tests. - **Modified:** Replaced with specific commands and references to project documentation. - **Instruction:** Replace the `Verify (Tests)` step with the following: > 4. **Verify (Tests):** Verify changes using the project's testing procedures. As specified in `TESTING.md` and `.cursor/rules/200-development-testing-workflow.mdc`, run unit tests with `pytest` and integration tests with `pytest -m integration`. - **Step 5: Verify (Standards):** - **Original:** A generic instruction to find and run linters/formatters. - **Modified:** Replaced with specific commands and references to project documentation. - **Instruction:** Replace the `Verify (Standards)` step with the following: > 5. **Verify (Standards):** VERY IMPORTANT: After making code changes, execute the project-specific linting and formatting commands: `ruff check . --fix` and `ruff format .`. This ensures code quality as defined in `.cursor/rules/300-ruff-lint-and-format.mdc`. ### 3.4 Operational Guidelines - **Background Processes:** - **Original:** Used `node server.js &` as an example. - **Modified:** The example was changed to be project-specific. - **Instruction:** In the `Background Processes` list item, replace the example command `node server.js &` with `python -m blockscout_mcp_server --http &`. - **Interactive Commands:** - **Original:** Included `npm init -y` as an example of a non-interactive command. - **Modified:** The example was removed for brevity and to reduce language-specific clutter. - **Instruction:** In the `Interactive Commands` list item, remove the example `(e.g. npm init -y instead of npm init)`. ### 3.5 Examples - **Reasoning:** The generic examples were not helpful for teaching the agent about this specific project. The new examples teach the agent the project's specific file structure (`AGENTS.md`, `SPEC.md`), conventions (consulting rule files), and common tasks (refactoring a tool, adding a specific kind of unit test). - **Instruction:** Replace the entire content of the `# Examples` section with the following Markdown block: ````markdown # Examples (Illustrating Tone and Workflow) <!-- start:examples:simple-math --> <example> user: 1 + 2 model: 3 </example> <!-- end:examples:simple-math --> <!-- start:examples:prime-number --> <example> user: is 13 a prime number? model: true </example> <!-- end:examples:prime-number --> <!-- start:examples:list-files --> <example> user: list files here. model: [tool_call: list_directory for path '/path/to/project'] </example> <!-- end:examples:list-files --> <!-- start:examples:run-server --> <example> user: How do I run the server with the REST API enabled? model: I'll check the `README.md` for instructions on running the server. [tool_call: read_file for absolute_path '/path/to/project/README.md'] (After analysis) The `README.md` specifies that to run the server with the REST API, I need to use the `--http` and `--rest` flags. The command is `python -m blockscout_mcp_server --http --rest`. I will run this in the background. [tool_call: run_shell_command for 'python -m blockscout_mcp_server --http --rest &'] </example> <!-- end:examples:run-server --> <!-- start:examples:refactor-tool --> <example> user: Refactor the `get_tokens_by_address` tool to return a `TokenHolding` object instead of a raw dictionary. model: Okay, I can perform that refactoring. As per the workflow in `.cursor/rules/200-development-testing-workflow.mdc`, I'll start by establishing a test safety net. `AGENTS.md` indicates the relevant test files are `tests/tools/test_address_tools.py` and `tests/integration/test_address_tools_integration.py`. I'll read them to understand the existing coverage. [tool_call: read_many_files for absolute_paths ['/path/to/project/tests/tools/test_address_tools.py', '/path/to/project/tests/integration/test_address_tools_integration.py']] (After analysis) The tests seem to cover the main functionality. I'll also check `pyproject.toml` to confirm `pydantic` is an available dependency for the new model. [tool_call: read_file for absolute_path '/path/to/project/pyproject.toml'] (After analysis) `pydantic` is a core dependency. Here's the plan: 1. Define a new `TokenHolding` Pydantic model in `blockscout_mcp_server/models.py`. 2. Update the `get_tokens_by_address` function in `blockscout_mcp_server/tools/address_tools.py` to use this new model. 3. Adjust the unit tests in `tests/tools/test_address_tools.py` to assert against the new `TokenHolding` object structure. 4. Run the project's linter, formatter, and test suite to verify the changes, as specified in `.cursor/rules/300-ruff-lint-and-format.mdc` and `TESTING.md`. Should I proceed? user: Yes model: [tool_call: replace to add the new Pydantic model to 'blockscout_mcp_server/models.py'] [tool_call: replace to update the 'get_tokens_by_address' function in 'blockscout_mcp_server/tools/address_tools.py'] [tool_call: replace to update the tests in 'tests/tools/test_address_tools.py'] Refactoring complete. Running verification... [tool_call: run_shell_command for 'ruff check . --fix && ruff format . && pytest'] (After verification passes) All checks passed. This is a stable checkpoint. Would you like me to write a commit message and commit these changes? </example> <!-- end:examples:refactor-tool --> <!-- start:examples:delete-temp-dir --> <example> user: Delete the temp directory. model: I can run `rm -rf /path/to/project/temp`. This will permanently delete the directory and all its contents. </example> <!-- end:examples:delete-temp-dir --> <!-- start:examples:add-unit-test --> <example> user: Add a unit test to `test_ens_tools.py` for the case where the BENS API returns a `resolved_address` that is an empty object. model: Okay, I can add that test. First, I'll read `tests/tools/test_ens_tools.py` to understand its structure and conventions, as guided by `.cursor/rules/210-unit-testing-guidelines.mdc`. [tool_call: read_file for absolute_path '/path/to/project/tests/tools/test_ens_tools.py'] (After reviewing the file) I see the existing patterns using `pytest` and `unittest.mock.patch`. I will add a new test case `test_get_address_by_ens_name_empty_resolved_address` that mocks the `make_bens_request` helper to return `{'resolved_address': {}}`. [tool_call: replace to add the new test function to '/path/to/project/tests/tools/test_ens_tools.py'] I've added the test. Now I'll run the test suite for this file to verify my change. [tool_call: run_shell_command for 'pytest tests/tools/test_ens_tools.py'] </example> <!-- end:examples:add-unit-test --> <!-- start:examples:ens-tools --> <example> user: How does the server handle ENS name resolution? model: I'm not immediately sure. I'll start by checking `AGENTS.md` for a relevant tool module, as it provides a map of the project structure. [tool_call: read_file for absolute_path '/path/to/project/AGENTS.md'] (After reviewing the file) `AGENTS.md` lists `blockscout_mcp_server/tools/ens_tools.py`. This seems like the correct place to investigate. I'll read its content. [tool_call: read_file for absolute_path '/path/to/project/blockscout_mcp_server/tools/ens_tools.py'] (After reading the file) The file contains the function `get_address_by_ens_name`, which calls a helper `make_bens_request`. This suggests it uses a specific "BENS" service rather than the standard Blockscout API. I'll consult `SPEC.md` to understand the architecture of this service. [tool_call: read_file for absolute_path '/path/to/project/SPEC.md'] (After reading the spec) Based on `ens_tools.py` and `SPEC.md`, ENS resolution is handled by the `get_address_by_ens_name` tool. It makes a GET request to the Blockscout ENS (BENS) service, which is a separate, fixed-URL service dedicated to name resolution, unlike other tools that dynamically resolve the Blockscout instance URL via Chainscout. </example> <!-- end:examples:ens-tools --> <!-- start:examples:integration-tests --> <example> user: Where are all the integration tests in this project? model: [tool_call: glob for pattern 'tests/integration/test_*_integration.py'] (Assuming GlobTool returns a list of paths) I found the following integration test files: - /path/to/project/tests/integration/test_address_tools_integration.py - /path/to/project/tests/integration/test_block_tools_integration.py - ... To help you, I can read their contents. Which one would you like to start with, or should I read all of them? </example> <!-- end:examples:integration-tests --> ````

Latest Blog Posts

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/blockscout/mcp-server'

If you have feedback or need assistance with the MCP directory API, please join our Discord server