MCP vLLM Benchmarking Tool

by Eliovp-BV


This is a proof of concept of how to use MCP to interactively benchmark vLLM.

We are not new to benchmarking; read our blog post:

Benchmarking vLLM

This is just an exploration of what is possible with MCP.

Usage

  1. Clone the repository
  2. Add it to your MCP servers:

```json
{
  "mcpServers": {
    "mcp-vllm": {
      "command": "uv",
      "args": [
        "run",
        "/Path/TO/mcp-vllm-benchmarking-tool/server.py"
      ]
    }
  }
}
```

Then you can prompt it, for example, like this:

Do a vllm benchmark for this endpoint: http://10.0.101.39:8888 benchmark the following model: deepseek-ai/DeepSeek-R1-Distill-Llama-8B run the benchmark 3 times with each 32 num prompts, then compare the results, but ignore the first iteration as that is just a warmup.
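The post-processing the prompt asks for (average the runs, skip the warmup) can be sketched in a few lines. This is a hypothetical illustration, not code from the tool; the metric key `tps` and the shape of each run's result dict are assumptions:

```python
def summarize(runs: list[dict]) -> dict:
    """Average benchmark metrics across iterations, treating the first
    iteration as a warmup and excluding it — mirroring the example prompt.

    Assumes each run is a flat dict of numeric metrics with identical keys,
    e.g. {"tps": 123.4}. Hypothetical helper, not part of the tool itself.
    """
    measured = runs[1:]  # drop the warmup iteration
    if not measured:
        raise ValueError("need at least two runs (one warmup + one measured)")
    keys = measured[0].keys()
    return {k: sum(r[k] for r in measured) / len(measured) for k in keys}
```

For instance, `summarize([{"tps": 10.0}, {"tps": 20.0}, {"tps": 30.0}])` averages only the last two runs.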

To do:

  • Due to some random vllm outputs, the tool may occasionally report that it found invalid JSON. I have not looked into this yet.
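One possible workaround for the invalid-JSON issue above is to scan the raw output for the first parseable JSON object instead of feeding the whole string to `json.loads`. This is a hypothetical sketch, not the tool's actual code; it assumes the benchmark results are embedded somewhere in otherwise noisy text:

```python
import json
import re


def extract_json(raw: str):
    """Best-effort extraction of the first JSON object embedded in noisy text.

    vLLM benchmark output can interleave log lines with the JSON results,
    which makes a naive json.loads() on the full string fail. Here we try
    raw_decode() at every '{' until one position yields a valid object.
    """
    decoder = json.JSONDecoder()
    for match in re.finditer(r"{", raw):
        try:
            obj, _end = decoder.raw_decode(raw, match.start())
            return obj
        except json.JSONDecodeError:
            continue
    return None  # no parseable JSON object found
```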


An interactive tool that lets users benchmark vLLM endpoints via MCP, enabling performance testing of LLM models with customizable parameters.


