Ontology MCP

by bigdata-coss
Ontology MCP is a Model Context Protocol (MCP) server that connects GraphDB's SPARQL endpoints and Ollama models to Claude. This tool allows Claude to query and manipulate ontology data and leverage various AI models.

Ontology MCP Overview

Key Features

SPARQL-related functions

  • Execute SPARQL query ( mcp_sparql_execute_query )

  • Execute SPARQL update query ( mcp_sparql_update )

  • List repositories ( mcp_sparql_list_repositories )

  • List graphs ( mcp_sparql_list_graphs )

  • Get resource information ( mcp_sparql_get_resource_info )
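Under the hood, these tools speak the standard SPARQL Protocol over HTTP to GraphDB. As a rough sketch of what an mcp_sparql_execute_query call translates to (the endpoint URL and repository name below are assumptions that match the setup steps later in this guide):

```python
import urllib.parse
import urllib.request

def build_sparql_request(endpoint: str, repository: str, query: str) -> urllib.request.Request:
    """Build a SPARQL Protocol GET request against a GraphDB repository."""
    url = f"{endpoint}/repositories/{repository}?" + urllib.parse.urlencode({"query": query})
    # Ask GraphDB for JSON results rather than the default XML
    return urllib.request.Request(url, headers={"Accept": "application/sparql-results+json"})

# Example: fetch ten triples from the repository created in the setup steps below
req = build_sparql_request(
    "http://localhost:7200",
    "schemaorg-current-https",
    "SELECT ?s ?p ?o WHERE { ?s ?p ?o } LIMIT 10",
)
print(req.full_url)
```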

Ollama-related functions

  • Run the model ( mcp_ollama_run )

  • Check model information ( mcp_ollama_show )

  • Download model ( mcp_ollama_pull )

  • Get model list ( mcp_ollama_list )

  • Delete model ( mcp_ollama_rm )

  • Chat completion ( mcp_ollama_chat_completion )

  • Check container status ( mcp_ollama_status )
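mcp_ollama_chat_completion forwards chat messages to a local Ollama instance. A minimal sketch of the JSON body that Ollama's /api/chat endpoint expects (the model name llama3 is an assumption; use any model you have pulled):

```python
import json

def ollama_chat_payload(model: str, messages: list, stream: bool = False) -> str:
    """Serialize a chat request body for Ollama's /api/chat endpoint."""
    return json.dumps({"model": model, "messages": messages, "stream": stream})

body = ollama_chat_payload("llama3", [
    {"role": "user", "content": "What is an ontology?"},
])
print(body)
```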

OpenAI-related functions

  • Chat completion ( mcp_openai_chat )

  • Create image ( mcp_openai_image )

  • Text-to-speech ( mcp_openai_tts )

  • Speech-to-text ( mcp_openai_transcribe )

  • Generate embedding ( mcp_openai_embedding )
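The vectors returned by mcp_openai_embedding are typically compared with cosine similarity to measure how related two texts are. A self-contained sketch of that comparison:

```python
import math

def cosine_similarity(a: list, b: list) -> float:
    """Cosine of the angle between two embedding vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy vectors; real embeddings have hundreds or thousands of dimensions
print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # → 1.0
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # → 0.0
```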

Google Gemini-related functions

  • Generate text ( mcp_gemini_generate_text )

  • Chat completion ( mcp_gemini_chat_completion )

  • Get model list ( mcp_gemini_list_models )

  • ~~Generate images ( mcp_gemini_generate_images ) - Using Imagen model~~ (currently disabled)

  • ~~Generate videos ( mcp_gemini_generate_videos ) - Using Veo models~~ (currently disabled)

  • ~~Generate multimodal content ( mcp_gemini_generate_multimodal_content )~~ (currently disabled)

Note : Gemini's image generation, video generation, and multimodal content generation features are currently disabled due to API compatibility issues.
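mcp_gemini_generate_text ultimately posts to the Gemini generateContent REST endpoint. A sketch of the request body that endpoint expects (the prompt text is illustrative):

```python
import json

def gemini_generate_text_payload(prompt: str) -> str:
    """JSON body for POST .../models/{model}:generateContent."""
    return json.dumps({"contents": [{"parts": [{"text": prompt}]}]})

body = gemini_generate_text_payload("Summarize the Schema.org Person type.")
print(body)
```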

Supported Gemini Models

| Model variant | Input(s) | Output | Optimized for |
| --- | --- | --- | --- |
| Gemini 2.5 Flash Preview (`gemini-2.5-flash-preview-04-17`) | Audio, images, video, text | Text | Adaptive thinking, cost-effectiveness |
| Gemini 2.5 Pro Preview (`gemini-2.5-pro-preview-03-25`) | Audio, images, video, text | Text | Enhanced thinking and reasoning, multimodal understanding, advanced coding |
| Gemini 2.0 Flash (`gemini-2.0-flash`) | Audio, images, video, text | Text, images (experimental), audio (coming soon) | Next-generation capabilities, speed, thinking, real-time streaming, multimodal generation |
| Gemini 2.0 Flash-Lite (`gemini-2.0-flash-lite`) | Audio, images, video, text | Text | Cost-effectiveness and low latency |
| Gemini 1.5 Flash (`gemini-1.5-flash`) | Audio, images, video, text | Text | Fast, versatile performance across a variety of tasks |
| Gemini 1.5 Flash-8B (`gemini-1.5-flash-8b`) | Audio, images, video, text | Text | High-volume, lower-intelligence tasks |
| Gemini 1.5 Pro (`gemini-1.5-pro`) | Audio, images, video, text | Text | Complex reasoning tasks requiring more intelligence |
| Gemini Embedding (`gemini-embedding-exp`) | Text | Text embeddings | Measuring the relatedness of text strings |
| Imagen 3 (`imagen-3.0-generate-002`) | Text | Images | Google's most advanced image generation model |
| Veo 2 (`veo-2.0-generate-001`) | Text, images | Video | High-quality video generation |
| Gemini 2.0 Flash Live (`gemini-2.0-flash-live-001`) | Audio, video, text | Text, audio | Low-latency, bidirectional voice and video interaction |

HTTP request functions

  • Execute HTTP requests ( mcp_http_request ) - communicate with external APIs using HTTP methods such as GET, POST, PUT, and DELETE
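For illustration, a tool call to mcp_http_request might carry arguments shaped like the following. The field names here are assumptions for the sake of the example, not the server's documented schema:

```python
import json

# Hypothetical argument payload for an mcp_http_request tool call;
# the field names are illustrative assumptions, not the documented schema.
args = {
    "method": "POST",
    "url": "https://api.example.com/items",
    "headers": {"Content-Type": "application/json"},
    "body": json.dumps({"name": "example item"}),
}
print(json.dumps(args, indent=2))
```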

Get started

1. Clone the repository

git clone https://github.com/bigdata-coss/agent_mcp.git
cd agent_mcp

2. Run the GraphDB Docker container

Start the GraphDB server by running the following command from the project root directory:

docker-compose up -d

The GraphDB web interface runs at http://localhost:7200.

3. Build and run the MCP server

# Install dependencies
npm install

# Build the project
npm run build

# Run the server (for testing; not needed when using Claude Desktop)
node build/index.js

4. Import RDF data

Go to the GraphDB web interface ( http://localhost:7200 ) and do the following:

  1. Create a repository:

    • “Setup” → “Repositories” → “Create new repository”

    • Repository ID: schemaorg-current-https (or whatever name you want)

    • Repository title: "Schema.org"

    • Click "Create"

  2. Get sample data:

    • Select the repository you created

    • “Import” → “RDF” → “Upload RDF files”

    • Upload an example file from the imports directory (e.g. imports/example.ttl )

    • Click "Import"

Note : The project includes example RDF files in the imports directory.
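The exact contents of the bundled example files are not reproduced here, but an illustrative Turtle file of the kind you could import looks like this (the ex: resources are invented for the example):

```turtle
@prefix ex: <http://example.org/> .
@prefix schema: <https://schema.org/> .

ex:alice a schema:Person ;
    schema:name "Alice" ;
    schema:knows ex:bob .

ex:bob a schema:Person ;
    schema:name "Bob" .
```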

5. Set up Claude Desktop

To use Ontology MCP in Claude Desktop, you need to update the MCP settings file:

  1. Open the Claude Desktop settings file:

    • Windows: %AppData%\Claude\claude_desktop_config.json

    • macOS: ~/Library/Application Support/Claude/claude_desktop_config.json

    • Linux: ~/.config/Claude/claude_desktop_config.json

  2. Add the following settings:

{
  "mcpServers": {
    "a2a-ontology-mcp": {
      "command": "node",
      "args": ["E:\\codes\\a2a_mcp\\build"],
      "env": {
        "SPARQL_ENDPOINT": "http://localhost:7200",
        "OPENAI_API_KEY": "your-api-key",
        "GEMINI_API_KEY": "your-api-key"
      },
      "disabled": false,
      "autoApprove": []
    }
  }
}

IMPORTANT : Change the path in `args` to the actual absolute path of your project's build directory.

  3. Restart Claude Desktop

License

This project is provided under the MIT License. See the LICENSE file for details.
