Enhanced Architecture MCP

Enhanced Model Context Protocol (MCP) servers with professional accuracy, tool safety, user preferences, and intelligent context monitoring.

Overview

This repository contains a collection of MCP servers that provide advanced architecture capabilities for AI assistants, including:

  • Professional Accuracy Enforcement - Prevents marketing language and ensures factual descriptions
  • Tool Safety Protocols - Blocks prohibited operations and validates parameters
  • User Preference Management - Stores and applies communication and aesthetic preferences
  • Intelligent Context Monitoring - Automatic token estimation and threshold warnings
  • Multi-MCP Orchestration - Coordinated workflows across multiple servers

Active Servers

Enhanced Architecture Server (enhanced_architecture_server_context.js)

Primary server with complete feature set:

  • Professional accuracy verification
  • Tool safety enforcement
  • User preference storage/retrieval
  • Context token tracking
  • Pattern storage and learning
  • Violation logging and metrics

Chain of Thought Server (cot_server.js)

Reasoning strand management:

  • Create and manage reasoning threads
  • Branch reasoning paths
  • Complete strands with conclusions
  • Cross-reference reasoning history
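The strand operations above can be sketched as simple bookkeeping. This is an illustrative sketch only; the class and method names are assumptions, not the server's actual tool API.

```javascript
// Hypothetical sketch of reasoning-strand management: create a thread,
// branch an alternative path, and close a strand with a conclusion.
class StrandManager {
  constructor() {
    this.strands = new Map();
    this.nextId = 1;
  }
  // Start a new reasoning thread on a topic.
  create(topic) {
    const id = this.nextId++;
    this.strands.set(id, { id, topic, steps: [], parent: null, conclusion: null });
    return id;
  }
  // Fork an existing strand to explore an alternative path.
  branch(fromId) {
    const parent = this.strands.get(fromId);
    const id = this.create(parent.topic);
    this.strands.get(id).parent = fromId;
    return id;
  }
  // Close a strand with its conclusion.
  conclude(id, conclusion) {
    this.strands.get(id).conclusion = conclusion;
  }
}
```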

Local AI Server (local-ai-server.js)

Local model integration via Ollama:

  • Delegate heavy reasoning tasks
  • Token-efficient processing
  • Hybrid local+cloud analysis
  • Model capability queries
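Delegation to a local model typically goes through Ollama's HTTP API (`POST /api/generate`). The helper names below are assumptions for illustration; the request shape matches Ollama's documented non-streaming generate endpoint, and the fetch call requires Node 18+.

```javascript
// Build a non-streaming generate request for a local Ollama instance.
// Helper names are illustrative, not this repository's actual functions.
function buildOllamaRequest(model, prompt) {
  return {
    url: "http://localhost:11434/api/generate",
    options: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ model, prompt, stream: false }),
    },
  };
}

// Send the prompt and return the completion text. Ollama places the
// generated text in the `response` field of the JSON reply.
async function delegateToLocalAI(model, prompt) {
  const { url, options } = buildOllamaRequest(model, prompt);
  const res = await fetch(url, options);
  const data = await res.json();
  return data.response;
}
```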

Installation

  1. Install dependencies:
    npm install
  2. Configuration: Update your Claude Desktop configuration to include the servers:
    {
      "mcpServers": {
        "enhanced-architecture": {
          "command": "node",
          "args": ["D:\\arch_mcp\\enhanced_architecture_server_context.js"],
          "env": {}
        },
        "cot-server": {
          "command": "node",
          "args": ["D:\\arch_mcp\\cot_server.js"],
          "env": {}
        },
        "local-ai-server": {
          "command": "node",
          "args": ["D:\\arch_mcp\\local-ai-server.js"],
          "env": {}
        }
      }
    }
  3. Local AI Setup (Optional): Install Ollama and pull models:
    ollama pull llama3.1:8b

Usage

Professional Accuracy

Automatically prevents:

  • Marketing language ("revolutionary", "cutting-edge")
  • Competitor references
  • Inflated technical specifications
  • Promotional tone
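A check like this can be sketched as a simple term filter. The term list and function name below are illustrative assumptions, not the server's actual implementation.

```javascript
// Hypothetical marketing-language filter: flag any prohibited term
// that appears in the text and report whether it passes.
const PROHIBITED_TERMS = ["revolutionary", "cutting-edge", "best-in-class", "game-changing"];

function checkAccuracy(text) {
  const lower = text.toLowerCase();
  const violations = PROHIBITED_TERMS.filter((term) => lower.includes(term));
  return { ok: violations.length === 0, violations };
}
```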

Context Monitoring

Tracks conversation tokens across:

  • Document attachments
  • Artifacts and code
  • Tool calls and responses
  • System overhead

Provides warnings at 80% and 90% of the capacity limit.
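The thresholds above can be sketched with the common rough heuristic of ~4 characters per token. The limit constant and function names are assumptions for illustration, not the server's actual values.

```javascript
// Illustrative token estimator and threshold check mirroring the
// 80%/90% warnings described above. The 200k limit is an assumption.
const CONTEXT_LIMIT = 200000;

function estimateTokens(text) {
  // Rough heuristic: ~4 characters per token.
  return Math.ceil(text.length / 4);
}

function contextStatus(usedTokens, limit = CONTEXT_LIMIT) {
  const ratio = usedTokens / limit;
  if (ratio >= 0.9) return "critical";
  if (ratio >= 0.8) return "warning";
  return "ok";
}
```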

User Preferences

Stores preferences for:

  • Communication style (brief professional)
  • Aesthetic approach (minimal)
  • Message format requirements
  • Tool usage patterns

Multi-MCP Workflows

Coordinates complex tasks:

  1. Create CoT reasoning strand
  2. Delegate analysis to local AI
  3. Store insights in memory
  4. Update architecture patterns

Key Features

  • Version-Free Operation - No version dependencies, capability-based reporting
  • Empirical Validation - 60+ validation gates for decision-making
  • Token Efficiency - Intelligent context management and compression
  • Professional Standards - Enterprise-grade accuracy and compliance
  • Cross-Session Learning - Persistent pattern storage and preference evolution

File Structure

D:\arch_mcp\
├── enhanced_architecture_server_context.js   # Main server
├── cot_server.js                             # Reasoning management
├── local-ai-server.js                        # Local AI integration
├── data/                                     # Runtime data (gitignored)
├── backup/                                   # Legacy server versions
└── package.json                              # Node.js dependencies

Development

Architecture Principles

  • Dual-System Enforcement - MCP tools + text document protocols
  • Empirical Grounding - Measurable validation over assumptions
  • User-Centric Design - Preference-driven behavior adaptation
  • Professional Standards - Enterprise accuracy and safety requirements

Adding New Features

  1. Update server tool definitions
  2. Implement handler functions
  3. Add empirical validation gates
  4. Update user preference options
  5. Test cross-MCP coordination
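Step 1 above follows the Model Context Protocol's tool-definition shape: a name, a description, and a JSON Schema for the input. The tool below is a hypothetical example, not one of this repository's actual tools.

```javascript
// Example MCP tool definition (name, description, JSON Schema input).
// The tool itself is illustrative only.
const exampleTool = {
  name: "check_accuracy",
  description: "Scan text for prohibited marketing language",
  inputSchema: {
    type: "object",
    properties: {
      text: { type: "string", description: "Text to validate" },
    },
    required: ["text"],
  },
};
```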

Troubleshooting

Server Connection Issues:

  • Check Node.js version compatibility
  • Verify file paths in configuration
  • Review server logs for syntax errors

Context Tracking:

  • Monitor token estimation accuracy
  • Adjust limits for conversation length
  • Use reset tools for fresh sessions

Performance:

  • Local AI requires Ollama installation
  • Context monitoring adds ~50ms overhead
  • Pattern storage optimized for < 2ms response

License

MIT License - see individual files for specific licensing terms.

Contributing

Architecture improvements welcome. Focus areas:

  • Enhanced token estimation accuracy
  • Additional validation gates
  • Cross-domain pattern recognition
  • Performance optimization