# START HERE - Quick Setup Guide
## ✅ Setup Complete!
Everything is configured and ready to use with Ollama (your local models).
---
## How to Run FusionMCP
### **Step 1: Start Ollama Server**
Open **Terminal 1**:
```bash
ollama serve
```
Leave this running. You should see:
```
Ollama is running
```
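If Ollama may already be running as a background service, you can verify that before starting a second instance (this queries Ollama's standard version endpoint):
```bash
curl -s http://localhost:11434/api/version
# A JSON response like {"version":"..."} means Ollama is already running
```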
---
### **Step 2: Start MCP Server**
Open **Terminal 2**:
```bash
cd ~/Desktop/fusion360-mcp
source venv/bin/activate
python -m mcp_server.server
```
You should see:
```
✓ Logger initialized with level INFO
✓ System prompt loaded
✓ Initialized MCP Router with providers: ['ollama']
✓ MCP Server started on 127.0.0.1:9000
```
Leave this running!
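Before moving on, you can confirm the server is reachable from another terminal (this uses the same `/health` endpoint shown in the troubleshooting section below):
```bash
curl http://127.0.0.1:9000/health
# Expected: {"status":"healthy","providers":["ollama"]...}
```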
---
### **Step 3: Open Fusion 360**
1. **Launch Fusion 360**
2. **Load the Add-in**:
   - Press **Shift + S** (or Tools → Add-Ins → Scripts and Add-Ins)
- Click **Add-Ins** tab
- Find **FusionMCP** in the list
- Select it and click **Run**
3. **You should see**: "FusionMCP Add-in Started!"
4. **Optional**: Check "Run on Startup" to auto-load next time
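If **FusionMCP** doesn't show up in the Add-Ins list, Fusion probably isn't looking at the right folder. One way to register it is to link the add-in folder into Fusion's add-ins directory (a sketch for macOS; it assumes the add-in code lives in a `FusionMCP/` folder inside the repo - adjust both paths to match your setup):
```bash
# Assumption: the add-in code is in ~/Desktop/fusion360-mcp/FusionMCP
ADDINS="$HOME/Library/Application Support/Autodesk/Autodesk Fusion 360/API/AddIns"
mkdir -p "$ADDINS"
ln -s ~/Desktop/fusion360-mcp/FusionMCP "$ADDINS/FusionMCP"
```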
---
### **Step 4: Use MCP Assistant**
1. Look for the **MCP Assistant** button in the Fusion 360 toolbar
2. Click it to open the assistant
3. **Try these commands**:
```
Create a 20mm cube
```
```
Make a cylindrical shaft 10mm diameter and 50mm long
```
```
Design a mounting bracket 100x50mm with 4 M5 holes at corners
```
4. Watch the magic happen!
---
## Your Configuration
- **Model**: `llama3.1:latest` (your best local model)
- **Fallback**: `phi4-mini` (used if llama3.1 fails)
- **Server**: `http://localhost:9000`
- **Mode**: 100% Local & Offline
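To double-check which model the server will actually use, read it straight out of `config.json` (assuming `default_model` is a top-level key, as in the snippet shown later in this guide):
```bash
grep '"default_model"' config.json
# Should print something like: "default_model": "ollama:llama3.1"
```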
---
## Troubleshooting
### Issue: "Connection refused"
```bash
# Make sure MCP server is running
curl http://127.0.0.1:9000/health
# Should return: {"status":"healthy","providers":["ollama"]...}
```
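If the health check fails with a connection error, check whether anything is listening on port 9000 at all:
```bash
lsof -i :9000
# No output means the MCP server isn't running; go back to Step 2
```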
### Issue: "Ollama not found"
```bash
# Check Ollama is running
curl http://localhost:11434/api/tags
# Should return list of models
```
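If that request fails too, Ollama itself isn't up. If it responds but a model is missing, you can list and pull models from the CLI:
```bash
ollama list            # show locally installed models
ollama pull llama3.1   # download a missing model
```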
### Issue: Add-in not visible in Fusion
1. Restart Fusion 360
2. Check folder name is exactly `FusionMCP`
3. Go to Tools → Add-Ins → Scripts and Add-Ins → Add-Ins tab
### Issue: Slow responses
Switch to a faster model by editing `config.json`:
```json
"default_model": "ollama:phi4-mini"
```
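The first request after switching models is also slow, because Ollama has to load the new model into memory. You can warm it up before opening Fusion (any short prompt works):
```bash
ollama run phi4-mini "hello"
```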
---
## Check Server Status
```bash
# Health check
curl http://127.0.0.1:9000/health
# List available models
curl http://127.0.0.1:9000/models
# View conversation history
curl http://127.0.0.1:9000/history?limit=5
```
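If you have `jq` installed, piping the output through it makes the JSON much easier to read:
```bash
curl -s http://127.0.0.1:9000/health | jq
```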
---
## Documentation
- **Full Setup Guide**: `SETUP_OLLAMA.md`
- **Quick Start**: `QUICKSTART.md`
- **Complete Docs**: `README.md`
- **Architecture**: `ARCHITECTURE.md`
---
## Your Available Models
Based on your Ollama installation:
- **llama3.1** - Best quality (default)
- **phi4-mini** - Fast & small
- **gemma3** - Good balance
- **qwen3-coder** - Code-focused
To switch models, edit `config.json`:
```json
"default_model": "ollama:phi4-mini"
```
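If the model you switch to isn't installed yet, pull it first; and since the server most likely reads `config.json` only at startup, restart the MCP server (Step 2) after editing it:
```bash
ollama pull phi4-mini
```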
---
## ✅ Quick Checklist
Before testing:
- [ ] Ollama running (`ollama serve`)
- [ ] MCP server running (`python -m mcp_server.server`)
- [ ] Fusion 360 open
- [ ] FusionMCP add-in loaded
- [ ] MCP Assistant button visible
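A small script can run the first two checks for you (a hypothetical helper, built only on the endpoints documented above):
```bash
#!/usr/bin/env bash
# check_stack.sh - verify that Ollama and the MCP server are both reachable

echo -n "Ollama:     "
curl -sf http://localhost:11434/api/tags > /dev/null && echo "OK" || echo "NOT RUNNING"

echo -n "MCP server: "
curl -sf http://127.0.0.1:9000/health > /dev/null && echo "OK" || echo "NOT RUNNING"
```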
---
## You're Ready!
Everything is configured to use **your new FusionMCP system** with:
- ✅ Multi-model architecture
- ✅ Intelligent fallback chain
- ✅ Complete logging & caching
- ✅ Production-ready code
- ✅ Full test suite
**Next**: Open Fusion 360 and start designing with AI!
---
**Questions?** Check the docs or run tests:
```bash
./run_tests.sh
```