# 🎯 Use Cases: Real-World Agent Farm Applications

#use-cases #examples #automation #business

## 1. 🔍 SEO Monitoring Platform

### The Challenge
Monitor 10,000 keywords across 5 search engines daily for ranking changes.

### Traditional Solution Cost
- SEO API services: $2,000/month
- Cloud scrapers: $500/month + development

### Pi Farm Solution

```python
# seo_monitor.py
class SEOMonitor:
    def __init__(self, pi_agents):
        self.agents = pi_agents
        self.keywords = self.load_keywords()

    def distribute_monitoring(self):
        """Distribute keywords across Pi agents"""
        chunk_size = len(self.keywords) // len(self.agents)
        for i, agent in enumerate(self.agents):
            start = i * chunk_size
            # The last agent takes any remainder so no keyword is dropped
            end = start + chunk_size if i < len(self.agents) - 1 else len(self.keywords)
            keywords_chunk = self.keywords[start:end]
            self.monitor_on_agent(agent, keywords_chunk)

    def monitor_on_agent(self, agent, keywords):
        for keyword in keywords:
            # Check Google
            result = agent.navigate(f"https://google.com/search?q={keyword}")
            ranking = agent.evaluate("document.querySelectorAll('.g').length")

            # Store results
            self.store_ranking(keyword, 'google', ranking)
```

**Results**:
- Cost: $108/month (10-Pi farm)
- Savings: $1,892/month (95%)
- ROI: 0.7 months

## 2. 🛒 E-commerce Price Intelligence

### The Challenge
Track competitor prices on 50,000 products across 20 websites hourly.
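Both this tracker and the SEO monitor above depend on splitting a large work list across agents. A minimal sketch of the two common partitioning strategies (the helper names `make_batches` and `assign_round_robin` are illustrative, not part of the farm API):

```python
def make_batches(items, batch_size):
    """Yield successive fixed-size batches; the last batch may be shorter."""
    for start in range(0, len(items), batch_size):
        yield items[start:start + batch_size]

def assign_round_robin(items, agents):
    """Map each agent name to its share of the work list, round-robin."""
    assignments = {agent: [] for agent in agents}
    for i, item in enumerate(items):
        assignments[agents[i % len(agents)]].append(item)
    return assignments
```

Round-robin assignment keeps per-agent load within one item of even, which matters when the keyword or product count is not a multiple of the agent count.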
### Implementation

```javascript
// price_tracker.js
class PriceTracker {
  constructor(farmUrl) {
    this.farmUrl = farmUrl;
    this.products = [];
  }

  async trackPrices() {
    const batches = this.createBatches(this.products, 100);
    const promises = batches.map(batch => this.trackBatch(batch));
    const results = await Promise.all(promises);
    return this.processResults(results);
  }

  async trackBatch(products) {
    const response = await fetch(`${this.farmUrl}/browse`, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({
        action: 'batch',
        tasks: products.map(p => ({
          url: p.url,
          selector: p.priceSelector,
          productId: p.id
        }))
      })
    });
    return response.json();
  }
}
```

**Results**:
- Updates: 1.2M price checks/day
- Accuracy: 99.9%
- Cost: $216/month (20-Pi farm)
- Cloud equivalent: $2,000/month

## 3. 📸 Visual Regression Testing

### The Challenge
Test 500 web applications across 5 screen sizes after each deployment.

### Farm Configuration

```yaml
# visual_regression_config.yml
test_suite:
  viewports:
    - name: mobile
      width: 375
      height: 667
    - name: tablet
      width: 768
      height: 1024
    - name: desktop
      width: 1920
      height: 1080
  agents:
    - pi-agent-01: mobile
    - pi-agent-02: tablet
    - pi-agent-03: desktop
  comparison:
    threshold: 0.1
    highlight_differences: true
```

### Test Runner

```python
# visual_test_runner.py
async def run_visual_tests(apps, farm_agents):
    results = []
    for app in apps:
        for viewport in VIEWPORTS:
            agent = get_agent_for_viewport(viewport)

            # Take screenshot
            await agent.post('/browse', {
                'action': 'screenshot',
                'url': app.url,
                'value': f'{app.name}_{viewport.name}.png'
            })

            # Compare with baseline
            diff = compare_images(
                f'baseline/{app.name}_{viewport.name}.png',
                f'current/{app.name}_{viewport.name}.png'
            )

            if diff > THRESHOLD:
                results.append({
                    'app': app.name,
                    'viewport': viewport.name,
                    'difference': diff
                })
    return results
```

**Results**:
- Tests per day: 2,500
- Time saved: 40 hours/week
- False positives: < 1%

## 4. 🌐 Content Syndication Verifier

### The Challenge
Verify that content is properly displayed across 100 partner websites.

### Solution Architecture

```python
# content_verifier.py
import asyncio
from datetime import datetime

class ContentSyndicationVerifier:
    def __init__(self, pi_farm_url):
        self.farm = pi_farm_url
        self.partners = self.load_partners()

    async def verify_article(self, article_id):
        tasks = []
        for partner in self.partners:
            task = self.check_partner(partner, article_id)
            tasks.append(task)
        results = await asyncio.gather(*tasks)
        return self.compile_report(results)

    async def check_partner(self, partner, article_id):
        # Navigate to partner site
        await self.agent_request({
            'action': 'navigate',
            'url': partner.article_url(article_id)
        })

        # Check if article exists
        exists = await self.agent_request({
            'action': 'evaluate',
            'script': f'document.querySelector("{partner.article_selector}") !== null'
        })

        # Take screenshot for proof
        if exists:
            await self.agent_request({
                'action': 'screenshot',
                'value': f'proof_{partner.name}_{article_id}.png'
            })

        return {
            'partner': partner.name,
            'article_id': article_id,
            'verified': exists,
            'timestamp': datetime.now()
        }
```

## 5. 🤖 Social Media Automation

### The Challenge
Monitor brand mentions across 50 social platforms in real-time.
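The monitor below ends with an `aggregateResults` step that is left undefined. One way the merge could work, sketched in Python and assuming each entry matches the shape returned per platform (`platform`, `mentions`, `timestamp`):

```python
def aggregate_mentions(platform_results):
    """Merge per-platform mention lists into one summary dict."""
    all_mentions = []
    by_platform = {}
    for result in platform_results:
        # Count mentions per platform and collect them into one list
        by_platform[result["platform"]] = len(result["mentions"])
        all_mentions.extend(result["mentions"])
    return {
        "total": len(all_mentions),
        "by_platform": by_platform,
        "mentions": all_mentions,
    }
```

The field names here are assumptions taken from the return value in the code that follows, not a documented farm API.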
### Implementation

```javascript
// social_monitor.js
class SocialMediaMonitor {
  constructor(agents) {
    this.agents = agents;
    this.platforms = this.loadPlatforms();
  }

  async monitorMentions(brandName) {
    const monitoring = this.platforms.map((platform, index) => {
      const agent = this.agents[index % this.agents.length];
      return this.monitorPlatform(agent, platform, brandName);
    });
    const results = await Promise.all(monitoring);
    return this.aggregateResults(results);
  }

  async monitorPlatform(agent, platform, brand) {
    // Navigate to search
    await agent.browse({
      action: 'navigate',
      url: platform.searchUrl(brand)
    });

    // Extract mentions
    const mentions = await agent.browse({
      action: 'evaluate',
      script: platform.extractMentionsScript()
    });

    return {
      platform: platform.name,
      mentions: JSON.parse(mentions),
      timestamp: Date.now()
    };
  }
}
```

## 6. 📊 Market Research Automation

### The Challenge
Collect pricing, features, and reviews for 1,000 products daily.

### Data Collection Pipeline

```python
# market_research_pipeline.py
import asyncio

class MarketResearchPipeline:
    def __init__(self, farm_config):
        self.farm = AgentFarm(farm_config)
        self.extractors = {
            'amazon': AmazonExtractor(),
            'ebay': EbayExtractor(),
            'walmart': WalmartExtractor()
        }

    async def research_product(self, product_query):
        # Parallel extraction across platforms
        tasks = []
        for platform, extractor in self.extractors.items():
            task = self.extract_from_platform(
                platform, extractor, product_query
            )
            tasks.append(task)
        platform_data = await asyncio.gather(*tasks)

        # Aggregate and analyze
        return self.analyze_market_data(platform_data)

    def analyze_market_data(self, data):
        return {
            'average_price': self.calculate_average_price(data),
            'price_range': self.get_price_range(data),
            'top_features': self.extract_common_features(data),
            'sentiment': self.analyze_reviews(data),
            'availability': self.check_availability(data)
        }
```

## 7. 🔐 Security Monitoring

### The Challenge
Monitor 200 company websites for security headers and SSL compliance.
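The scanner below calls a `calculate_security_score` method that is never defined. A minimal sketch, assuming each check returns a dict of boolean findings like `check_headers` does (the scoring rule here is illustrative):

```python
def calculate_security_score(results):
    """Score 0-100: percentage of passed boolean findings across all checks.

    `results` maps check names to dicts of boolean findings, e.g.
    {'check_headers': {'has_csp': True, 'has_hsts': False, ...}, ...}.
    """
    passed = 0
    total = 0
    for findings in results.values():
        for ok in findings.values():
            total += 1
            passed += bool(ok)
    return round(100 * passed / total) if total else 0
```

A weighted scheme (e.g. HSTS counting more than X-Frame-Options) would be a natural refinement, but an even weighting keeps the sketch simple.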
### Security Scanner

```python
# security_scanner.py
from datetime import datetime

class SecurityScanner:
    def __init__(self, agents):
        self.agents = agents
        self.security_checks = [
            self.check_ssl,
            self.check_headers,
            self.check_cookies,
            self.check_mixed_content
        ]

    async def scan_website(self, url):
        agent = self.get_available_agent()

        # Navigate to site
        await agent.navigate(url)

        results = {}
        for check in self.security_checks:
            results[check.__name__] = await check(agent, url)

        return {
            'url': url,
            'scan_time': datetime.now(),
            'results': results,
            'score': self.calculate_security_score(results)
        }

    async def check_headers(self, agent, url):
        # Assumes the evaluate action resolves the returned promise
        headers = await agent.evaluate("""
            fetch(window.location.href).then(r =>
                Object.fromEntries(r.headers.entries())
            )
        """)
        return {
            'has_csp': 'content-security-policy' in headers,
            'has_hsts': 'strict-transport-security' in headers,
            'has_xframe': 'x-frame-options' in headers
        }
```

## 📈 Performance Metrics

### Real Customer Results

| Use Case | Pi Nodes | Daily Volume | Cost/Month | Savings |
|----------|----------|--------------|------------|---------|
| SEO Agency | 15 | 50K keywords | $162 | 92% |
| E-commerce | 25 | 100K products | $270 | 87% |
| News Monitor | 8 | 500 sites/hour | $86 | 94% |
| QA Company | 30 | 1K test suites | $324 | 89% |

## 🚀 Advanced Patterns

### 1. Geographic Distribution

```python
# Different Pis in different locations
import random

AGENTS = {
    'us-east': ['pi-us-01', 'pi-us-02'],
    'eu-west': ['pi-eu-01', 'pi-eu-02'],
    'asia': ['pi-asia-01', 'pi-asia-02']
}

def get_regional_agent(region):
    return random.choice(AGENTS[region])
```

### 2. Specialized Agents

```python
# Dedicate Pis to specific tasks
AGENT_ROLES = {
    'screenshot': ['pi-gpu-01', 'pi-gpu-02'],   # With GPU
    'scraping': ['pi-basic-01', 'pi-basic-02'], # 4GB RAM
    'testing': ['pi-power-01', 'pi-power-02']   # 8GB RAM
}
```

### 3. Failover Handling

```python
import logging

log = logging.getLogger(__name__)

async def browse_with_failover(url, agents):
    """Try each agent in turn; return the first successful navigation."""
    for agent in agents:
        try:
            return await agent.navigate(url)
        except Exception as e:
            log.warning(f"Agent {agent} failed: {e}")
            continue
    raise RuntimeError("All agents failed")
```

## 💡 Getting Started

1. **Identify your use case** from the examples above
2. **Calculate your ROI** using [[Cost-Analysis]]
3. **Start with 3 Pis** for proof of concept
4. **Scale based on results**

---

**Next**: [[Architecture]] - Technical deep dive
**Related**: [[Code-Examples]] | [[Cost-Analysis]]
