
MCP Datadog Server

Server Configuration

Describes the environment variables required to run the server.

Name                 | Required | Description | Default
DD_SITE              | No       | Datadog site (e.g. datadoghq.com, datadoghq.eu, us3.datadoghq.com) | datadoghq.com
DD_API_KEY           | No       | Datadog API key (required unless DATADOG_API_KEY is set) |
DD_APP_KEY           | No       | Datadog Application key (required unless DATADOG_APP_KEY is set) |
DATADOG_SITE         | No       | Datadog site (alternative to DD_SITE) |
DD_SUBDOMAIN         | No       | Datadog subdomain (optional) | api
MCP_DD_FOLDERS       | No       | Optional CSV of top-level collection folders to filter tools (e.g. Logs,Monitors,Metrics,Incidents,Dashboards) |
DATADOG_API_KEY      | No       | Datadog API key (alternative to DD_API_KEY) |
DATADOG_APP_KEY      | No       | Datadog Application key (alternative to DD_APP_KEY) |
MCP_DD_SCHEMA_PATH   | No       | Optional path to Postman Collection JSON schema file |
NODE_EXTRA_CA_CERTS  | No       | Optional path to an additional CA certificate (PEM/CRT) for corporate networks with TLS interception |
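A minimal shell sketch of the configuration above. All values are placeholders; how you launch the server afterwards depends on how the package is installed, so no launch command is shown.

```shell
# Placeholder values; substitute your own keys before running the server.
export DD_SITE="datadoghq.com"
export DD_API_KEY="<your-api-key>"
export DD_APP_KEY="<your-app-key>"
# Optionally expose only a subset of top-level tool folders:
export MCP_DD_FOLDERS="Logs,Monitors,Dashboards"
echo "Configured Datadog site: ${DD_SITE}"
```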

Schema

Prompts

Interactive templates invoked by user choice


No prompts

Resources

Contextual data attached and managed by the client


No resources

Tools

Functions exposed to the LLM to take actions

aggregate_ci_pipelines_analytics

Use this API endpoint to aggregate CI Visibility pipeline events into buckets of computed metrics and timeseries.

aggregate_ci_tests_analytics

The API endpoint to aggregate CI Visibility test events into buckets of computed metrics and timeseries.

aggregate_logs_analytics

The API endpoint to aggregate events into buckets and compute metrics and timeseries.

aggregate_network_connections

Get all aggregated connections.

aggregate_network_dns

Get all aggregated DNS traffic.

aggregate_rum_analytics

The API endpoint to aggregate RUM events into buckets of computed metrics and timeseries.

aggregate_spans_analytics

The API endpoint to aggregate spans into buckets and compute metrics and timeseries. This endpoint is rate limited to 300 requests per hour.

can_delete_monitors

Check if the given monitors can be deleted.

can_delete_slos

Check if an SLO can be safely deleted. For example, ensure an SLO can be deleted without disrupting a dashboard.

create_actions_connections

Create a new Action Connection. This API requires a registered application key.

create_agentless_scanning_accounts_aws

Activate Agentless scan options for an AWS account.

create_agentless_scanning_ondemand_aws

Trigger the scan of an AWS resource with a high priority. Agentless scanning must be activated for the AWS account containing the resource to scan.

create_api_keys

Create an API key.

create_apm_config_metrics

Create a metric based on your ingested spans in your organization. Returns the span-based metric object from the request body when the request is successful.

create_apm_config_retention_filters

Create a retention filter to index spans in your organization. Returns the retention filter definition when the request is successful.

create_app_builder_app_deployment

Publish an app for use by other users. To ensure the app is accessible to the correct users, you also need to set a Restriction Policy on the app if a policy does not yet exist.

create_app_builder_apps

Create a new app, returning the app ID.

create_application_keys

Create an application key with a given name.

create_authn_mappings

Create an AuthN Mapping.

create_case_archive

Archive case

create_case_assign

Assign case to a user

create_case_attributes

Update case attributes

create_case_priority

Update case priority

create_case_status

Update case status

create_case_unarchive

Unarchive case

create_case_unassign

Unassign case

create_cases

Create a Case

create_cases_projects

Create a project.

create_catalog_entities

Create or update entities in Software Catalog.

create_catalog_kinds

Create or update kinds in Software Catalog.

create_check_runs

Submit a list of Service Checks.

Notes:

  • A valid API key is required.

  • Service checks can be submitted up to 10 minutes in the past.

create_ci_pipelines

Send your pipeline event to your Datadog platform over HTTP. For details about how pipeline executions are modeled and what execution types we support, see Pipeline Data Model And Execution Types.

Pipeline events can be submitted with a timestamp that is up to 18 hours in the past.

create_cloud_security_management_custom_frameworks

Create a custom framework.

create_cost_aws_cur_configs

Create a Cloud Cost Management account for an AWS CUR config.

create_cost_azure_uc_configs

Create a Cloud Cost Management account for an Azure config.

create_cost_gcp_uc_configs

Create a Cloud Cost Management account for a GCP Usage Cost config.

create_current_user_application_keys

Create an application key for the current user.

create_dashboard_lists_manual_dashboards

Add dashboards to an existing dashboard list.

create_dashboard_lists_manuals

Create an empty dashboard list.

create_dashboard_public_invitation

Send emails to specified email addresses containing links to access a given authenticated shared dashboard. Email addresses must already belong to the authenticated shared dashboard's share_list.

create_dashboard_publics

Share a specified private dashboard, generating a URL at which it can be publicly viewed.

create_dashboards

Create a dashboard using the specified options. When defining queries in your widgets, take note of which queries should have the as_count() or as_rate() modifiers appended. Refer to the following documentation for more information on these modifiers.

create_dora_deployments_v2

Use this API endpoint to provide data about deployments for DORA metrics.

This is necessary for:

  • Deployment Frequency

  • Change Lead Time

  • Change Failure Rate

create_dora_deployments_v2_2

Use this API endpoint to get a list of deployment events.

create_dora_failures_v2

Use this API endpoint to provide failure data for DORA metrics.

This is necessary for:

  • Change Failure Rate

  • Time to Restore

create_dora_failures_v2_2

Use this API endpoint to get a list of failure events.

create_downtimes

Schedule a downtime.

create_events

This endpoint allows you to post events to the stream. Tag them, set their priority, and aggregate them with other events.

create_graph_embeds

Creates a new embeddable graph.

Note: If an embed already exists for the exact same query in a given organization, the older embed is returned instead of creating a new embed.

If you are interested in using template variables, see Embeddable Graphs with Template Variables.

create_integration_aws

Create a Datadog-Amazon Web Services integration. Using the POST method updates your integration configuration by adding your new configuration to the existing one in your Datadog organization. A unique AWS Account ID is required for role-based authentication.

create_integration_aws_event_bridges

Create an Amazon EventBridge source.

create_integration_aws_filterings

Set an AWS tag filter.

create_integration_aws_logs

Attach the Lambda ARN of the Lambda created for the Datadog-AWS log collection to your AWS account ID to enable log collection.

create_integration_aws_logs_check_asyncs

Test if permissions are present to add log-forwarding triggers for the given services and AWS account. The input is the same as for Enable an AWS service log collection. Subsequent requests will always repeat the above, so this endpoint can be polled intermittently instead of blocking.

  • Returns a status of 'created' when it's checking if the Lambda exists in the account.

  • Returns a status of 'waiting' while checking.

  • Returns a status of 'checked and ok' if the Lambda exists.

  • Returns a status of 'error' if the Lambda does not exist.

create_integration_aws_logs_services

Enable automatic log collection for a list of services. This should be run after running CreateAWSLambdaARN to save the configuration.

create_integration_aws_logs_services_asyncs

Test if permissions are present to add log-forwarding triggers for the given services and AWS account. Input is the same as for EnableAWSLogServices. Done async, so can be repeatedly polled in a non-blocking fashion until the async request completes.

  • Returns a status of created when it's checking if the permissions exist in the AWS account.

  • Returns a status of waiting while checking.

  • Returns a status of checked and ok if the Lambda exists.

  • Returns a status of error if the Lambda does not exist.

create_integration_azure_host_filters

Update the defined list of host filters for a given Datadog-Azure integration.

create_integration_azures

Create a Datadog-Azure integration.

Using the POST method updates your integration configuration by adding your new configuration to the existing one in your Datadog organization.

Using the PUT method updates your integration configuration by replacing your current configuration with the new one sent to your Datadog organization.

create_integration_gcp_accounts

Create a new entry within Datadog for your STS enabled service account.

create_integration_gcp_sts_delegates

Create a Datadog GCP principal.

create_integration_ms_teams_configuration_tenant_based_handles

Create a tenant-based handle in the Datadog Microsoft Teams integration.

create_integration_ms_teams_configuration_workflows_webhook_handles

Create a Workflows webhook handle in the Datadog Microsoft Teams integration.

create_integration_opsgenie_services

Create a new service object in the Opsgenie integration.

create_integration_pagerduty_configuration_services

Create a new service object in the PagerDuty integration.

create_integration_slack_configuration_account_channels

Add a channel to your Datadog-Slack integration.

create_integration_webhooks_configuration_custom_variables

Creates an endpoint with the name <CUSTOM_VARIABLE_NAME>.

create_integration_webhooks_configuration_webhooks

Creates an endpoint with the name <WEBHOOK_NAME>.

create_integrations_cloudflare_accounts

Create a Cloudflare account.

create_integrations_confluent_cloud_account_resources

Create a Confluent resource for the account associated with the provided ID.

create_integrations_confluent_cloud_accounts

Create a Confluent account.

create_integrations_fastly_account_services

Create a Fastly service for an account.

create_integrations_fastly_accounts

Create a Fastly account.

create_integrations_okta_accounts

Create an Okta account.

create_logs_config_archive_readers

Adds a read role to an archive. (Roles API)

create_logs_config_archives

Create an archive in your organization.

create_logs_config_custom_destinations

Create a custom destination in your organization.

create_logs_config_indexes

Creates a new index. Returns the Index object passed in the request body when the request is successful.

create_logs_config_metrics

Create a metric based on your ingested logs in your organization. Returns the log-based metric object from the request body when the request is successful.

create_logs_config_pipelines

Create a pipeline in your organization.

create_metric_tags

Create and define a list of queryable tag keys for an existing count/gauge/rate/distribution metric. Optionally, include percentile aggregations on any distribution metric. By setting exclude_tags_mode to true, the behavior is changed from an allow-list to a deny-list, and tags in the defined list are not queryable. Can only be used with application keys of users with the Manage Tags for Metrics permission.

create_metrics_config_bulk_tags

Create and define a list of queryable tag keys for a set of existing count, gauge, rate, and distribution metrics. Metrics are selected by passing a metric name prefix. Use the Delete method of this API path to remove tag configurations. Results can be sent to a set of account email addresses, just like the same operation in the Datadog web app. If multiple calls include the same metric, the last configuration applied (not by submit order) is used; do not expect deterministic ordering of concurrent calls. The exclude_tags_mode value sets all metrics that match the prefix to the same exclusion state; metric tag configurations do not support mixed inclusion and exclusion for tags on the same metric. Can only be used with application keys of users with the Manage Tags for Metrics permission.

create_monitor

Create a monitor using the specified options.

Monitor Types

The type of monitor chosen from:

  • anomaly: query alert

  • APM: query alert or trace-analytics alert

  • composite: composite

  • custom: service check

  • forecast: query alert

  • host: service check

  • integration: query alert or service check

  • live process: process alert

  • logs: log alert

  • metric: query alert

  • network: service check

  • outlier: query alert

  • process: service check

  • rum: rum alert

  • SLO: slo alert

  • watchdog: event-v2 alert

  • event-v2: event-v2 alert

  • audit: audit alert

  • error-tracking: error-tracking alert

  • database-monitoring: database-monitoring alert

  • network-performance: network-performance alert

  • cloud cost: cost alert

Notes:

  • Synthetic monitors are created through the Synthetics API. See the Synthetics API documentation for more information.

  • Log monitors require an unscoped App Key.

Query Types

Metric Alert Query

Example: time_aggr(time_window):space_aggr:metric{tags} [by {key}] operator #

  • time_aggr: avg, sum, max, min, change, or pct_change

  • time_window: last_#m (with # between 1 and 10080 depending on the monitor type), last_#h (with # between 1 and 168 depending on the monitor type), last_1d, or last_1w

  • space_aggr: avg, sum, min, or max

  • tags: one or more tags (comma-separated), or *

  • key: a 'key' in key:value tag syntax; defines a separate alert for each tag in the group (multi-alert)

  • operator: <, <=, >, >=, ==, or !=

  • #: an integer or decimal number used to set the threshold

If you are using the change or pct_change time aggregator, instead use change_aggr(time_aggr(time_window), timeshift):space_aggr:metric{tags} [by {key}] operator # with:

  • change_aggr: change or pct_change

  • time_aggr: avg, sum, max, or min

  • time_window: last_#m (between 1 and 2880 depending on the monitor type), last_#h (between 1 and 48 depending on the monitor type), or last_#d (1 or 2)

  • timeshift: #m_ago (5, 10, 15, or 30), #h_ago (1, 2, or 4), or 1d_ago

Use this to create an outlier monitor using the following query: avg(last_30m):outliers(avg:system.cpu.user{role:es-events-data} by {host}, 'dbscan', 7) > 0

Service Check Query

Example: "check".over(tags).last(count).by(group).count_by_status()

  • check name of the check, for example datadog.agent.up

  • tags one or more quoted tags (comma-separated), or "*". For example: .over("env:prod", "role:db"); over cannot be blank.

  • count must be greater than or equal to your max threshold (defined in the options). It is limited to 100. For example, if you've specified to notify on 1 critical, 3 ok, and 2 warn statuses, count should be at least 3.

  • group must be specified for check monitors. Per-check grouping is already explicitly known for some service checks. For example, Postgres integration monitors are tagged by db, host, and port, and Network monitors by host, instance, and url. See Service Checks documentation for more information.
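Putting the pieces above together, a hypothetical composed Service Check Query (the check name and tag are illustrative values, not from this document):

```shell
# Compose a Service Check Query string from the parts described above:
# alert on the status of datadog.agent.up per host in env:prod.
query='"datadog.agent.up".over("env:prod").last(2).by("host").count_by_status()'
echo "$query"
```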

Event Alert Query

Note: The Event Alert Query has been replaced by the Event V2 Alert Query. For more information, see the Event Migration guide.

Event V2 Alert Query

Example: events(query).rollup(rollup_method[, measure]).last(time_window) operator #

  • query The search query - following the Log search syntax.

  • rollup_method The stats roll-up method - supports count, avg and cardinality.

  • measure For avg and cardinality rollup_method - specify the measure or the facet name you want to use.

  • time_window #m (between 1 and 2880), #h (between 1 and 48).

  • operator <, <=, >, >=, ==, or !=.

  • # an integer or decimal number used to set the threshold.

Process Alert Query

Example: processes(search).over(tags).rollup('count').last(timeframe) operator #

  • search free text search string for querying processes. Matching processes match results on the Live Processes page.

  • tags one or more tags (comma-separated)

  • timeframe the timeframe to roll up the counts. Examples: 10m, 4h. Supported timeframes: s, m, h and d

  • operator <, <=, >, >=, ==, or !=

  • # an integer or decimal number used to set the threshold

Logs Alert Query

Example: logs(query).index(index_name).rollup(rollup_method[, measure]).last(time_window) operator #

  • query The search query - following the Log search syntax.

  • index_name For multi-index organizations, the log index in which the request is performed.

  • rollup_method The stats roll-up method - supports count, avg and cardinality.

  • measure For avg and cardinality rollup_method - specify the measure or the facet name you want to use.

  • time_window #m (between 1 and 2880), #h (between 1 and 48).

  • operator <, <=, >, >=, ==, or !=.

  • # an integer or decimal number used to set the threshold.
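The Logs Alert parts compose the same way; a hypothetical example (the search query, index name, and threshold are illustrative):

```shell
# Compose a Logs Alert Query from the parts described above: alert when
# more than 100 error logs land in the "main" index over 5 minutes.
query='logs("status:error").index("main").rollup("count").last("5m") > 100'
echo "$query"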

Composite Query

Example: 12345 && 67890, where 12345 and 67890 are the IDs of non-composite monitors

  • name [required, default = dynamic, based on query]: The name of the alert.

  • message [required, default = dynamic, based on query]: A message to include with notifications for this monitor. Email notifications can be sent to specific users by using the same '@username' notation as events.

  • tags [optional, default = empty list]: A list of tags to associate with your monitor. When getting all monitor details via the API, use the monitor_tags argument to filter results by these tags. It is only available via the API and isn't visible or editable in the Datadog UI.

SLO Alert Query

Example: error_budget("slo_id").over("time_window") operator #

  • slo_id: The alphanumeric SLO ID of the SLO you are configuring the alert for.

  • time_window: The time window of the SLO target you wish to alert on. Valid options: 7d, 30d, 90d.

  • operator: >= or >

Audit Alert Query

Example: audits(query).rollup(rollup_method[, measure]).last(time_window) operator #

  • query The search query - following the Log search syntax.

  • rollup_method The stats roll-up method - supports count, avg and cardinality.

  • measure For avg and cardinality rollup_method - specify the measure or the facet name you want to use.

  • time_window #m (between 1 and 2880), #h (between 1 and 48).

  • operator <, <=, >, >=, ==, or !=.

  • # an integer or decimal number used to set the threshold.

CI Pipelines Alert Query

Example: ci-pipelines(query).rollup(rollup_method[, measure]).last(time_window) operator #

  • query The search query - following the Log search syntax.

  • rollup_method The stats roll-up method - supports count, avg, and cardinality.

  • measure For avg and cardinality rollup_method - specify the measure or the facet name you want to use.

  • time_window #m (between 1 and 2880), #h (between 1 and 48).

  • operator <, <=, >, >=, ==, or !=.

  • # an integer or decimal number used to set the threshold.

CI Tests Alert Query

Example: ci-tests(query).rollup(rollup_method[, measure]).last(time_window) operator #

  • query The search query - following the Log search syntax.

  • rollup_method The stats roll-up method - supports count, avg, and cardinality.

  • measure For avg and cardinality rollup_method - specify the measure or the facet name you want to use.

  • time_window #m (between 1 and 2880), #h (between 1 and 48).

  • operator <, <=, >, >=, ==, or !=.

  • # an integer or decimal number used to set the threshold.

Error Tracking Alert Query

"New issue" example: error-tracking(query).source(issue_source).new().rollup(rollup_method[, measure]).by(group_by).last(time_window) operator #

"High impact issue" example: error-tracking(query).source(issue_source).impact().rollup(rollup_method[, measure]).by(group_by).last(time_window) operator #

  • query The search query - following the Log search syntax.

  • issue_source The issue source - supports all, browser, mobile and backend and defaults to all if omitted.

  • rollup_method The stats roll-up method - supports count, avg, and cardinality and defaults to count if omitted.

  • measure For avg and cardinality rollup_method - specify the measure or the facet name you want to use.

  • group_by Comma-separated list of attributes to group by - should contain at least issue.id.

  • time_window #m (between 1 and 2880), #h (between 1 and 48).

  • operator <, <=, >, >=, ==, or !=.

  • # an integer or decimal number used to set the threshold.

Database Monitoring Alert Query

Example: database-monitoring(query).rollup(rollup_method[, measure]).last(time_window) operator #

  • query The search query - following the Log search syntax.

  • rollup_method The stats roll-up method - supports count, avg, and cardinality.

  • measure For avg and cardinality rollup_method - specify the measure or the facet name you want to use.

  • time_window #m (between 1 and 2880), #h (between 1 and 48).

  • operator <, <=, >, >=, ==, or !=.

  • # an integer or decimal number used to set the threshold.

Network Performance Alert Query

Example: network-performance(query).rollup(rollup_method[, measure]).last(time_window) operator #

  • query The search query - following the Log search syntax.

  • rollup_method The stats roll-up method - supports count, avg, and cardinality.

  • measure For avg and cardinality rollup_method - specify the measure or the facet name you want to use.

  • time_window #m (between 1 and 2880), #h (between 1 and 48).

  • operator <, <=, >, >=, ==, or !=.

  • # an integer or decimal number used to set the threshold.

Cost Alert Query

Example: formula(query).timeframe_type(time_window).function(parameter) operator #

  • query The search query - following the Log search syntax.

  • timeframe_type The timeframe type to evaluate the cost - for forecast, supports current; for change, anomaly, and threshold, supports last

  • time_window - supports daily roll-up e.g. 7d

  • function - [optional, defaults to threshold monitor if omitted] supports change, anomaly, forecast

  • parameter Specify the parameter of the type

    • for change:

      • supports relative, absolute

      • [optional] supports #, where # is an integer or decimal number used to set the threshold

    • for anomaly:

      • supports direction=both, direction=above, direction=below

      • [optional] supports threshold=#, where # is an integer or decimal number used to set the threshold

  • operator

    • for threshold supports <, <=, >, >=, ==, or !=

    • for change supports >, <

    • for anomaly supports >=

    • for forecast supports >

  • # an integer or decimal number used to set the threshold.
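As a rough sketch of how a monitor built from the Metric Alert Query format above might be sent to the public Datadog API (the name, query, and message values are illustrative; the curl call is left commented so the sketch runs without credentials):

```shell
# Build a monitor payload using the Metric Alert Query format, then
# post it to the Datadog v1 monitor endpoint.
payload='{
  "name": "High CPU on es-events-data hosts",
  "type": "query alert",
  "query": "avg(last_5m):avg:system.cpu.user{role:es-events-data} by {host} > 90",
  "message": "CPU above 90% on {{host.name}}"
}'
echo "$payload"
# curl -X POST "https://api.datadoghq.com/api/v1/monitor" \
#   -H "Content-Type: application/json" \
#   -H "DD-API-KEY: ${DD_API_KEY}" \
#   -H "DD-APPLICATION-KEY: ${DD_APP_KEY}" \
#   -d "$payload"
```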

create_monitor_notification_rules

Creates a monitor notification rule.

create_monitor_policies

Create a monitor configuration policy.

create_notebooks

Create a notebook using the specified options.

create_on_call_escalation_policies

Create a new on-call escalation policy

create_on_call_schedules

Create a new on-call schedule

create_org_downgrade

Only available for MSP customers. Removes a child organization from the hierarchy of the master organization and places the child organization on a 30-day trial.

create_orgs

Create a child organization.

This endpoint requires the multi-organization account feature and must be enabled by contacting support.

Once a new child organization is created, you can interact with it by using the org.public_id, api_key.key, and application_key.hash provided in the response.

create_powerpacks

Create a powerpack.

create_remote_config_products_asm_waf_custom_rules

Create a new WAF custom rule with the given parameters.

create_remote_config_products_asm_waf_exclusion_filters

Create a new WAF exclusion filter with the given parameters.

A request matched by an exclusion filter will be ignored by the Application Security WAF product. Go to https://app.datadoghq.com/security/appsec/passlist to review existing exclusion filters (also called passlist entries).

create_remote_config_products_cws_agent_rules

Create a new Workload Protection agent rule with the given parameters.

Note: This endpoint is not available for the Government (US1-FED) site. Please reference the (US1-FED) specific resource below.

create_remote_config_products_cws_policies

Create a new Workload Protection policy with the given parameters.

Note: This endpoint is not available for the Government (US1-FED) site. Please reference the (US1-FED) specific resource below.

create_restriction_policy

Updates the restriction policy associated with a resource.

Supported resources

Restriction policies can be applied to the following resources:

  • Connections: connection

  • Dashboards: dashboard

  • Notebooks: notebook

  • Security Rules: security-rule

  • Service Level Objectives: slo

Supported relations for resources

Resource Type            | Supported Relations
Connections              | viewer, editor, resolver
Dashboards               | viewer, editor
Notebooks                | viewer, editor
Security Rules           | viewer, editor
Service Level Objectives | viewer, editor

create_role_clone

Clone an existing role

create_role_permissions

Adds a permission to a role.

create_role_users

Adds a user to a role.

create_roles

Create a new role for your organization.

create_rum_application_retention_filters

Create a RUM retention filter for a RUM application. Returns RUM retention filter objects from the request body when the request is successful.

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/ClaudioLazaro/mcp-datadog-server'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.