
Migrating to new detectors and alerts APIs

Learn about how to migrate away from legacy alerts APIs.

Sentry has a new way of creating alerts and monitors, with new endpoints to support them. This page is a guide for transitioning to the new APIs and highlights functionality that wasn’t possible with the legacy alerts API. You have until August 4th, 2026 to migrate; after that date, the legacy endpoints will become unavailable.

Terraform

If you were using the Sentry Terraform provider to manage alerts, you can switch to the new Terraform resources added in 0.15.0: https://registry.terraform.io/providers/jianyuan/sentry/0.15.0-beta2/docs.

cURL Examples

If you were using the API directly, below are some example requests to help you migrate.

Metric Alert Example

If you were using the Create a Metric Alert for an Organization endpoint, your request body may have looked like this:

{
  "dataset": "events",
  "eventTypes": ["error"],
  "aggregate": "count()",
  "query": "is:unresolved",
  "timeWindow": 30,
  "thresholdPeriod": 20,
  "projects": ["arg"],
  "environment": null,
  "thresholdType": 0,
  "name": "Static metric alert rule",
  "detection_type": "static",
  "comparisonDelta": null,
  "triggers": [
    {
      "label": "critical",
      "thresholdType": 0,
      "alertThreshold": 10.0,
      "resolveThreshold": 3.0,
      "actions": [
        {
          "type": "email",
          "targetType": "user",
          "targetIdentifier": "123456",
          "inputChannelId": null
        }
      ]
    },
    {
      "label": "warning",
      "thresholdType": 0,
      "alertThreshold": 5.0,
      "resolveThreshold": 3.0,
      "actions": [
        {
          "type": "email",
          "targetType": "user",
          "targetIdentifier": "123456",
          "inputChannelId": null
        }
      ]
    }
  ],
  "queryType": 0
}

To make an equivalent monitor and alert today, you’ll make two requests. First, create the monitor:

{
  "name": "Static metric monitor",
  "description": "Description of my monitor",
  "type": "metric_issue",
  "workflowIds": [], // if you've created an alert already and want to connect it, pass the id here
  "owner": null,
  "dataSources": [
    {
      "aggregate": "count()",
      "dataset": "events",
      "environment": null,
      "eventTypes": ["default", "error"],
      "query": "is:unresolved",
      "queryType": 0,
      "timeWindow": 3600
    }
  ],
  "conditionGroup": {
    "logicType": "any",
    "conditions": [
      {
        "type": "gt",
        "comparison": 5.0,
        "conditionResult": 50
      },
      {
        "type": "gt",
        "comparison": 10.0,
        "conditionResult": 75
      },
      {
        "type": "lte",
        "comparison": 0.0,
        "conditionResult": 0
      }
    ]
  },
  "config": {
    "detectionType": "static",
    "comparisonDelta": null
  }
}
Then create the alert and connect it to the monitor:

{
  "name": "Notify me for metric monitor",
  "triggers": {
    "actions": [],
    "conditions": [],
    "logicType": "any-short"
  },
  "actionFilters": [
    {
      "logicType": "any",
      "conditions": [
        {
          "type": "issue_priority_greater_or_equal",
          "comparison": 50,
          "conditionResult": true
        },
        {
          "type": "issue_priority_deescalating",
          "comparison": 50,
          "conditionResult": true
        }
      ],
      "actions": [
        {
          "type": "email",
          "integrationId": null,
          "data": {},
          "config": {
            "targetType": "user",
            "targetDisplay": null,
            "targetIdentifier": "123456"
          },
          "status": "active"
        }
      ]
    },
    {
      "logicType": "any",
      "conditions": [
        {
          "type": "issue_priority_greater_or_equal",
          "comparison": 75,
          "conditionResult": true
        },
        {
          "type": "issue_priority_deescalating",
          "comparison": 75,
          "conditionResult": true
        }
      ],
      "actions": [
        {
          "type": "email",
          "integrationId": null,
          "data": {},
          "config": {
            "targetType": "user",
            "targetDisplay": null,
            "targetIdentifier": "123456"
          },
          "status": "active"
        }
      ]
    }
  ],
  "environment": null,
  "config": {},
  "detectorIds": ["123456"], // the ID of the detector you just created
  "enabled": true,
  "owner": null
}

Issue Alert Example

If you were using the Create an Issue Alert Rule for a Project endpoint, your request body may have looked like this:

{
  "actionMatch": "any",
  "filterMatch": "all",
  "actions": [
    {
      "targetType": "Member",
      "fallthroughType": "ActiveMembers",
      "id": "sentry.mail.actions.NotifyEmailAction",
      "targetIdentifier": "123456"
    }
  ],
  "conditions": [
    {
      "id": "sentry.rules.conditions.first_seen_event.FirstSeenEventCondition"
    }
  ],
  "filters": [
    {
      "include": "true",
      "value": "1",
      "id": "sentry.rules.filters.issue_category.IssueCategoryFilter"
    }
  ],
  "name": "Basic issue alert",
  "frequency": 1440,
  "owner": "team:123456"
}

To make an equivalent alert today, first determine which monitor to connect the alert to. There are some default monitors to choose from, or you can make your own. The closest match to a legacy issue alert is the Issue Stream Monitor, which can be looked up using this endpoint by passing query=type:issue_stream. Next, create the request body for Create an Alert for an Organization like so:

{
  "name": "Basic issue alert",
  "detectorIds": ["1234567"], // ID of the monitor to connect the alert to
  "triggers": {
    "logicType": "any-short",
    "conditions": [
      {
        "type": "first_seen_event",
        "comparison": true,
        "conditionResult": true
      }
    ],
    "actions": []
  },
  "environment": null,
  "actionFilters": [
    {
      "logicType": "all",
      "conditions": [
        {
          "type": "issue_category",
          "comparison": {
            "value": 1,
            "include": true
          },
          "conditionResult": true
        }
      ],
      "actions": [
        {
          "type": "email",
          "integrationId": null,
          "data": {},
          "config": {
            "targetType": "user",
            "targetDisplay": null,
            "targetIdentifier": "123456"
          },
          "status": "active"
        }
      ]
    }
  ],
  "config": { "frequency": 1440 },
  "enabled": true
}

New Functionality

The new APIs come with new functionality you may want to use.

Multiple if/then blocks

You can now provide multiple if/then conditions for an alert. Previously you’d have to create multiple alerts, but now it can be done in a single alert.
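For example, each entry in the actionFilters array is its own if/then block. The sketch below (placeholder IDs; condition and action shapes taken from the examples on this page) routes high-priority issues to one user and a particular issue category to another, all in a single alert:

```json
{
  "name": "One alert, two if/then blocks",
  "detectorIds": ["1234567"],
  "triggers": { "logicType": "any-short", "conditions": [], "actions": [] },
  "actionFilters": [
    {
      "logicType": "all",
      "conditions": [
        { "type": "issue_priority_greater_or_equal", "comparison": 75, "conditionResult": true }
      ],
      "actions": [
        {
          "type": "email",
          "integrationId": null,
          "data": {},
          "config": { "targetType": "user", "targetDisplay": null, "targetIdentifier": "123456" },
          "status": "active"
        }
      ]
    },
    {
      "logicType": "all",
      "conditions": [
        { "type": "issue_category", "comparison": { "value": 1, "include": true }, "conditionResult": true }
      ],
      "actions": [
        {
          "type": "email",
          "integrationId": null,
          "data": {},
          "config": { "targetType": "user", "targetDisplay": null, "targetIdentifier": "654321" },
          "status": "active"
        }
      ]
    }
  ],
  "environment": null,
  "config": {},
  "enabled": true
}
```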

New conditions

There are new conditions to filter by:

  • A resolved issue becomes unresolved

  • The issue priority de-escalates

  • Current issue priority is greater than or equal to (high, medium, low)

  • And more
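These conditions go in an actionFilters entry like any other. As a sketch, the priority conditions look like the following (shapes taken from the metric alert example above, where the comparison values 50 and 75 correspond to the medium and high priority levels):

```json
{
  "logicType": "any",
  "conditions": [
    { "type": "issue_priority_greater_or_equal", "comparison": 75, "conditionResult": true },
    { "type": "issue_priority_deescalating", "comparison": 75, "conditionResult": true }
  ],
  "actions": []
}
```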

Connect alert to multiple detectors

If your team manages multiple metric alerts you’ve probably experienced the following scenario:

  1. Define a list of useful alerts for your product area or team

  2. Team composition changes, or product areas are handed to new teams; now you need to update all of your alerts to point at new targets and channels.

With the ability to connect multiple monitors to a single alert, changing the notification target no longer needs to be an O(n) task.

Some examples to get you started:

  • Connect all Uptime Monitors in all projects to a single alert and manage all of your downtime notifications in one place.

  • Maintain an internal library that’s used in many Sentry projects? Send every alert where an event’s stack.package value matches your library straight to your team’s members and channels.

  • Tackling a tough performance problem? Connect metric monitors to track breached thresholds and use a dedicated alert rule to make sure the right people are notified.
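Connecting an alert to multiple monitors is just a matter of listing several IDs in detectorIds. A minimal sketch (placeholder IDs, shapes from the examples above) covering the uptime scenario:

```json
{
  "name": "All downtime notifications in one place",
  "detectorIds": ["1111111", "2222222", "3333333"],
  "triggers": { "logicType": "any-short", "conditions": [], "actions": [] },
  "actionFilters": [
    {
      "logicType": "any",
      "conditions": [],
      "actions": [
        {
          "type": "email",
          "integrationId": null,
          "data": {},
          "config": { "targetType": "user", "targetDisplay": null, "targetIdentifier": "123456" },
          "status": "active"
        }
      ]
    }
  ],
  "environment": null,
  "config": {},
  "enabled": true
}
```

If the team's notification channel changes later, only this one alert needs updating, regardless of how many monitors feed into it.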
