
Alerting Guide: Notifications for Check Failures

Configure alert policies to receive notifications via email, Slack, and webhooks when synthetic checks fail. Set triggers, recovery alerts, and escalations.

supaguard's alerting system ensures the right people know about issues at the right time. Combined with Smart Retries and Failure Classification, alerts are accurate, actionable, and noise-free.

How Alerting Works

When a check fails, supaguard follows this lifecycle:

  1. Execution — Check runs from a monitoring region
  2. Smart Retry — If it fails, supaguard automatically re-runs from a different region
  3. Classification — Failure is classified as Critical, Degraded, or Healthy
  4. Policy Evaluation — Your alert policy rules are evaluated
  5. Notification — Alerts are sent to configured channels
  6. Recovery — When the check passes again, recovery notifications are sent

Overview

supaguard's alerting system notifies you when:

  • A check fails and the failure is confirmed across regions
  • A check recovers after a period of failure
  • A check has been failing for an extended period (escalation)

Alert Policies

Alert policies define how and when you receive notifications.

Creating an Alert Policy

  1. Navigate to Alert Policies in your dashboard
  2. Click Create Policy
  3. Configure the policy settings (see table below)

Policy Settings

| Setting | Description | Recommendation |
| --- | --- | --- |
| Name | A descriptive name for the policy | Use severity-based naming (e.g., "Critical — Page On-Call") |
| Channels | Where to send notifications | Start with Slack, add PagerDuty for critical flows |
| Trigger | When to send alerts | "After 2 failures" for most checks |
| Recovery | Notify when check recovers | Always enable — know when issues resolve |
| Escalation | Escalate if unresolved | Add PagerDuty after 15 minutes |
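Taken together, a policy combining these settings might look like the sketch below. This is illustrative only — the field names (`trigger`, `recovery`, `escalation`, etc.) are assumptions, not supaguard's documented schema:

```json
{
  "name": "Critical - Page On-Call",
  "channels": ["slack", "pagerduty"],
  "trigger": { "failures": 2 },
  "recovery": { "notify": true },
  "escalation": { "after_minutes": 15, "channel": "pagerduty" }
}
```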

Alert Channels

supaguard supports multiple notification channels. Use them together for layered alerting.

Email

Send alerts to one or more email addresses. Best for individual notifications and daily digests.

```json
{
  "type": "email",
  "recipients": ["alerts@example.com", "team@example.com"]
}
```

Slack

Send alerts to a Slack channel for team-wide visibility. Slack is the most popular alerting channel for engineering teams.

  1. Connect supaguard to your Slack workspace (see Slack Integration)
  2. Choose the channel for alerts
  3. Customize the message format

> [!TIP]
> Use separate Slack channels for different severity levels: #alerts-critical for pages, #alerts-monitoring for warnings.
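By analogy with the email and webhook channel examples, a Slack channel configuration might look like the following. Everything beyond `type` is an assumption for illustration, not a documented schema:

```json
{
  "type": "slack",
  "channel": "#alerts-critical"
}
```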

PagerDuty

Integrate with PagerDuty for on-call escalation and incident management. See PagerDuty Integration for setup details.

Webhooks

Send alerts to any HTTP endpoint for custom integrations.

```json
{
  "type": "webhook",
  "url": "https://your-service.com/alerts",
  "headers": {
    "Authorization": "Bearer your-token"
  }
}
```

See Webhooks Integration for payload format and advanced configuration.

Alert Lifecycle

Understanding the full alert lifecycle helps you configure policies effectively:

| Stage | What Happens | Your Action |
| --- | --- | --- |
| Triggered | Check failed, alert sent | Investigate the failure |
| Acknowledged | Team member sees the alert | Begin diagnosis |
| Investigating | Using debugging tools | Watch video, check traces |
| Resolved | Issue fixed, check passes | Recovery notification sent |

Best Practices

  1. Use meaningful names — Make it easy to identify which alert fired and its severity level
  2. Avoid alert fatigue — Set appropriate thresholds before alerting. Use "after 2 failures" as default
  3. Test your channels — Send test notifications before going live to verify delivery
  4. Set up escalation — Have a backup channel for critical alerts that go unacknowledged
  5. Review regularly — Audit alert volume monthly. High-frequency alerts signal test maintenance needs
  6. Combine with Smart Retries — supaguard verifies failures from multiple regions before alerting, reducing false alarms by 99%
