Configuring Alerts for Synthetic Monitoring
Set up alert policies with Slack, PagerDuty, and webhooks. Learn escalation rules to reduce alert fatigue and only get notified when it matters.
Synthetic monitoring is useless if you don't know when things break. supaguard integrates with your existing tools to ensure the right person gets notified at the right time—without drowning your team in noise.
Supported Integrations
supaguard connects directly with the tools your team already uses:
| Integration | Status | Best For |
|---|---|---|
| Slack | ✅ Available | Team-wide awareness, non-critical alerts |
| PagerDuty | ✅ Available | On-call escalation, critical incidents |
| Webhooks | ✅ Available | Custom systems, internal tools |
| Email | ✅ Available | Individual notifications, summaries |
| Opsgenie | 🔜 Coming soon | Incident management |
| Discord | 🔜 Coming soon | Developer communities |
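Webhooks let you route alerts into any internal system. As a rough sketch, an alert delivery might look like the following — note that the payload field names (`check_name`, `status`, `region`) are illustrative assumptions, not supaguard's documented webhook schema:

```python
import json

# Hypothetical alert webhook payload; field names are illustrative
# assumptions, not supaguard's documented schema.
payload = json.dumps({
    "check_name": "Checkout Flow",
    "status": "failed",
    "region": "us-east-1",
})

def handle_alert(raw: str) -> str:
    """Parse an incoming alert and format a routing message."""
    event = json.loads(raw)
    if event["status"] == "failed":
        return f"ALERT: {event['check_name']} failed in {event['region']}"
    return f"RECOVERED: {event['check_name']}"
```

Your receiver can then forward the formatted message to whatever internal tool your team uses.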
Setting Up an Alert Policy
An Alert Policy defines when and how you are notified. Think of it as the rulebook for your monitoring alerts.
Step 1: Create the Policy
- Go to Settings → Alert Policies
- Click "New Policy"
- Give it a descriptive name (e.g., "Critical — Page On-Call" or "Warning — Slack Only")
Step 2: Choose Your Channels
Select one or more notification channels. Each channel receives the same alert, so you can broadcast to both Slack and PagerDuty simultaneously.
[!TIP] Start with Slack for general awareness and add PagerDuty only for checks that monitor revenue-critical flows (Login, Checkout, Payments).
Step 3: Configure Trigger Conditions
Set when the alert should fire:
| Setting | Description | Recommended |
|---|---|---|
| Immediate | Alert on first failure | For checkout and payment flows |
| After N failures | Alert after consecutive failures | 2-3 for most checks |
| Recovery alerts | Notify when check recovers | Always enable this |
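The "After N failures" condition above amounts to tracking consecutive failures and firing once the threshold is crossed. A minimal sketch of that logic (illustrative only, not supaguard's implementation):

```python
from typing import Optional

class FailureTracker:
    """Sketch of 'alert after N consecutive failures' plus recovery
    notifications. Illustrative logic, not supaguard's implementation."""

    def __init__(self, threshold: int = 2):
        self.threshold = threshold
        self.consecutive = 0
        self.alerting = False

    def record(self, passed: bool) -> Optional[str]:
        """Record one check result; return 'alert', 'recovery', or None."""
        if passed:
            recovered = self.alerting
            self.consecutive = 0
            self.alerting = False
            return "recovery" if recovered else None
        self.consecutive += 1
        if self.consecutive >= self.threshold and not self.alerting:
            self.alerting = True
            return "alert"
        return None
```

With a threshold of 2, a single blip produces no notification; two failures in a row fire one alert, and the next passing run fires the recovery notification.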
Step 4: Define Escalation Rules
Escalation rules prevent alert fatigue by adding intelligent delays. For example:
1. Wait 2 minutes (filters blips and transient issues)
↓
2. Notify Slack #alerts channel
↓
3. Wait 15 minutes (if still unresolved)
↓
4. Notify PagerDuty → Wake up on-call engineer
This ensures minor blips never reach your pager, while genuine outages escalate quickly.
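The ladder above can be modeled as a list of cumulative delays: each step's channel is notified only once the outage has lasted through every preceding wait. A minimal sketch (step names and timings mirror the example; the code is illustrative, not supaguard's engine):

```python
from dataclasses import dataclass

@dataclass
class EscalationStep:
    wait_minutes: int  # delay before this step, after the previous one
    channel: str

# Illustrative ladder matching the example: 2 min -> Slack, +15 min -> PagerDuty
LADDER = [
    EscalationStep(2, "slack:#alerts"),
    EscalationStep(15, "pagerduty:on-call"),
]

def channels_to_notify(minutes_unresolved: int) -> list[str]:
    """Return every channel whose cumulative delay has elapsed."""
    notified, elapsed = [], 0
    for step in LADDER:
        elapsed += step.wait_minutes
        if minutes_unresolved >= elapsed:
            notified.append(step.channel)
    return notified
```

A 1-minute blip notifies no one; a 5-minute outage reaches Slack; anything past 17 minutes pages the on-call engineer.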
Assigning Policies to Checks
Once created, assign a policy to any check:
- Edit your Check
- Go to the Alerting tab
- Select your policy from the dropdown
- Click Save
Now, that check will follow the rules you defined, ensuring you are only bothered when it matters.
[!NOTE] A single check can have only one alert policy, but a single policy can be assigned to multiple checks. Create policies based on severity level, not per-check.
Recommended Alert Architecture
For most teams, we recommend three tiers of alert policies:
Tier 1: Critical (Revenue-Impacting)
- Checks: Login, Checkout, Payment processing
- Channels: PagerDuty + Slack #critical-alerts
- Trigger: Immediate (after Smart Retry confirmation)
- Escalation: Page on-call after 2 minutes
Tier 2: Warning (Important but Non-Urgent)
- Checks: Feature pages, API health, dashboards
- Channels: Slack #monitoring
- Trigger: After 2 consecutive failures
- Escalation: None—investigate during business hours
Tier 3: Informational (Awareness Only)
- Checks: Documentation, status pages, blog
- Channels: Email digest
- Trigger: After 3 consecutive failures
- Escalation: None
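The three tiers above can be captured as plain data. This sketch uses assumed field names (`channels`, `trigger_after_failures`, `escalate_after_minutes`), not supaguard's actual configuration format:

```python
# Illustrative three-tier policy definitions; field names are
# assumptions, not supaguard's configuration format.
POLICIES = {
    "Critical — Page On-Call": {
        "channels": ["pagerduty", "slack:#critical-alerts"],
        "trigger_after_failures": 1,
        "escalate_after_minutes": 2,
    },
    "Warning — Slack Only": {
        "channels": ["slack:#monitoring"],
        "trigger_after_failures": 2,
        "escalate_after_minutes": None,
    },
    "Info — Email Digest": {
        "channels": ["email:digest"],
        "trigger_after_failures": 3,
        "escalate_after_minutes": None,
    },
}
```

Keeping policies as a small, severity-keyed set like this is what makes the one-policy-to-many-checks assignment model manageable.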
Testing Your Alerts
Before going live, always verify your channels work:
- Navigate to Settings → Alert Policies
- Click ⋮ on your policy
- Select Send Test Notification
- Verify it arrives in the expected channel
[!IMPORTANT] For Slack integrations, make sure the bot has been invited to the target channel. For PagerDuty, verify the integration key is correct by sending a test event.
Best Practices
- Name policies by severity, not by check — "Critical — Page On-Call" is better than "Login Check Alert"
- Always enable recovery notifications — Know when issues resolve, not just when they start
- Use Slack threads — Configure your Slack integration to post updates as thread replies to reduce channel noise
- Combine with Smart Retries — supaguard verifies failures from multiple regions before alerting, eliminating most false positives before they reach your policy
- Review alert volume monthly — If a check triggers more than 5 alerts/week, the test script likely needs maintenance
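The multi-region verification mentioned above boils down to a quorum rule: only treat a check as truly failing when enough regions agree. A minimal sketch of that idea (illustrative, not supaguard's Smart Retry implementation):

```python
def confirmed_failure(region_results: dict[str, bool], quorum: int = 2) -> bool:
    """Treat a check as genuinely failing only when at least `quorum`
    regions report failure. Illustrative quorum logic, not supaguard's
    actual Smart Retry implementation."""
    failures = sum(1 for passed in region_results.values() if not passed)
    return failures >= quorum
```

A failure seen from a single region (often a local network hiccup) never reaches your alert policy; agreement across regions does.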
Next Steps
- Slack Integration Setup — Detailed Slack configuration
- PagerDuty Integration — Connect your incident management
- Smart Retries — How false alarms are eliminated before alerting
- Debugging Failures — What to do when an alert fires