Alerts Center
The Alerts Center is your command post for monitoring quality failures and coordinating remediation across your data products.
Accessing the Alerts Center
Navigate to Alerts in the main sidebar to access the center.
Understanding Alerts
What Triggers an Alert?
Alerts are generated when a quality check fails or produces a warning, a product-level threshold is breached, or when scheduled monitoring detects an issue.
Alert Sources
Alerts primarily come from two sources: Quality Checks, which trigger alerts upon failed or warning execution states, and Product Alerts, which are raised based on product-level monitoring thresholds.
Alert Severity
Alerts are categorized by severity so you can prioritize your response. Critical (Red) alerts require immediate attention. High (Orange) alerts represent important issues that should be addressed soon. Medium (Yellow) alerts indicate issues that should be reviewed, while Low (Blue) alerts are minor concerns mainly for awareness.
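The sketch below models the alert fields described in this section as a TypeScript type. It is illustrative only; the field and type names are assumptions, not the platform's actual schema.

```typescript
// Illustrative model of an alert as described above.
// Field and type names are assumptions, not the platform's actual schema.
type AlertSeverity = "critical" | "high" | "medium" | "low";
type AlertSource = "quality_check" | "product";
type AlertStatus = "active" | "acknowledged" | "resolved";

interface Alert {
  id: string;
  title: string;
  description: string;
  severity: AlertSeverity; // critical (red), high (orange), medium (yellow), low (blue)
  source: AlertSource;     // failed/warning quality check, or product-level threshold breach
  status: AlertStatus;
  productId: string;       // the associated data product
  triggeredAt: string;     // ISO-8601 timestamp of the triggering event
}
```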
Alerts Dashboard
Alert List
The main view displays a list of all active alerts, showing the severity indicator, title and description, associated product, time elapsed since the trigger, and the current status.
Filtering Alerts
You can narrow down the alert list using filters for Severity (e.g., Critical, High), Status (Active, Acknowledged, Resolved), Product (to see alerts for a specific data asset), or Type (Quality vs. Product alerts).
Sorting
Alerts can be sorted by Newest first (the default), by Severity to prioritize critical issues, or by Product name to group alerts by asset.
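As a rough sketch of how these filters and sort orders combine, the function below filters and sorts alerts client-side. It builds on the hypothetical Alert shape above and is not the platform's own implementation.

```typescript
// Client-side filter/sort sketch using the hypothetical Alert shape above.
const severityRank = { critical: 0, high: 1, medium: 2, low: 3 } as const;

function filterAndSortAlerts(
  alerts: Alert[],
  filters: {
    severity?: AlertSeverity;
    status?: AlertStatus;
    productId?: string;
    source?: AlertSource; // Quality vs. Product alerts
  },
  sortBy: "newest" | "severity" | "product" = "newest"
): Alert[] {
  return alerts
    .filter(a =>
      (!filters.severity || a.severity === filters.severity) &&
      (!filters.status || a.status === filters.status) &&
      (!filters.productId || a.productId === filters.productId) &&
      (!filters.source || a.source === filters.source)
    )
    .sort((a, b) => {
      if (sortBy === "severity") return severityRank[a.severity] - severityRank[b.severity];
      if (sortBy === "product") return a.productId.localeCompare(b.productId);
      return Date.parse(b.triggeredAt) - Date.parse(a.triggeredAt); // newest first
    });
}
```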
Working with Alerts
Viewing Alert Details
Clicking any alert reveals the full description, source check or trigger, execution details, associated product information, and a timeline of status changes.
Acknowledging Alerts
When you start investigating an alert, open the alert details and click Acknowledge. This moves the alert to "acknowledged" status, letting other team members know it's being handled.
Resolving Alerts
Once the issue is fixed, open the alert details and click Resolve. You can optionally add a resolution note before the alert moves to "resolved" status.
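If you script these status changes rather than using the UI, the REST-style sketch below captures the acknowledge and resolve flow. The endpoint paths and payload fields are assumptions; the platform's actual API is not documented in this section.

```typescript
// Hypothetical REST-style sketch of the acknowledge/resolve flow.
// Endpoint paths and payload fields are assumptions, not a documented API.
async function acknowledgeAlert(alertId: string): Promise<void> {
  await fetch(`/api/alerts/${alertId}/acknowledge`, { method: "POST" });
}

async function resolveAlert(alertId: string, resolutionNote?: string): Promise<void> {
  await fetch(`/api/alerts/${alertId}/resolve`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ note: resolutionNote ?? null }), // optional resolution note
  });
}
```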
Adding Annotations
Document your investigation by adding comments or notes in the alert details. These notes are visible to other team members, making them useful for handoffs and audit trails.
Creating Tickets from Alerts
When to Create a Ticket
Convert an alert to a ticket when the issue requires formal tracking, multiple people need to collaborate, you want to track resolution time, or the issue is part of a larger incident.
How to Create a Ticket
Open the alert details and click Create Ticket. The ticket is pre-populated with the alert description, severity mapping (critical maps to high priority), source product link, and execution details. You can add additional context if needed before saving.
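The pre-population can be pictured as a simple mapping from the alert to a ticket draft. Only the critical-to-high priority mapping is documented; the remaining mappings and field names below are assumptions for illustration, reusing the hypothetical Alert shape above.

```typescript
// Sketch of drafting a ticket from an alert. Only "critical -> high priority"
// is documented; the other mappings and field names are assumptions.
type TicketPriority = "high" | "medium" | "low";

const severityToPriority: Record<AlertSeverity, TicketPriority> = {
  critical: "high", // documented mapping
  high: "medium",   // assumption
  medium: "low",    // assumption
  low: "low",       // assumption
};

function draftTicketFromAlert(alert: Alert) {
  return {
    title: alert.title,
    description: alert.description,               // alert description carried over
    priority: severityToPriority[alert.severity], // severity mapping
    sourceProductId: alert.productId,             // link to the source product
    sourceAlertId: alert.id,                      // enables bidirectional linking
  };
}
```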
Bidirectional Linking
After creating a ticket, the alert shows a link to the ticket, and the ticket shows a link back to the source alert. Status updates are visible in both places.
Alert Highlighting
Direct Navigation
Alerts can be highlighted via a URL parameter (/alerts?highlight={alert-id}). This is useful when linking from emails or notifications, sharing specific alerts with teammates, or navigating from related tickets. The specified alert automatically opens in the detail view.
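For example, a deep link that opens a specific alert can be built as shown below; the base URL is a placeholder, while the highlight parameter matches the pattern documented above.

```typescript
// Build a deep link that opens a specific alert in the detail view.
// The base URL is a placeholder; ?highlight= is the documented parameter.
function alertDeepLink(alertId: string, baseUrl = "https://your-platform.example.com"): string {
  return `${baseUrl}/alerts?highlight=${encodeURIComponent(alertId)}`;
}

// alertDeepLink("a1b2c3") -> "https://your-platform.example.com/alerts?highlight=a1b2c3"
```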
Alert Lifecycle
The lifecycle moves from Active to Acknowledged to Resolved, though alerts can also move directly from Active to Resolved.
Alert States
Active alerts are new, unhandled, require attention, and are visible in the default alert list. Acknowledged alerts indicate someone is investigating but action is still required; this status prevents duplicate investigation. Resolved alerts have been addressed, can be filtered out of the active view, and are preserved for historical reference.
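The allowed transitions can be summarized in a small lookup, sketched below using the hypothetical AlertStatus type from earlier; this illustrates the lifecycle described above and is not a formal state machine exposed by the platform.

```typescript
// Sketch of the lifecycle: Active -> Acknowledged -> Resolved,
// plus the direct Active -> Resolved shortcut. Illustrative only.
const allowedTransitions: Record<AlertStatus, AlertStatus[]> = {
  active: ["acknowledged", "resolved"],
  acknowledged: ["resolved"],
  resolved: [],
};

function canTransition(from: AlertStatus, to: AlertStatus): boolean {
  return allowedTransitions[from].includes(to);
}
```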
Severity Escalation
Alerts can be escalated by opening the details and clicking Escalate or changing the severity. Select a higher severity level and add a reason for the escalation. Use escalation when impact is greater than initially assessed, the issue persists despite resolution attempts, or stakeholders require higher visibility.
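If escalation were scripted rather than done through the UI, it might look like the hypothetical sketch below; the endpoint and payload are assumptions, not a documented API.

```typescript
// Hypothetical sketch of escalating an alert to a higher severity with a reason.
// Endpoint and payload are assumptions, not a documented API.
async function escalateAlert(
  alertId: string,
  newSeverity: "high" | "critical",
  reason: string
): Promise<void> {
  await fetch(`/api/alerts/${alertId}/escalate`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ severity: newSeverity, reason }),
  });
}
```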
Context Enrichment
Alerts include rich context for faster resolution.
Source Information
Each alert links to the Quality Check definition, the affected Data Product, and the specific Field if applicable.
Execution Details
Execution details include the Status (what failed), Actual Value vs. Expected, the Timestamp of occurrence, and whether it was Triggered By a user or the system.
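These details can be pictured as a small record attached to the alert; the field names below are assumptions for illustration.

```typescript
// Illustrative shape for an alert's execution details.
// Field names are assumptions, not the platform's actual schema.
interface ExecutionDetails {
  status: "failed" | "warning";   // what failed
  actualValue: string | number;   // observed value
  expectedValue: string | number; // expected value or threshold
  timestamp: string;              // ISO-8601 time of occurrence
  triggeredBy: "user" | "system"; // manual run vs. scheduled monitoring
}
```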
Navigation Links
Quick links allow you to view the source quality check, open the product detail page, or see the full execution history.
Dashboard Metrics
The Alerts Center header displays Active Alerts (total requiring attention), Critical (highest severity count), This Week (alerts generated recently), and Resolution Rate (percentage resolved).
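A minimal sketch of how those four numbers could be derived, assuming Resolution Rate means resolved alerts as a percentage of all alerts in view and reusing the hypothetical Alert shape above:

```typescript
// Minimal sketch of the header metrics, assuming Resolution Rate is
// resolved alerts as a percentage of all alerts in view.
function dashboardMetrics(alerts: Alert[], weekStart: Date) {
  const resolved = alerts.filter(a => a.status === "resolved").length;
  return {
    activeAlerts: alerts.filter(a => a.status !== "resolved").length,
    critical: alerts.filter(a => a.status !== "resolved" && a.severity === "critical").length,
    thisWeek: alerts.filter(a => Date.parse(a.triggeredAt) >= weekStart.getTime()).length,
    resolutionRate: alerts.length === 0 ? 0 : Math.round((resolved / alerts.length) * 100),
  };
}
```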
Notifications and Integrations
In-Platform
Alerts appear in the Activity Center, with badge counts on the Alerts navigation item and dashboard widgets for quick visibility.
Future Integrations
Planned integrations include email notifications for critical alerts, Slack/Teams integration, and webhook support for external systems.
Best Practices
Don't Ignore Alerts
Every alert represents a potential issue. Acknowledge quickly to prevent duplicate work, and resolve promptly to maintain trust in the system. If an alert is noisy, fix the underlying check.
Use Severity Appropriately
Reserve Critical for true emergencies with immediate business impact. Use High for important but not urgent issues, Medium for items that should be addressed, and Low for informational alerts.
Create Tickets for Complex Issues
When you can't resolve quickly, create a ticket for tracking and assign it to the appropriate team member. This ensures the issue is tracked to completion and prevents alerts from being forgotten.
Regular Triage
Schedule regular alert reviews: daily for new alerts, weekly for aging alerts, and monthly to identify alert patterns.
Improve Based on Alerts
Use alert data to improve. Recurring alerts suggest underlying issues, false positives indicate check problems, and patterns reveal systemic quality gaps.