Overview
Grantex monitors all authorization events in real time and fires alerts when agent behavior deviates from expected patterns. Anomaly detection ships with 10 built-in rules that cover the most common threats, a custom rule DSL for organization-specific policies, and multi-channel alerting to Slack, PagerDuty, Datadog, email, and webhooks. Every alert follows a clear lifecycle (open, acknowledged, resolved) with full traceability of who responded and what action was taken. Anomaly detection is available on all plans, including the free tier.

Built-in Rules
Grantex ships with 10 detection rules that require zero configuration:

| Rule ID | Name | Trigger | Severity |
|---|---|---|---|
| velocity_spike | Velocity Spike | Request rate exceeds 3x the rolling 1-hour average | High |
| scope_escalation | Scope Escalation | Agent requests scopes beyond its registered set | Critical |
| unknown_agent | Unknown Agent | Token presented by an unregistered agent DID | Critical |
| token_replay | Token Replay | Same token JTI used from multiple IP addresses | Critical |
| off_hours_activity | Off-Hours Activity | Agent active outside its configured operating window | Low |
| high_failure_rate | High Failure Rate | More than 30% of requests fail in a 15-minute window | High |
| concurrent_sessions | Concurrent Sessions | Same grant token used from 3+ distinct IPs | High |
| delegation_depth | Delegation Depth | Delegation chain exceeds the configured maximum depth | Medium |
| budget_overspend | Budget Overspend | Agent consumes more than 90% of its budget in a single burst | High |
| geo_anomaly | Geographic Anomaly | Requests from unexpected geographic regions | Medium |
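To make the trigger semantics concrete, here is a minimal sketch of how a check like velocity_spike could be evaluated. The function and its window handling are illustrative assumptions, not Grantex's actual implementation:

```python
from collections import deque


def velocity_spike(events: deque, now: float,
                   window: float = 3600.0, factor: float = 3.0) -> bool:
    """Illustrative check: does the last-minute request rate exceed
    `factor` times the rolling 1-hour average rate?

    `events` is a sorted deque of request timestamps (seconds)."""
    # Drop events that have fallen out of the rolling window.
    while events and events[0] < now - window:
        events.popleft()
    if not events:
        return False
    hourly_rate = len(events) / window            # avg requests/sec over the hour
    recent = sum(1 for t in events if t >= now - 60)
    recent_rate = recent / 60.0                   # requests/sec over the last minute
    return recent_rate > factor * hourly_rate
```

A steady baseline keeps the recent rate near the hourly average, so the check stays quiet; a burst concentrated in the last minute pushes the recent rate past the 3x threshold and fires.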
Custom Rules
Create rules tailored to your threat model. Custom rules support agent filters, scope filters, time windows, and thresholds.

Condition Fields
| Field | Type | Description |
|---|---|---|
| agentIds | string[] | Limit the rule to specific agent IDs. Empty = all agents. |
| scopes | string[] | Trigger only when these scopes are involved. |
| timeWindow | string | Sliding window: 5m, 15m, 1h, 6h, 24h. |
| threshold | number | Number of matching events in the time window that triggers the alert. |
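As an illustration of how these fields combine, the sketch below evaluates a rule shaped like the table above against a list of events. The rule shape mirrors the documented fields, but the evaluator itself is an assumption, not Grantex's DSL engine:

```python
WINDOW_SECONDS = {"5m": 300, "15m": 900, "1h": 3600, "6h": 21600, "24h": 86400}


def rule_matches(rule: dict, events: list, now: float) -> bool:
    """Return True when at least `threshold` matching events fall
    inside the rule's sliding time window."""
    window = WINDOW_SECONDS[rule["timeWindow"]]
    hits = 0
    for ev in events:
        if ev["ts"] < now - window:
            continue                      # outside the sliding window
        if rule["agentIds"] and ev["agentId"] not in rule["agentIds"]:
            continue                      # empty agentIds means all agents
        if rule["scopes"] and not set(rule["scopes"]) & set(ev["scopes"]):
            continue                      # none of the watched scopes involved
        hits += 1
    return hits >= rule["threshold"]
```

Note the asymmetry: an empty agentIds list matches every agent, while the threshold must be met within a single window for the alert to fire.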
Alert Lifecycle
Every anomaly alert moves through a defined lifecycle: open → acknowledged → resolved.

Managing Alerts
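The open, acknowledged, resolved lifecycle can be sketched as a small state machine. The allowed transitions here (including resolving an open alert directly) are assumptions based on the states the docs name, and the history tuple is a hypothetical shape for the traceability record:

```python
VALID_TRANSITIONS = {
    "open": {"acknowledged", "resolved"},   # assumed: open alerts may be resolved directly
    "acknowledged": {"resolved"},
    "resolved": set(),                      # terminal state
}


def transition(alert: dict, new_status: str, actor: str) -> dict:
    """Move an alert to a new status, recording who acted (traceability)."""
    if new_status not in VALID_TRANSITIONS[alert["status"]]:
        raise ValueError(f"cannot move {alert['status']} -> {new_status}")
    history = alert["history"] + [(alert["status"], new_status, actor)]
    return {**alert, "status": new_status, "history": history}
```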
Notification Channels
Route alerts to the tools your team uses:

| Channel | Type | Config |
|---|---|---|
| Slack | slack | webhookUrl |
| PagerDuty | pagerduty | routingKey |
| Datadog | datadog | apiKey, site |
| Email | email | to, from |
| Webhook | webhook | url, secret |
Creating a Channel
Each channel accepts a severities filter. Only alerts matching the configured severities are sent to that channel, so you can route critical alerts to PagerDuty while sending low-severity alerts to a Slack monitoring channel.
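A minimal sketch of severity-based routing, assuming each channel record carries a name and a severities list as described above:

```python
def route_alert(alert: dict, channels: list) -> list:
    """Return the names of channels whose severities filter matches the alert."""
    return [
        ch["name"]
        for ch in channels
        if alert["severity"] in ch["severities"]
    ]


channels = [
    {"name": "pagerduty-oncall", "severities": ["critical"]},
    {"name": "slack-monitoring", "severities": ["low", "medium"]},
]
```

With this setup, a critical alert routes only to pagerduty-oncall, while a low-severity alert lands in slack-monitoring.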
SSE Event Stream
Subscribe to real-time anomaly events via Server-Sent Events:
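Consuming the stream requires a live endpoint, but the wire format is standard SSE. This sketch parses event and data lines the way a client would (field names follow the SSE spec; the anomaly payload shape shown in the test is hypothetical):

```python
import json


def parse_sse(raw: str) -> list:
    """Parse a raw SSE stream into a list of {event, data} records.
    Events are separated by a blank line; data: lines carry JSON here."""
    records = []
    event, data_lines = None, []
    for line in raw.splitlines() + [""]:
        if line == "":                    # blank line terminates one event
            if data_lines:
                records.append({"event": event,
                                "data": json.loads("\n".join(data_lines))})
            event, data_lines = None, []
        elif line.startswith("event:"):
            event = line[len("event:"):].strip()
        elif line.startswith("data:"):
            data_lines.append(line[len("data:"):].strip())
    return records
```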
Metrics API
Query aggregate anomaly metrics.

Prometheus Metrics
The GET /metrics endpoint exposes Prometheus-format counters:
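Counters come back in the standard Prometheus text exposition format. The sketch below parses simple counter lines into a lookup table; the metric name in the test is hypothetical, not necessarily what Grantex exports:

```python
def parse_prom(text: str) -> dict:
    """Parse simple Prometheus text-format counter lines into
    a {(name, labels): value} mapping."""
    metrics = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):   # skip blanks and HELP/TYPE comments
            continue
        name_part, value = line.rsplit(" ", 1)
        if "{" in name_part:
            name, labels = name_part.split("{", 1)
            labels = labels.rstrip("}")
        else:
            name, labels = name_part, ""
        metrics[(name, labels)] = float(value)
    return metrics
```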
Dashboard
The developer portal includes a full anomaly detection dashboard at /dashboard/anomalies:
- Severity overview — Open alert counts by severity with color-coded indicators
- Activity chart — 14-day bar chart of alert volume
- Alert list — Filterable by status and severity with inline acknowledge/resolve/revoke actions
- Alert detail — Full context, timeline, and resolution notes
- Rule builder — View built-in rules, create custom rules, toggle enable/disable
Related
- Anomaly Detection Setup Guide — Step-by-step configuration
- Event Streaming — SSE and WebSocket endpoints
- Metrics & Observability — Prometheus, Grafana, OpenTelemetry
- Budget Controls — Financial guardrails for agents