# Rule Testing API

The Rule Testing API allows you to test alert rule configurations before activating them, ensuring they work as expected.
## Test Rule Configuration

Test a rule configuration without saving it to the database.

### Endpoint

```
POST /api/v1/rules/test
```

### Request Body
```json
{
  "name": "string (required)",
  "ruleType": "enum (required)",
  "ruleConfig": {
    "key": "value"
  },
  "severity": "enum (required)"
}
```

### Example Request
```shell
curl -X POST http://localhost:8080/api/v1/rules/test \
  -H "Content-Type: application/json" \
  -d '{
    "name": "Test High Latency Rule",
    "ruleType": "THRESHOLD",
    "ruleConfig": {
      "metric": "avgLatency",
      "operator": "GREATER_THAN",
      "threshold": 100.0,
      "windowMinutes": 5
    },
    "severity": "HIGH"
  }'
```

### Success Response
Status Code: 200 OK
```json
{
  "success": true,
  "message": "Rule test passed successfully",
  "compilationStatus": "SUCCESS",
  "generatedDrl": "rule \"Test High Latency Rule\"\nwhen\n  $metrics: MetricsFact(avgLatency > 100.0)\nthen\n  // Alert logic\nend",
  "validationErrors": [],
  "simulationResult": {
    "wouldTrigger": true,
    "matchedConditions": [
      "avgLatency > 100.0"
    ],
    "currentValues": {
      "avgLatency": 125.3
    }
  },
  "recommendations": [
    "Consider setting suppressionWindowMinutes to avoid alert fatigue"
  ]
}
```

### Error Response
Status Code: 200 OK (with success: false)
```json
{
  "success": false,
  "message": "Rule compilation failed",
  "compilationStatus": "ERROR",
  "generatedDrl": null,
  "validationErrors": [
    "Invalid operator: GREATER_THEN (did you mean GREATER_THAN?)",
    "Metric 'avgLatency' not found in available metrics"
  ],
  "simulationResult": null,
  "recommendations": []
}
```
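Because a failed test still returns `200 OK` with `success: false`, clients should branch on the `success` flag rather than the HTTP status. A minimal sketch of that check, based only on the response shape documented above:

```javascript
// Sketch: given a parsed test response of the shape shown above,
// decide whether the rule is safe to save and collect any errors.
function summarizeTestResult(result) {
  if (result.success) {
    return { ok: true, errors: [] };
  }
  // On failure, validationErrors explains what to fix.
  return { ok: false, errors: result.validationErrors ?? [] };
}
```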
## Test Existing Rule by ID
Test an existing rule without modifying it.

### Endpoint

```
POST /api/v1/rules/test/{id}
```

### Path Parameters
| Parameter | Type | Description |
|---|---|---|
| id | integer | Rule ID |
### Example Request

```shell
curl -X POST "http://localhost:8080/api/v1/rules/test/1"
```

### Success Response
Status Code: 200 OK
Same structure as “Test Rule Configuration” response.
## Test Result Structure

### Fields
| Field | Type | Description |
|---|---|---|
| success | boolean | Whether the test passed |
| message | string | Human-readable test result |
| compilationStatus | enum | SUCCESS, ERROR, WARNING |
| generatedDrl | string | Generated Drools DRL code |
| validationErrors | string[] | Array of validation errors |
| simulationResult | object | Simulation against current metrics |
| recommendations | string[] | Best practice recommendations |
### Compilation Status

- `SUCCESS`: Rule compiles and is valid
- `ERROR`: Rule has syntax or logic errors
- `WARNING`: Rule compiles but has potential issues
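The three statuses above might be surfaced to users like this (a sketch; the hint wording is illustrative, not part of the API):

```javascript
// Sketch: map the documented compilationStatus values to a
// short next-step hint for display in a client UI.
function compilationHint(status) {
  switch (status) {
    case 'SUCCESS':
      return 'Rule compiles and is valid.';
    case 'WARNING':
      return 'Rule compiles but has potential issues; review recommendations.';
    case 'ERROR':
      return 'Rule has syntax or logic errors; fix validationErrors first.';
    default:
      return `Unknown compilation status: ${status}`;
  }
}
```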
### Simulation Result

```typescript
{
  wouldTrigger: boolean;               // Would the rule trigger now?
  matchedConditions: string[];         // Which conditions matched
  currentValues: Record<string, any>;  // Current metric values
}
```

## Available Metrics for Rules
Rules can be configured against the following metrics:
### Summary Metrics

- `totalEvents` (number)
- `eventsPerSecond` (number)
- `avgLatency` (number)
- `errorRate` (number)
- `uniqueUsers` (number)
- `uniqueSources` (number)
- `uniqueEventTypes` (number)

### Latency Metrics

- `latencyP50` (number)
- `latencyP95` (number)
- `latencyP99` (number)
- `latencyMin` (number)
- `latencyMax` (number)

### Throughput Metrics

- `currentThroughput` (number)
- `peakThroughput` (number)

### Error Metrics

- `totalErrors` (number)
- `errorRate` (number)

### Event Counts

- `eventsByType[eventType]` (number)
- `eventsBySource[source]` (number)
- `eventsBySeverity[severity]` (number)
## Supported Operators

### Comparison Operators

- `GREATER_THAN` (`>`)
- `GREATER_THAN_OR_EQUAL` (`>=`)
- `LESS_THAN` (`<`)
- `LESS_THAN_OR_EQUAL` (`<=`)
- `EQUALS` (`==`)
- `NOT_EQUALS` (`!=`)

### Logical Operators

- `AND`: Multiple conditions must be true
- `OR`: At least one condition must be true
- `NOT`: Negation
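The comparison-operator semantics can be expressed as plain predicates. The server evaluates these in Drools; the table below is only an illustration of what each operator means:

```javascript
// Illustration of the documented comparison operators as
// JavaScript predicates over (metric value, threshold) pairs.
const COMPARATORS = {
  GREATER_THAN:          (value, threshold) => value >   threshold,
  GREATER_THAN_OR_EQUAL: (value, threshold) => value >=  threshold,
  LESS_THAN:             (value, threshold) => value <   threshold,
  LESS_THAN_OR_EQUAL:    (value, threshold) => value <=  threshold,
  EQUALS:                (value, threshold) => value === threshold,
  NOT_EQUALS:            (value, threshold) => value !== threshold,
};
```

For example, `COMPARATORS.GREATER_THAN(125.3, 100.0)` is `true`, matching the `avgLatency > 100.0` simulation shown earlier.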
## Example Rule Configurations

### High Error Rate
```json
{
  "name": "High Error Rate",
  "ruleType": "THRESHOLD",
  "ruleConfig": {
    "metric": "errorRate",
    "operator": "GREATER_THAN",
    "threshold": 5.0,
    "windowMinutes": 5
  },
  "severity": "CRITICAL"
}
```

### Low Throughput
```json
{
  "name": "Low Throughput Alert",
  "ruleType": "THRESHOLD",
  "ruleConfig": {
    "metric": "currentThroughput",
    "operator": "LESS_THAN",
    "threshold": 10.0,
    "windowMinutes": 5
  },
  "severity": "MEDIUM"
}
```

### High Latency
```json
{
  "name": "High P99 Latency",
  "ruleType": "THRESHOLD",
  "ruleConfig": {
    "metric": "latencyP99",
    "operator": "GREATER_THAN",
    "threshold": 100.0,
    "windowMinutes": 5
  },
  "severity": "HIGH"
}
```

### Many Unique Users
```json
{
  "name": "Unusual Traffic Spike",
  "ruleType": "THRESHOLD",
  "ruleConfig": {
    "metric": "uniqueUsers",
    "operator": "GREATER_THAN",
    "threshold": 1000,
    "windowMinutes": 5
  },
  "severity": "MEDIUM"
}
```

## Best Practices
- **Always Test First**: Test rules before enabling them in production
- **Use Realistic Thresholds**: Base thresholds on actual system behavior
- **Test with Current Data**: Rule testing uses real-time metrics for simulation
- **Review Validation Errors**: Fix all errors before saving rules
- **Consider Recommendations**: Review and apply suggested best practices
- **Iterate**: Adjust thresholds based on false positives/negatives
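All of the example configurations share the required top-level fields from the request-body schema (`name`, `ruleType`, `ruleConfig`, `severity`). A client-side pre-check for those fields, before calling `POST /api/v1/rules/test`, might look like this (a sketch; the function name is our own):

```javascript
// Sketch: report which required top-level fields are missing
// from a rule configuration, per the documented request body.
function missingRequiredFields(rule) {
  const required = ['name', 'ruleType', 'ruleConfig', 'severity'];
  return required.filter(
    (field) => rule[field] === undefined || rule[field] === null
  );
}
```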
## Common Validation Errors

| Error | Cause | Solution |
|---|---|---|
| "Invalid operator" | Typo in operator name | Use exact operator names from docs |
| "Metric not found" | Metric doesn't exist | Check available metrics list |
| "Invalid threshold type" | Wrong data type | Ensure threshold matches metric type |
| "Missing required field" | Missing ruleConfig property | Add all required fields |
| "DRL compilation failed" | Syntax error in generated DRL | Check configuration structure |
## Testing Workflow

1. **Create Rule Configuration**
   - Define name, type, and config
   - Set severity and thresholds
2. **Test Configuration**
   - `POST /api/v1/rules/test`
   - Review compilation status
3. **Fix Errors (if any)**
   - Address validation errors
   - Adjust configuration
4. **Review Simulation**
   - Check if rule would trigger
   - Verify matched conditions
5. **Adjust Thresholds**
   - Fine-tune based on simulation
   - Consider historical data
6. **Apply Recommendations**
   - Set suppression windows
   - Configure max alerts
7. **Create Rule**
   - `POST /api/v1/rules`
   - Enable rule
8. **Monitor Performance**
   - Watch trigger count
   - Adjust as needed
## Integration Examples

### Test Before Create

```javascript
async function createRuleWithTesting(ruleConfig) {
  // Test the rule first
  const testResult = await fetch('http://localhost:8080/api/v1/rules/test', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(ruleConfig)
  }).then(r => r.json());

  if (!testResult.success) {
    console.error('Rule test failed:', testResult.validationErrors);
    return null;
  }

  // simulationResult can be null (e.g. on compilation failure),
  // so use optional chaining before reading wouldTrigger
  if (testResult.simulationResult?.wouldTrigger) {
    console.log('Warning: Rule would trigger immediately');
  }

  // Create the rule if the test passed
  const createResult = await fetch('http://localhost:8080/api/v1/rules', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(ruleConfig)
  }).then(r => r.json());

  return createResult;
}
```

### Batch Test Multiple Rules
```shell
#!/bin/bash
# Test multiple rule configurations
for config in rule_configs/*.json; do
  echo "Testing $config..."
  curl -s -X POST http://localhost:8080/api/v1/rules/test \
    -H "Content-Type: application/json" \
    -d @"$config" | jq '.success, .validationErrors'
done
```