
Rule Testing API

The Rule Testing API allows you to test alert rule configurations before activating them, ensuring they work as expected.

Test Rule Configuration

Test a rule configuration without saving it to the database.

Endpoint

POST /api/v1/rules/test

Request Body

```json
{
  "name": "string (required)",
  "ruleType": "enum (required)",
  "ruleConfig": { "key": "value" },
  "severity": "enum (required)"
}
```

Example Request

```bash
curl -X POST http://localhost:8080/api/v1/rules/test \
  -H "Content-Type: application/json" \
  -d '{
    "name": "Test High Latency Rule",
    "ruleType": "THRESHOLD",
    "ruleConfig": {
      "metric": "avgLatency",
      "operator": "GREATER_THAN",
      "threshold": 100.0,
      "windowMinutes": 5
    },
    "severity": "HIGH"
  }'
```

Success Response

Status Code: 200 OK

```json
{
  "success": true,
  "message": "Rule test passed successfully",
  "compilationStatus": "SUCCESS",
  "generatedDrl": "rule \"Test High Latency Rule\"\nwhen\n $metrics: MetricsFact(avgLatency > 100.0)\nthen\n // Alert logic\nend",
  "validationErrors": [],
  "simulationResult": {
    "wouldTrigger": true,
    "matchedConditions": [
      "avgLatency > 100.0"
    ],
    "currentValues": {
      "avgLatency": 125.3
    }
  },
  "recommendations": [
    "Consider setting suppressionWindowMinutes to avoid alert fatigue"
  ]
}
```

Error Response

Status Code: 200 OK (with success: false)

```json
{
  "success": false,
  "message": "Rule compilation failed",
  "compilationStatus": "ERROR",
  "generatedDrl": null,
  "validationErrors": [
    "Invalid operator: GREATER_THEN (did you mean GREATER_THAN?)",
    "Metric 'avgLatency' not found in available metrics"
  ],
  "simulationResult": null,
  "recommendations": []
}
```

Test Existing Rule by ID

Test an existing rule without modifying it.

Endpoint

POST /api/v1/rules/test/{id}

Path Parameters

| Parameter | Type | Description |
|-----------|------|-------------|
| id | integer | Rule ID |

Example Request

curl -X POST "http://localhost:8080/api/v1/rules/test/1"

Success Response

Status Code: 200 OK

Same structure as “Test Rule Configuration” response.


Test Result Structure

Fields

| Field | Type | Description |
|-------|------|-------------|
| success | boolean | Whether the test passed |
| message | string | Human-readable test result |
| compilationStatus | enum | SUCCESS, ERROR, or WARNING |
| generatedDrl | string | Generated Drools DRL code |
| validationErrors | string[] | Array of validation errors |
| simulationResult | object | Simulation against current metrics |
| recommendations | string[] | Best-practice recommendations |

Compilation Status

  • SUCCESS: Rule compiles and is valid
  • ERROR: Rule has syntax or logic errors
  • WARNING: Rule compiles but has potential issues

Simulation Result

```typescript
{
  wouldTrigger: boolean;              // Would the rule trigger right now?
  matchedConditions: string[];        // Which conditions matched
  currentValues: Record<string, any>; // Current metric values
}
```
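A client can flatten a test response into a short report for logs or a UI. The `summarizeTestResult` helper below is an illustrative sketch, not part of the API; it only reads the response fields documented above:

```javascript
// Illustrative helper (not part of the API): condense a rule test
// response into a human-readable, multi-line report string.
function summarizeTestResult(result) {
  const lines = [`${result.compilationStatus}: ${result.message}`];
  for (const err of result.validationErrors ?? []) {
    lines.push(`  error: ${err}`);
  }
  const sim = result.simulationResult;
  if (sim) {
    lines.push(sim.wouldTrigger
      ? `  would trigger now (matched: ${sim.matchedConditions.join(', ')})`
      : '  would not trigger against current metrics');
  }
  return lines.join('\n');
}
```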

Available Metrics for Rules

Rules can be configured against the following metrics:

Summary Metrics

  • totalEvents (number)
  • eventsPerSecond (number)
  • avgLatency (number)
  • errorRate (number)
  • uniqueUsers (number)
  • uniqueSources (number)
  • uniqueEventTypes (number)

Latency Metrics

  • latencyP50 (number)
  • latencyP95 (number)
  • latencyP99 (number)
  • latencyMin (number)
  • latencyMax (number)

Throughput Metrics

  • currentThroughput (number)
  • peakThroughput (number)

Error Metrics

  • totalErrors (number)
  • errorRate (number)

Event Counts

  • eventsByType[eventType] (number)
  • eventsBySource[source] (number)
  • eventsBySeverity[severity] (number)
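The bracketed metric paths let a rule target a single event type, source, or severity bucket. A hypothetical configuration, assuming an event type named `PAYMENT_ERROR` exists in your deployment (substitute one of your own event types):

```json
{
  "name": "Payment Error Burst",
  "ruleType": "THRESHOLD",
  "ruleConfig": {
    "metric": "eventsByType[PAYMENT_ERROR]",
    "operator": "GREATER_THAN",
    "threshold": 50,
    "windowMinutes": 5
  },
  "severity": "HIGH"
}
```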

Supported Operators

Comparison Operators

  • GREATER_THAN (>)
  • GREATER_THAN_OR_EQUAL (>=)
  • LESS_THAN (<)
  • LESS_THAN_OR_EQUAL (<=)
  • EQUALS (==)
  • NOT_EQUALS (!=)

Logical Operators

  • AND - Multiple conditions must be true
  • OR - At least one condition must be true
  • NOT - Negation
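Conceptually, each comparison operator maps onto the corresponding JavaScript (or DRL) comparison. The sketch below is illustrative only — actual evaluation happens server-side in the generated Drools DRL — but it shows how an unknown operator name produces the "Invalid operator" validation error:

```javascript
// Illustrative mapping from operator names to predicates.
const OPERATORS = {
  GREATER_THAN:          (a, b) => a > b,
  GREATER_THAN_OR_EQUAL: (a, b) => a >= b,
  LESS_THAN:             (a, b) => a < b,
  LESS_THAN_OR_EQUAL:    (a, b) => a <= b,
  EQUALS:                (a, b) => a === b,
  NOT_EQUALS:            (a, b) => a !== b,
};

// Evaluate a single threshold condition, e.g. compare(125.3, 'GREATER_THAN', 100.0).
function compare(value, operator, threshold) {
  const op = OPERATORS[operator];
  if (!op) throw new Error(`Invalid operator: ${operator}`);
  return op(value, threshold);
}
```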

Example Rule Configurations

High Error Rate

```json
{
  "name": "High Error Rate",
  "ruleType": "THRESHOLD",
  "ruleConfig": {
    "metric": "errorRate",
    "operator": "GREATER_THAN",
    "threshold": 5.0,
    "windowMinutes": 5
  },
  "severity": "CRITICAL"
}
```

Low Throughput

```json
{
  "name": "Low Throughput Alert",
  "ruleType": "THRESHOLD",
  "ruleConfig": {
    "metric": "currentThroughput",
    "operator": "LESS_THAN",
    "threshold": 10.0,
    "windowMinutes": 5
  },
  "severity": "MEDIUM"
}
```

High Latency

```json
{
  "name": "High P99 Latency",
  "ruleType": "THRESHOLD",
  "ruleConfig": {
    "metric": "latencyP99",
    "operator": "GREATER_THAN",
    "threshold": 100.0,
    "windowMinutes": 5
  },
  "severity": "HIGH"
}
```

Many Unique Users

```json
{
  "name": "Unusual Traffic Spike",
  "ruleType": "THRESHOLD",
  "ruleConfig": {
    "metric": "uniqueUsers",
    "operator": "GREATER_THAN",
    "threshold": 1000,
    "windowMinutes": 5
  },
  "severity": "MEDIUM"
}
```

Best Practices

  1. Always Test First: Test rules before enabling them in production
  2. Use Realistic Thresholds: Base thresholds on actual system behavior
  3. Test with Current Data: Rule testing uses real-time metrics for simulation
  4. Review Validation Errors: Fix all errors before saving rules
  5. Consider Recommendations: Review and apply suggested best practices
  6. Iterate: Adjust thresholds based on false positives/negatives

Common Validation Errors

| Error | Cause | Solution |
|-------|-------|----------|
| "Invalid operator" | Typo in operator name | Use exact operator names from the docs |
| "Metric not found" | Metric doesn't exist | Check the available metrics list |
| "Invalid threshold type" | Wrong data type | Ensure the threshold matches the metric's type |
| "Missing required field" | Missing ruleConfig property | Add all required fields |
| "DRL compilation failed" | Syntax error in generated DRL | Check the configuration structure |
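The "Missing required field" class of errors can be caught client-side before calling the API at all. A minimal pre-flight check, assuming only the four required top-level fields from the request body above (`findMissingFields` is an illustrative helper, not part of the API):

```javascript
// Illustrative pre-flight check (not part of the API): find missing
// required top-level fields before POSTing to /api/v1/rules/test.
function findMissingFields(rule) {
  const required = ['name', 'ruleType', 'ruleConfig', 'severity'];
  return required.filter(field => rule[field] === undefined);
}
```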

Testing Workflow

  1. Create Rule Configuration:

    • Define name, type, and config
    • Set severity and thresholds
  2. Test Configuration:

    • POST /api/v1/rules/test
    • Review compilation status
  3. Fix Errors (if any):

    • Address validation errors
    • Adjust configuration
  4. Review Simulation:

    • Check if rule would trigger
    • Verify matched conditions
  5. Adjust Thresholds:

    • Fine-tune based on simulation
    • Consider historical data
  6. Apply Recommendations:

    • Set suppression windows
    • Configure max alerts
  7. Create Rule:

    • POST /api/v1/rules
    • Enable rule
  8. Monitor Performance:

    • Watch trigger count
    • Adjust as needed

Integration Examples

Test Before Create

```javascript
async function createRuleWithTesting(ruleConfig) {
  // Test the rule first
  const testResult = await fetch('http://localhost:8080/api/v1/rules/test', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(ruleConfig)
  }).then(r => r.json());

  if (!testResult.success) {
    console.error('Rule test failed:', testResult.validationErrors);
    return null;
  }

  if (testResult.simulationResult.wouldTrigger) {
    console.log('Warning: Rule would trigger immediately');
  }

  // Create the rule if the test passed
  const createResult = await fetch('http://localhost:8080/api/v1/rules', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(ruleConfig)
  }).then(r => r.json());

  return createResult;
}
```

Batch Test Multiple Rules

```bash
#!/bin/bash
# Test multiple rule configurations
for config in rule_configs/*.json; do
  echo "Testing $config..."
  curl -s -X POST http://localhost:8080/api/v1/rules/test \
    -H "Content-Type: application/json" \
    -d @"$config" | jq '.success, .validationErrors'
done
```