This project simulates a small enterprise environment to design, validate, and operationalize detections across both host and network telemetry.
The focus is not just detection creation, but building a repeatable detection engineering workflow supported by validation, testing, and CI/CD.

- Detection engineering across endpoint and network telemetry
- Validation of detections using structured test data and PCAP replay
- CI/CD pipelines for detection quality assurance and deployment
- Integration of Splunk (host) and Security Onion (network)
- Mapping detections to MITRE ATT&CK techniques
- Treating detections as version-controlled, testable code
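As a sketch of the detections-as-code idea, a Sigma rule pairs detection logic with its MITRE ATT&CK mapping in one version-controlled file. The rule below is a hypothetical example in standard Sigma format, not a rule from this repository:

```yaml
title: Suspicious PowerShell EncodedCommand
status: experimental
logsource:
  product: windows
  category: process_creation
detection:
  selection:
    Image|endswith: '\powershell.exe'
    CommandLine|contains: '-EncodedCommand'
  condition: selection
tags:
  - attack.execution
  - attack.t1059.001
```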
Many detection efforts fail due to lack of validation and consistency.
This project focuses on solving that problem by treating detections as code:
- version-controlled
- testable
- continuously validated
- deployable through CI/CD
The goal is not just to detect threats, but to ensure detections are reliable and maintainable over time.
- Simulate realistic adversary behavior in a controlled lab
- Collect and centralize telemetry from multiple sources
- Develop detections aligned to MITRE ATT&CK
- Validate detections using repeatable test cases
- Track coverage and identify detection gaps
- Enforce structure and quality through CI/CD pipelines
This project is designed for a self-hosted Windows GitHub Actions runner and a working lab environment.
Required components:
- Python 3.11
- Git
- Playwright
- Chromium browser for Playwright
- Suricata installed and accessible in PATH
- Splunk Enterprise with REST API access
- Splunk HTTP Event Collector (HEC) configured for test ingestion
- Security Onion UI accessible from the runner
This repository includes three GitHub Actions workflows:
- `validate` runs repository validation, syntax validation, and detection tests
- `validate-and-deploy` validates and deploys Splunk detections as alerts
- `validate-and-deploy-securityonion` validates and deploys Security Onion detections through UI automation
These workflows are designed for a self-hosted Windows runner and will not function correctly on a default GitHub-hosted runner without major changes.
```shell
py -m pip install -r requirements.txt
py -m playwright install chromium
```
Host telemetry (Splunk):
- Windows Event Logs
- Sysmon
- Process creation and access events
- Registry and command-line activity

Network telemetry (Security Onion):
- Zeek connection logs
- Suricata alerts
- Protocol and traffic metadata
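To make the host-telemetry fixtures concrete, a test event might look like the following. This is a minimal sketch: the field names follow Sysmon's process-creation schema (Event ID 1), but the values and the `is_encoded_powershell` helper are illustrative, not taken from this repository:

```python
# A minimal Sysmon process-creation fixture (Event ID 1), as it might be
# stored in a JSON test file and injected into a Splunk test index via HEC.
fixture = {
    "EventID": 1,
    "Image": "C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\powershell.exe",
    "CommandLine": "powershell.exe -nop -w hidden -EncodedCommand SQBFAFgA...",
    "User": "LAB\\alice",
}

def is_encoded_powershell(event: dict) -> bool:
    """Illustrative detection predicate: PowerShell launched with -EncodedCommand."""
    return (
        event.get("Image", "").lower().endswith("powershell.exe")
        and "-encodedcommand" in event.get("CommandLine", "").lower()
    )

print(is_encoded_powershell(fixture))  # True for this fixture
```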
This project models a full detection engineering lifecycle:
- Adversary simulation generates telemetry
- Alerts are analyzed by a cybersecurity analyst
- Detections are developed and stored as code
- CI/CD pipelines validate detections using test data
- Validated detections are deployed back into the environment
This creates a continuous feedback loop for improving detection quality.
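One way to organize such a loop on disk is shown below. This is a hypothetical layout illustrating the structure, not the literal tree of this repository:

```text
detections/
  splunk/            # SPL searches deployed as alerts
  suricata/          # network rules validated against PCAPs
  sigma/             # platform-neutral rules with ATT&CK tags
tests/
  fixtures/          # JSON event samples injected via HEC
  pcaps/             # replay traffic for Suricata validation
.github/workflows/   # validate and deploy pipelines
```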
Each detection includes structured test coverage:
- Splunk: JSON-based event fixtures injected into a test index via HEC, with queries executed and results validated
- Suricata: PCAP-based validation, with traffic replayed against rules and alert generation verified
- Sigma: JSON event samples, with rule logic evaluated against expected matches
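The Splunk test path can be sketched in Python. Assumptions to note: the hostname, HEC token, and index name below are placeholders, and the actual validation scripts in this repository may differ; only the `/services/collector/event` endpoint path is standard Splunk HEC:

```python
import json
import ssl
import urllib.request

HEC_URL = "https://splunk.lab.local:8088/services/collector/event"  # standard HEC path
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"  # placeholder token

def build_hec_payload(event: dict, index: str = "detection_tests") -> dict:
    """Wrap a JSON fixture in the envelope Splunk's HEC expects."""
    return {"index": index, "sourcetype": "_json", "event": event}

def inject_fixture(event: dict) -> None:
    """POST one test event to the HEC test index (network call, lab only)."""
    req = urllib.request.Request(
        HEC_URL,
        data=json.dumps(build_hec_payload(event)).encode(),
        headers={"Authorization": f"Splunk {HEC_TOKEN}"},
        method="POST",
    )
    ctx = ssl._create_unverified_context()  # lab certs are typically self-signed
    urllib.request.urlopen(req, context=ctx)

# In the lab, after inject_fixture(...) the pipeline runs the detection's
# query against index=detection_tests via the Splunk REST API and asserts a hit.
```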
GitHub Actions pipelines enforce quality and automate deployment:
- Repository structure validation
- Detection syntax validation
- Splunk detection testing
- Suricata PCAP validation
- Sigma rule validation
- Splunk detections deployed as alerts via API
- Security Onion detections deployed via UI automation (implemented with Playwright due to limited API support in the free version)
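As a hedged sketch, a validation workflow for a self-hosted Windows runner could look like the following. Job names, paths, and the test step are illustrative assumptions, not this repository's actual workflow definition:

```yaml
name: validate
on: [push, pull_request]
jobs:
  validate:
    runs-on: [self-hosted, windows]
    steps:
      - uses: actions/checkout@v4
      - name: Install dependencies
        run: |
          py -m pip install -r requirements.txt
          py -m playwright install chromium
      - name: Repository, syntax, and detection tests
        run: py -m pytest tests/  # illustrative; actual validation scripts may differ
```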
Security Onion detections are deployed using Playwright-based UI automation.
This approach was intentionally chosen to:
- Simulate real-world constraints where APIs may be limited
- Demonstrate automation capability across non-API systems
- Enable full lifecycle management (create/update/delete)
Note: This method is dependent on UI structure and may require adjustments across versions.
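A trimmed-down sketch of the UI-automation approach is below. The form path and selectors are assumptions about the Security Onion UI (which varies by version), and the real automation must also handle login, navigation, and error states:

```python
def create_detection(page, base_url: str, rule: dict) -> None:
    """Drive an (assumed) detection-creation form via a Playwright Page.

    The /detections/new path and the selectors are illustrative; the real
    Security Onion UI differs across versions and requires authentication.
    """
    page.goto(f"{base_url}/detections/new")
    page.fill("#title", rule["title"])
    page.fill("#content", rule["content"])
    page.click("button[type=submit]")

# Lab usage (requires Playwright and a reachable Security Onion UI):
#   from playwright.sync_api import sync_playwright
#   with sync_playwright() as pw:
#       page = pw.chromium.launch(headless=True).new_page()
#       create_detection(page, "https://securityonion.lab.local", rule)
```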
- Lab environment does not reflect full enterprise scale
- Detection logic is simplified and requires tuning for production
- Security Onion deployment relies on UI automation
- ATT&CK coverage is partial and expanding
- Detection scoring and prioritization are not yet implemented
- Expand ATT&CK coverage across additional tactics
- Add negative test cases (false positive validation)
- Introduce detection scoring / severity modeling
- Improve Sigma-to-SIEM translation workflows
- Add automated reporting / dashboards
This project focuses on building detections the same way mature security teams do:
- structured
- validated
- version-controlled
- continuously improved
The goal is not just visibility, but confidence in detection quality.