Governance model, scope, and membership guidelines for the AI Security Alliance.
Status: Ratified (January 2026)
AISECA exists to define the gold standard for securing enterprise AI tooling through a practitioner-led, vendor-agnostic framework.
The board focuses on:
- AI Security Standards -- developing and maintaining the AI Security Maturity Framework
- Control Catalogues -- defining practical, implementable security controls for enterprise AI
- Benchmarking -- enabling organisations to measure and compare AI security maturity
- Guidance -- publishing implementation guidance mapped to regulatory and standards frameworks (NIST, ISO, the EU AI Act)
The alliance is guided by the following principles:
- Vendor-agnostic -- no single vendor influences the direction of the framework
- Practitioner-led -- board members are active practitioners, not a purely advisory body
- Open development -- the framework is developed publicly on GitHub
- Evidence-based -- controls are grounded in real-world implementation experience
- Consensus-driven -- major decisions require board consensus
Board membership is by invitation or application. Members are expected to:
- Have hands-on experience with AI security, governance, or enterprise risk
- Contribute actively to framework development and review
- Maintain vendor-neutrality in their contributions
- Participate in quarterly review cycles
Anyone can contribute to the framework via GitHub. Community contributions are reviewed by board members before incorporation.
To apply for board membership, visit aiseca.org and submit an application through the "Join the Alliance" form.
Decisions are made as follows:
- Framework changes: require review and approval from at least two board members
- New controls: proposed via pull request, discussed in GitHub Discussions, and ratified by board vote
- Charter amendments: require unanimous board consent
The alliance publishes the following deliverables:

| Deliverable | Cadence |
|---|---|
| Framework Revision | Quarterly |
| Controls Catalogue Update | Ongoing |
| Benchmarking Report | Twice yearly |
| Public Release Notes | Per release |
This charter is released under CC BY 4.0.
AISECA -- AI Security Alliance | aiseca.org | GitHub