EU AI ACT DECLARATION OF CONFORMITY
Document No.: DoC-AI-2025-001
Version: 1.0
Date: July 23, 2025
1. Manufacturer / Provider
Name: SenSec LLC
Legal form: Wyoming Limited Liability Company
Address: 30 N Gould St Ste N, Sheridan, WY 82801, USA
Contact: legal@sensec.app | ai@sensec.app | privacy@sensec.app
EU/EEA Representative (Art. 22 AI Act / Art. 27 GDPR): Sentinel Security s.r.o., Jičínská 226/17, Žižkov, 130 00 Praha 3, VAT ID: CZ19997604
2. Identification of the AI System(s)
Product/Service Name: SenSec AI Security Management Platform ("SenSec AI Platform")
Modules / Features Covered:
PatrolAssign v1.0 – task & patrol assignment recommender
IncidentRanker v1.0 – incident triage & prioritization engine
AnomalyWatch v1.0 – anomaly / pattern detection alerts
SentinelNLP v1.0 – NLP-based transcription & summarization
ForecastPro v1.0 – workload & staffing predictive analytics
Model Release Cycle: Continuous delivery with semantic versioning; material updates documented in release notes and internal model cards.
Intended Purpose: Decision-support for security operations (scheduling, incident handling, risk signaling). The system does not autonomously make final, legally binding, or disciplinary decisions.
3. Classification under the EU AI Act
Risk Category: High-Risk AI System under Annex III, point 4 (AI used to make or assist in decisions affecting employment, worker management, and task allocation). Certain incident access-control features may also fall under Annex III, point 5 (access to essential private and public services).
Conformity Assessment Route: Internal control in accordance with Annex VI (provider verification of the quality management system and technical documentation). No Notified Body engaged at this stage.
Notified Body: N/A
EU Database Registration ID: To be obtained and published prior to placing the high-risk modules on the EU market.
4. Applicable Requirements & Standards
The system complies with the requirements of Chapter III, Section 2 of the EU AI Act, including:
Art. 9 – Risk management system
Art. 10 – Data and data governance
Art. 11 & Annex IV – Technical documentation
Art. 12 – Record-keeping & logging
Art. 13 – Transparency & information to users
Art. 14 – Human oversight
Art. 15 – Accuracy, robustness & cybersecurity
Standards & Frameworks Referenced:
ISO/IEC 42001:2023 (AI Management System) – selected controls implemented
ISO/IEC 23894:2023 (AI Risk Management)
ISO/IEC 27001:2022 & 27002:2022 (Information Security)
ISO/IEC 27701:2019 (Privacy Information Management)
NIST AI Risk Management Framework (Jan 2023)
EN 301 549 (Accessibility requirements for ICT products and services) where applicable
5. Declaration
We hereby declare, under our sole responsibility, that the AI system identified above conforms to the relevant provisions of the EU AI Act and that the technical documentation described in Annex IV has been drawn up and will be made available to competent authorities upon request. The system is subject to ongoing post-market monitoring and serious-incident reporting in accordance with Articles 72 and 73.
TECHNICAL DOCUMENTATION SUMMARY (ANNEX IV OVERVIEW)
(Public summary; the full technical file is maintained internally for 10 years after the system is last placed on the EU market)
A. System Description
Overview: The SenSec AI Platform ingests patrol logs, incident reports, guard telemetry (e.g., GPS coordinates), and customer-configured workflows to output recommendations, alerts, and summaries that assist security operations teams.
Architecture: Multi-tenant SaaS hosted primarily on AWS (us-east-1 & eu-central-1). AI models are served via containerized microservices behind authenticated REST/GraphQL endpoints. TLS 1.2+ for data in transit; AES-256 or equivalent for data at rest.
Interfaces: Web dashboard (React), native mobile apps (iOS/Android), public APIs, and webhook callbacks.
Users: Security managers, supervisors, guards, and authorized client stakeholders.
Operating Environment: Modern web browsers or supported mobile OS; stable network connection required.
B. Intended Purpose & Use Conditions
Purpose: Provide decision-support for guard/task allocation, incident prioritization, and operational analytics.
Use Conditions: Customers must configure roles/permissions, review AI outputs for high-impact decisions, and follow product guidance.
Contraindications: Not for life-critical autonomous decisions or as the sole basis for employment termination or legal determinations.
C. Data & Data Governance
Training Data Sources: Aggregated/de-identified historical operational data from consenting customers; synthetic datasets for edge cases; publicly available corpora for NLP pre-training.
Pre-processing: Pseudonymization of direct identifiers where feasible; normalization of formats (time, location); exclusion/minimization of special categories of data.
Quality Controls: Automated schema checks, outlier detection, deduplication; quarterly dataset audits for representativeness.
Bias / Fairness Testing: Evaluation of recommendation/triage outputs across shifts, roles, locations to detect systematic disparities.
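The pseudonymization step described under Pre-processing can be sketched as a keyed hash over direct identifiers. A minimal sketch follows; the field names, key handling, and token length are illustrative assumptions, not SenSec's actual implementation:

```python
import hashlib
import hmac

# Hypothetical secret key; in practice this would come from a KMS, never source code.
PSEUDONYM_KEY = b"example-only-key"

# Illustrative set of direct identifiers; the real schema may differ.
DIRECT_IDENTIFIERS = {"guard_name", "email", "phone"}

def pseudonymize(record: dict) -> dict:
    """Replace direct identifiers with stable keyed-hash tokens.

    HMAC-SHA-256 yields the same token for the same input and key, so
    records remain linkable for analytics without exposing the raw value.
    """
    out = {}
    for field, value in record.items():
        if field in DIRECT_IDENTIFIERS and value is not None:
            digest = hmac.new(PSEUDONYM_KEY, str(value).encode(), hashlib.sha256)
            out[field] = digest.hexdigest()[:16]
        else:
            out[field] = value
    return out

record = {"guard_name": "Jane Doe", "email": "jane@example.com", "site": "HQ"}
masked = pseudonymize(record)
```

A keyed hash (rather than a plain hash) is what makes this pseudonymization rather than mere obfuscation: without the key, the tokens cannot feasibly be reversed or re-derived from known inputs.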
D. Model Information
Models in Production:
PatrolAssign v1.0 (gradient boosted decision trees)
IncidentRanker v1.0 (XGBoost classifier)
AnomalyWatch v1.0 (unsupervised statistical/ML anomaly detection)
SentinelNLP v1.0 (transformer-based summarization & classification)
ForecastPro v1.0 (time-series forecasting using Prophet/ARIMA hybrids)
Key Performance Metrics:
IncidentRanker: Precision 0.89 / Recall 0.83 (validation set of 120k labeled incidents)
PatrolAssign: ≥85% acceptance rate of top-3 recommendations in pilot deployments
SentinelNLP: ROUGE-L 0.62 on internal summarization benchmark; 0.92 accuracy for incident-type classification
AnomalyWatch: False positive rate <3% and false negative rate <5% on historical baseline
Explainability Tools: SHAP-based feature importance and scenario analysis dashboards for IncidentRanker and PatrolAssign; confidence scores in UI for NLP summaries.
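The precision and recall figures quoted above are standard binary-classification metrics and can be recomputed from any labeled validation set. A minimal sketch, using toy labels rather than SenSec's validation data:

```python
def precision_recall(y_true: list, y_pred: list) -> tuple:
    """Precision and recall for a binary triage classifier
    (1 = high-priority incident, 0 = routine)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Toy example: 4 incidents flagged, 3 of them truly high-priority.
y_true = [1, 1, 1, 1, 0, 0]
y_pred = [1, 1, 1, 0, 1, 0]
p, r = precision_recall(y_true, y_pred)  # p = 0.75, r = 0.75
```

In a triage setting recall is typically the more safety-relevant of the two, since a false negative means a high-priority incident is surfaced late or not at all.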
E. Risk Management & Mitigation
Identified Risks: Model drift, biased recommendations, false negatives on critical incidents, adversarial or corrupted inputs, data leakage, misinterpretation of AI outputs.
Mitigations: Human-in-the-loop review; configurable thresholds; continuous monitoring & retraining triggers; differential privacy techniques in analytics; strict RBAC to training data; secure prompt handling for NLP.
Residual Risks: Catalogued in risk register; communicated via UI warnings and documentation.
F. Human Oversight Measures
Mandatory manual approval for high-impact changes (e.g., incident severity downgrade).
Override controls and audit trail of overrides and rationales.
User training materials and inline guidance on interpreting outputs.
Ability to request human review/appeal of AI-influenced decisions.
G. Logging, Monitoring & Post‑Market Surveillance
Logging: Immutable logs capture inputs, outputs, model/version IDs, user IDs, timestamps.
Monitoring: Drift detection (Population Stability Index, KS tests), alerting for metric degradation; quarterly KPI review by AI governance team.
Feedback Channels: In-app feedback button, ai@sensec.app, scheduled customer success check-ins.
Serious Incident Reporting: Internal SLA to assess and notify the relevant market surveillance authorities no later than 15 days after becoming aware of a serious incident; root cause analysis and corrective actions tracked in our QMS.
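The Population Stability Index used for drift detection compares a feature's live distribution against its training baseline, bin by bin. A minimal sketch; the number of bins and the alert thresholds below are common conventions, not SenSec's configured values:

```python
import math

def psi(expected: list, actual: list, bins: int = 10) -> float:
    """Population Stability Index between a baseline and a live sample.

    PSI = sum over bins of (a_i - e_i) * ln(a_i / e_i), where e_i and a_i
    are the bin proportions of the baseline and live distributions.
    """
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0

    def proportions(values: list) -> list:
        counts = [0] * bins
        for v in values:
            counts[min(int((v - lo) / width), bins - 1)] += 1
        # Floor each proportion to avoid log(0) on empty bins.
        return [max(c / len(values), 1e-6) for c in counts]

    e, a = proportions(expected), proportions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [i / 100 for i in range(100)]            # training-time distribution
live_ok = [i / 100 for i in range(100)]             # unchanged live sample
live_shifted = [0.5 + i / 200 for i in range(100)]  # compressed, shifted sample
```

A common rule of thumb is PSI < 0.1 for "no significant drift" and PSI > 0.25 as a retraining trigger; the actual thresholds would be tuned per feature.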
H. Cybersecurity & Integrity
Secure CI/CD (SAST/DAST, dependency scanning, signed images); WAF and rate limiting in production; anomaly-based IDS.
Annual third-party penetration tests; coordinated vulnerability disclosure program (see Security & Vulnerability Disclosure Policy).
Robust backup/restore and disaster recovery procedures.
I. Accuracy, Robustness & Performance
Target thresholds defined for each model; enforced in CI pipelines.
Stress & adversarial testing performed pre-release; rollback playbooks maintained.
Confidence indicators shown to users where feasible; guidance on validating outputs.
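Enforcing per-model thresholds in CI can be as simple as failing the pipeline when a candidate's evaluation metrics miss the declared floor. A sketch under stated assumptions: the threshold values mirror the targets in section D, and the model keys and metric names are illustrative:

```python
# Illustrative release gate: compare a candidate model's evaluation
# metrics against declared minimums and block the release on any miss.
THRESHOLDS = {
    "incident_ranker": {"precision": 0.89, "recall": 0.83},
    "anomaly_watch": {"false_positive_rate": 0.03},
}

def release_gate(model: str, metrics: dict) -> list:
    """Return a list of violations; an empty list means the gate passes."""
    violations = []
    for name, floor in THRESHOLDS.get(model, {}).items():
        value = metrics.get(name)
        if value is None:
            violations.append(f"{model}: metric '{name}' missing")
        elif name.endswith("rate"):  # lower-is-better metrics
            if value > floor:
                violations.append(f"{model}: {name}={value} exceeds {floor}")
        elif value < floor:
            violations.append(f"{model}: {name}={value} below floor {floor}")
    return violations

ok = release_gate("incident_ranker", {"precision": 0.91, "recall": 0.85})
bad = release_gate("incident_ranker", {"precision": 0.80, "recall": 0.85})
```

In a real pipeline the gate would run after the evaluation job and exit non-zero on any violation, so a regression can never reach production silently.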
J. Compliance & Documentation Control
Controlled documents under internal QMS (versioning, approval workflows).
Change management includes FRIA updates and, if needed, re-declaration of conformity.
Full technical documentation retained for 10 years and provided to competent authorities upon lawful request.