This page is a procurement document. It maps AISB's architectural primitives against the six EU AI Act conformity requirements that apply to high-risk AI deployed in building automation. If you are a legal / procurement reviewer evaluating AISB for an EU-domiciled enterprise, this is the surface to cite. For the narrative thesis, see the August 2 procurement deadline post.
The deadline
2 August 2026 — EU AI Act Articles 9–15 and 26 become binding for high-risk AI systems. Penalty exposure: up to €15M or 3% of global annual turnover for breaching the high-risk obligations (Article 99(4)); prohibited practices reach €35M or 7% (Article 99(3)). In both cases the higher figure applies. The text is in force; the enforcement date is fixed.
Holland & Knight (April 2026 advisory), Baker Botts (March 2026 client note), and McKenna Consultants (May 2026 legal analysis) converge on the same reading: AI used in safety components of buildings, including HVAC and fire/life-safety control, and AI that influences access to workplaces via occupancy / hazard / code-violation monitoring, falls under Annex III high-risk scope.
Why CRE building automation is in scope
EU AI Act Article 6(2) and Annex III enumerate eight high-risk categories. Three of them apply directly to building-automation AI:
- Annex III §2 — safety components in the management and operation of critical infrastructure, including the supply of heating (HVAC control tied to occupancy + access; fire-alarm + sprinkler control)
- Annex III §1 — biometric identification and categorisation (occupant identity / access decisions where AI is the deciding factor)
- Annex III §4 — employment, workers' management, access to self-employment (workplace AI that influences hiring, work allocation, or termination — captures certain workplace-analytics deployments)
The first category is the dominant one for CRE smart-building stacks. An AI control loop that adjusts HVAC setpoints based on occupancy sensing, that interacts with code-violation detection, or that influences emergency-egress lighting under fire-alarm logic, is in scope. The "we only optimize comfort" defense does not work — Article 6 looks at the function in the safety system, not the marketing copy.
The six conformity requirements + how AISB satisfies each
| EU AI Act requirement | What it asks | AISB architectural primitive |
|---|---|---|
| Art. 9 — Risk Management System | Documented, iterative risk identification + mitigation across the full AI lifecycle. Update on substantial change. | v82 Daily Squad Self-Test Loop — 18-test Standard Suite run nightly (06:30 TPE), routes failures to fix queue. Risk register maintained in data-logs/risk/. Documented retraining trigger thresholds. |
| Art. 10 — Data Governance | Training, validation, testing data sets meet quality criteria. Bias examination. Statistical properties documented. | CRE-EN Privacy Broker — differential privacy (Laplace mechanism, per-zone ε-budget), k-anonymity floor, GDPR Art. 9 + Colorado SB-205 + EU AI Act overlay. Per-region consent enforcement at the fusion layer, not bolted on. |
| Art. 11 — Technical Documentation | System purpose, design, training, performance, monitoring — sufficient for a deployer + authority to assess conformity. | v61 immutable raw/ landing — content-addressed (SHA-256), chmod 444, never mutated. Provenance preserved end-to-end. Architectural docs versioned in data-logs/beast-os-architecture.md. |
| Art. 12 — Record-Keeping | Automatic logging of events over the system's lifetime. Logs retained at least 6 months. | v17 Context Tree + episodic traces — every significant agent output logged to data-logs/memory-lifecycle/traces/ with timestamp, agent, confidence, source. Append-only JSONL. Retention configurable upward; six months is the floor. |
| Art. 14 — Human Oversight | Effective oversight by natural persons. Ability to intervene, override, halt. Awareness of automation bias. | Recommend-only architecture (default) — 17 mandatory agents emit assumption_surface_v1 envelopes before action. Tool risk classification routes HIGH-risk actions to Robin. No autonomous money-movement, deployment, or external-comm. |
| Art. 15 — Accuracy, Robustness, Cybersecurity | Appropriate level of accuracy, robustness against errors/inconsistencies, cybersecurity against adversarial attacks. | v91 Security Hardening Plane — Trivy SCA, license + secret gates, SBOM, SARIF. Adversarial Ship Gate (v48/v65) — 4-phase intelligent pipeline reviews every patch. Taint-flow guard tracks untrusted content through agent handoff surfaces. |
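The Article 10 row above references the Laplace mechanism with a per-zone ε-budget. The sketch below shows the core idea under stated assumptions: the names `ZoneBudget` and `release_count` are illustrative, not the shipped CRE-EN Privacy Broker API.

```python
import random


def laplace_noise(scale: float) -> float:
    """Laplace(0, scale) noise, drawn as the difference of two exponentials."""
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)


class ZoneBudget:
    """Tracks cumulative epsilon spent for one building zone; refuses
    further releases once the zone's privacy budget is exhausted."""

    def __init__(self, epsilon_cap: float) -> None:
        self.cap = epsilon_cap
        self.spent = 0.0

    def release_count(self, true_count: int, epsilon: float,
                      sensitivity: float = 1.0) -> float:
        # Sequential composition: each release consumes epsilon from the cap.
        if self.spent + epsilon > self.cap + 1e-12:
            raise RuntimeError("zone epsilon budget exhausted; refusing release")
        self.spent += epsilon
        # Laplace mechanism: noise scale = sensitivity / epsilon.
        return true_count + laplace_noise(sensitivity / epsilon)


zone = ZoneBudget(epsilon_cap=1.0)
noisy_occupancy = zone.release_count(true_count=42, epsilon=0.25)
```

The budget cap is what makes the guarantee auditable: once ε is spent, the zone stops answering, which is the behaviour an Article 10 reviewer will probe for.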
Article 26 — what the deployer obligation actually requires
If you are the deployer (the entity putting the AI system into use in the EU), Article 26 layers four obligations on top of the high-risk system's existing conformity:
- Qualified human oversight. Natural persons with the competence, training, authority, and resources to perform the oversight function. Not a checkbox.
- Monitor operation per instructions. Inform the provider of identified risks or serious incidents. Suspend use if risks materialise.
- Maintain logs for at least 6 months. Auto-generated logs from the AI system, retained, accessible to authorities on request.
- Fundamental rights impact assessment under Article 27 (where Annex III §1 / §6 / §7 applies) — documented, prior to first use.
AISB's role here is as the system provider. The deployer obligation rests with the EU-domiciled enterprise. AISB's output is structured to make the deployer's Article 26 burden tractable:
- Per-recommendation audit trail with timestamp, agent, confidence score, source citation — directly consumable as Article 12 evidence
- Recommend-only by default means the "qualified human oversight" surface is the procurement officer / facility manager already in the loop — no new role required
- v82 self-test results published nightly, available as Article 26(5) "monitor operation per instructions" evidence
- Risk events logged to data-logs/security-events/ with severity and action taken — directly consumable for the Article 26(5) "inform the provider" obligation
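The audit-trail bullets above reduce to a very small on-disk contract. A hedged sketch of one append-only JSONL trace record follows; the timestamp / agent / confidence / source fields come from this page, while the output-hash field and the helper name `append_trace` are illustrative assumptions, not the shipped format.

```python
import datetime
import hashlib
import json


def append_trace(path: str, agent: str, output: str,
                 confidence: float, source: str) -> dict:
    """Append one trace record as a single JSON line; never rewrite history."""
    record = {
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "agent": agent,
        "confidence": confidence,
        "source": source,
        # Hash rather than inline the full output: the log stays compact,
        # but each record still pins the exact bytes it refers to.
        "output_sha256": hashlib.sha256(output.encode("utf-8")).hexdigest(),
    }
    with open(path, "a", encoding="utf-8") as f:  # mode "a": append-only
        f.write(json.dumps(record, sort_keys=True) + "\n")
    return record
```

Because every record is one self-describing line, retrieval by agent and timestamp reduces to a line-by-line scan, which keeps the six-month retention obligation cheap to satisfy.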
Procurement questionnaire crib — 8 questions an EU legal team will ask
| # | Question | AISB answer |
|---|---|---|
| 1 | Is your system classified as high-risk under Annex III? | Where deployed in HVAC / fire-life-safety control or occupancy-driven access decisions, yes — and we provision for it by default. |
| 2 | Where is your Article 9 risk-management documentation? | v82 Daily Squad Self-Test Loop output + risk register, both versioned. Sample artifacts available under NDA. |
| 3 | How do you satisfy Article 10 data governance? | CRE-EN Privacy Broker — differential privacy + k-anonymity floor + per-region consent. Sample DPIA template available. |
| 4 | Can you produce Article 11 technical documentation? | v61 immutable raw landing + architecture docs. Pre-deployment, we produce a per-tenant Article 11 packet. |
| 5 | Can you produce Article 12 logs for any decision in the last 6 months? | Yes — append-only JSONL traces, queryable by tenant + agent + timestamp. Minimum retention 6 months; configurable higher. |
| 6 | How is Article 14 human oversight implemented? | Recommend-only architecture by default. No autonomous action on the four blast-radius categories (money / deployment / public content / external comm). |
| 7 | What is your incident-response timeline under Article 26(5)? | Detection-to-provider-notification ≤ 24 hours for serious incidents. Detection-to-deployer-alert in real time via the existing alerting surface. |
| 8 | If our DPA / data residency requires EU-only processing, can you accommodate? | Yes — edge-deployment profile available. Microsoft Foundry Local + per-tenant key isolation. No cross-region model fine-tuning without explicit deployer consent. |
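Answers 4 and 5 above lean on the content-addressed, write-once landing zone. A minimal sketch of that primitive, assuming SHA-256 as the filename and chmod 444 applied before the file becomes visible (the helper names `land` and `verify` are illustrative):

```python
import hashlib
import os
import pathlib


def land(raw_dir: pathlib.Path, payload: bytes) -> pathlib.Path:
    """Write payload into raw_dir under its own SHA-256; read-only, idempotent."""
    digest = hashlib.sha256(payload).hexdigest()
    dest = raw_dir / digest            # the filename IS the content hash
    if dest.exists():
        return dest                    # same bytes already landed: no-op
    tmp = raw_dir / (digest + ".tmp")
    tmp.write_bytes(payload)
    os.chmod(tmp, 0o444)               # read-only before it becomes visible
    tmp.rename(dest)                   # atomic publish on POSIX filesystems
    return dest


def verify(path: pathlib.Path) -> bool:
    """Re-hash the bytes: any mutation changes the digest and is detectable."""
    return hashlib.sha256(path.read_bytes()).hexdigest() == path.name
```

Provenance then comes for free: any downstream artifact that cites the digest cites exact bytes, which is the property an Article 11 documentation reviewer needs to confirm.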
What this page is not
This is not legal advice. Conformity assessment under the EU AI Act is a regulated procedure that culminates in CE marking; AISB participates in that procedure as a provider, but qualifying any AI system for use in the EU remains the deployer's own legal responsibility.
This is not a claim that AISB is exempt from the EU AI Act. We are explicitly engineered to meet the high-risk conformity requirements, not to avoid them.
This is not a substitute for the deployer's own Article 27 Fundamental Rights Impact Assessment where Annex III §1 / §6 / §7 applies. That FRIA is the deployer's to author; AISB's outputs are inputs to that document, not a replacement.
Cross-references
- Narrative thesis: The August 2026 Procurement Deadline Most CRE Platforms Are Pretending Is Not Real
- Verification methodology: IPMVP Verification Moat
- Open-protocol posture: The Smart Building Moat Has Moved — Why Open-Protocol BMS Is Now Table Stakes
- Ask the agent: /ask/ — submit a procurement query; the agent responds in recommend-only mode with code-anchored citations
Reference list
- Regulation (EU) 2024/1689 — the EU AI Act, OJ L 12 July 2024. Articles 6, 9–15, 26, 27, 99. Annex III.
- Holland & Knight LLP — "EU AI Act: High-Risk Designation for Building Automation," April 2026 advisory.
- Baker Botts LLP — "Preparing for August 2026: Annex III Practitioner Guide," March 2026 client note.
- McKenna Consultants — "EU AI Act Article 26 Deployer Obligations: A Compliance Roadmap," May 2026 legal analysis.
- EU Commission — AI Act FAQ + Q&A published progressively through Q1/Q2 2026.
Page maintained against current legal-source reading. Last reviewed: 11 May 2026. Material legal developments are tracked in the AISB regulatory queue and pushed to this page within 7 days of publication. Procurement teams citing this page in an RFP / RFI response are welcome to do so; please reference the page URL + the review date.