This page is a procurement document. It maps AISB's architectural primitives against the six EU AI Act conformity requirements that apply to high-risk AI deployed in building automation. If you are a legal / procurement reviewer evaluating AISB for an EU-domiciled enterprise, this is the surface to cite. For the narrative thesis, see the August 2 procurement deadline post.

The deadline

2 August 2026 — EU AI Act Articles 9, 10, 14 and 26 become binding for high-risk AI systems. Penalty for non-compliance: up to €15M or 3% of global annual turnover per Article 99(4), whichever is higher (the €35M / 7% ceiling in Article 99(3) is reserved for prohibited practices). The text is in force; the enforcement date is fixed.

Holland & Knight (April 2026 advisory), Baker Botts (March 2026 client note), and McKenna Consultants (May 2026 legal analysis) converge on the same reading: AI used in safety components of buildings, including HVAC and fire/life-safety control, and AI that influences access to workplaces via occupancy / hazard / code-violation monitoring, falls under Annex III high-risk scope.

Why CRE building automation is in scope

EU AI Act Article 6 + Annex III enumerate eight high-risk categories; three of them apply directly to building-automation AI.

The dominant one for CRE smart-building stacks is AI used as a safety component of a building. An AI control loop that adjusts HVAC setpoints based on occupancy sensing, that interacts with code-violation detection, or that influences emergency-egress lighting under fire-alarm logic is in scope. The "we only optimize comfort" defense does not work: Article 6 looks at the function the AI performs in the safety system, not at the marketing copy.

The six conformity requirements + how AISB satisfies each

Art. 9 — Risk Management System
  What it asks: Documented, iterative risk identification and mitigation across the full AI lifecycle. Update on substantial change.
  AISB primitive: v82 Daily Squad Self-Test Loop — 18-test Standard Suite run nightly (06:30 TPE), routing failures to the fix queue. Risk register maintained in data-logs/risk/. Documented retraining trigger thresholds.

Art. 10 — Data Governance
  What it asks: Training, validation, and testing data sets meet quality criteria. Bias examination. Statistical properties documented.
  AISB primitive: CRE-EN Privacy Broker — differential privacy (Laplace mechanism, per-zone ε-budget), k-anonymity floor, GDPR Art. 9 + Colorado SB-205 + EU AI Act overlay. Per-region consent enforcement at the fusion layer, not bolted on.

Art. 11 — Technical Documentation
  What it asks: System purpose, design, training, performance, and monitoring documented — sufficient for a deployer and an authority to assess conformity.
  AISB primitive: v61 immutable raw/ landing — content-addressed (SHA-256), chmod 444, never mutated. Provenance preserved end-to-end. Architectural docs versioned in data-logs/beast-os-architecture.md.

Art. 12 — Record-Keeping
  What it asks: Automatic logging of events over the system's lifetime. Logs retained at least 6 months.
  AISB primitive: v17 Context Tree + episodic traces — every significant agent output logged to data-logs/memory-lifecycle/traces/ with timestamp, agent, confidence, source. Append-only JSONL. Retention is configurable, with 6 months as the floor.

Art. 14 — Human Oversight
  What it asks: Effective oversight by natural persons: the ability to intervene, override, and halt, plus awareness of automation bias.
  AISB primitive: Recommend-only architecture (default) — 17 mandatory agents emit assumption_surface_v1 envelopes before action. Tool risk classification routes HIGH-risk actions to Robin. No autonomous money movement, deployment, or external comms.

Art. 15 — Accuracy, Robustness, Cybersecurity
  What it asks: An appropriate level of accuracy, robustness against errors and inconsistencies, and cybersecurity against adversarial attacks.
  AISB primitive: v91 Security Hardening Plane — Trivy SCA, license + secret gates, SBOM, SARIF. Adversarial Ship Gate (v48/v65) — a 4-phase intelligent pipeline reviews every patch. Taint-flow guard tracks untrusted content through agent handoff surfaces.
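The Art. 10 entry leans on the Laplace mechanism with a per-zone ε-budget. A minimal sketch of how that pattern works in general, assuming a simple occupancy-count query with sensitivity 1. The function and class names here are illustrative, not the CRE-EN Privacy Broker's actual API:

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Draw one sample from Laplace(0, scale) by inverse-CDF sampling."""
    u = 0.0
    while u == 0.0:           # keep u in (0, 1) so the log stays finite
        u = random.random()
    u -= 0.5                  # now u is in (-0.5, 0.5)
    return -scale * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)

def noised_occupancy(true_count: int, epsilon: float) -> int:
    """Release an occupancy count under epsilon-differential privacy.

    One person entering or leaving changes the count by 1, so the query
    sensitivity is 1 and the Laplace scale is 1 / epsilon.
    """
    noisy = true_count + laplace_noise(1.0 / epsilon)
    return max(0, round(noisy))   # released counts cannot go negative

class ZoneBudget:
    """Track a per-zone epsilon budget and refuse queries once spent."""

    def __init__(self, total_epsilon: float):
        self.remaining = total_epsilon

    def query(self, true_count: int, epsilon: float) -> int:
        if epsilon > self.remaining:
            raise RuntimeError("per-zone epsilon budget exhausted")
        self.remaining -= epsilon   # sequential composition: spends add up
        return noised_occupancy(true_count, epsilon)
```

Smaller ε means more noise and stronger privacy; the budget object enforces that repeated queries against the same zone cannot silently erode the guarantee.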

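The Art. 14 row describes a recommend-only pattern: classify each proposed action, auto-execute only LOW-risk ones, and route everything in the four blast-radius categories to a person. A hypothetical sketch of that routing logic; the category names and types are assumptions, not AISB's actual envelope schema:

```python
from dataclasses import dataclass
from enum import Enum

class Risk(Enum):
    LOW = "low"
    HIGH = "high"

# Assumed names for the four blast-radius categories; the real
# classification table is AISB-internal.
BLAST_RADIUS = {"money_movement", "deployment", "public_content", "external_comm"}

@dataclass
class Recommendation:
    action: str
    rationale: str

    @property
    def risk(self) -> Risk:
        # Anything in a blast-radius category is HIGH by construction.
        return Risk.HIGH if self.action in BLAST_RADIUS else Risk.LOW

def route(rec: Recommendation, auto_execute, escalate_to_human):
    """Recommend-only routing: HIGH-risk actions always reach a person."""
    if rec.risk is Risk.HIGH:
        return escalate_to_human(rec)
    return auto_execute(rec)
```

The point of the pattern is that the HIGH branch is structural, not configurable per call site: no caller can opt a blast-radius action out of human review.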
Article 26 — what the deployer obligation actually requires

If you are the deployer (the entity putting the AI system into use in the EU), Article 26 layers four obligations on top of the high-risk system's existing conformity:

  1. Qualified human oversight. Natural persons with the competence, training, authority, and resources to perform the oversight function. Not a checkbox.
  2. Monitor operation per instructions. Inform the provider of identified risks or serious incidents. Suspend use if risks materialise.
  3. Maintain logs for at least 6 months. Auto-generated logs from the AI system, retained, accessible to authorities on request.
  4. Fundamental rights impact assessment (where Annex III §1 / §6 / §7 apply) — documented, prior to first use.
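Item 3 is mechanically checkable: a deployer can verify that the oldest retained record is at least six months old. A sketch under the assumption that logs are append-only JSONL with an ISO-8601 `ts` field per record (a hypothetical schema, not AISB's documented trace format):

```python
import json
from datetime import datetime, timedelta, timezone

RETENTION_FLOOR = timedelta(days=183)   # "at least 6 months"

def retention_ok(jsonl_lines, now=None) -> bool:
    """True when the oldest retained record is at least six months old,
    i.e. the log actually reaches back to the retention floor."""
    now = now or datetime.now(timezone.utc)
    stamps = [
        datetime.fromisoformat(json.loads(line)["ts"])
        for line in jsonl_lines
        if line.strip()
    ]
    if not stamps:
        return False      # no records at all: nothing has been retained
    return now - min(stamps) >= RETENTION_FLOOR
```

Note the check is only meaningful once the system has been live for six months; before that, "retain everything" is the only defensible posture.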

AISB's role here is as the system provider. The deployer obligation rests with the EU-domiciled enterprise. AISB's output is structured to make the deployer's Article 26 burden tractable:

Eight questions procurement teams ask, with AISB's answers:

  1. Is your system classified as high-risk under Annex III?
     Where deployed in HVAC / fire-life-safety control or occupancy-driven access decisions, yes — and we provision for it by default.
  2. Where is your Article 9 risk-management documentation?
     v82 Daily Squad Self-Test Loop output + risk register, both versioned. Sample artifacts available under NDA.
  3. How do you satisfy Article 10 data governance?
     CRE-EN Privacy Broker — differential privacy + k-anonymity floor + per-region consent. Sample DPIA template available.
  4. Can you produce Article 11 technical documentation?
     v61 immutable raw landing + architecture docs. Pre-deployment, we produce a per-tenant Article 11 packet.
  5. Can you produce Article 12 logs for any decision in the last 6 months?
     Yes — append-only JSONL traces, queryable by tenant + agent + timestamp. Minimum retention 6 months; configurable higher.
  6. How is Article 14 human oversight implemented?
     Recommend-only architecture by default. No autonomous action on the four blast-radius categories (money / deployment / public content / external comms).
  7. What is your incident-response timeline under Article 26(5)?
     Detection-to-provider-notification ≤ 24 hours for serious incidents. Detection-to-deployer-alert in real time via the existing alerting surface.
  8. If our DPA / data residency requires EU-only processing, can you accommodate that?
     Yes — edge-deployment profile available: Microsoft Foundry Local + per-tenant key isolation. No cross-region model fine-tuning without explicit deployer consent.
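Question 4 cites the v61 immutable raw landing. Its two claimed properties, content addressing via SHA-256 and chmod 444, can be sketched in a few lines; `land` and `verify` are illustrative names, not v61's implementation:

```python
import hashlib
import os
from pathlib import Path

def land(raw: bytes, landing_dir: str = "raw") -> Path:
    """Write a payload into a content-addressed, write-once landing zone.

    The destination file name is the SHA-256 of the bytes, so any later
    mutation is detectable by re-hashing; chmod 444 drops write permission.
    """
    digest = hashlib.sha256(raw).hexdigest()
    dest = Path(landing_dir) / digest
    dest.parent.mkdir(parents=True, exist_ok=True)
    if dest.exists():
        return dest                # identical bytes already landed
    tmp = dest.with_suffix(".tmp")
    tmp.write_bytes(raw)
    os.chmod(tmp, 0o444)           # read-only before it becomes visible
    os.replace(tmp, dest)          # atomic rename into the landing zone
    return dest

def verify(path: Path) -> bool:
    """Re-hash a landed file and compare against its content-address name."""
    return hashlib.sha256(path.read_bytes()).hexdigest() == path.name
```

Because the name is derived from the content, re-landing identical bytes is a no-op and any tampering shows up as a name/hash mismatch, which is what makes the Article 11 provenance claim auditable.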

What this page is not

This is not legal advice. The conformity-assessment process under the EU AI Act is a regulated procedure that produces a CE mark; AISB participates in that process as a provider, but qualifying any AI system for use in the EU remains the deployer's own legal responsibility.

This is not a claim that AISB is exempt from the EU AI Act. We are explicitly engineered to meet the high-risk conformity requirements, not to avoid them.

This is not a substitute for the deployer's own Article 27 Fundamental Rights Impact Assessment where Annex III §1 / §6 / §7 applies. That FRIA is the deployer's to author; AISB's outputs are inputs to that document, not a replacement.

This page is maintained against our current reading of the legal sources. Last reviewed: 11 May 2026. Material legal developments are tracked in the AISB regulatory queue and pushed to this page within 7 days of publication. Procurement teams citing this page in an RFP / RFI response are welcome to do so; please reference the page URL and the review date.