ALI Ethical Consent Framework™
Created by Alison Leigh, PhD, MFT
Clinical Psychologist
hdai.org
The Framework
ALI: The Ethical Consent Architecture for AI.
The ALI Ethical Consent Framework is the first clinically grounded consent layer designed for deployment in AI systems operating in sensitive human contexts. Invented by Alison Leigh, PhD, MFT. Built from thirty years of clinical practice. Available now.
A deployable ethical consent layer — structurally embedded.
ALI is not a set of guidelines. It is not a policy document. It is not a checklist. The ALI Ethical Consent Framework is a structural architecture — a deployable layer that sits inside an AI system and governs how that system engages with human beings in sensitive, high-stakes, or psychologically loaded contexts.
It was designed by Alison Leigh, PhD, MFT, from first principles drawn from clinical psychology: that entering someone's psychological interior creates specific, enforceable obligations, and that those obligations must be structural, not aspirational, to mean anything at all.
ALI integrates with existing AI safety infrastructure. It does not replace security layers, privacy compliance, or bias mitigation — it addresses the domain none of them cover: the ethical consent of the human being inside the conversation.
Named for its inventor.
Accountable by design.
ALI carries the name of its inventor, Alison Leigh, PhD, MFT. That is a deliberate choice. A framework that demands accountability from AI must itself be accountable, and a named framework, tied to a named inventor with named credentials and a documented body of work, is accountable in a way that a branded product name never is.
When an organization deploys ALI, it is deploying a framework with a verifiable origin, a verifiable author, and a verifiable academic foundation. That provenance is part of the value.
Three principles. One integrated layer.
The ALI framework is organized around three core architectural principles, each drawn directly from clinical psychology ethics and translated into deployable AI system design. The full methodology behind each principle is available to qualified organizations under NDA.
Alignment in ALI refers to the systematic alignment of AI behavioral architecture with established principles of human psychological safety and informed consent. An AI system that is aligned in the ALI sense does not simply avoid causing harm — it actively operates within the ethical boundaries that clinical psychology has defined as necessary for safe human interaction.
This principle addresses the full behavioral posture of the system: how it presents itself, what it claims to be, what it acknowledges it cannot do, and how it handles escalation when the interaction moves into territory that requires human professional intervention.
ALI functions as a layer — meaning it is structural, not supplementary. A policy document is supplementary. A training guideline is supplementary. A layer is embedded in the system itself, operating at the point of interaction rather than as an after-the-fact review. This is the critical distinction between aspirational AI ethics and deployable AI ethics.
The Layer principle ensures that ALI integrates with an organization's existing tech stack — functioning alongside security, privacy, and compliance infrastructure rather than replacing or competing with any of it. The ethical consent layer closes a gap; it does not reopen one.
The Interaction principle governs the specific moment of AI-human exchange — the point at which a person shares something private, asks something vulnerable, or relies on an AI system in a way that creates psychological risk. This is where ethical failure in AI causes actual harm, and this is where ALI operates most precisely.
The Interaction layer defines the consent architecture for that moment: what the user is told, what the system acknowledges about its own limitations, how the interaction is paced when vulnerability is detected, and how the system protects the person's psychological safety throughout the exchange.
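The Layer and Interaction principles describe a component embedded in the request path that screens each exchange, discloses the system's limitations, and records an audit trail. As a rough illustration only, the sketch below shows one way such a consent layer could wrap a model at the point of interaction. Every name in it (`ConsentLayer`, `InteractionRecord`, the keyword screen) is hypothetical and is not drawn from the ALI methodology, which is proprietary.

```python
from dataclasses import dataclass, field
from typing import Callable, List

# Naive keyword list standing in for real vulnerability detection.
SENSITIVE_MARKERS = {"diagnosis", "medication", "self-harm", "suicide", "abuse"}

@dataclass
class InteractionRecord:
    """One audit-trail entry per exchange."""
    user_message: str
    flagged_sensitive: bool
    disclosure_shown: bool
    response: str

@dataclass
class ConsentLayer:
    """Wraps a model callable so every exchange passes through the layer."""
    model: Callable[[str], str]
    audit_log: List[InteractionRecord] = field(default_factory=list)

    def screen(self, message: str) -> bool:
        # Flag messages that touch psychologically loaded territory.
        text = message.lower()
        return any(marker in text for marker in SENSITIVE_MARKERS)

    def respond(self, message: str) -> str:
        sensitive = self.screen(message)
        reply = self.model(message)
        if sensitive:
            # Prepend a limitation disclosure before the model's reply.
            reply = ("Disclosure: I am an AI system, not a licensed clinician. "
                     "A human professional may be the right next step for this "
                     "topic.\n" + reply)
        # Record the exchange so the deployment is auditable after the fact.
        self.audit_log.append(
            InteractionRecord(message, sensitive, sensitive, reply))
        return reply
```

The structural point the document makes is visible here: the screen, the disclosure, and the audit record sit inside the request path itself, not in a policy document reviewed after the fact.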
Assess. HDAI conducts a formal assessment of the organization's current AI deployment, identifying the specific contexts where ethical consent exposure exists, mapping the gaps in existing safety infrastructure, and determining the appropriate configuration of the ALI layer for that system.

Integrate. The ALI framework is integrated into the organization's AI system architecture, configured to the specific use case, user population, and risk profile of that deployment. Integration is designed to work with existing safety layers, not to require a rebuild of existing infrastructure.

Document. The organization receives full documentation of the ALI deployment, including audit trails, consent architecture records, and the evidentiary foundation for demonstrating ethical compliance to regulators, boards, legal counsel, and the public. This documentation is a core deliverable of every ALI engagement.
The full methodology is protected.
Access is structured.
The complete ALI framework architecture is proprietary intellectual property. HDAI makes it available to qualified organizations through a structured access model that protects the framework while enabling genuine evaluation and deployment.
The briefing is available to any qualified organization: a formal session with Alison Leigh, PhD, MFT covering what ALI is, what it addresses, and whether it is the right fit for your deployment context. No NDA is required at this stage.
The conceptual paper detailing the ALI architecture is available under NDA to organizations in active evaluation. Contact HDAI to initiate the NDA process.
Organizations ready to deploy ALI engage through a formal pilot program — including assessment, integration support, documentation, and a defined evaluation period. Pilot terms are negotiated directly with HDAI.
Enterprise and government licensing of the ALI framework is available following successful pilot completion. Licensing terms are structured to reflect the scope, scale, and deployment context of the organization.
Enterprise. Companies deploying conversational AI in mental health, healthcare, financial services, legal, HR, or any consumer context where users share sensitive personal information. If your AI enters a psychologically loaded conversation, ALI is relevant to your deployment.

Government. Federal, state, and municipal agencies deploying AI in public-facing services, including benefits administration, mental health crisis lines, veterans services, social services, and any government AI that interacts with citizens in high-stakes or vulnerable contexts.

AI developers. AI companies and platform developers seeking to differentiate their products through a verifiable, clinically grounded ethical consent standard, and to build the documentation of responsible deployment before regulation requires it.
A formal briefing with Alison Leigh, PhD, MFT introducing the ALI Ethical Consent Framework to your organization's leadership, legal team, or product team. The briefing covers what ALI is, what gap it closes, how it deploys, and whether it is the right fit for your specific AI deployment context.
The briefing is the entry point for all HDAI engagements. Organizations do not proceed to pilot or licensing without first completing a formal briefing. This ensures that every deployment is appropriate, purposeful, and correctly scoped.
A formal assessment of your organization's current AI deployment — mapping where ethical consent exposure exists, identifying the gaps in your existing safety infrastructure, and producing a documented risk profile specific to your system and user population.
The assessment is conducted by Alison Leigh, PhD, MFT and produces a written report that can be shared with legal counsel, the board, regulators, or any stakeholder requiring documented evidence of ethical due diligence.
A structured pilot program in which the ALI Ethical Consent Framework is integrated into a defined scope of your AI deployment — with full support from HDAI through assessment, configuration, integration, and evaluation. The pilot produces a documented, measurable outcome that provides the evidentiary foundation for full deployment or licensing.
Pilot programs are scoped individually, structured around the organization's deployment context, and governed by a formal agreement between the organization and HDAI. Access to the full framework methodology is provided under NDA for the duration of the pilot.
Following successful pilot completion, enterprise organizations and government agencies may license the ALI Ethical Consent Framework for ongoing deployment across their AI systems. Licensing provides the organization with the right to deploy ALI as a permanent structural layer, with ongoing support, documentation, and access to framework updates from HDAI.
Licensing terms are structured to reflect the scope, scale, user population, and deployment context of the organization. All licensing agreements include ongoing access to Alison Leigh, PhD, MFT for consultation, and full documentation support for regulatory and legal purposes.
The value of deploying ALI is concrete.
ALI deployment produces a documented, auditable record of ethical consent architecture in your AI system. When regulators, legal counsel, or a board asks whether your organization addressed ethical consent in its AI deployment, the answer is verifiable and complete.
ALI is not a marketing claim. It is a verifiable framework with a documented inventor, a documented academic foundation, and a documented deployment record. Organizations that adopt ALI can demonstrate ethical consent architecture to users, partners, investors, and the public in a way that a policy statement cannot replicate.
The framework exists.
Your organization can deploy it.
Contact HDAI to arrange a framework briefing with Alison Leigh, PhD, MFT, request NDA access to the conceptual paper, or begin a conversation about pilot deployment.