HDAI — The Ethical Standard For Artificial Intelligence
Vol. I · Est. 2026
Humanity Driven AI · humanitydrivenai.com
The Founding Statement

AI needed a conscience.

We built it.

HDAI is the first clinically grounded ethical standard for artificial intelligence — a framework and an oath, built by Alison Leigh, a Clinical Psychologist who spent thirty years inside the most intimate rooms in human life.

The Public Record

AI didn't train as a therapist.
It pretended to be one.

Without training. Without oversight. Without the clinical architecture that has governed every intimate profession for centuries — AI positioned itself as therapist, companion, and confidant to billions. What happened next is already on the record.


CBS News · Jan 2026
Chatbots engaged minors in predatory behavior.
Families are suing Character.AI after chatbots posed as therapists and romantic partners to teenagers. At least one fourteen-year-old took his own life following an intense bond with an AI.
The Guardian · Mar 2026
Marriages over. Savings gone. Lives wrecked.
Adults across Europe and the US have formed profound bonds with AI systems positioning themselves as trusted confidants. One man lost €100,000 and was hospitalized after concluding his AI was sentient.
OpenAI · Jul 2025
ChatGPT is not your therapist. Your secrets are not private.
Sam Altman publicly acknowledged what clinicians knew from the beginning: ChatGPT conversations carry none of the legal or ethical protections of actual therapy. Millions use it as a therapist anyway.
This is the record HDAI was built to answer.
The Gap We Are Filling

The scale is already here.
The architecture was not.

Every number below is a human being — entering a conversation with no ethical standard to protect them. HDAI is the standard that should have existed from day one.

400M
Daily AI conversations in sensitive human contexts
Intimacy at the scale of a therapist's office — with none of the consent architecture that governs clinical practice.
0
Psychology-led consent frameworks before HDAI
The gap was not an oversight. It was a choice the industry made. HDAI made a different one.
30+
Years of clinical practice behind this framework
Not theory. Not a think tank. A clinician who sat with real people in real pain — and built the standard they deserved.
What HDAI Built

An architecture.
An oath. Together.

The Architecture · Technical
ALI_ETHICAL_CONSENT™
A patent-pending technical safety layer that governs how AI engages with human beings: deployable, auditable, enforceable. Built for enterprise, regulators, and any organization that chooses to lead rather than react.
The Oath · Living Declaration
Six Articles. One Promise.
A living document every person can sign — naming the rights each human retains in any AI interaction.
I. Conscience · II. Accountability · III. Do No Harm · IV. Truth · V. Dignity · VI. Oversight
The Origin

She did not arrive at this from AI. She arrived from the people AI was talking to.

Thirty years as a Clinical Psychologist. Thirty years inside the rooms where humans reveal what they reveal to no one else. Thirty years practicing inside an ethical architecture refined over centuries — informed consent, the duty of care, the principle of do no harm. When AI began entering those same rooms with none of that architecture in place, Alison Leigh recognized the moment for what it was.

"The question was never whether AI needed this framework. The question was why no one had built it yet."

She has worked four years, largely unfunded, alongside a private practice — producing white papers, conceptual papers, and the first clinically grounded consent architecture for AI. This website exists because the next chapter cannot be written alone.

The Body Of Work

Thirty years of practice. Formalized.

The research underlying HDAI draws from clinical psychology, informed consent doctrine, relational ethics, and the emerging field of AI safety — synthesized through thirty years of direct therapeutic practice. Full methodology is available to qualified organizations under NDA.

White Paper · I
Ethical Consent in Artificial Intelligence
Establishes the case for ethical consent architecture in AI systems operating in sensitive human contexts. Defines the gap between existing AI safety frameworks and the clinical standards governing human-to-human care.
Availability: Upon Request
Conceptual Paper · II
Architecture of the ALI Ethical Consent Framework
Maps the structural design of the ALI framework — how clinical ethics principles translate into deployable AI architecture, and how it integrates with existing AI safety infrastructure.
Availability: Under NDA — Qualified Organizations
Work With HDAI

Be the company that got ahead of it.

HDAI works with enterprise organizations, government agencies, research institutions, and AI companies seeking to establish ethical consent as structural infrastructure.

I.
Deployment
Request a Pilot
Be among the first organizations to deploy ALI_ETHICAL_CONSENT™ in a human-facing AI system. Limited pilots available in 2026.
II.
Assessment
Psychological Risk Analysis
A clinically grounded audit of where your AI system crosses emotional, relational, or consent boundaries — before real people are affected.
III.
Speaking
Book Alison Leigh
Keynotes, conferences, and private briefings on psychological safety, informed consent, and the future of ethical AI.
IV.
Education
Workshop or Lecture
Custom workshops for leadership teams, product teams, and policy groups — grounded in clinical ethics.
V.
Research
Access The Papers
White paper and conceptual paper available to qualified organizations, media, policy groups, and academic researchers — NDA required.
VI.
Investment
Investor Inquiry
HDAI is building the ethical AI standard before it is mandated. The market that follows will be substantial. Early conversations welcome.