About Human Futures AI Training Institute™ - Programs
The Human Futures AI Training Institute™ is a dedicated learning and professional development hub focused on building responsible, human-centred AI practices for the real world. Drawing on our expertise in computational rhetoric, design justice, and AI ethics, the Institute offers training programs that empower professionals to work with AI safely, critically, and with cultural awareness.
Our programs equip healthcare teams, educators, designers, and leaders with essential skills in AI literacy, ethical boundaries, bias and identity harm, data stewardship, and human-centred futures thinking. Above all, the Institute is committed to one guiding principle: technology must serve human dignity, not overshadow it.
The Human Futures AI Training Institute™ brings a recognizable, future-focused brand to the evolving AI landscape—combining rigorous scholarship, practical tools, and a deep commitment to people, communities, and the futures they deserve.
Human Futures AI Training Institute™ - Programs
-
Program for Therapists, Counselors & Mental Health Professionals
Ethics & Boundaries in AI-supported Care
This program explores how AI is entering mental-health and care settings, examining the risks, rhetoric, and realities that shape its use in practice. Participants develop a grounded understanding of how algorithmic systems interpret language, behaviour, and emotion—and where those interpretations can break down.
The training addresses algorithmic bias, identity misrecognition, and cultural harm in clinical technologies, with particular attention to how these systems can distort or oversimplify lived experience. Participants learn how client narratives move through digital systems and what it means to protect data dignity in environments built for efficiency rather than care.
The program also provides guidance on when AI tools may be appropriate in therapeutic contexts—and when restraint is the more ethical choice. Through applied discussion, participants build an AI-aware treatment approach that prioritizes professional judgment, transparency, and the preservation of trust at the centre of care.
By the end of this program, participants will have developed a clear, practice-informed understanding of how AI systems intersect with mental-health and care work. They will be able to recognize ethical risk, bias, and narrative distortion in AI-supported care technologies, and to assess where algorithmic interpretation may undermine clinical judgment or relational trust.
-
Program for Public, Community & Global South Health Workers
Designing Data Futures that Honour People and Communities
This program examines how data and AI systems shape communities, public health, and collective futures—often in ways that reproduce historical inequities. Participants explore the concept of algorithmic colonialism, learning how global health technologies can unintentionally reinforce power imbalances and marginalize local knowledge.
The training centres community-led and culturally grounded approaches to data design, emphasizing respect, participation, and accountability. Participants engage with the role of AI in epidemiology, prevention, and resource planning, while critically examining whose data is used, how it is interpreted, and who ultimately benefits.
Attention is given to privacy, consent, and culturally appropriate data governance, particularly in contexts where data extraction has long been disconnected from community wellbeing. Through case studies drawn from African, Caribbean, and Indigenous communities, participants develop practical insight into designing data futures that honour people, place, and lived experience.
By the end of this program, participants will have developed a critical understanding of how data and AI systems shape community wellbeing, public health priorities, and collective futures—particularly in contexts marked by historical inequity and power imbalance. They will be equipped to recognize extractive data practices and assess the social consequences of AI deployment at community and population levels.
Participants will be able to apply community-centred and culturally grounded approaches to data design, supporting ethical decision-making around privacy, consent, and governance. They will leave with practical insight for designing data systems that respect local knowledge, uphold accountability, and contribute to more just and inclusive data futures.
-
Program for Physicians, Nurses & Allied Health Practitioners
AI Essentials: Safe, Responsible & Culturally Competent Use
This program provides a practical foundation for professionals working with AI in clinical and care-related contexts. It focuses on how AI is used in diagnostics, triage, and everyday clinical workflows, helping participants understand both the capabilities and the limitations of these systems.
Participants examine the rhetoric of “objectivity” often attached to medical AI, learning how claims of neutrality can obscure bias, uncertainty, and value judgments. The training introduces ethical frameworks for high-stakes decision-making, supporting responsible judgment in environments where errors can have serious consequences.
The program also addresses how to communicate AI-related risks clearly and compassionately to patients, ensuring transparency and trust. Attention is given to documentation, liability, and professional standards, equipping participants to use AI tools in ways that align with ethical obligations, regulatory expectations, and culturally competent care.
By the end of this program, participants will have developed a clear, practical understanding of how AI is used in clinical and care-related contexts, including its capabilities, limitations, and associated risks. They will be equipped to critically assess claims of objectivity and neutrality in medical AI, recognizing where bias, uncertainty, and value judgments shape system outputs.
Participants will be able to apply ethical frameworks to high-stakes decision-making, communicate AI-related risks transparently and compassionately to patients, and align AI use with professional standards, documentation requirements, and regulatory expectations. They will leave prepared to use AI tools in ways that support safe, responsible, and culturally competent care.
-
Program for Healthcare Administrators & Policy Leaders
AI Governance in Healthcare: Policy, Risk, and Equity-Centred System Design
This program focuses on the governance structures that shape how AI is selected, deployed, and managed within healthcare organizations. Participants are introduced to core AI governance concepts, including bias, risk assessment, and the evolving regulatory landscape that influences clinical and organizational decision-making.
The training explores how procurement criteria can be designed to centre equity rather than efficiency alone, helping organizations evaluate AI systems for their social and human impact before adoption. Participants also examine the rhetoric of metrics, learning how dashboards, performance indicators, and visualizations influence clinical priorities and shape care practices.
Attention is given to designing clear, accountable organizational AI policies that align with professional standards and ethical commitments. The program concludes with a focus on change management, supporting leaders and teams in navigating the cultural, operational, and ethical challenges of responsible AI adoption in healthcare settings.
By the end of this program, participants will have a clear, working understanding of how AI governance decisions shape equity, risk, and accountability in healthcare organizations. They will be equipped to critically assess AI systems beyond performance claims, evaluating procurement, policy, and oversight practices through an equity-centred lens.
Participants will be able to design and contribute to responsible AI governance structures, including procurement criteria, organizational policies, and change-management strategies. They will leave with the confidence to engage leadership, clinical teams, and technical stakeholders in informed, ethical decision-making around AI adoption and use.
-
Program for Wellness, Coaching & Integrative Health Practitioners
AI in Holistic Practice: Tools, Boundaries & Ethical Use in Wellness Cultures
This program explores how AI is increasingly being used in coaching, wellness, and holistic practice, and what it means to engage these tools with care, intention, and ethical clarity. Participants examine AI not as an authority, but as a reflective companion—one that can support insight while remaining firmly bounded by human judgment.
The training focuses on avoiding harm in AI-supported coaching, particularly where emotional vulnerability, identity formation, and meaning-making are involved. Participants learn how sensitive personal narratives move through digital systems and how to protect privacy, consent, and trust in wellness contexts.
Practical applications such as session preparation, reflective journaling, and insight generation are explored alongside critical discussion of boundaries and appropriate use. The program also examines the rhetoric of healing, unpacking how AI language and framing can shape client identity, expectations, and self-understanding—often in subtle but consequential ways.
By the end of this program, participants will have developed a grounded understanding of how AI tools can be used responsibly within wellness and holistic practice without displacing human judgment, relational care, or ethical boundaries. They will be able to assess when AI support is appropriate, when restraint is necessary, and how to prevent harm in emotionally sensitive contexts.
Participants will be equipped to protect personal narratives, uphold consent and privacy, and recognize how AI language and framing can influence identity and meaning-making. They will leave with practical strategies for integrating AI into reflective practices in ways that preserve trust, agency, and the integrity of healing relationships.
-
Program in Ethical Data Design for Healthcare & Human Futures
Ethical Design, AI Literacy & Human Impact in Clinical Technologies
This program grounds AI design in the realities of healthcare, where data systems interact directly with patient lives, clinical judgment, and trust. Participants explore how human-centred design principles apply when AI systems mediate diagnosis, treatment pathways, and care decisions.
The training examines ethical data design, AI literacy, and human impact, focusing on how patient stories are translated into structured data—and what is lost, distorted, or over-simplified in the process. Participants engage deeply with issues of bias, identity, and cultural harm in data systems, learning how misrecognition can shape clinical outcomes.
A core focus is data dignity: protecting patient narratives, context, and agency within digital systems built for scale. The program introduces ethotic heuristics as a practical framework for responsible AI design, supporting designers in making value-aligned decisions. Participants conclude by applying human-centred methods to real design challenges, strengthening their ability to intervene thoughtfully in healthcare AI systems.
By the end of this program, participants will have developed a critical, practice-ready understanding of how AI systems shape clinical care and patient experience. They will be equipped to recognize ethical risk, bias, and narrative loss in healthcare data systems, and to assess how design decisions influence trust, equity, and outcomes.
-
Program for Designers & Creative Technologists - Human-Centred AI Data Design
Practical, critical training for designers shaping AI-powered systems
This multi-module program equips designers and technology professionals with the literacy, ethical frameworks, and intervention skills needed to work responsibly with AI-driven systems—particularly in contexts where human dignity, identity, and lived experience are at stake.
Participants develop a grounded understanding of how AI systems operate beyond interfaces, examining how data, models, and design decisions shape real-world outcomes. The program moves from foundational AI literacy to applied design action, supporting participants in identifying harm, interrogating power, and recognizing where bias and misrecognition emerge within socio-technical systems.
Throughout the training, emphasis is placed on ethical data design, narrative integrity, and cultural awareness. Participants learn how to redesign systems, workflows, and interfaces with care and accountability, strengthening their capacity to intervene thoughtfully and advocate for human-centred approaches within their teams and organizations.
By the end of the program, participants are better prepared to make informed judgments about how—and whether—AI should be used, and how design practice can contribute to more just, responsible, and future-ready AI systems.