My work is dedicated to exploring the immense potential of artificial intelligence for enhancing human capabilities and addressing complex societal challenges. My primary focus is on developing AI technologies that serve as powerful assistive tools, helping to bridge gaps in accessibility for neurodivergent individuals and people with cognitive differences.

Part of my research is devoted to identifying and mitigating harmful biases in AI systems. These biases can and do perpetuate discrimination and inequality, so it is important to develop methods that make AI more equitable for everyone. Neutralizing them helps ensure that AI technologies are more trustworthy and beneficial across diverse communities.

I also explore the computational representation of abstract concepts like justice, democracy, and transparency. These are foundational principles of our society, yet they are often challenging to quantify and implement in technology. My work seeks to understand how these values can be systematically incorporated into AI systems and measured, so that those systems are not only more effective but also aligned with ethical standards.

Biography

Currently based at Morgan State University’s CEAMLS, Gabriella Waters weaves her expertise in human neurobiology and AI to craft the technologies of tomorrow. Her leadership roles range from setting research agendas at the CoNA Lab to spearheading initiatives at Progressive Heuristics. A staunch advocate for equity in AI, she is dedicated to ensuring that technology respects and understands human diversity.

Mission

Enhancing human capabilities, promoting accessibility, ensuring fairness, and incorporating equity into AI.


Publications

The Ethical Implications of AI Hype:

Examining the overinflation and misrepresentation of AI capabilities and performance

AI & Ethics | August 2024 | Volume 4, Issue 3

AI and Ethics seeks to promote informed debate and discussion of the ethical, regulatory, and policy implications that arise from the development of AI. It will focus on how AI techniques, tools, and technologies are developing, including consideration of where these developments may lead in the future. The journal will provide opportunities for academics, scientists, practitioners, policy makers, and the public to consider how AI might affect our lives in the future, and what implications, benefits, and risks might emerge.

Embedding Ethics and Equity in Artificial Intelligence and Machine Learning Infrastructure

Big Data | September 2023 | Volume 11, Number S1

The National Alliance against Disparities in Patient Health (NADPH) leads the Infrastructure Core within AIM-AHEAD, which functions to assess AIM-AHEAD AI/ML system user “needs and constraints and pilot and test different data and computing infrastructure, tools, and governance models including data policy and organizational models.”

The burgeoning influence of Large Language Models (LLMs) in shaping public discourse and decision-making underscores the imperative to address inherent biases within these AI systems. In the wake of AI’s expansive integration across sectors, addressing racial bias in LLMs has never been more critical. This paper introduces a novel framework called Comprehensive Bias Neutralization Framework (CBNF) which embodies an innovative approach to quantifying and mitigating biases within LLMs.

Artificial intelligence (AI) and machine learning (ML) technology design and development continues to be rapid, despite major limitations in its current form as a practice and discipline to address all sociohumanitarian issues and complexities. From these limitations emerges an imperative to strengthen AI and ML literacy in underserved communities and build a more diverse AI and ML design and development workforce engaged in health research.

Developing Ethics and Equity Principles, Terms, and Engagement Tools to Advance Health Equity and Researcher Diversity in Artificial Intelligence/Machine Learning: A Modified Delphi Approach – JMIR AI, September 2023

Embedding Ethics and Equity in AI/ML Infrastructure for Health – Big Data, September 2023

Upcoming – Decisional Value Scores: A New Family of Metrics for Ethical AI/ML (featuring the Waters AI Transparency/Responsibility Score) – HCI Conference 2024

Upcoming – Identifying Bias: Applying Definitions Across Machine Learning and Psychological Modalities – Bias Mitigation: Human vs. Machine

Upcoming – Defining Bias: Applying Definitions Across Machine Learning and Psychological Modalities – Bias vs. Error

Upcoming – Creating a Platform for Ethical Testing and Evaluation – Metrics and Mathematical Descriptions

Upcoming – Human Brain vs. ANN

Appearances

Media