AI no longer merely influences our apps and devices. It makes decisions not only in hiring, education, and healthcare, but in our everyday digital interactions. But what happens when those decisions unintentionally disadvantage people with disabilities? When algorithms misinterpret the use of assistive technology, ignore disability-related data, or reinforce stereotypes, the result is not only unfairness but also a barrier to accessibility and equal opportunity.
We already recognize an inaccessible app as an accessibility violation. But here is the critical question: if an AI system rejects resumes because of gaps in work experience caused by a medical issue, mislabels the speech of someone with a disability, or wrongly flags autistic candidates as dishonest because of atypical eye contact, should that not also be considered an accessibility failure?
In this session, we will explore the intersection of accessibility and AI ethics, drawing on our experience working with diverse customers on accessibility initiatives. We will show how AI is rapidly changing the landscape, why organizations must be proactive, and how testers play a pivotal role in ensuring inclusive outcomes, exploring real-world examples where disabled users are affected by bias. We will also share practical insights on how testers, designers, and organizations can foster more inclusive and responsible AI practices.
By redefining AI bias as an accessibility failure, we can extend accessibility beyond interfaces and into outcomes, ensuring technology empowers rather than excludes. When AI discriminates, accessibility is compromised, and it is time we address accessibility not just in design, but in the decision-making systems themselves.
Key Takeaways:
– Recognize how AI bias creates accessibility barriers.
– Extend accessibility from design compliance to AI decision-making.
– Integrate inclusive testing, data, and design practices.