Anamika Mukhopadhyay

Principal Engineer

Biography.

A passionate QA consultant with over a decade of experience spanning various domains of software testing and automation, Anamika has worked across a wide spectrum of testing methodologies, cultivating a deep understanding of how functionality, performance, user experience, accessibility, and emerging technologies such as AI intertwine. In her current role at Nagarro, she leads the Global Mobile and Accessibility Testing practices. Her dynamic perspective enables her to introduce pioneering strategies that streamline testing processes and amplify efficiency and effectiveness in the projects she oversees and the consulting engagements she undertakes, often guiding enterprises in setting up world-class testing capabilities. When not working, you might catch her exploring a new city and relishing its local cuisine.

Talk.

Is AI Bias Against Disabled People an Accessibility Violation?

AI is no longer just influencing our apps and devices. It is making decisions not only in hiring, education, and healthcare, but also in our everyday digital interactions. But what happens when those decisions unintentionally disadvantage people with disabilities? When algorithms misinterpret the use of assistive technology, ignore disability-related data, or reinforce stereotypes, the result is not only unfairness, but also a barrier to accessibility and equal opportunity.
We already recognize an inaccessible app as an accessibility violation. But here is the critical question: if an AI system rejects resumes because of gaps in work experience due to a medical issue, or mislabels the speech of someone with a disability, or wrongly flags autistic candidates as dishonest due to atypical eye contact, should that not also be considered an accessibility failure?
In this session, we will explore the intersection of accessibility and AI ethics, sharing our experiences of working with diverse customers on accessibility initiatives. We will show how AI is rapidly changing the landscape, why organizations must be proactive, and how testers play a pivotal role in ensuring inclusive outcomes, exploring real-world examples where disabled users are affected by bias. Drawing on lessons from our work, we will share insights on how testers, designers, and organizations can foster more inclusive and responsible AI practices.
By redefining AI bias as an accessibility failure, we can extend accessibility beyond interfaces and into outcomes, ensuring technology empowers rather than excludes. Because when AI discriminates, accessibility is compromised, and it is time we address accessibility not just in design, but in decision-making systems themselves.
Key Takeaways:
– Recognize how AI bias creates accessibility barriers.
– Extend accessibility from design compliance into AI decision-making.
– Integrate inclusive testing, data, and design practices.

Get in Touch

We would love to speak with you.
Feel free to reach out using the details below.