Insights
Jan 2026

Trust Issues: Disability and the AI Divide

In short:

  • Trust in AI averages just 2.5 out of 5, with people experiencing chronic illness rating it lowest at 1.9—privacy concerns and unreliable answers are the biggest barriers, not complexity.
  • 68% want humans leading on basic tasks and 74% on complex tasks, rejecting the idea that AI should replace human support or decision-making.
  • Blind and low vision communities are already using AI for image description and navigation, but 52% of all respondents want data deleted immediately or never stored at all.

The Disability Community's Relationship with AI: Cautious, Curious, and Conditional

What we heard

The disability community's relationship with AI is cautious, curious, and conditional. Trust levels sit at an average of 2.5 out of 5—hardly a ringing endorsement. But dig deeper and the picture gets more interesting. People aren't rejecting AI outright. They're rejecting how it's being built and sold to them.

Privacy and security worries topped the list of barriers, named by 23% of respondents as the main thing stopping them from using AI more often. Unreliable answers came second at 18%, followed by cost at 13%. Complexity wasn't the problem: only 7% of people found it too confusing. The issue isn't that people don't understand AI. It's that they don't trust it.

One person summed it up: "Surveillance, breach of intellectual property and copyright, and bias." Another said: "I worry about how my data is being used." These aren't vague concerns. They're informed critiques from people who know what happens when technology gets rolled out without safeguards.

Trust varies by disability type

Trust varied significantly by disability type. People with mental health or psychosocial disabilities rated it highest at 3.4 out of 5. Those experiencing chronic illness, pain, or fatigue rated it lowest at 1.9. The physical and mobility disability and neurodivergent communities sat in between, around 2.5 to 2.6.

The pattern reveals something important: trust isn't just about the technology—it's about whether people feel the technology understands their lives and protects their data.

The role people want AI to play

When asked how AI should support basic tasks, 68% said either human-first with AI assistance or no AI at all. For complex tasks, that figure rose to 74%. The message is clear: people don't want AI making decisions. They want it helping humans make better ones.

Yet some uses of AI are already transforming lives. Blind and low vision respondents spoke powerfully about image description tools. "I find AI image descriptions extremely helpful," said one person. Another shared: "Describing images because I'm blind. It's increased my independence and allowed me to engage with content which was previously inaccessible."

Neurodivergent respondents valued AI for breaking down overwhelming tasks. "AI can break tasks down into more achievable steps. It can help make messages sound more polite or formal. It can assist with editing important assignments." One person with ADHD said: "AI summaries help me focus my research for my job, but I still dig in further to ensure the summary is correct."

The nuance matters. People aren't using AI blindly—they're using it strategically, with human verification. As one respondent put it: "I always use my productivity first before AI considers taking over."

Data concerns run deep

But the enthusiasm has limits. When asked about data retention, 52% wanted their information deleted after the session or not stored at all. Another 24% wanted control, saying data should stay "until I delete it." Only tiny minorities were comfortable with longer retention periods. The dominant sentiment? Don't keep our data unless we explicitly allow it.

The worries people voiced went beyond personal privacy. Misinformation, job losses, environmental impact, and bias all featured heavily. "People are refusing to learn and teach themselves—they rely entirely on AI to give them answers and won't even fact-check it," said one respondent.

Another raised disability-specific concerns: "Worry about bias and unfair treatment of people with disabilities." Someone else pointed to environmental harm: "Widespread misinformation, deterioration of people's cognitive abilities after excessive use, extreme environmental and social impact, especially in poorer countries."

These responses reveal a community that sees both sides—the liberating potential and the structural risks. One person captured this tension perfectly: "What worries me most is that AI might make it too easy to rely on it instead of building my own coping skills, which I really need to manage my ADHD long term."

Why it matters

The disability community's scepticism about AI isn't technophobia. It's pattern recognition. These are people who've watched assistive technology over-promise and under-deliver. Who've seen automation replace human support workers without improving outcomes. Who've experienced algorithms making decisions about their lives, from support payments to job applications to insurance, with no accountability when those decisions are wrong.

So when AI companies promise seamless support and frictionless experiences, disability communities ask: at what cost? Who benefits? What happens to our data? What happens when it gets things wrong?

The trust gap isn't about education or exposure. Mental health and psychosocial disability communities—who rated trust highest—aren't necessarily more tech-savvy. They may simply have fewer current alternatives and more urgent need. Meanwhile, people managing chronic pain and fatigue—who rated trust lowest—may have experienced more medical gaslighting, more dismissal of their expertise, more technology that promised relief but delivered surveillance.

The insistence on human oversight matters because lived experience is expertise. AI can describe an image, but it can't know what matters in that image to a blind person navigating their day. It can break down tasks, but it can't know which executive function challenges a neurodivergent person needs support with versus which ones build important skills.

This is why 74% want humans leading on complex tasks. Not because they doubt AI's processing power, but because they doubt its judgement. They know the difference between assistance and replacement.

The data retention concerns cut deeper than privacy. They're about power. If an AI company holds your communication history, your health queries, your daily routines—they hold leverage. For people with disability, who already navigate systems designed to deny support and question credibility, that's not paranoia. It's realism.

Meanwhile, the current uses reveal AI's genuine potential. Image description isn't a nice-to-have—it's access. Task breakdown isn't convenience—it's capability. Route planning, colour detection, text summarisation—these tools are already reducing barriers. The question is whether they'll stay accessible, affordable, and under user control.

What needs to change

Build with disability communities, not for them. The gap between blind users finding AI "life-changing" and others finding it untrustworthy isn't random—it reflects who's been centred in design. Involve disabled people at every stage, compensate them properly, and listen when they identify risks.

Make data deletion the default, not an option buried in settings. If 52% want immediate deletion and another 24% want full control, the standard should be: don't store unless explicitly requested. Session-based by default. User control is non-negotiable.

Prioritise reliability over novelty. Unreliable answers were the second-biggest barrier. If AI can't consistently get things right, it doesn't matter how clever the interface is. Disabled people can't afford to fact-check every output when the stakes are medical appointments, legal documents, or accessible route planning.

Design for human-AI collaboration, not replacement. Three-quarters of respondents want humans leading on complex tasks. Stop pitching AI as a substitute for support workers, educators, or decision-makers. Build tools that make human support better, not redundant.

Address cost as an access issue. Thirteen respondents named cost as the main barrier. Accessibility features shouldn't be premium add-ons. If AI image description is essential access for blind people, it should be free or subsidised. Same for task-breakdown tools for neurodivergent people, transcription for deaf people, and simplified interfaces for people with intellectual disability.

Create clear accountability for bias and errors. Multiple respondents worried about disability bias in AI. When algorithms make mistakes about benefit eligibility, job applications, or health recommendations, there must be human review pathways and genuine recourse. "The computer said no" isn't acceptable.

Be honest about environmental impact. Several respondents raised this—AI's energy use and resource extraction have consequences. Don't greenwash. Be transparent about costs and invest in genuinely sustainable infrastructure.

Finally, respect the "no AI" choice. Twenty-eight people said they want no AI for basic tasks, twenty-five for complex ones. That's valid. Not everyone needs to adopt every technology. Design systems where opting out doesn't mean opting out of support, access, or opportunity.

AI has real potential to reduce barriers for disabled people. But potential isn't the same as delivery. And delivery without trust, without control, without accountability, isn't access—it's just another system that doesn't work for the people who need it most.


Kelly Schulz

Director - Knowable Me

Kelly is the Managing Director of [Knowable.Me](http://knowable.me/), driving value creation and providing data and insights into the needs and preferences of people with disabilities.

Throughout her career, Kelly has held senior corporate roles in Complaints, Accessibility & Inclusion, Customer Experience, and Brand & Communications. Her blend of strategic thinking and human-centred design brings disparate groups into alignment, building positive momentum and driving growth.

Kelly holds Chair and non-executive board roles and is a member of the Technology, Innovation & Value Creation Committee of Swinburne University. She is a graduate of the Australian Institute of Company Directors.

Kelly identifies as “blind, with just enough vision to be dangerous” and is ably assisted by her guide dog, Zali.

A note from Knowable Me

This article is written by one of our brilliant community members. Their experiences, opinions and perspectives are uniquely their own — and that’s exactly why they matter. They don’t necessarily reflect the views of Knowable Me or our partners, but they do reflect real life. And we think sharing real life is how things change.