Portrait of Hira Jamshed

Hira Jamshed

Human-AI Collaboration & Accessibility

School of Information, University of Michigan

👋 Hello!

I’m a third-year PhD candidate at the University of Michigan School of Information (UMSI), where I work with Dr. Mustafa Naseem and Prof. Robin Brewer in the Accessibility, Health, and Aging (AHA) Lab. My research makes empirical, methodological, and artifact-based contributions toward understanding how people with disabilities and older adults collaborate with AI-based technologies in everyday practices (e.g., searching or learning), including how these technologies often fail them. I am inspired by frameworks like ability-based design and fields like disability studies, which urge reflective and personalized approaches to accessible interactive technologies.

Before starting my PhD, I completed my master’s in Human-Computer Interaction (with a focus on Accessibility) at the University of Texas at Austin (UT Austin) and my bachelor’s in Computer Science at the Lahore University of Management Sciences (LUMS) in Pakistan. I have prior experience in product design and research (see my portfolio), and my publications include work at ACM CHI and ASSETS.

Current focus: Understanding and supporting how neurodivergent students and faculty adapt GenAI technologies in their everyday academic lives.

Spotlight

  • Nov 2025: Invited to share my research with the local community at the monthly Extra Credit Improv Show 🎭
  • Oct 2025: My pre-candidacy project on understanding neurodivergent students' use of GenAI tools is headed to ASSETS'25 in Denver. See you there!
  • Sep 2025: Achieved candidacy 🎉
  • Apr 2025: I will be presenting our work on accessible audio nudges at CHI'25 in Japan. I am also a co-organizer of the third annual Disability Visibility in Engineering Symposium!
  • Mar 2025: Invited to speak on 'Embracing Neurodiversity in the Age of AI' at Oakland Community College. I am also volunteering at the Beautiful Minds conference at U-M.
  • Feb 2025: I passed my pre-candidacy milestone. I was also awarded the U-M Provost Disability Scholarship Grant to study GenAI use among neurodivergent faculty!
  • Sep 2023: Attending ASSETS'23 as a student volunteer and a first-time attendee.

Projects and Publications

Understanding ND Faculty’s GenAI Practices at U-M

Team: Hira Jamshed (Principal Investigator); Mustafa Naseem; Robin N. Brewer
Funding Body: U-M Provost Disability Scholarship Initiative (2025)

I am leading a two-year project that aims to (1) uncover the professional challenges ND faculty face in their day-to-day roles (e.g., research, teaching, service), (2) identify where and how GenAI tools can offer the most support, and (3) understand ND faculty's perceptions of and experiences with GenAI. Through a mixed-methods approach (i.e., surveys, interviews, and co-design workshops), we aim to provide actionable, accessible design recommendations for U-M's GenAI tools and to foster community-building among ND faculty through the co-creation of a shared library of practical strategies and GenAI use cases.

Rethinking Productivity with GenAI: A Neurodivergent Students' Perspective

Authors: Hira Jamshed; Mustafa Naseem; Venkatesh Potluri; Robin N. Brewer
Venue/Year: ASSETS 2025

In response to calls in prior research to counter ableist narratives of lessening burdens and to challenge norms that favor neurotypical individuals, this study interviewed neurodivergent students in higher education (n = 19) about their use of, motivations for, and visions for LLM-based GenAI tools in academia. While students found the tools helpful, their experiences revealed challenges with integrating them into tried-and-tested workflows, limited support for AI literacy and experimentation, and a flattening of personality. Drawing on crip time, the paper illustrates how GenAI tools can reinforce the normative value of productivity and shift the burden of access-making onto students themselves. We propose three design values (flexibility, adaptability, and self-authenticity) to reimagine GenAI as a partner rather than a tool prioritizing speed or normative outcomes.

Designing Accessible Audio Nudges for Voice Interfaces

Authors: Hira Jamshed; Novia Nurain; Robin N. Brewer
Venue/Year: CHI 2025

This study extends nudging to voice-based systems to help older adults alleviate uncertainty in voice-based searches. We evaluate four audio nudge prototypes (categorized as non-speech and speech-based nudges) with older adults (n = 34). Findings show that speech nudges prompt critical reflection more effectively than non-speech nudges because they are more disruptive. We therefore discuss the role of disruptiveness in accessibility, propose design considerations for implementing accessible audio nudges, and raise open questions for future research.

“I Felt Listened to”: Evaluating an AI-Powered Reflection Tool for Care Partners

Authors: Jazette Johnson; Hira Jamshed; Rachael Zuppke; Annalise Leggett; Emily Mower Provost; Robin N. Brewer
Venue/Year: ACM Transactions on Accessible Computing (TOACCESS), 2025

We evaluate CareJournal, an AI-powered application on an Amazon Alexa device designed to address the challenges care partners face in articulating the needs of their care relationships. Through a 4-week pilot study (N = 14 care partner pairs) and a 4-week field study (N = 16 care partner pairs), we assessed the tool’s effectiveness in supporting reflection and in generating AI summaries that capture the care partners’ intent. Our findings indicate that CareJournal helps improve communication intention and focus. We discuss design implications for AI that supports articulation through adaptive reflection tools attuned to diverse care dynamics, and highlight ethical considerations in balancing AI assistance with human agency.

Seeing is Believing: Exploring Perceptual Differences in DeepFake Videos

Authors: Rashid Tahir; Bareera Batool; Hira Jamshed; Muhammad Jameel; Muhammad Anwar; Faiq Ahmed; Muhammad Ahmad Zaffar; Muhammad Fareed Zaffar
Venue/Year: CHI 2021

We conduct an investigative user study and analyze existing AI detection algorithms from the literature to understand DeepFake detection. Based on our findings, we design a customized training program to improve detection and evaluate it with a treatment group from a low-literacy population, which is most vulnerable to DeepFakes. Our results suggest that, while DeepFakes are becoming imperceptible, contextualized education and training can help raise awareness and improve detection.