
IVI for Elderly Drivers
Role: UX Researcher & Designer
Research Methodology: Mixed Method (user surveys for quantitative data and interviews for qualitative insights, followed by behavioural landscape mapping).
Solution: This project introduced an elderly-accessible mode into an existing in-vehicle infotainment (IVI) system through a toggle feature. Grounded in user research with older adults, it set out to understand their cognitive, visual, and motor challenges while interacting with digital car interfaces.
Hypothesis
Elderly drivers in the Indian market experience significant challenges while interacting with in-vehicle infotainment (IVI) systems due to cognitive, visual, and motor limitations. This research aims to investigate the accessibility gaps within current IVI interfaces, explore the contextual and behavioural needs of aging users, and understand how interface complexity contributes to cognitive overload behind the wheel.
Exploring the Existing Landscape
This involved studying automotive interface trends, accessibility standards, age-related cognitive considerations, and feedback patterns within the Indian market. The aim was to ground the research in existing knowledge and identify areas that required closer investigation during user interactions.
REFERENCES
Government of India, Ministry of Road Transport & Highways. (n.d.). Ministry of Road Transport & Highways, Government of India.
National Institute of Mental Health and Neuro Sciences. (n.d.). NIMHANS official website.
Institute of Road Traffic Education. (n.d.). IRTE – Road Safety Through Education & Research.
Society of Automotive Engineers India. (n.d.). SAEINDIA – Advancing mobility knowledge and solutions.
Tuch, A. N., Bargas-Avila, J. A., Opwis, K., & Wilhelm, F. H. (2020). From skeuomorphism to flat design: Age-related differences in performance and aesthetic perceptions. ResearchGate.
Tuning into the Driver's Minds
To closely understand the cognitive and interactional challenges elderly users face with today's infotainment systems, we conducted focused user research using a two-pronged approach. This phase explored how interface complexity, accessibility gaps, and multitasking while driving affected their sense of control, comfort, and confidence on the road.
USER SURVEY
A user survey was conducted with 30 participants aged 55 and above, all of whom drive independently. The survey aimed to capture behavioural cues, interface-related struggles, and general perceptions around infotainment systems. Here are a few of the main insights that helped validate the initial hypothesis and identify recurring friction points within the system.
Semi-structured Questionnaire: Framed for User Behaviour Inquiry
Q.1. How often do you drive, and what kind of trips do you usually take?
Q.2. Have you used modern cars with built-in infotainment systems or digital dashboards?
Follow-up: What was your first impression of using them?
Q.3. How comfortable do you feel using touchscreens, buttons, or voice assistants while driving?
Q.4. Can you recall a moment when using the infotainment system felt confusing or frustrating?
Q.5. Are there instances when you’ve avoided using the system because it felt overwhelming?
Q.6. Do you ever feel distracted while interacting with it?
Follow-up: What exactly causes that distraction?
Q.7. Do you find it easy to read text or understand icons on the infotainment screen?
Q.8. Have you experienced any difficulty in reaching or accurately tapping certain buttons or areas of the screen?
Q.9. Have you ever felt unsafe or uncertain while using the system while driving?
Follow-up: Could you describe what happened in that moment?
Q.10. Do you prefer audio cues or visual cues when using maps or navigation?
Q.11. What kind of features or changes would make you feel more at ease while using the system?
Q.12. Have you tried using voice commands while driving?
Follow-up: How effective were they for you?
Q.13. What kind of support, guidance, or onboarding would help you use the infotainment features more comfortably?
Q.14. If you could change or improve one thing about the system, what would it be?
Driving Context & Exposure to Technology
Ease of Use & Interaction Challenges
Accessibility & Visual Challenges
Safety & Cognitive Load
Voice Interaction & Assistance
Expectations
driving frequency, trip types, tech familiarity, first impressions
input comfort, friction points, avoidance triggers, overwhelm moments
readability issues, icon clarity, touch effort, layout strain
distraction levels, audio vs visual, unsafe moments, focus breaks
voice UI trust, misinterpretation, need for guidance, feature discovery
feature gaps, comfort needs, desired changes, experience ideals
IN-DEPTH INTERVIEW
To deepen the contextual understanding uncovered in the survey, we conducted in-depth interviews with 12 elderly drivers, each with varying degrees of comfort with technology and on-road infotainment use. These conversations revealed layered insights into their habits, cognitive load, interface interpretation, and coping mechanisms while driving. The open-ended format allowed participants to express frustrations, workarounds, and emotional responses in a natural setting.

BEHAVIOURAL LANDSCAPE MAPPING: EMERGENT ARCHETYPES FROM INTERVIEW INSIGHTS
Mapped across four key dimensions (tech confidence, usage pattern, interaction style, and adjustment curve), this landscape captures how users engage with infotainment systems beyond just features. Through clustering real behaviours and needs, four archetypes emerged: the Anxious Seeker (wants safety above all; avoids infotainment unless necessary), the Over-Cautious Dependent (needs help or voice prompts to operate; wary of mistakes), the Adaptive Navigator (learns slowly but steadily; trusts memory and basic flows), and the Confident Retainer (prefers what they know; confident but dislikes UI changes). Each reflects a distinct way users build trust, adapt, or avoid interaction, helping design decisions align with real-world usage patterns.
KEY BEHAVIOURAL INSIGHTS
Many users actively avoid exploring infotainment features due to fear of distraction or missteps.
Touchscreen hesitation is common, especially when visual layouts are dense or unfamiliar.
Users prefer voice interaction, but inconsistent system responses reduce trust and increase stress.
A large number rely on co-passengers or children to initially learn the system or recover from errors.
Confidence grows slowly over time, but even experienced users resist layout changes or feature updates.
Users want visible separation of functions (calls, music, maps) and clear feedback after actions to feel in control.

Setting the Wheels in Motion
DESIGN PROPOSAL
The solution introduces a toggle within the existing infotainment system that activates a supportive mode for elderly users.
When enabled by first-time users (FTUs), the toggle displays a QR code that links to an external onboarding mobile app named CarConnect.
CarConnect begins by guiding users through a short onboarding process that includes creating a profile and assessing their comfort level with technology and voice interfaces.
Based on responses, the app uses natural language processing and machine learning to adapt the system’s voice assistant by modifying speed, feedback style, and interaction complexity.
Once setup is complete, the infotainment interface transitions into a simplified, age-inclusive design that supports ease of use, safety, and cognitive comfort for elderly drivers.
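The comfort-based adaptation described above can be sketched as a simple mapping from the onboarding rating to assistant behaviour. This is an illustrative sketch, not the project's actual implementation; the `AssistantSettings` fields, the 1-5 rating scale, and the thresholds are all assumptions for the example.

```python
from dataclasses import dataclass


@dataclass
class AssistantSettings:
    speech_rate: float      # multiplier on default speaking pace (1.0 = default)
    verbosity: str          # "minimal" | "standard" | "detailed"
    confirm_actions: bool   # ask for confirmation before executing commands


def settings_for_comfort(comfort_level: int) -> AssistantSettings:
    """Map a 1-5 onboarding comfort rating to assistant behaviour.

    Lower comfort -> slower speech, fuller explanations, and explicit
    confirmation prompts; higher comfort -> faster, terser interaction.
    """
    if comfort_level <= 2:
        return AssistantSettings(speech_rate=0.8, verbosity="detailed", confirm_actions=True)
    if comfort_level == 3:
        return AssistantSettings(speech_rate=0.9, verbosity="standard", confirm_actions=True)
    return AssistantSettings(speech_rate=1.0, verbosity="minimal", confirm_actions=False)
```

In practice these settings would feed the text-to-speech engine and dialogue manager; the point of the sketch is only that the onboarding answers directly parameterize behaviour rather than being cosmetic.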


MOCKUPS
Infotainment's QR Screen
The QR screen acts as a quick-access gateway to the CarConnect app. By scanning it, users instantly link their phone to the car without complex setup. This enables a personalized and simplified infotainment experience, pulling in their preferred settings, navigation history, and easy controls, all through a familiar device. It’s especially helpful for new, elderly, or guest users who avoid traditional pairing steps.
CarConnect's Splash Screen with its Logo
CarConnect optimizes voice assistant responsiveness through natural language processing (NLP) and machine learning (ML), adapting to the user’s speech pace, accent, and clarity and making interactions more accurate, polite, and personalized over time.
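One plausible way to implement the QR hand-off is a short-lived pairing token: the head unit encodes the token into the QR code, and the CarConnect app presents it back to complete the link. The sketch below is hypothetical; the payload fields, token length, and two-minute lifetime are assumptions, not details from the project.

```python
import secrets
import time

PAIRING_TTL_SECONDS = 120  # assumed short lifetime so a stale QR can't be reused


def new_pairing_payload(car_id: str) -> dict:
    """Create the payload the head unit encodes into the QR code."""
    return {
        "car_id": car_id,
        "token": secrets.token_urlsafe(16),          # unguessable one-time token
        "expires_at": time.time() + PAIRING_TTL_SECONDS,
    }


def is_pairing_valid(payload: dict, token_from_phone: str, now: float) -> bool:
    """Head-unit check when the CarConnect app presents the scanned token."""
    return (
        secrets.compare_digest(payload["token"], token_from_phone)  # constant-time compare
        and now < payload["expires_at"]
    )
```

A design like this keeps the flow to a single scan for the user while still expiring quickly, which matches the goal of avoiding traditional Bluetooth pairing steps.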













Step 1: Creating Driver's Profile
Step 2: Adding Contacts & their Labels
To improve accuracy, the system lets users label contacts the way they naturally call them, like “Maa” or “Didi”. Using NLP, the voice assistant learns to recognize these personal references instead of relying solely on formal names. This makes calling faster, reduces errors, and creates a more intuitive, human-like interaction.
Step 3: Adding Personalized Location Labels
Even if locations weren’t previously saved in the user’s navigation app, the system allows them to add personalized labels like “my hospital” or “Dadi’s house” within CarConnect. This enables the voice assistant to recognize and respond to natural commands without relying on pre-saved addresses, making navigation feel more personal and intuitive.
Step 4: Adding Emergency Numbers and Trigger Phrase
Users can also add emergency contacts with designated trigger phrases like “Call anyone” or “Help me”. Using NLP, the voice assistant maps these phrases to the assigned contact and prioritizes them in high-stress contexts or repeated urgent tones. This ensures that in moments of panic or distraction, users don’t need to recall exact names, just natural, intuitive phrases, allowing for faster, more reliable emergency response.
Step 5: Personalizing Experience via Voice Assistant
This step enables users to personalize their voice assistant experience by sharing key preferences like comfort level, speech pace, and activation cues. These inputs are not cosmetic; they directly inform how the assistant behaves. For instance, a lower comfort rating prompts the assistant to speak more slowly, simplify its responses, and offer gentler confirmation prompts, which is ideal for users still adjusting to voice-based interaction.
Similarly, selecting a preferred wake phrase reduces confusion and boosts recognition accuracy through consistent NLP mapping. Over time, these small customizations help the assistant become more empathetic, efficient, and aligned with individual user behavior, especially in high-cognitive-load driving environments.
Step 6: Quick Test to Adjust the System with Users
This step acts as a voice calibration checkpoint, allowing users to test how well the assistant understands their speech. By simulating a simple command and guiding them through icon selection, the system gathers early feedback on pronunciation, pace, and phrasing.
This helps the voice assistant fine-tune its NLP model in real time, adapting to the user’s natural speaking style. For hesitant users, it builds trust early by proving the system can adjust to them, not the other way around.
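At its simplest, the label and trigger-phrase resolution described in the contact, location, and emergency steps above could be a normalized lookup that checks emergency phrases before ordinary labels. The sketch below is a toy stand-in: plain substring matching takes the place of the NLP model, and all names and numbers are invented for illustration.

```python
import re


def normalize(phrase: str) -> str:
    """Lowercase and strip punctuation so 'Dadi's house!' matches 'dadis house'."""
    return re.sub(r"[^a-z ]", "", phrase.lower()).strip()


class VoiceDirectory:
    """Maps natural spoken labels ("Maa", "my hospital") to call or
    navigation targets, checking emergency trigger phrases first."""

    def __init__(self):
        self.contacts = {}   # normalized label  -> phone number
        self.places = {}     # normalized label  -> address
        self.emergency = {}  # normalized phrase -> phone number

    def add_contact(self, label, number):
        self.contacts[normalize(label)] = number

    def add_place(self, label, address):
        self.places[normalize(label)] = address

    def add_emergency(self, phrase, number):
        self.emergency[normalize(phrase)] = number

    def resolve(self, utterance):
        """Return an (action, target) pair, or None if nothing matches."""
        spoken = normalize(utterance)
        for phrase, number in self.emergency.items():
            if phrase in spoken:          # emergency triggers always win
                return ("call", number)
        for label, number in self.contacts.items():
            if label in spoken:
                return ("call", number)
        for label, address in self.places.items():
            if label in spoken:
                return ("navigate", address)
        return None
```

A real system would use fuzzy or phonetic matching and a proper intent model rather than substrings, but the priority ordering (emergency first, then contacts, then places) is the part that mirrors the design intent.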











Infotainment's Dashboard
Core functions like Media, Navigation, and Calling are placed centrally for quick access, while system controls like Bluetooth, Volume, and Menu are grouped on the side. This clear separation reduces distraction, making key actions easy to find and system settings less intrusive during driving.
Infotainment's Play Music
Infotainment's Navigation
The navigation screens focus on intent-based shortcuts like “Take me to…” and user-labelled locations, replacing cluttered map views with a cleaner, voice-first design. A minimized music view stays docked for seamless multitasking, letting users access media without disrupting directions. This layout enhances clarity, supports natural voice input, and feels more intuitive than typical infotainment systems.
Infotainment's Call
The call screens prioritize clarity and ease, with large buttons and clear caller info for distraction-free use. Navigation and music stay minimized as docked cards, enabling smooth multitasking. The voice assistant stays active in the background, handling commands like “Mute music” or “Resume route,” making the experience seamless and hands-free, especially for users who rely more on voice than touch.
Infotainment's Volume
The vertical volume slider appears upon tapping the volume icon, offering quick, focused access without navigating away from the current task. Whether in a call, playing music, or using navigation, users can adjust sound instantly.

LAYOUT DECISIONS BACKED BY SURVEY
The interface layout is guided by natural touch behaviour, ensuring high-frequency actions align with the most interacted zones while minimizing visual and cognitive load. Frequently used features are positioned where users intuitively reach, while less immediate elements are placed in peripheral zones to avoid interference. Passive tasks remain visible but unobtrusive, and assistive elements are accessible without dominating attention. This spatial logic creates a balanced, responsive environment tailored for real-time use without overwhelming the user.
TONE, CONTRAST & FOCUS
Blue for Core Functions
Blue offers high contrast against black, making it easy for elderly users with declining vision or cataracts to spot important actions. It avoids the stress response that red or yellow may trigger, and supports pattern recognition through consistent use.
Glassmorphism in Voice Assistant
Glassmorphism creates a visually distinct layer without using harsh borders or heavy color blocks, helping elderly users differentiate functions while maintaining a sense of familiarity and calmness. The translucency keeps background awareness, reducing disorientation.
White on Maps or Info Areas
White text over complex visuals (like maps or album covers) ensures maximum legibility, crucial for seniors with reduced contrast sensitivity. It ensures essential directions or information are never lost visually.
White Buttons with Blue Icons or Texts
Seniors benefit from clear affordance: a white base defines the button space, while the blue icon directs attention without causing alarm. This combination avoids confusion between destructive and safe actions.
Sidebar with Glassmorphism & Subtle Glow
Glassmorphism and subtle glow effects give the sidebar visual depth, helping seniors distinguish it from the main dashboard without feeling cluttered.
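The claim that blue on black offers high contrast can be checked with the WCAG 2.x contrast-ratio formula, which compares the relative luminance of foreground and background colors. The sketch below implements that standard formula; the specific blue value (Material blue, #2196F3) is an assumption for illustration, since the mockups' exact palette isn't specified here.

```python
def _linear(channel_8bit: int) -> float:
    """Convert an 8-bit sRGB channel to linear light (WCAG 2.x formula)."""
    c = channel_8bit / 255.0
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4


def relative_luminance(rgb) -> float:
    """WCAG relative luminance of an (R, G, B) color, each channel 0-255."""
    r, g, b = (_linear(v) for v in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b


def contrast_ratio(fg, bg) -> float:
    """WCAG contrast ratio between two colors, always >= 1.0."""
    lighter, darker = sorted(
        (relative_luminance(fg), relative_luminance(bg)), reverse=True
    )
    return (lighter + 0.05) / (darker + 0.05)
```

With these values, the assumed blue on black comfortably exceeds the WCAG AA threshold of 4.5:1 for normal text (white on black is the maximum possible, 21:1), which is consistent with the rationale above.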
ACTIONABLE NEXT STEPS
Refine the UI to reflect a more tech-forward identity
Introduce subtle depth, motion, or futuristic visual cues to enhance its appeal without compromising usability.
Enhance the mobile application's interface
Improve layout clarity and responsiveness to ensure the app complements the infotainment system seamlessly.
Integrate the app’s unique identity into the infotainment UI
Carry forward elements like iconography & personalization patterns to maintain brand continuity across platforms.
Develop an interactive prototype for usability testing
Move beyond static screens to validate interactions, gestures, and accessibility in simulated driving conditions.
Tracing Gaps, Guiding Change
USER FEEDBACK
To gather early impressions, we showed key screens to a few elderly users and captured their responses. While it wasn’t a working prototype, the walkthrough helped us understand what felt clear, where confusion emerged, and how they emotionally responded to the interface. Their feedback offered valuable cues that shaped our direction moving forward.
We also conducted a quick survey to gauge broader impressions of the interface, focusing on usability, layout logic, and clarity of core actions.
LEARNINGS & REFLECTIONS
This project was a reminder that designing for real-world contexts means listening more than assuming. What began as a system-focused UI challenge quickly unfolded into a deeper exploration of confidence, clarity, and cognitive load, especially for users navigating unfamiliar digital environments. Sitting with elderly users, hearing their pauses and hesitations, made it clear that good design isn’t just about clean visuals; it’s about emotional reassurance. From rethinking button placement to simplifying decision points, every iteration became less about features and more about how those features made someone feel while using them. It taught me that empathy isn’t a phase in the process, it’s the thread that holds the system together.
Aakriti | UX Researcher & Designer
IVI for Elderly Drivers
Role: UX Researcher & Designer
Research Methodology: Mixed Method (user surveys for quantitative data and interviews for qualitative insights, followed by behavioural landscape mapping).
Solution: This project introduced an elderly-accessible mode into an existing in-vehicle infotainment (IVI) system through a toggle feature. Grounded in user research with older adults, the goal was to understand their cognitive, visual, and motor challenges while interacting with digital car interfaces.


Hypothesis
Elderly drivers in the Indian market experience significant challenges while interacting with in-vehicle infotainment (IVI) systems due to cognitive, visual, and motor limitations. This research aims to investigate the accessibility gaps within current IVI interfaces, explore the contextual and behavioural needs of aging users, and understand how interface complexity contributes to cognitive overload behind the wheel.
Exploring the Existing Landscape
This involved studying automotive interface trends, accessibility standards, age-related cognitive considerations, and feedback patterns within the Indian market. The aim was to ground the research in existing knowledge and identify areas that required closer investigation during user interactions.
REFERENCES
Government of India, Ministry of Road Transport & Highways. (n.d.). Ministry of Road Transport & Highways, Government of India.
National Institute of Mental Health and Neuro Sciences. (n.d.). NIMHANS official website.
Institute of Road Traffic Education. (n.d.). IRTE – Road Safety Through Education & Research.
Society of Automotive Engineers India. (n.d.). SAEINDIA – Advancing mobility knowledge and solutions.
Tuch, A. N., Bargas-Avila, J. A., Opwis, K., & Wilhelm, F. H. (2020). From skeuomorphism to flat design: Age-related differences in performance and aesthetic perceptions. ResearchGate.
Tuning into the Driver's Minds
To closely understand the cognitive and interactional challenges elderly users face with today's infotainment systems, we conducted focused user research using a two-pronged approach. This phase explored how interface complexity, accessibility gaps, and multitasking while driving affected their sense of control, comfort, and confidence on the road.
USER SURVEY
A user survey was conducted with 30 participants aged 55 and above, all of whom drive independently. The survey aimed to capture behavioural cues, interface-related struggles, and general perceptions of infotainment systems. Here are a few of the main insights that helped validate the initial hypothesis and identify recurring friction points within the system.


IN-DEPTH INTERVIEW
To deepen the contextual understanding uncovered in the survey, we conducted in-depth interviews with 12 elderly drivers with varying degrees of comfort with technology and on-road infotainment use. These conversations revealed layered insights into their habits, cognitive load, interface interpretation, and coping mechanisms while driving. The open-ended format allowed participants to express frustrations, workarounds, and emotional responses in a natural setting.


BEHAVIOURAL LANDSCAPE MAPPING: EMERGENT ARCHETYPES FROM INTERVIEW INSIGHTS
This landscape maps users across four key dimensions (tech confidence, usage pattern, interaction style, and adjustment curve), capturing how they engage with infotainment systems beyond features alone. Clustering real behaviours and needs surfaced four archetypes: the Anxious Seeker (wants safety above all; avoids infotainment unless necessary), the Over-Cautious Dependent (needs help or voice to operate; wary of mistakes), the Adaptive Navigator (learns slowly but steadily; trusts memory and basic flows), and the Confident Retainer (prefers what they know; confident but dislikes UI changes). Each reflects a distinct way users build trust, adapt, or avoid interaction, helping design decisions align with real-world usage patterns.
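The clustering itself was qualitative, but the quadrant logic behind the archetypes can be sketched in code. The 0–10 scale, the threshold of 5, and the choice of two dimensions (tech confidence and openness to layout change) are illustrative assumptions, not values from the study.

```python
# Illustrative sketch only: the archetypes emerged from qualitative clustering,
# not from code. This toy function shows the quadrant logic on two of the four
# dimensions; scale and thresholds are assumptions.

def assign_archetype(tech_confidence, openness_to_change):
    """Place a participant into one of the four emergent archetypes."""
    if tech_confidence < 5 and openness_to_change < 5:
        return "Anxious Seeker"           # avoids infotainment unless necessary
    if tech_confidence < 5:
        return "Over-Cautious Dependent"  # needs help or voice; wary of mistakes
    if openness_to_change >= 5:
        return "Adaptive Navigator"       # learns slowly but steadily
    return "Confident Retainer"           # confident, but dislikes UI changes
```

Scoring every participant this way makes quadrant membership, and the outliers between archetypes, immediately visible.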
KEY BEHAVIOURAL INSIGHTS
Many users actively avoid exploring infotainment features due to fear of distraction or missteps.
Touchscreen hesitation is common, especially when visual layouts are dense or unfamiliar.
Users prefer voice interaction, but inconsistent system responses reduce trust and increase stress.
A large number rely on co-passengers or children to initially learn the system or recover from errors.
Confidence grows slowly over time, but even experienced users resist layout changes or feature updates.
Users want visible separation of functions (calls, music, maps) and clear feedback after actions to feel in control.
Tracing Gaps, Guiding Change
USER FEEDBACK
To gather early impressions, we showed key screens to a few elderly users and captured their responses. While it wasn’t a working prototype, the walkthrough helped us understand what felt clear, where confusion emerged, and how they emotionally responded to the interface. Their feedback offered valuable cues that shaped our direction moving forward.
We also conducted a quick survey to gauge broader impressions of the interface, focusing on usability, layout logic, and clarity of core actions.


DESIGN PROPOSAL
Setting the Wheels in Motion
The solution introduces a toggle within the existing infotainment system that activates a supportive mode for elderly users.
When enabled by a first-time user (FTU), the toggle displays a QR code that links to an external onboarding mobile app named CarConnect.
CarConnect begins by guiding users through a short onboarding process that includes creating a profile and assessing their comfort level with technology and voice interfaces.
Based on responses, the app uses natural language processing and machine learning to adapt the system’s voice assistant by modifying speed, feedback style, and interaction complexity.
Once setup is complete, the infotainment interface transitions into a simplified, age-inclusive design that supports ease of use, safety, and cognitive comfort for elderly drivers.
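As a rough illustration of the adaptation step above, the sketch below maps a self-reported comfort level to voice-assistant parameters. The field names, ranges, and thresholds are assumptions for this sketch, not CarConnect's actual behaviour.

```python
# Hypothetical sketch: a self-reported comfort level (1-5) drives the
# assistant's speech rate, confirmation style, and prompt complexity.

def assistant_profile(comfort_level):
    """Derive voice-assistant parameters from a user's comfort rating."""
    comfort_level = max(1, min(5, comfort_level))  # clamp to the survey scale
    return {
        # 0.7x speed for the least comfortable users, up to 1.0x for the most
        "speech_rate": 0.7 + 0.075 * (comfort_level - 1),
        # gentler, more explicit confirmations for low-comfort users
        "confirm_style": "gentle" if comfort_level <= 2 else "brief",
        # fewer choices per prompt keeps cognitive load down while driving
        "max_options_per_prompt": 2 if comfort_level <= 3 else 4,
    }
```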






























MOCKUPS
Infotainment's QR Screen
The QR screen acts as a quick-access gateway to the CarConnect app.
By scanning it, users instantly link their phone to the car without complex setup. This enables a personalized and simplified infotainment experience, pulling in their preferred settings, navigation history, and easy controls all through a familiar device. It’s especially helpful for new, elderly, or guest users who avoid traditional pairing steps.
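One plausible way to encode such a screen is a deep link carrying a short-lived, one-time pairing token, so the phone links to the head unit without manual pairing steps. The URL scheme, parameter names, and expiry window below are invented for illustration, not a documented CarConnect format.

```python
# Assumption-laden sketch of what the QR screen could encode.
import secrets
import time
from urllib.parse import urlencode

def build_pairing_link(vehicle_id, ttl_seconds=120):
    """Create the deep link rendered as the infotainment QR code."""
    params = {
        "vehicle": vehicle_id,
        "token": secrets.token_urlsafe(16),         # one-time pairing secret
        "expires": int(time.time()) + ttl_seconds,  # short-lived by design
    }
    return "carconnect://pair?" + urlencode(params)
```

Because the token is single-use and expires quickly, a guest who scans the code gets a session, not permanent access to the car.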
Optimizes voice assistant responsiveness through NLP (Natural Language Processing) and ML (Machine Learning) by adapting to the user’s speech pace, accent, and clarity, making interactions more accurate, polite, and personalized over time.
CarConnect's Splash Screen with its Logo
Step 1: Creating Driver's Profile
Step 2: Adding Contacts & their Labels
To improve accuracy, the system lets users label contacts the way they naturally call them, like “Maa” or “Didi”. Using NLP, the voice assistant learns to recognize these personal references instead of relying solely on formal names. This makes calling faster, reduces errors, and creates a more intuitive, human-like interaction.
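A minimal sketch of the lookup order this implies: personal label first, then formal name. The names, labels, and record ids are invented for illustration; a real system would sit behind speech recognition and add fuzzy matching via NLP.

```python
# Hypothetical contact book: user-chosen labels resolve to the same saved
# contact records as formal names.
ALIASES = {"maa": "sunita sharma", "didi": "priya sharma"}
CONTACTS = {"sunita sharma": "contact-01", "priya sharma": "contact-02"}

def resolve_contact(spoken):
    """Resolve a spoken reference: personal label first, then formal name."""
    key = spoken.strip().lower()
    formal = ALIASES.get(key, key)  # "Maa" -> "sunita sharma"
    return CONTACTS.get(formal)     # formal name -> saved contact record
```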
Step 3: Adding Locations & their Labels
Even if locations weren’t previously saved in the user’s navigation app, the system allows them to add personalized labels like “my hospital” or “Dadi’s house” within CarConnect. This enables the voice assistant to recognize and respond to natural commands without relying on pre-saved addresses, making navigation feel more personal and intuitive.
Step 4: Adding Emergency Numbers and Trigger Phrase
Users can also add emergency contacts with designated trigger phrases like “Call anyone” or “Help me”. Using NLP, the voice assistant maps these phrases to the assigned contact and prioritizes them in high-stress contexts or repeated urgent tones. This ensures that in moments of panic or distraction, users don’t need to recall exact names, just natural, intuitive phrases, allowing for faster, more reliable emergency response.
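The phrase-to-contact mapping and the escalation rule can be sketched as follows. The phrases, the contact id, and the repetition-based escalation are illustrative assumptions, not the shipped logic.

```python
# Hedged sketch of trigger-phrase handling: any configured emergency phrase
# found in the utterance routes to its assigned contact, and repetition
# within a short window escalates priority.
EMERGENCY_TRIGGERS = {"call anyone": "emergency-contact-1",
                      "help me": "emergency-contact-1"}

def detect_emergency(utterance, recent_triggers=0):
    """Return (contact, priority) if an emergency phrase is heard, else None."""
    text = utterance.lower()
    for phrase, contact in EMERGENCY_TRIGGERS.items():
        if phrase in text:
            # A repeated urgent phrase escalates from 'high' to 'critical'.
            return contact, ("critical" if recent_triggers > 0 else "high")
    return None
```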
Step 5: Personalizing Experience via Voice Assistant
This step enables users to personalize their voice assistant experience by sharing key preferences like comfort level, speech pace, and activation cues. These inputs are not cosmetic; they directly inform how the assistant behaves. For instance, a lower comfort rating prompts the assistant to speak more slowly, simplify its responses, and offer gentler confirmation prompts—ideal for users still adjusting to voice-based interaction.
Similarly, selecting a preferred wake phrase reduces confusion and boosts recognition accuracy through consistent NLP mapping. Over time, these small customizations help the assistant become more empathetic, efficient, and aligned with individual user behavior, especially in high-cognitive-load driving environments.
Step 6: Quick Test to Adjust the System with Users
This step acts as a voice calibration checkpoint, allowing users to test how well the assistant understands their speech. By simulating a simple command and guiding them through icon selection, the system gathers early feedback on pronunciation, pace, and phrasing.
This helps the voice assistant fine-tune its NLP model in real time, adapting to the user’s natural speaking style. For hesitant users, it builds trust early by proving the system can adjust to them, not the other way around.
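One simple way to score such a calibration check is transcript similarity against the expected command. The 0.8 threshold and the suggested adjustment below are assumptions for this sketch, not the system's actual NLP tuning.

```python
# Illustrative calibration check: score the recognized transcript of the test
# command and suggest slowing down and retrying when it scores low.
from difflib import SequenceMatcher

def calibration_result(expected, transcript):
    """Score one test utterance and flag whether the assistant should adapt."""
    score = SequenceMatcher(None, expected.lower(), transcript.lower()).ratio()
    return {
        "score": round(score, 2),
        "passed": score >= 0.8,
        # a low score means: slow the prompts, simplify, and re-run the test
        "suggestion": None if score >= 0.8 else "slow_prompts_and_retry",
    }
```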
























TONE, CONTRAST & FOCUS
Blue for Core Functions
Blue offers high contrast against black, making it easy for elderly users with declining vision or cataracts to spot important actions. It avoids the stress response that red or yellow may trigger, and supports pattern recognition through consistent use.
White Buttons with Blue Icons or Texts
Seniors benefit from clear affordance: a white base defines the button space, while the blue icon directs attention without causing panic. This combination avoids confusion between destructive and safe actions.
White on Maps or Info Areas
White text over complex visuals (like maps or album covers) ensures maximum legibility, crucial for seniors with reduced contrast sensitivity. It ensures essential directions or information are never lost visually.
Sidebar with Glassmorphism & Subtle Glow
Using glassmorphism and subtle glow effects gives the sidebar visual depth, helping seniors distinguish it from the main dashboard without feeling cluttered.
Glassmorphism in Voice Assistant
Glassmorphism creates a visually distinct layer without using harsh borders or heavy color blocks, helping elderly users differentiate functions while maintaining a sense of familiarity and calmness. The translucency keeps background awareness, reducing disorientation.
Infotainment's Dashboard
Core functions like Media, Navigation, and Calling are placed centrally for quick access, while system controls like Bluetooth, Volume, and Menu are grouped on the side. This clear separation reduces distraction, making key actions easy to find and system settings less intrusive during driving.
Infotainment's Navigation
The navigation screens focus on intent-based shortcuts like “Take me to…” and user-labelled locations, replacing cluttered map views with a cleaner, voice-first design. A minimized music video stays docked for seamless multitasking, letting users access media without disrupting directions. This layout enhances clarity, supports natural voice input, and feels more intuitive than typical infotainment systems.
Infotainment's Call
The call screens prioritize clarity and ease, with large buttons and clear caller info for distraction-free use. Navigation and music stay minimized as docked cards, enabling smooth multitasking. The voice assistant stays active in the background, handling commands like “Mute music” or “Resume route,” making the experience seamless and hands-free, especially for users who rely more on voice than touch.
Infotainment's Play Music
Infotainment's Volume
The vertical volume slider appears upon tapping the volume icon, offering quick, focused access without navigating away from the current task. Whether in a call, playing music, or using navigation, users can adjust sound instantly.
LAYOUT DECISIONS BACKED UP BY SURVEY
The interface layout is guided by natural touch behaviour, ensuring high-frequency actions align with the most interacted zones while minimizing visual and cognitive load. Frequently used features are positioned where users intuitively reach, while less immediate elements are placed in peripheral zones to avoid interference. Passive tasks remain visible but unobtrusive, and assistive elements are accessible without dominating attention. This spatial logic creates a balanced, responsive environment tailored for real-time use without overwhelming the user.
ACTIONABLE NEXT STEPS
Enhance the mobile application's interface: Improve layout clarity and responsiveness to ensure the app complements the infotainment system seamlessly.
Refine the UI to reflect a more tech-forward identity: Introduce subtle depth, motion, or futuristic visual cues to enhance its appeal without compromising usability.
Integrate the app’s unique identity into the infotainment UI: Carry forward elements like iconography & personalization patterns to maintain brand continuity across platforms.
Develop an interactive prototype for usability testing: Move beyond static screens to validate interactions, gestures, and accessibility in simulated driving conditions.
LEARNINGS & REFLECTIONS
This project was a reminder that designing for real-world contexts means listening more than assuming. What began as a system-focused UI challenge quickly unfolded into a deeper exploration of confidence, clarity, and cognitive load, especially for users navigating unfamiliar digital environments. Sitting with elderly users, hearing their pauses and hesitations, made it clear that good design isn’t just about clean visuals; it’s about emotional reassurance. From rethinking button placement to simplifying decision points, every iteration became less about features and more about how those features made someone feel while using them. It taught me that empathy isn’t a phase in the process; it’s the thread that holds the system together.