UX Research · eHMI · Autonomous Vehicles

Multimodal eHMIs for Older Pedestrians

This dissertation explored how older pedestrians interpret external Human-Machine Interfaces (eHMIs) in AV yielding scenarios, comparing visual, audio, and haptic communication to reduce uncertainty and improve understanding in real-world crossings.

Institution Birmingham City University
Date Jan 2026
Industry Automotive · Autonomous Vehicles
Time Frame 12 Weeks
Developed With Solo Project
🏆 1st Place · PGXPO & Research Excellence Award

TL;DR

  • Best-performing setup: Visual + Haptics (clarity + confidence)
  • Most dependable: C7 (Visual + Audio + Haptics) scored highest for perceived dependability
  • Recognition: PGXPO 1st Place + Research Excellence Award

Challenge

Designing eHMI Communication for Older Pedestrians

Autonomous vehicles must communicate their intent to yield without the social cues pedestrians rely on: eye contact, hand gestures, or engine noise. This creates genuine uncertainty at crossings, particularly for older adults who may already feel less confident navigating modern traffic environments.

This dissertation examined how older adults interpret and respond to multimodal eHMI signals, and which combinations of visual, audio, and haptic cues best support safe, confident crossing decisions in AV yielding scenarios.

Process

A Research-Led Design Approach

Discover

  • Reviewed literature on AV-pedestrian communication, eHMIs, and age-related perception needs for older pedestrians.
  • Identified a core gap: yielding intent is often ambiguous without driver cues (eye contact, gestures), increasing uncertainty for older users.

Define

  • Problem statement: "How might we communicate AV yielding intent clearly and confidently to older pedestrians using multimodal eHMI cues?"
  • Defined study variables, modality conditions (Visual, Audio, Haptics, and combinations) and key success metrics: clarity, confidence, perceived safety, and overall UX.

Develop

  • Designed the eHMI haptic condition and standardised stimuli for a yielding scenario with consistent timing and presentation across all 8 conditions.
  • Built the test flow and materials: consent forms, task instructions, UEQ measures, and a semi-structured interview for preference capture.
  • Piloted and refined to reduce bias, improve clarity, and keep sessions consistent.

Deliver

  • Conducted moderated sessions with 10 older participants using a within-subjects comparison across eHMI conditions.
  • Analysed quantitative UEQ results and triangulated with qualitative feedback to form recommendations.
  • Produced practical design guidance and next-step validation ideas (larger sample, higher-fidelity simulation, multi-vehicle scenarios).

PGXPO Poster

PGXPO Conference Poster


Overview

Project at a Glance

  • Role: Researcher (MSc dissertation, end-to-end)
  • Audience: Older pedestrians
  • Goal: Identify which eHMI modality (or combination) best supports clarity, confidence, and correct yielding interpretation
  • Approach: Controlled prototype stimuli + participant feedback to turn results into practical design guidance
  • Outputs: Ranked modality preferences, insights from interviews, and actionable recommendations for AV yielding communication

Research Key Questions

  • Which eHMI modality (or combination) best supports interpretation of yielding intent?
  • Do multimodal combinations increase confidence over single cues?
  • What do older pedestrians find confusing or reassuring?

Study Design

  • Participants: n = 10 older adults
  • Design: Within-subjects design comparing 8 conditions
  • Measures: Participant ratings (including preference ranking) + short interviews to capture reasoning
  • Analysis: Quantitative comparison across conditions, paired with qualitative themes to explain why certain cues worked better
  • Tools: SPSS / Excel / bHaptics Designer / bHaptics TactSuit
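A within-subjects design with eight conditions normally counterbalances presentation order to control for fatigue and learning effects. The study's actual ordering scheme isn't stated here, but a standard balanced Latin square for an even number of conditions can be generated like this (an illustrative sketch, not the study's protocol):

```python
def balanced_latin_square(n: int) -> list[list[int]]:
    """Orders for n conditions (n even): each condition appears once per
    position, and each condition precedes every other equally often."""
    # Standard first row for even n: 0, 1, n-1, 2, n-2, ...
    first, left, right = [0], 1, n - 1
    while len(first) < n:
        first.append(left)
        left += 1
        if len(first) < n:
            first.append(right)
            right -= 1
    # Each later row shifts every entry by +1 (mod n).
    return [[(c + i) % n for c in first] for i in range(n)]

orders = balanced_latin_square(8)  # 8 orderings for conditions C0-C7
```

With 10 participants and 8 distinct orders, two orders would simply be reused.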

Modality Overview (eHMIs)

C0 = No eHMI
C1 = Visual only
C2 = Haptics only
C3 = Audio (bell sound)
C4 = Visual + Haptics
C5 = Visual + Audio
C6 = Audio + Haptics
C7 = Visual + Audio + Haptics
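The eight conditions are exactly the power set of the three modalities (2^3 = 8). A quick sketch, with the condition labels copied from the table above, makes that coverage explicit:

```python
from itertools import combinations

MODALITIES = ("Visual", "Audio", "Haptics")

# The study's condition coding, copied from the modality overview.
CONDITIONS = {
    "C0": frozenset(),
    "C1": frozenset({"Visual"}),
    "C2": frozenset({"Haptics"}),
    "C3": frozenset({"Audio"}),
    "C4": frozenset({"Visual", "Haptics"}),
    "C5": frozenset({"Visual", "Audio"}),
    "C6": frozenset({"Audio", "Haptics"}),
    "C7": frozenset({"Visual", "Audio", "Haptics"}),
}

# Sanity check: the 8 conditions are the full power set of the
# three modalities, so every combination is tested exactly once.
all_subsets = {
    frozenset(c)
    for r in range(len(MODALITIES) + 1)
    for c in combinations(MODALITIES, r)
}
assert set(CONDITIONS.values()) == all_subsets
```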

Results

Key Findings

  • 10 older adult participants in a within-subjects study
  • 8 eHMI conditions tested by each participant
  • 1st place at PGXPO + Research Excellence Award
  • Created a complete research package: stimuli, study protocol, data capture, analysis, and recommendations.
  • Found that multimodal signalling improved perceived clarity and confidence compared to single-modality cues.
  • Recommended a layered approach: visual primary cue + secondary confirmation (haptic or audio) to reduce ambiguity.
  • Delivered practical guidelines on timing, silence, and avoiding overload for older users.

What Mattered Most

  • Multimodal cues outperformed single cues for overall understanding and confidence.
  • Visual was the most intuitive channel; audio worked well on its own, while audio and haptics served best as confirmation cues.
  • C7 (Visual + Audio + Haptics) scored highest for perceived dependability, but attribution risk increases in multi-vehicle scenes.
  • Results translate into practical eHMI guidance that supports older pedestrians in high-uncertainty yielding moments.

Design Recommendations

Guidelines for AV eHMI Design

  • Use visual as the primary channel for communicating yielding intent.
  • Add a secondary confirmation cue (haptic and/or audio) to reinforce confidence.
  • Keep timing consistent and avoid cue overload; don't layer too much complexity.
  • Design for multi-vehicle attribution so users can instantly tell which vehicle is communicating.

Future Recommendations

Next Steps for Validation

  • Test with a larger and more diverse sample (including different levels of tech familiarity).
  • Validate in a more realistic environment (VR or controlled street-style setup).
  • Explore more complex scenes (multiple vehicles, distractions, varied crossing contexts).
  • Calibrate haptic intensity and audio volume to ensure accessibility without annoyance.
