Gia

AI Design Award Winner

Designing a human–AI safety system for attention-critical environments.

Product: Human–AI safety
Role: Principal Product Designer
Recognition: AI Design Award Winner, 2025

Context

Motorcycling is a high-cognitive-load, safety-critical activity.

Riders continuously process speed, road conditions, navigation, and risk. Attention is fully occupied.

Most rider technologies fail in two ways:

  • they increase visual load
  • they interrupt at the wrong moment

After experiencing a serious motorcycle accident, I reframed the problem:

Not how to add more information,
but how to intervene without increasing cognitive load.

Motorcyclist – focus on the road

Problem

This was not an interface problem. It was a human–AI coordination problem under strict constraints: attention already saturated, reaction time critical, interruptions risky, and no room to operate complex systems mid-ride.

The core question:

How do you design an AI system that improves safety without competing for attention?

In safety-critical systems, value comes from timing, not volume.

System principles

Gia was designed as a constrained system where attention is the primary limitation.

  • Success = reduced cognitive load, not feature breadth
  • Any interaction that pulls attention is failure
  • Voice and haptics over visual interfaces
  • AI supports rider judgment, never overrides it
  • Learning happens through behavior, not configuration

System model

Gia is a real-time decision system that:

  • Ingests environmental and behavioral signals
  • Predicts moments where intervention improves safety
  • Filters outputs to only high-value signals
  • Communicates through voice and ambient cues
  • Adapts to rider behavior over time

The system prioritizes deciding what not to surface.
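As a sketch, that filtering stage might look like the following — a hypothetical illustration, not the shipped system; `Signal`, the score fields, and the threshold values are all assumptions:

```python
from dataclasses import dataclass

@dataclass
class Signal:
    """A candidate intervention produced from sensed data."""
    message: str
    predicted_safety_gain: float  # 0..1, estimated value of speaking up
    confidence: float             # 0..1, certainty in that estimate

def filter_signals(signals, min_gain=0.7, min_confidence=0.8):
    """Keep only high-value, high-confidence signals.

    The bias is toward silence: anything below either threshold
    is dropped rather than surfaced to the rider.
    """
    return [s for s in signals
            if s.predicted_safety_gain >= min_gain
            and s.confidence >= min_confidence]

candidates = [
    Signal("Gravel reported ahead", 0.9, 0.85),
    Signal("New podcast episode", 0.1, 0.99),  # confident but low value
    Signal("Possible crosswind", 0.8, 0.40),   # valuable but uncertain
]
surfaced = filter_signals(candidates)  # only the gravel warning survives
```

Note that confidence alone is never enough: a signal must be both certain and worth the rider's attention before it is surfaced.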

Motorcyclist – situational awareness

Interaction model

Gia operates on three rules:

  • When to observe (continuous sensing)
  • When to intervene (threshold-based triggers)
  • When to stay silent (low-confidence or low-impact scenarios)

This ensures the system supports, rather than competes with, rider attention.
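The three rules can be made explicit as a single decision function — again a hypothetical sketch; the `Mode` names and threshold values are illustrative assumptions:

```python
from enum import Enum

class Mode(Enum):
    OBSERVE = "observe"      # continuous sensing, no output
    INTERVENE = "intervene"  # surface a cue now
    SILENT = "silent"        # deliberately withhold output

def decide(risk, confidence, risk_threshold=0.7, confidence_floor=0.6):
    """Map a risk estimate onto the three rules.

    Low confidence resolves to deliberate silence; a risk above the
    threshold triggers intervention; everything else stays in
    passive observation.
    """
    if confidence < confidence_floor:
        return Mode.SILENT
    if risk >= risk_threshold:
        return Mode.INTERVENE
    return Mode.OBSERVE
```

Because silence is a first-class outcome, uncertainty never defaults to an alert.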

Design & prototyping

I focused on identifying when riders have cognitive availability versus overload.

This included:

  • mapping high-risk vs low-attention moments
  • testing voice timing under real riding constraints
  • evaluating when intervention improves vs disrupts

Every decision was evaluated against:

Does this reduce cognitive burden in motion?

Gia voice interface – voice feedback animation

Voice

Voice was the primary interface.

When eyes and hands are occupied, spoken cues preserve situational awareness.

  • short
  • contextual
  • timed to moments of availability

This was not conversational AI.
It was controlled intervention.

Motorcyclist on Ducati – high-speed cornering on track

Key decisions

Haptics over visuals

Decision: We designed alerts to be primarily tactile rather than visual.
Context: Visual attention is already fully occupied while riding.
Tradeoff: Reduced information density.
Impact: Lower cognitive load and safer interaction model.

Selective intervention

Decision: We constrained when the system surfaces recommendations.
Context: Constant input creates distraction and distrust.
Tradeoff: Less “assistive” presence.
Impact: Higher signal-to-noise ratio and better rider trust.

Behavioral learning over configuration

Decision: We avoided heavy onboarding configuration.
Context: Riders won’t configure systems before riding.
Tradeoff: Slower initial system accuracy.
Impact: Lower friction and more natural adaptation over time.
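One plausible shape for this kind of behavior-driven adaptation is a simple per-response nudge to the intervention threshold — a hypothetical sketch; the update step, bounds, and starting value are assumptions:

```python
def adapt_threshold(threshold, alert_was_acted_on, step=0.05,
                    lo=0.30, hi=0.95):
    """Nudge the intervention threshold from observed rider behavior.

    An acted-on alert suggests alerts are useful, so the threshold
    relaxes slightly; an ignored alert suggests the system was too
    eager, so it tightens. No settings screen is involved.
    """
    threshold += -step if alert_was_acted_on else step
    return min(hi, max(lo, threshold))

# Starting from a generic default (the "slower initial accuracy"
# tradeoff), three ignored alerts make the system noticeably quieter.
t = 0.70
for _ in range(3):
    t = adapt_threshold(t, alert_was_acted_on=False)
```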

Explicit failure states

Decision: We defined how the system behaves when data is incomplete or uncertain.
Context: AI systems are inherently probabilistic.
Tradeoff: Conservative outputs vs more “impressive” features.
Impact: Increased reliability and safety perception.
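Explicit failure states can be encoded directly, so degraded behavior is a named design decision rather than an accident — a hypothetical sketch with assumed mode names:

```python
def operating_mode(hazard_sensing_ok, positioning_ok):
    """Pick a conservative mode when inputs are incomplete.

    Rather than guessing through missing data, the system drops to
    a named, narrower mode and can tell the rider once what changed.
    """
    if hazard_sensing_ok and positioning_ok:
        return "full"           # normal operation
    if hazard_sensing_ok:
        return "no-navigation"  # keep hazard cues, drop route guidance
    return "observe-only"       # sense and log, never intervene
```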

Street view through Gia visor – HUD elements in rider field of view

Outcomes

Gia demonstrated a viable model for AI systems in attention-constrained environments:

  • reduced reliance on visual interfaces
  • higher trust through predictable intervention
  • clearer signal-to-noise ratio
  • improved alignment between system behavior and user expectation

The project validated that:

AI effectiveness in safety-critical contexts depends on restraint and timing, not capability.

Learnings

  • Attention is the primary constraint in safety systems
  • AI value depends on knowing when not to act
  • Trust requires supporting, not replacing, human judgment
  • Adding features often reduces safety

Reflection

Gia clarified a shift in how I design AI systems. The challenge is not intelligence. It is control. Designing when a system should act, remain silent, or defer to human judgment requires defining boundaries, not features.

This project strengthened my ability to design systems where:

  • attention is limited
  • decisions are time-critical
  • and failure has real consequences