Neosensory

Designing a neuroplastic learning system for high-frequency hearing loss.

Product: iOS & Android
Company: Neosensory
Role: Product Designer

Context

High-frequency hearing loss strips out critical speech detail, especially the consonant sounds that carry clarity.

Neosensory does not amplify sound. It translates audio into haptic signals, requiring users to learn a new sensory language.

Early adoption revealed a critical issue:

Users were dropping off in the first weeks.

The system worked.
The learning did not.

Problem

Users frequently reported that "everything feels the same." Without context, this early ambiguity was interpreted as malfunction rather than learning. As a result, many disengaged before meaningful differentiation could occur.

Engineering focused on improving signal clarity.

I reframed the problem:

The issue was not signal quality.
It was expectation misalignment.

I demonstrated that perceived competence, not accuracy, predicted retention. That finding shifted the roadmap from signal refinement to learning design.

The core problem was not usability. It was adaptation.

Users were not failing to learn the signals. They were failing to understand where they were in the learning process.

Hypothesis

If the product reflects how neuroplastic learning unfolds, and makes early progress perceptible before accuracy emerges, users will persist through ambiguity and complete the adaptation cycle.

Learning system design

I designed a staged learning system aligned with how users build perceptual understanding over time.

  1. Onboarding

    Normalize ambiguity.

    • Frame confusion as expected
    • Introduce neuroplastic learning model
    • Establish psychological safety
  2. Differentiation

    Accelerate perceived progress.

    • Reduce signal variety
    • Increase repetition
    • Introduce early milestone: "I felt a difference" (Day 3)
  3. Immersion

    Transfer learning beyond the app.

    • Real-world practice
    • Voluntary repetition
    • Reinforce persistence over accuracy
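A staged system like this can be sketched as ordered configuration data, where each phase caps signal variety, sets repetition, and defines an entry criterion. The phase names come from the stages above; the numeric parameters and criteria below are illustrative, not the shipped values.

```python
# Sketch of a staged learning progression. Phase names are from the
# case study; signal_variety, repetitions, and entry criteria are
# hypothetical placeholders.

STAGES = [
    {"name": "Onboarding", "goal": "Normalize ambiguity",
     "signal_variety": 2, "repetitions_per_signal": 10,
     "entered_when": lambda s: True},  # everyone starts here
    {"name": "Differentiation", "goal": "Accelerate perceived progress",
     "signal_variety": 4, "repetitions_per_signal": 8,
     "entered_when": lambda s: s["sessions_completed"] >= 2},
    {"name": "Immersion", "goal": "Transfer learning beyond the app",
     "signal_variety": 8, "repetitions_per_signal": 5,
     "entered_when": lambda s: s["felt_a_difference"]},  # Day 3 milestone
]

def current_stage(user_state):
    """Return the furthest stage whose entry criterion is met, in order."""
    stage = STAGES[0]
    for candidate in STAGES:
        if candidate["entered_when"](user_state):
            stage = candidate
        else:
            break
    return stage
```

Keeping the stages as data rather than hard-coded screens makes it cheap to retune pacing, which mattered during the iterative pacing work described next.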

Iterative pacing

Early versions introduced too much signal variety, increasing cognitive load.

I reduced early complexity, increased repetition, and staged exposure more gradually.

Results:

  • Increased session time
  • Higher return rates
  • More consistent competence growth
  • Stronger Day 3 milestone achievement

Behavioral signals

I introduced behavioral metrics to track learning progression and predict retention.

Perceived competence

Users who reported increasing confidence between Weeks 2–4 were significantly more likely to continue.

Emotional state

Before: confusion interpreted as failure → drop-off
After: confusion framed as learning → continuation increased

Time to first success

Day 3 milestone ("I felt a difference") strongly predicted retention and progression.

Drop-off reduction

Expectation-setting reduced early abandonment and support requests related to perceived malfunction.
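The two leading signals described above, the Day 3 milestone and a rising competence trend in Weeks 2-4, can be derived from per-session logs. A minimal sketch, assuming hypothetical log fields (`day`, `felt_a_difference`, `confidence`):

```python
# Hypothetical sketch: deriving the retention signals above from
# session logs. Field names and thresholds are illustrative.

def day3_milestone(sessions):
    """True if the user reported 'I felt a difference' within 3 days."""
    return any(s["day"] <= 3 and s["felt_a_difference"] for s in sessions)

def competence_trend(sessions, start_day=14, end_day=28):
    """Average per-session change in self-reported confidence, Weeks 2-4."""
    window = [s["confidence"] for s in sessions
              if start_day <= s["day"] <= end_day]
    if len(window) < 2:
        return 0.0
    return (window[-1] - window[0]) / (len(window) - 1)

def likely_to_retain(sessions):
    """Heuristic: early micro-success plus rising perceived competence."""
    return day3_milestone(sessions) and competence_trend(sessions) > 0
```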

Key decisions

Perceived competence over accuracy

Decision: We prioritized user confidence signals alongside performance metrics.
Context: Users disengage when they feel they’re failing, even if improving.
Tradeoff: Slight deviation from purely objective scoring.
Impact: Increased retention and continued engagement.

Adaptive progression over fixed difficulty

Decision: We implemented dynamic difficulty based on user performance trends.
Context: Static levels don’t accommodate learning variability.
Tradeoff: Increased system complexity.
Impact: More personalized learning curve and higher completion rates.
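One common way to implement trend-based difficulty is a rolling window over recent trial outcomes: step up when the user is consistently succeeding, step down when they are consistently struggling. A sketch under that assumption (the thresholds and window size are hypothetical, not the production tuning):

```python
# Illustrative adaptive progression: difficulty follows a rolling
# accuracy trend rather than fixed levels. Thresholds are hypothetical.
from collections import deque

class AdaptiveDifficulty:
    def __init__(self, level=1, min_level=1, max_level=10, window=5):
        self.level = level
        self.min_level = min_level
        self.max_level = max_level
        self.recent = deque(maxlen=window)  # last N trial outcomes

    def record(self, correct):
        """Log one trial outcome; adjust level once the window fills."""
        self.recent.append(bool(correct))
        if len(self.recent) == self.recent.maxlen:
            rate = sum(self.recent) / len(self.recent)
            if rate >= 0.8:    # consistently succeeding: step up
                self.level = min(self.level + 1, self.max_level)
                self.recent.clear()
            elif rate <= 0.4:  # consistently struggling: step down
                self.level = max(self.level - 1, self.min_level)
                self.recent.clear()
        return self.level
```

Clearing the window after each adjustment avoids oscillating on a single streak, which is one way to trade responsiveness for stability.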

Timing over volume

Decision: We reduced constant feedback in favor of well-timed signals.
Context: Over-feedback creates noise and dependency.
Tradeoff: Less immediate guidance.
Impact: Encouraged internal calibration and deeper learning.

Outcomes

  • 25,000 users onboarded
  • Increased 3-month retention
  • Higher 12-week completion rates
  • Reduced early-stage drop-off
  • Increased NPS
  • Strong correlation between perceived competence and retention

The system improved outcomes by aligning with how users learn, not how signals perform.

Business impact

The redesign transformed Speech Recognition into a core product feature.

  • Reduced early abandonment
  • Increased sustained engagement
  • Improved long-term retention
  • Increased user confidence in the device

Retention shifted from fragile to structured.

Reflection

This project changed how I design learning systems.

Users don’t drop off because systems fail.
They drop off because they believe they are failing.

Key takeaways:

  • Perceived competence drives retention more than accuracy
  • Early micro-success determines long-term engagement
  • Confusion must be framed, not removed

Designing for learning means designing for belief, not just performance.