Neosensory

Designing a neuroplastic learning system for high-frequency hearing loss.

Product iOS & Android
Company Neosensory
Role Product Designer

Context

High-frequency hearing loss limits access to critical speech cues, particularly the consonant sounds that carry much of speech's clarity. Neosensory does not amplify sound. It translates audio into haptic signals, relying on neuroplasticity to build a new sensory pathway.

Early adoption revealed a structural risk. Users were abandoning during the first weeks of use.

The signals were accurate. The hardware worked. The issue was perceived failure during early adaptation.

Users frequently reported that "everything feels the same." Without context, this early ambiguity was interpreted as malfunction rather than learning. As a result, many disengaged before meaningful differentiation could occur.

The core problem was not interface usability. It was learning design.

Users were not failing to learn the signals. They were failing to understand where they were in the learning process.

Hypothesis

If we structure the experience around how neuroplastic learning actually unfolds, and make progress perceptible before accuracy emerges, users will persist through early ambiguity and complete the adaptation cycle.

This required shifting the product from signal exposure to a structured learning system.

Critical Tension: Accuracy vs Adaptation

Early discussions centered on improving signal differentiation through hardware and algorithm refinement. My position was that increasing signal clarity alone would not solve early abandonment. The core issue was expectation misalignment, not signal performance.

This required shifting focus from technical refinement to learning architecture.

To align engineering and research, I translated qualitative confidence signals into measurable retention correlations. Demonstrating that perceived competence predicted continuation reframed the discussion from interface preference to behavioral impact.

This shift redirected roadmap priorities from signal refinement experiments toward structured progression design.

Intervention: A 12-Week Learning Architecture

I led the design of a staged progression model grounded in behavioral reinforcement and cognitive adaptation.

  1. Onboarding

     Goal: Normalize confusion and establish psychological safety.

     • Explicitly framed ambiguity as expected.
     • Introduced short educational content explaining neuroplastic adaptation.
     • Set the expectation that differentiation emerges gradually.
     • Embedded in-app check-ins after each level asking how the experience felt.

  2. Differentiation

     Goal: Accelerate perceived competence.

     • Structured sessions to emphasize contrast-based signal exposure.
     • Reduced early signal variety to prevent overwhelm.
     • Designed repetition loops to reinforce subtle distinctions.
     • Introduced "I felt a difference" milestone language at the end of Level 1, which spanned three days. At completion, many users reported exactly that, making it a critical micro-success moment.

  3. Immersion

     Goal: Transfer confidence beyond the app.

     • Structured real-world practice sessions.
     • Encouraged voluntary repetition rather than score-based progression.
     • Reinforced persistence over correctness.

Behavioral Instrumentation

We integrated competence and confidence tracking directly into the learning flow.

Perceived Competence Tracking

After completing each level, users responded to structured prompts such as:

  • "I feel I am beginning to distinguish signals."
  • "I feel more confident using the device."

These responses were tracked longitudinally across the 12-week program. Users who reported increasing perceived competence during Weeks 2 to 4 were significantly more likely to continue through later stages. Confidence became a measurable product metric.
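The kind of cohort comparison described above can be sketched in a few lines. Everything here is hypothetical: the field names (`scores`, `continued`), the 1-to-5 score scale, and the sample records are invented for illustration, not taken from Neosensory's actual instrumentation.

```python
# Illustrative sketch (all data and field names hypothetical): comparing
# retention between users whose self-reported competence rose during
# Weeks 2-4 and users whose scores stayed flat or declined.
users = [
    {"scores": {2: 2, 3: 3, 4: 4}, "continued": True},   # rising confidence
    {"scores": {2: 1, 3: 2, 4: 3}, "continued": True},   # rising confidence
    {"scores": {2: 2, 3: 2, 4: 2}, "continued": False},  # flat
    {"scores": {2: 3, 3: 2, 4: 2}, "continued": False},  # declining
]

def competence_rising(scores, start_week=2, end_week=4):
    """True if the self-reported competence score rose between Weeks 2 and 4."""
    return scores[end_week] > scores[start_week]

rising = [u for u in users if competence_rising(u["scores"])]
other = [u for u in users if not competence_rising(u["scores"])]

# Share of each cohort that continued through later stages.
rising_retention = sum(u["continued"] for u in rising) / len(rising)
other_retention = sum(u["continued"] for u in other) / len(other)
print(f"Retention, rising competence: {rising_retention:.0%}")
print(f"Retention, flat/declining:    {other_retention:.0%}")
```

Splitting users on the Week 2-4 competence trend, rather than on raw accuracy, is what turns a qualitative check-in into a retention predictor.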

Emotional State Reporting

Early usability testing and in-app check-ins revealed a pattern. Week 1 emotional themes were confusion, frustration, and uncertainty about device performance. When ambiguity was not framed, confusion correlated with early abandonment.

After introducing expectation-setting and neuroplasticity education, confusion persisted, frustration decreased, and continuation rates increased. The difference was not the removal of ambiguity. It was the removal of misinterpretation.

Time-to-First-Success

We identified "I felt a difference" at the end of Level 1 as a pivotal milestone. Level 1 lasted three days. Users who reported perceptible differentiation by Day 3 were significantly more likely to:

  • Continue into Level 2.
  • Return to the app consistently.
  • Complete later stages of the program.

Shortening time-to-first-success increased early retention.
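A minimal sketch of this time-to-first-success split, under hypothetical assumptions: the records, the `first_success_day` and `entered_level_2` fields, and the Day 3 threshold as written here are illustrative stand-ins, not the product's real schema.

```python
# Hypothetical sketch of a time-to-first-success analysis: users who
# report "I felt a difference" by Day 3 vs. those who do not, compared
# on Level 2 continuation. All records are invented for illustration.
records = [
    {"user": "u1", "first_success_day": 2, "entered_level_2": True},
    {"user": "u2", "first_success_day": 3, "entered_level_2": True},
    {"user": "u3", "first_success_day": 6, "entered_level_2": False},
    {"user": "u4", "first_success_day": None, "entered_level_2": False},  # never reported
]

def hit_milestone(record, threshold_day=3):
    """True if the user reported perceptible differentiation by Day 3."""
    day = record["first_success_day"]
    return day is not None and day <= threshold_day

early = [r for r in records if hit_milestone(r)]
late = [r for r in records if not hit_milestone(r)]

early_rate = sum(r["entered_level_2"] for r in early) / len(early)
late_rate = sum(r["entered_level_2"] for r in late) / len(late)
print(f"Level 2 entry, milestone by Day 3:    {early_rate:.0%}")
print(f"Level 2 entry, milestone after Day 3: {late_rate:.0%}")
```

Treating "never reported" as a miss (the `None` case) matters here; otherwise silent users would be dropped from exactly the cohort most at risk of abandoning.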

Drop-Off Correlation with Confusion Reporting

Pre-intervention data showed that early-stage drop-off correlated strongly with user-reported signal ambiguity when no contextual framing was provided. Post-intervention:

  • Expectation-setting reduced abandonment during the first two weeks.
  • Users were more likely to interpret confusion as part of learning rather than device failure.
  • Support friction related to perceived malfunction decreased.

The design did not eliminate ambiguity. It recontextualized it.

Iterative Pacing Optimization

Through structured usability testing and behavioral analytics, we refined learning tempo and signal sequencing based on both usage data and perceived competence trends.

Early versions introduced too much signal variety, which increased cognitive overload during the first week. Users spent less time in Speech mode and reported lower confidence scores when exposed to broader signal ranges too quickly.

We reduced early complexity, increased repetition density, and staged signal exposure more deliberately across the first three levels.

After pacing adjustments:

  • Time spent in Speech mode increased.
  • Users were more likely to return for consecutive sessions.
  • Perceived competence scores rose more consistently between Level 1 and Level 2.
  • Users who spent more time in structured repetition were more likely to report the "I felt a difference" milestone at the end of Day 3.

By correlating usage duration with competence reporting, we validated that tempo directly influenced confidence formation. This shifted pacing decisions from intuition to measurable learning design.

Outcomes

Following the introduction of the structured learning system:

  • 25,000 users onboarded within the first year.
  • 3-month retention improved, particularly among first-time users.
  • 12-week program completion rates increased.
  • Early-stage abandonment during the first two weeks decreased.
  • NPS increased.
  • Qualitative interviews reflected rising confidence and perceived competence.
  • Users returned to the app consistently during structured practice phases.
  • Most importantly, perceived competence increased across stages. Users who reported growing confidence were measurably more likely to persist.

The system worked because it aligned with how people adapt, not how technology performs.

Business Impact

The learning redesign transformed Speech Recognition from an experimental mode into a core product feature.

By reducing early abandonment and increasing completion rates:

  • Adoption stabilized.
  • Engagement was sustained beyond onboarding.
  • Confidence in the device increased.
  • Long-term user value improved.
  • Retention shifted from fragile to structured.

Reflection

This project reshaped how I think about product design in adaptive systems. When products require users to build new perceptual or cognitive skills, interface clarity is necessary but insufficient. The system must scaffold psychological progression.

Three principles now guide my work:

  • Perceived competence predicts persistence more than early accuracy.
  • Micro-success moments determine long-term retention.
  • Trust can be measured and designed for.

Confusion is not a defect. It is a phase of adaptation. Design must prevent that phase from being misinterpreted as failure.

This work laid the foundation for how I approach complex, learning-dependent systems today. Whether in accessibility or AI-augmented tools, sustainable adoption depends on designing for transformation over time, not immediate performance.