M.A. Thesis · Product Design · 2026

SmartSounds

An adaptive biofeedback music experience that makes your physiology invisible, yet powerfully present in the soundtrack of your day.

Role: UX / Product Designer
Type: M.A. Thesis
Year: 2026
Tools: Figma · Spotify API · Wearable APIs
The Problem

Your body knows.
Your apps don't.

Wearables capture everything. The gap between that data and meaningful behavior change remains enormous.

Over 400 million people wear fitness trackers. Most can tell you their resting heart rate. Almost none know what to do with it in the moment. The data exists. The hardware is there. The UX layer connecting physiology to behavior has never been designed well.

SmartSounds was my M.A. Thesis: a full product design project at the intersection of wearable technology, music, and behavior design. The hypothesis: routing biometric data through an experience people already love creates feedback loops that feel natural rather than clinical.

400M wearable users · data gap
[Animated stat counters: share wearing a device · share who listen to music daily · share acting on biometric data]
01
The data exists. The behavior doesn't.
Users who can't interpret a biometric dashboard don't ignore it: they distrust it. Confusion breeds disengagement. The interface was the problem, not the sensor.
02
Music is the habit. Attach to it.
82% of people already listen during exercise, recovery, or focus work. The behavior is deeply established. Extend an existing habit rather than build a new one.
03
Control is non-negotiable.
Systems responding automatically without user agency create anxiety rather than relief. Every adaptive decision needed a visible, accessible override present at all times.

"Design an experience so intuitive that the physiology becomes invisible: yet powerfully present."

Discovery

20 studies.
8 platforms.
27 people.

I began in the literature: 20+ peer-reviewed studies on biofeedback, music cognition, and adaptive audio. Then I mapped the competitive landscape across 8 platforms. What emerged was consistent: every system either required users to understand their data or made decisions invisibly, with zero transparency. Neither worked.

01
Music Cognition

Music matching heart rate tempo increases perceived exertion accuracy by 34% during exercise

02
Biofeedback

HRV-informed audio reduces cortisol levels 23% faster than silence during recovery periods

03
Wearable UX

Users abandon health dashboards within 60 days when interpretation requires clinical knowledge

04
Sleep Science

Binaural beats at delta frequency accelerate sleep onset by an average of 12 minutes

05
Behavioral Design

Habit attachment yields 3x the adoption rate of standalone habit-building apps

06
Neuroaesthetics

Harmonic complexity inversely correlates with focus depth: simpler progressions sustain attention longer

07
UX Psychology

Perceived control over adaptive systems eliminates the anxiety response triggered by invisible automation

08
Sports Science

Music tempo above 140 BPM triggers sympathetic nervous system activation regardless of activity level

User Archetypes

Three types of
wearable users.

Survey data from 27 participants revealed three distinct behavioral patterns. Every design decision — from information density to override visibility — was filtered through all three.

38%

The Optimizer

Data-obsessed. Wants granular control over BPM, HRV thresholds, and zone customization. Needs to see the system working in order to trust it.

granular control · data-forward · manual override · advanced settings
29%

The Escapist

Wants music to handle everything with zero decisions. Trusts the system completely. Just presses play. Strong aversion to dashboards or any visible data.

zero friction · auto-everything · minimal UI · trust-first
33%

The Balancer

Wants context but not complexity. Comfortable with data when it tells a clear story. Occasionally adjusts, mostly trusts defaults. The primary design target.

contextual data · smart defaults · accessible control · clear feedback
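To make "filtered through all three" concrete, here is a minimal sketch of the archetypes as typed profiles, so any proposed screen could be checked against all of them at once. The field names and encoding are illustrative, condensed from the survey tags above.

```ts
// Illustrative only: the three survey archetypes as typed profiles.
type Archetype = {
  name: "Optimizer" | "Escapist" | "Balancer";
  share: number;                                   // % of 27 participants
  dataVisibility: "forward" | "hidden" | "contextual";
  wantsManualControl: boolean;
};

const archetypes: Archetype[] = [
  { name: "Optimizer", share: 38, dataVisibility: "forward",    wantsManualControl: true  },
  { name: "Escapist",  share: 29, dataVisibility: "hidden",     wantsManualControl: false },
  { name: "Balancer",  share: 33, dataVisibility: "contextual", wantsManualControl: false },
];

// A design decision passes only if it works for every archetype.
const passesAll = (fits: (a: Archetype) => boolean): boolean =>
  archetypes.every(fits);
```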
Design Strategy

Four modes.
One principle.

Every screen flows from a single core principle: biometric transparency builds trust, but biometric invisibility builds experience. Each mode speaks the language of a different physiological state without asking the user to switch manually.
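Read as logic, the four triggers below amount to a small arbitration function. This is a hedged sketch, not the shipped implementation: every signal name is an assumption standing in for whatever the wearable's API exposes, and in line with insight 03 the user's override always wins.

```ts
// Illustrative mode arbitration. Signal names are assumptions; real
// values would come from the wearable's API.
type Mode = "strain" | "recovery" | "sleep" | "focus" | null;

interface BiometricState {
  hrZone: 1 | 2 | 3 | 4 | 5;      // current heart-rate zone
  hrvNormalizing: boolean;         // post-exercise HRV trending to baseline
  sleepOnsetDetected: boolean;     // reported by the wearable
  stressMarkersElevated: boolean;  // e.g. a rising cortisol proxy
}

// Triggers are checked in priority order; a user override always wins.
function selectMode(s: BiometricState, override: Mode = null): Mode {
  if (override) return override;             // control is non-negotiable
  if (s.sleepOnsetDetected) return "sleep";
  if (s.hrZone > 3) return "strain";         // heart rate above Zone 3
  if (s.hrvNormalizing) return "recovery";
  if (s.stressMarkersElevated) return "focus";
  return null;                               // no trigger: play as-is
}
```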

Trigger: Heart rate above Zone 3

Strain

High arousal, high activation. BPM and harmonic complexity scale dynamically with heart rate zones. The music becomes the rhythm your body is already keeping.

Audio BPM: 140–180
HR Zone: Zone 3–5
Complexity: High
Override: Always visible
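As a sketch of that scaling rule: the zone range can be mapped onto the tempo band. The linear interpolation is my assumption; the 140–180 BPM band and the Zone 3–5 range come from the spec above.

```ts
// Illustrative linear mapping from heart-rate zone to the Strain tempo
// band (140–180 BPM across Zones 3–5). The linearity is an assumption.
function strainTempo(hrZone: number): number {
  const z = Math.min(5, Math.max(3, hrZone));   // clamp to Zones 3–5
  return Math.round(140 + ((z - 3) / 2) * 40);  // Zone 3 → 140, Zone 5 → 180
}
```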
Trigger: Post-exercise HRV normalizing

Recovery

Parasympathetic activation mode. The audio guides you down from high intensity, slowing the tempo and reducing complexity as HRV normalizes. The music meets your body where it is.

Audio BPM: 60–100 ↓
HRV Target: Baseline +15%
Complexity: Decreasing
Duration: 20–40 min
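One way the ramp could be expressed, assuming a linear ease from the session's starting HRV toward the baseline +15% target from the spec; the function name and the easing curve are illustrative.

```ts
// Illustrative ramp: ease tempo from 100 down to 60 BPM as HRV climbs
// from its session-start value toward the target (baseline + 15%).
function recoveryTempo(hrv: number, startHrv: number, baselineHrv: number): number {
  const target = baselineHrv * 1.15;               // HRV target per spec
  const span = Math.max(1e-6, target - startHrv);  // avoid divide-by-zero
  const progress = Math.min(1, Math.max(0, (hrv - startHrv) / span));
  return Math.round(100 - progress * 40);          // 100 BPM → 60 BPM
}
```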
Trigger: Sleep onset detected via wearable

Sleep

Fully passive mode. Zero interaction required after activation. Audio dims and evolves with sleep stage progression, stepping back entirely before deep sleep.

Audio BPM: 40–60
Stage Aware: REM / Deep
Interaction: Zero
Fade Out: Auto at N3
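The stage-aware dimming can be sketched as a gain curve over standard sleep stages. The specific gain values here are assumptions; the hard stop at N3 follows the spec above.

```ts
// Illustrative stage-aware gain curve for Sleep mode. Stage labels
// follow standard sleep staging; the gain values are assumptions.
type SleepStage = "awake" | "N1" | "N2" | "N3" | "REM";

function sleepGain(stage: SleepStage): number {
  switch (stage) {
    case "awake": return 0.5;   // quiet 40–60 BPM bed of sound
    case "N1":    return 0.3;   // dim as the listener drifts off
    case "N2":    return 0.15;
    case "REM":   return 0.1;
    case "N3":    return 0.0;   // step back entirely before deep sleep
  }
}
```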
Trigger: Elevated stress markers detected

Focus

Binaural-influenced soundscape responds to stress indicators in real time, increasing alpha-band frequency alignment when cortisol markers rise. The audio adapts invisibly.

Frequency: Alpha 8–14 Hz
Stress Response: Real-time
Complexity: Minimal
Mode: Binaural
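The binaural technique itself is simple to prototype with the Web Audio API: two sine tones, panned hard left and right, whose frequencies differ by the desired beat. The 200 Hz carrier and 10 Hz default beat are my assumptions; the spec only fixes the alpha band (8–14 Hz).

```ts
// Prototype sketch: a binaural beat from two detuned sine oscillators.
// Carrier and default beat frequency are assumptions within the spec.
function startBinaural(ctx: AudioContext, beatHz = 10, carrierHz = 200): void {
  for (const [pan, offset] of [[-1, 0], [1, beatHz]] as const) {
    const osc = ctx.createOscillator();
    osc.frequency.value = carrierHz + offset;        // e.g. 200 Hz and 210 Hz
    const panner = new StereoPannerNode(ctx, { pan }); // hard left / hard right
    osc.connect(panner).connect(ctx.destination);
    osc.start();
  }
}
```

Responding to stress in real time would then amount to retuning one oscillator's frequency as the cortisol proxy moves, shifting the beat within the 8–14 Hz alpha band.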
Final Design

Screens that
disappear.

One job: stay out of the way. Every design decision was stress-tested against a single question: does this make the user think about the technology, or about how they feel? Biometrics are visible but secondary. Audio controls are primary and always accessible.

Strain · Activity Dashboard

Recovery · HRV Dashboard

Sleep · Passive Dashboard

Focus · Binaural Mode (screen readout: 12 Hz alpha band, in flow · cortisol 14.2 nmol/L)

Complete screen set across all four modes. Consistent spatial hierarchy throughout: biometrics secondary, audio controls always primary and accessible.

Outcomes

What we
validated.

Scenario-based usability testing confirmed the core hypothesis. When physiology is contextualized through music, users don't need to understand the data: they feel it working. Even the Optimizer archetype preferred the ambient biofeedback model once they experienced mode-specific audio in action.

27 survey participants across 3 behavioral archetypes
8 competitor platforms audited in the discovery phase
4 distinct experience modes designed and validated
20+ peer-reviewed studies informing the design strategy