TD Bank
Balance Transfer
Designing decision confidence in a revenue-critical feature — leading a cross-functional team of 8 to transform a complex financial flow from misunderstood to measurably clear.
A feature that worked.
An experience that didn't.
Balance Transfer is commercially significant — it lets TD Bank customers move external debt to a TD card under promotional APR terms. The business case was solid. The user experience was not.
The flow had grown organically over time, accumulating complexity with each iteration. Requirements lived in multiple documents. Design decisions had been made feature-by-feature without a governing framework. Research findings existed in reports but had never been operationalized into the product itself.
The surface-level problem: 75% of users initially misunderstood "Offer APR End Date." Behavioral hesitation spiked at the primary CTA. Only 40% of users correctly interpreted the "Canceled" status. But these weren't isolated UX bugs — they were symptoms of a deeper structural problem. Design intent wasn't surviving implementation.
"Design was not only responsible for the interface. It was responsible for ensuring that clarity survived every handoff."
Language without context
Promotional APR terminology — "Offer APR End Date," "Processing," "Canceled" — was used without the contextual explanation that made it meaningful to a customer who had never done a balance transfer before.
Hesitation at the moment of commitment
The primary CTA — the moment users formally initiated a transfer — showed measurable behavioral hesitation. Mouse hover patterns and session recordings revealed users pausing, re-reading, and sometimes abandoning. The design wasn't giving them permission to proceed.
Status confusion creating support load
"Canceled" triggered anxiety because users couldn't distinguish between something they had done and something the system had done to them. The same word meant two different things — and the UI didn't differentiate them.
Requirements scattered across documents
Without a single source of truth, engineering and QE were making interpretive decisions that should have been design decisions. Mid-sprint clarification loops were a symptom of pre-sprint ambiguity — and that ambiguity was a design problem.
Alignment
is the design work.
As Design Manager for Credit Card Experiences, I owned the initiative from definition to release. This was not a typical individual contributor design role — it was orchestration.
My team included a UX Designer, Visual Designer, Content Designer, and UX Researcher. I also partnered with a Product Owner, Engineering Lead, and QE Lead. Eight people, four disciplines, one flow. My job was to ensure that what we learned in research became what shipped in production — without losing fidelity at any handoff.
The most important thing I contributed wasn't a specific screen design. It was a framework: a two-tier requirements structure that gave every team member a clear, shared source of truth before a single line of code was written.
Requirements architecture
Built a two-tier requirements framework — high-level decision contracts for all teams, detailed screen-level specs for engineering and QE — eliminating the interpretive gaps that caused mid-sprint disruption.
Research operationalization
Ensured every usability finding was translated into a specific design decision with a specific rationale — not just a summary in a report. If research identified a problem, the requirements had to specify the solution.
Cross-functional facilitation
Ran structured review sessions across Product, Engineering, Content, and QE at key milestones. Not status updates — alignment sessions with open decisions resolved and next steps owned.
Disclosure language oversight
Worked directly with Content Design and Legal to ensure promotional APR language was both compliant and comprehensible. Financial clarity and legal precision aren't naturally the same thing — closing that gap was a design problem.
Two tiers.
Zero ambiguity.
The fundamental insight behind the requirements framework: not all requirements need the same level of detail at the same time. But all requirements need to exist before sprint commitment.
Tier 1 — High-Level Requirements — established the shared contracts between all teams: the overall decision journey structure, offer comparison logic, calculator behavior standards, disclosure standards, and the definitions of each transfer status state. This was the document that kept the "what does Canceled actually mean?" conversation from ever reaching engineering.
Tier 2 — Detailed Requirements — specified everything that Tier 1 implied but didn't spell out: screen-level interaction rules, exact data display hierarchy, edge case handling for every state combination, and QE validation criteria that mapped directly to design intent. Engineers built from these. QE validated against them. No interpretation required.
The framework didn't slow the process down. It front-loaded the decisions that would have been made reactively in the middle of a sprint — at the worst possible time, under the most pressure, with the least context.
Reframing APR end date language
The phrase "Offer APR End Date" tested at 25% comprehension. The redesign added contextual supporting copy that explained what the date meant in plain language — specifically, what would happen after it. Comprehension: 95%.
Calculator hierarchy inversion
The original calculator led with monthly payment amount. Users actually cared about total savings. Inverting the hierarchy — making total savings the headline — immediately shifted the calculator from a math problem to a decision-support tool.
Pre-submission disclosure sequence
Before the primary CTA, we redesigned the disclosure sequence to surface the three questions users were actually asking: How much will I save? When does the promotional rate end? What happens if I miss a payment? 100% of users understood next steps before submitting.
"Canceled" microcopy precision
We wrote two separate microcopy variants for Canceled — one for user-initiated cancellation, one for system-initiated. The distinction that seemed small in a spreadsheet was the difference between 40% and 93% correct interpretation in testing.
Cross-functional review cadence
Established bi-weekly structured reviews with Product, Engineering, and QE — not for status, but for open decisions. This cadence reduced mid-sprint clarification loops and maintained design intent through implementation.
Comprehension is
measurable.
The research methodology was built around a single principle: financial comprehension — not task completion — is the right metric for a feature where misunderstanding has real financial consequences.
Moderated usability sessions were structured across the full balance transfer lifecycle: discoverability, offer comprehension, calculator interaction, the commitment moment, and post-transfer status monitoring. Paraphrase accuracy was the primary success metric — not "did they complete the task?" but "did they understand what they were agreeing to?"
Sessions ran iteratively. Each content or hierarchy change was re-tested before being finalized. The research wasn't a single input at the start — it was a continuous feedback mechanism that ran in parallel with design iterations.
Discoverability
Could users find the Balance Transfer feature without guidance? 83% task success, with an average discovery time of 18 seconds. This established the baseline for the navigation architecture.
Offer comprehension
Did users understand the promotional APR terms? Initial: 25% paraphrase accuracy. After language redesign: 95%. This single change was the highest-impact intervention in the entire project.
Calculator evaluation
Did users use the calculator to make a decision, or just to see a number? Post-hierarchy inversion, 80% of users could articulate their total savings before submitting — up from 34%.
Commitment moment
Did users hesitate at the CTA? Baseline hesitation was measured via interaction patterns — hover dwell, re-reading, abandonment. Post-redesign, hesitation dropped 32%. The disclosure sequence redesign was the primary driver.
Status interpretation
Could users correctly interpret Processing, Posted, and Canceled states? Initial: 40% correct on Canceled. Post-microcopy precision: 93%. The distinction between user-initiated and system-initiated cancellation was the entire fix.
Clarity is
measurable.
Every metric below maps directly to a specific design decision. Not general improvement — each number traces back to a specific intervention, a specific hypothesis, and a specific test.
The business impact was downstream and real: reduced launch risk in a revenue-critical flow, lower compliance exposure through validated disclosure language, and improved adoption confidence at the commitment moment. The operational impact was immediate: fewer mid-sprint clarification loops, stronger QE alignment, and a requirements framework that the team could build on for future features.
What I
learned.
This project reinforced something I think about in every design leadership role: the design manager's job isn't to make design decisions for the team. It's to build the structures that allow the right decisions to survive from research all the way through to production.
The two-tier requirements framework wasn't a process artifact. It was a design intervention — one that protected the flow's clarity through eight people, four disciplines, and several sprint cycles. Alignment is not soft work. It's the structural design work that makes everything else hold together.
The comprehension metrics are the proof. When the language was right, the hierarchy was right, and the disclosure was right — users didn't hesitate. They decided. That's the outcome a design manager is responsible for: not a beautiful prototype, but a feature that does what it was designed to do, in the hands of real people, under real conditions.