A digital question bank built for ophthalmology residents: filtered practice sessions, spaced repetition, timed exams, and per-topic progress tracking across 8 subspecialties. Designed and developed solo, entirely with Claude Code.
Live on Railway.

Ophthalmology residents faced specialty exams covering years of material across 8 topics. Their resources: scattered PDFs, old printouts, and handwritten notes. There was no structured way to practice, no way to know where they stood, and no way to target weak areas before an exam.
Past exam questions existed only as loose files and paper printouts from 2018 onward. There was no digitized library, no search, no organization by topic or year. Finding a relevant question meant manually sifting through folders.
Residents had no way to run timed practice sessions, filter by subspecialty, or track which question types they struggled with. Preparation was entirely passive: reading notes, hoping coverage was enough.
Exams draw from multiple question formats across all 8 subspecialties simultaneously. Without a structured bank, there was no way to simulate an actual exam or identify which areas needed the most work before exam day.
A React frontend over a FastAPI backend, deployed on Railway. The interface uses Cormorant Garamond for headings (referencing the academic register of medicine) and Onest for body text. Each of the 8 subspecialties has its own color identity, carried through every chip, bar, and tag in the app.
Practice mode runs through the question bank the way an exam does: one question at a time, answer visible only after submission. Residents choose how many questions, which topics, and whether to enable a countdown timer. The session ends with a per-question breakdown showing what they got right, what they got wrong, and why.
Spaced repetition resurfaces incorrectly answered questions more often in future sessions, gradually shifting focus toward weak areas without requiring any manual curation.
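A minimal sketch of how this kind of weighting could work (the function names, weights, and history shape here are illustrative assumptions, not the app's actual implementation): a question's chance of being drawn grows with its recent miss rate.

```python
import random


def question_weight(attempts: list[bool], base: float = 1.0) -> float:
    """Weight a question by its recent miss rate.

    `attempts` is one question's answer history (True = correct).
    Unseen questions keep the base weight; frequently missed ones
    are weighted up so they resurface sooner. Illustrative only.
    """
    if not attempts:
        return base
    recent = attempts[-5:]  # only the last few attempts matter
    miss_rate = recent.count(False) / len(recent)
    return base + 2.0 * miss_rate  # an always-missed question is 3x as likely


def pick_session(history: dict[int, list[bool]], question_ids: list[int], n: int) -> list[int]:
    """Sample a session of n questions, biased toward weak areas.

    Samples with replacement for brevity; a real session builder
    would also deduplicate.
    """
    weights = [question_weight(history.get(q, [])) for q in question_ids]
    return random.choices(question_ids, weights=weights, k=n)
```

The effect matches the description above: no manual curation, just a bias that follows the resident's own error history.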
Browse the full digitized bank from 2018 onward. Multi-select filters let residents narrow by topic, year, question type (opción múltiple, falso/verdadero, completar, asociación: multiple choice, true/false, fill-in-the-blank, matching), or category. Any combination is valid; the results update live.
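The multi-select behavior above can be sketched as a simple predicate over tagged questions (the field names and record shape are assumptions for illustration, not the app's schema): an empty selection on any dimension means "no restriction".

```python
from dataclasses import dataclass


@dataclass
class Question:
    """Hypothetical shape of a tagged bank entry."""
    id: int
    topic: str      # one of the 8 subspecialties
    year: int
    qtype: str      # e.g. "opcion multiple", "falso/verdadero"
    category: str


def filter_bank(bank, topics=None, years=None, qtypes=None, categories=None):
    """Apply multi-select filters; a None or empty selection passes everything."""
    def keep(q):
        return ((not topics or q.topic in topics)
                and (not years or q.year in years)
                and (not qtypes or q.qtype in qtypes)
                and (not categories or q.category in categories))
    return [q for q in bank if keep(q)]
```

Because every dimension defaults to "no restriction", any combination of selections is valid, which is what lets the results update live as chips are toggled.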
A live dashboard tracks overall accuracy, total questions answered, session count, and a streak counter. Per-topic breakdowns show which subspecialties are strongest and which need more sessions. No manual logging required.
A protected admin panel lets instructors upload and categorize new questions directly in the app. Questions are tagged by topic, year, type, and category. The bank grows without touching the codebase: instructors manage content, residents see it immediately.
The progress dashboard gives residents a single answer to the most important pre-exam question: which topics actually need work? The accuracy ring shows global performance; the per-topic bars make the gaps visible immediately.
Session history lets residents see how their accuracy has shifted over time on a specific topic, whether repeating a topic moved the needle, and which sessions were strongest. The streak counter adds a lightweight accountability layer without gamification friction.
All progress data is stored server-side through the FastAPI backend. Nothing is lost between sessions, and the algorithm uses session history to weight spaced-repetition scheduling for future practice.
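As a sketch of the per-topic rollup the dashboard needs (the stored record shape here is a hypothetical, not the actual backend model), accuracy can be aggregated server-side from raw answer rows:

```python
from collections import defaultdict


def per_topic_accuracy(answers):
    """Aggregate (topic, was_correct) answer rows into per-topic accuracy.

    `answers` is an iterable of (topic, bool) tuples as session results
    might be stored server-side; returns {topic: accuracy in [0, 1]}.
    """
    totals = defaultdict(int)
    correct = defaultdict(int)
    for topic, was_correct in answers:
        totals[topic] += 1
        if was_correct:
            correct[topic] += 1
    return {t: correct[t] / totals[t] for t in totals}
```

Keeping the raw rows rather than pre-computed totals is what allows the same history to drive both the dashboard and the spaced-repetition weighting.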
The platform replaced scattered PDFs with a searchable, filterable question bank that mirrors the format and distribution of real specialty exams. Residents can now simulate exam conditions, track per-topic accuracy, and target weak areas before exam day.
"Having the entire bank filterable by topic and year changes how you prepare. You can actually target the areas you know are weak instead of hoping you covered everything."
Resident physician, Specialty Program