Best Fit: hiring that feels like a conversation, not a form.
Going into Lyrathon, I expected to contribute mainly as a designer, but curiosity around the brief pulled me into a full-stack engineering role over the weekend. We built a prototype that combines LeetCode-style challenges with AI-assisted analysis of problem-solving patterns and reasoning depth.
Project Year
2025
Event
Lyrathon
Team
Doomscrollers
Stack
Next.js, TypeScript, Tailwind, AI prompting
Chapter 1
The Brief
Project Snapshot
As a student heading into the workforce, I found Lyra’s brief especially relatable. We framed the problem from an early-career perspective (uncertainty around readiness and role alignment), then had to reason from a recruiter’s viewpoint, where decisions are made with incomplete signals.
Success criteria
Surface how candidates think (reasoning + trade-offs), keep scoring deterministic, and ensure AI never becomes a cheating engine.
Chapter 2
What Was Broken
Hiring funnels over-index on outcomes: the final solution or score. But in real teams, the strongest signal is often the process — communication, trade-offs, and debugging.
Output-only scoring
The result is visible, but the reasoning isn’t.
— Team discussion
So we needed…
Capture decision-making, not just correctness.
Static experiences
Forms don’t adapt when candidates get stuck.
— Mentor feedback
So we needed…
Provide guided prompts that encourage explanation.
Opaque AI
AI scoring can feel arbitrary or untrustworthy.
— Judge / peers
So we needed…
Keep evaluation deterministic and transparent.
Our reframing
Treat evaluation like a conversation: code → run tests → explain choices → reflect. AI supports explanation — never the solution.
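To make the reframing concrete, here is a minimal sketch of the conversation as a forward-only sequence of stages. The stage names and helper are illustrative assumptions, not the prototype’s actual schema.

```ts
// Illustrative sketch: stage names are assumptions, not the shipped schema.
type Stage = "code" | "run-tests" | "explain" | "reflect";

// The conversation only moves forward: code → run tests → explain → reflect.
const ORDER: Stage[] = ["code", "run-tests", "explain", "reflect"];

// AI is only attached to the stages that ask for explanation, never to coding.
const AI_ASSISTED: Stage[] = ["explain", "reflect"];

function nextStage(current: Stage): Stage | null {
  const i = ORDER.indexOf(current);
  return i < ORDER.length - 1 ? ORDER[i + 1] : null;
}
```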
Chapter 3
My Role
I walked in expecting to contribute mostly as a designer, but I ended up making full-stack decisions about fairness, system boundaries, and what “good signals” actually look like.
Student viewpoint
As early-career candidates, we want clarity on readiness and role alignment — not just pass/fail.
Recruiter viewpoint
Recruiters need stronger signals earlier: reasoning depth, decision logic, and problem-solving patterns.
What I shipped
- Story-based UX flow (progressive disclosure)
- Guardrail copy + prompt patterns
- Transparent results view
- Micro-interactions and readable hierarchy
Chapter 4
Design → Build
We designed the evaluation flow as a story — not a dashboard — so candidates are never overwhelmed, and evaluators can still trace clear evidence.


Candidate flow
Choose a role + task → live code → run deterministic tests → explain trade-offs → reflect with AI prompts.
Evaluator flow
See test outputs + rubric highlights + explanation transcript — without AI overriding scoring.
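A rough sketch of how both flows can read from one shared record; the field and function names below are hypothetical, chosen to show the "same evidence trail, different needs" idea rather than the real data model.

```ts
// Hypothetical data model: one evidence trail, two read-only projections.
interface EvaluationSession {
  taskId: string;
  code: string;                                                        // candidate's submission
  testResults: { name: string; passed: boolean; output: string }[];   // deterministic
  rubricHighlights: { criterion: string; score: number }[];           // deterministic
  explanationTranscript: { role: "candidate" | "ai"; text: string }[];
}

// Candidate view: readiness and clarity, not just pass/fail.
function candidateView(s: EvaluationSession) {
  return {
    testsPassed: s.testResults.filter(t => t.passed).length,
    totalTests: s.testResults.length,
    reflectionPrompts: s.explanationTranscript.filter(m => m.role === "ai"),
  };
}

// Evaluator view: the same evidence, surfaced for decisions.
// The transcript sits alongside the scores; it never changes them.
function evaluatorView(s: EvaluationSession) {
  return {
    testResults: s.testResults,
    rubricHighlights: s.rubricHighlights,
    transcript: s.explanationTranscript,
  };
}
```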
Chapter 5
AI Guardrails
AI’s job is to surface possibilities and prompt explanation, not to produce code. We separated deterministic evaluation (tests + rubric) from supportive conversation (reflection + clarity); a sketch of this guardrail pattern follows the two lists below.
AI helps by
- Asking clarifying questions
- Encouraging structured explanations
- Helping articulate trade-offs
- Prompting reflection after test results
AI must NOT
- Provide full solutions
- Write candidate code
- Override deterministic scoring
- Hide how evaluation works
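One way to encode these rules is a fixed system prompt plus a defensive post-filter on the model’s replies; that is the sketch referenced above. The prompt wording and helper function are representative assumptions, not the exact copy or code from the prototype.

```ts
// Representative guardrail prompt; the real copy went through several drafts.
const GUARDRAIL_SYSTEM_PROMPT = `
You are a reflection coach inside a coding evaluation.
- Ask clarifying questions and prompt the candidate to explain trade-offs.
- Never write, complete, or correct the candidate's code.
- Never reveal or hint at a full solution.
- Never comment on scoring; the tests and rubric decide that.
`;

// Defensive post-filter: drop fenced code blocks from the model's reply,
// so even a jailbroken response cannot hand over a solution.
function stripCodeBlocks(reply: string): string {
  return reply.replace(/`{3}[\s\S]*?`{3}/g, "[code removed: the coach explains, it does not solve]");
}
```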
Deterministic pipeline stays in control
Tests and scoring should behave the same way every time. AI sits beside the pipeline as a coach, not inside it; a minimal sketch of the split follows the three stages below.
Code Execution
Deterministic: run tests, return outputs.
Scoring
Deterministic: results + rubric.
Reflection
Non-deterministic: prompts & clarification.
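Here is that minimal sketch of the split: two pure functions and one that talks to a model. The types and function names are assumptions for illustration, not the prototype’s actual API.

```ts
interface TestCase { name: string; input: string; expected: string; }
interface TestResult { name: string; passed: boolean; actual: string; }

// Deterministic: run the candidate's code against fixed cases, return outputs.
function runTests(run: (input: string) => string, cases: TestCase[]): TestResult[] {
  return cases.map(c => {
    const actual = run(c.input);
    return { name: c.name, passed: actual === c.expected, actual };
  });
}

// Deterministic: map results onto a fixed rubric weight. No model involved,
// so the same submission always receives the same score.
function score(results: TestResult[], weightPerTest: number): number {
  return results.filter(r => r.passed).length * weightPerTest;
}

// Non-deterministic, advisory only: the AI sees the results and asks a
// reflection question. Its output never feeds back into score().
async function reflect(
  results: TestResult[],
  askModel: (prompt: string) => Promise<string>
): Promise<string> {
  const failed = results.filter(r => !r.passed).map(r => r.name).join(", ") || "none";
  return askModel(`Failed tests: ${failed}. Ask the candidate one question about their trade-offs.`);
}
```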
Chapter 6
The Reveal
From “fill this out” to “show me how you think.” The flow is paced, readable, and designed to surface reasoning.


Candidate view (left) and evaluator view (right) — same evidence trail, different needs.
Chapter 7
Impact
The prototype reframes evaluation as an experience: deterministic fairness with a human-readable reasoning layer. It’s restraint, not novelty — AI supports reflection, while tests stay consistent.
Reflection
What surprised me
I expected to be mainly a designer — but I ended up reasoning like an engineer. The backend pipeline wasn’t just implementation… it shaped what “fair evaluation” could even mean.
Design–Dev mindset
This strengthened how I design with constraints: progressive disclosure, readable evidence trails, and guardrails that still feel supportive.
If I had more time…
I’d expand task libraries, improve evaluator filtering, and polish accessibility for keyboard-only + screen readers.
Key takeaway
Recruitment systems should surface understanding, not just outcomes. AI isn’t the product — transparency and restraint are.

Chapter 8
Comments
External feedback — proof that clarity and craft landed with judges.
LinkedIn comment
— Anh Dao (Co-Founder & COO @ Lyra)
Highlight from Lyrathon 2025
“Best UI” feedback in preliminary judging