I tested Acemode in 5 real technical interviews over the past month. Real interviews - at companies I won't name - across different formats, levels, and platforms.

This is the unfiltered breakdown of what worked, what didn't, and what I'd do differently. No marketing spin. The goal is to give you a realistic preview of what to expect if you're considering this for your own interviews.

📋 Context up front

Final results: 3 offers, 1 rejection, 1 still pending. I'm a mid-level engineer with 4 years of experience. Salary range targeted: $130-180K base. Remote roles. All US companies, all in 2026.

Interview 1: Series B startup, 60K-employee parent company

Format: 45-minute coding round on CoderPad, plus 30-minute system design.

Question type: Medium-difficulty algorithm - sliding window variant. System design - design a notification service.

How I used Acemode:

Coding round: I read the problem, restated it to the interviewer, and asked clarifying questions about input size and edge cases. While I was talking, I scanned with Alt+S. The AI gave me a sliding window approach - which I'd already started thinking about. I used the AI's confirmation as validation, then wrote my own implementation with my own variable names.
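
To make the "validate, then implement in my own style" flow concrete: the real question stays anonymous, but on a generic sliding-window problem (longest substring without repeating characters - not what I was actually asked) the rewritten-in-my-own-words version looks roughly like this.

```python
# Generic sliding-window sketch (NOT the actual interview question):
# longest substring without repeating characters.
def longest_unique_substring(s: str) -> int:
    last_seen = {}   # char -> most recent index
    left = 0         # left edge of the current window
    best = 0
    for right, ch in enumerate(s):
        # If ch was seen inside the current window, shrink from the left.
        if ch in last_seen and last_seen[ch] >= left:
            left = last_seen[ch] + 1
        last_seen[ch] = right
        best = max(best, right - left + 1)
    return best

assert longest_unique_substring("abcabcbb") == 3  # "abc"
```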

System design: this is where it earned its money. The AI generated a clean structure: clarifying questions → requirements → capacity estimation → API design → high-level architecture → bottleneck deep dive → trade-offs. I used the structure as a beat sheet and improvised the actual content based on my real experience with notification systems.
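
For the capacity-estimation beat specifically, the back-of-envelope math is the easy part to improvise. The numbers below are invented for illustration - not the figures from my interview - but this is the kind of arithmetic the structure prompts you to walk through out loud.

```python
# Back-of-envelope capacity estimate for a hypothetical notification service.
# All inputs are made-up illustration numbers, not the interview's actual figures.
daily_active_users = 10_000_000
notifications_per_user_per_day = 5
seconds_per_day = 86_400
payload_bytes = 1_000  # ~1 KB per notification

writes_per_sec = daily_active_users * notifications_per_user_per_day / seconds_per_day
peak_writes_per_sec = writes_per_sec * 3  # assume a 3x peak-to-average ratio
storage_per_day_gb = daily_active_users * notifications_per_user_per_day * payload_bytes / 1e9

print(f"avg writes/sec:  {writes_per_sec:.0f}")       # ~579
print(f"peak writes/sec: {peak_writes_per_sec:.0f}")  # ~1,736
print(f"storage/day:     {storage_per_day_gb:.0f} GB")# ~50 GB
```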

Result: Offer received. Final-round feedback: "particularly impressed by the structured approach to system design."

What worked: Using AI to validate my own thinking rather than as a primary problem-solver. The interviewer never noticed anything unusual because I was talking and thinking the entire time.

What was sketchy: The AI's first algorithm suggestion was actually slightly more elegant than what I'd have written. I had to deliberately downgrade my variable names to make it sound like me.

Interview 2: FAANG-adjacent, 5,000-employee company

Format: Two 45-minute coding rounds back-to-back.

Question type: Hard graph problem in round 1. DP problem in round 2.

How I used Acemode:

Round 1 went poorly. The graph problem was beyond what I'd seen before. The AI gave me a correct topological sort approach, but the problem had an unusual constraint I hadn't communicated to it. So I implemented its suggestion, which was wrong for the constraint. Interviewer caught the bug. I had to redo it from scratch with hints. Burned 30 minutes.
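
For context, the vanilla topological sort (Kahn's algorithm) the AI handed me looks roughly like the sketch below. The actual problem layered an extra constraint on top of this that I never told the tool about - which is exactly where it broke.

```python
from collections import deque

# Plain Kahn's-algorithm topological sort - the generic version, not the interview
# problem, which added a constraint this code does not handle.
def topo_sort(num_nodes: int, edges: list[tuple[int, int]]) -> list[int]:
    adj = [[] for _ in range(num_nodes)]
    indegree = [0] * num_nodes
    for u, v in edges:          # edge u -> v means u must come before v
        adj[u].append(v)
        indegree[v] += 1

    queue = deque(i for i in range(num_nodes) if indegree[i] == 0)
    order = []
    while queue:
        node = queue.popleft()
        order.append(node)
        for nxt in adj[node]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                queue.append(nxt)

    if len(order) != num_nodes:
        raise ValueError("graph has a cycle; no valid ordering")
    return order

assert topo_sort(4, [(0, 1), (0, 2), (1, 3), (2, 3)]) == [0, 1, 2, 3]
```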

Round 2 - DP problem. I'd practiced DP problems for two weeks before this interview. I solved this one mostly on my own and just used the AI to double-check edge cases. Smooth round.
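
The DP problem itself stays anonymous too; the edge cases I had the AI double-check were the boring kind shown below, on a stand-in problem (house robber - not the real question).

```python
# Stand-in DP problem (house robber), NOT the interview question - just the kind
# of edge cases (empty input, single element) I asked the AI to sanity-check.
def rob(nums: list[int]) -> int:
    if not nums:          # edge case: no houses
        return 0
    if len(nums) == 1:    # edge case: a single house
        return nums[0]
    prev2, prev1 = 0, 0   # best total up to i-2 and i-1
    for value in nums:
        prev2, prev1 = prev1, max(prev1, prev2 + value)
    return prev1

assert rob([]) == 0
assert rob([5]) == 5
assert rob([2, 7, 9, 3, 1]) == 12  # 2 + 9 + 1
```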

Result: Rejected. The recruiter feedback specifically mentioned "needed more hints than expected on round 1."

What worked: Round 2, where I had real preparation backing up the tool.

What didn't: Trusting the AI on round 1 without fully understanding the problem first. The lesson: the tool amplifies your judgment, including bad judgment. If you don't understand the problem, neither will the AI.

What I'd do differently: Read the problem twice. Write down the constraints visibly. Verify my mental model before scanning with the AI. The AI can't read constraints you didn't tell it about.

Interview 3: Mid-stage startup, 200 employees

Format: 60-minute take-home assignment, plus 60-minute discussion of the take-home.

Question type: Build a small service that handles X with Y constraints.

How I used Acemode:

Take-home: I used AI heavily. ChatGPT for the basic structure, Acemode as a sanity check on architecture choices. The take-home explicitly said "use any tools you'd normally use at work." So this wasn't a gray area at all - it was sanctioned.

Discussion: this is where most candidates fail take-home rounds. They use AI to write code they don't understand, then can't defend it. I made sure I could explain every line, including why I made specific architectural choices. I also worked through several whiteboard exercises during the discussion that went beyond the submitted code.

Result: Offer received. Probably the strongest of the three I got.

What worked: Treating the AI like a collaborator on the take-home, then making sure I owned the resulting code in the discussion. Many candidates lose offers in the take-home discussion specifically because they can't defend AI-generated code.

Lesson: Take-home rounds are a great use case for AI - usually explicitly allowed, and they let you ship higher-quality work. But you must understand what you submitted.

Interview 4: Established tech company, 10,000+ employees

Format: Karat-style screen with a third-party interviewer, then onsite loop.

Question type: Two medium algorithm problems in 45 minutes (Karat). Then 4 rounds onsite covering coding, system design, behavioral, hiring manager.

How I used Acemode:

Karat round: smoothest interview I've ever done. The Karat interviewer was in a separate Zoom window watching me code on CoderPad. Acemode was completely invisible. I scanned both problems quickly, validated my approaches, and finished both with 10 minutes to spare. Used the extra time to walk through alternative approaches.

Onsite - coding rounds: similar to Karat, smooth. The pattern of "validate-then-implement-in-my-own-style" was muscle memory by this point.

Onsite - system design: structured AI answer + my own examples = strong round.

Onsite - behavioral: this is where I learned an important lesson. I tried to use the AI's voice input feature for behavioral questions. The voice transcription was good but the answers came out generic. I felt myself reading rather than telling stories. Interviewer's energy dropped halfway through.

Onsite - hiring manager: I didn't use AI at all. Just had a real conversation about the role, my background, and what I was looking for. Felt natural, went well.

Result: Offer received, but lower than expected. The recruiter explicitly mentioned that the "behavioral round was less strong than the technical rounds." I think the AI hurt more than it helped there.

Lesson: Use AI for technical depth, NOT for behavioral storytelling. Behavioral answers need to come from your real lived experience or they fall flat. The AI structure helps; AI delivery does not.

Interview 5: Smaller startup, 80 employees

Format: Pair programming on a real bug in their codebase.

Question type: Debug an actual issue in their open-source repo with the founding engineer.

How I used Acemode:

Honestly, very minimally. The format was inherently AI-resistant - I was working in their actual codebase, on their actual machine (via their cloud development environment), with the engineer pair programming live. There was no time to scan and read AI output without making the conversation weird.

I used Acemode exactly twice during the 90-minute round: once to ask "what's the typical pattern for X in Node.js applications" and once to verify my understanding of an unfamiliar library. Both were quick consultations, both helped marginally.

The rest was just real engineering: reading code, asking the engineer questions, hypothesizing, testing.

Result: Pending. They told me they're deciding between me and one other candidate. Recruiter said the engineer "really enjoyed the session."

Lesson: The pair-programming-on-real-code format is the future of technical interviews. AI tools provide diminishing returns when the interview is collaborative and contextual rather than algorithmic and synthetic. If more companies move toward this format, AI tools will matter less.

The patterns I noticed across all 5

What worked consistently

  1. Using AI to validate my own thinking - not to replace it. When my instinct was right, the AI confirmed quickly. When I was on the wrong track, the AI pointed me elsewhere. Either way, I was the one driving.
  2. Talking constantly - narrating my thought process gave me cover during the 5-10 second window where the AI was processing.
  3. System design rounds - the structured AI output is genuinely better than what most candidates produce unaided.
  4. Take-home rounds - explicitly allowed, plus you can iterate on AI suggestions until you fully understand.
  5. Typing my own implementation - never copying, always rewriting in my own variable names and style.

What didn't work

  1. Behavioral rounds - generic AI answers come across as inauthentic. Better to prepare your own stories.
  2. Trusting AI on problems I didn't understand - interview 2 disaster.
  3. Pair programming sessions - too collaborative for AI assistance to add real value.
  4. Letting AI do the thinking - every time I outsourced thinking, it backfired.

What I would tell my past self

Before this experiment I'd have said the AI does most of the work and I just need to type the answers. That mental model is wrong.

The actual mental model that works:

You are the engineer. The AI is a senior peer in the room. You drive. They occasionally weigh in with helpful suggestions.

This works because you stay engaged the whole time, you understand everything you type, and you can defend every decision afterward.

The honest financial breakdown

Cost of using Acemode: $29 one-time.

Without Acemode, my realistic counterfactual is an offer at a less senior level.

Difference in salary outcomes: comparing my actual offer (which I accepted) against that counterfactual - about $25-40K/year for the next 2-3 years.

Return on $29: somewhere between 800x and 4,000x, depending on how you count. Even being conservative and assuming 50% of the difference is from my preparation rather than the tool, this is comically high ROI.
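
For transparency, here's roughly how those bounds fall out. The inputs are my own estimates, and the attribution split is a guess.

```python
# Rough ROI bounds - the inputs are my own estimates, not audited numbers.
cost = 29                                             # one-time Acemode cost
annual_delta_low, annual_delta_high = 25_000, 40_000  # salary difference per year
years_low, years_high = 1, 3                          # counting 1 year vs. 3 years

roi_low = annual_delta_low * years_low / cost     # ~862x  -> "about 800x"
roi_high = annual_delta_high * years_high / cost  # ~4,138x -> "about 4,000x"
roi_conservative = roi_low * 0.5                  # credit only half to the tool
print(f"{roi_low:.0f}x to {roi_high:.0f}x (conservative: {roi_conservative:.0f}x)")
```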

What I'd recommend to someone starting tomorrow

  1. Prepare normally. The tool is multiplicative. Multiply zero, get zero.
  2. Practice with the tool for at least 5 mock interviews before using it in a real one. Muscle memory matters.
  3. Use it for technical rounds, not behavioral. The behavioral story has to be yours.
  4. Always type your own implementation. Pasting verbatim is the easiest way to get caught.
  5. Talk constantly. Silence is the giveaway.
  6. If you don't understand the AI's answer, don't use it. Pick a simpler answer you understand.
  7. Take-home rounds are the highest-ROI use case. Usually allowed, you can iterate, you can fully understand.

Final note on what this proves and doesn't

One person's 5 interviews isn't statistically meaningful. I'm a sample size of 1. Your mileage will vary based on your role, level, target companies, baseline preparation, and luck.

What I can say confidently: the tool helped most in coding and system design rounds, added little in pair programming, hurt in the behavioral round, and none of it mattered without real preparation underneath.

If you're on the fence: the 3-session free trial is enough to test it on a low-stakes interview. If you don't see the value, you've lost nothing. If you do, $29 is the easiest spend in your job search.

Good luck out there.