My foolproof interview questions after AI cheating is everywhere
Some are more practical than others
In a world where candidates are using invisible software to cheat through job interviews, we've officially crossed from the era of résumé padding into full-blown espionage.
I can't go to a meetup anymore without this happening: I mention I work on an interviewing product, someone immediately then asks, "Have you heard of that tool Interview Cod…"
I hate to give them more press, but if you don’t know, it's an AI tool that whispers answers during technical interviews.
And we've clearly gone too far. Even in their own promotional video, you can see how faking everything creates an uncomfortable experience for everyone involved.
Failing technical interviews is painful and people want shortcuts. I get it—I've done LeetCode questions too. They're frustrating and often feel disconnected from real work. Much like how reciting ChatGPT in a monotone voice feels disconnected from genuine competence.
Here's the core question: Are there really masses of brilliant engineers who would be outstanding employees but simply can't pass coding interviews? Is that truly our biggest hiring problem? Or are we just creating a new generation of tech workers who believe "fake it till you make it" is a career strategy rather than a recipe for disaster?
We are witnessing the effects of a job market where everyone wants a job but few can get one. And instead this places desperation into candidates to do anything possible to get through the door.
The consequences have been predictable: unqualified hires who get fired within weeks, security risks from bad actors infiltrating organizations, and humiliating moments when candidates get caught cheating mid-onboarding.
So as interview fraud becomes more sophisticated, here are my absolutely foolproof, guaranteed-to-work tips for interviewers to avoid making catastrophic hiring mistakes:
Attention to Detail
The first line of defense nowadays is to trick the people using AI assistants in nefarious ways. You might have seen the classic small text at the end of a job description asking the candidate to “write I read the job description
to prove that you are not using AI”. There’s also the invisible ink trick with text written in white font so that no one notices it when they're copying pasting directly into ChatGPT.
This has been shown to work quite effectively. The Pragmatic Engineer featured a hiring manager who added a honeypot into their take-home that created an endpoint returning “uh-oh” when called. And uh-oh is right—if candidates can't even review ChatGPT's output once, they're sloppy. Next thing you know, they'll be leaving dirty dishes all over the office kitchen or writing code comments like "I don't know why this works but it does" in production.
Some argue the job search is exhausting—candidates might face ten take-homes weekly. But reading over the one-shotted output from ChatGPT is still bare minimum effort. I've never seen a great candidate skip this step, much like how I've never seen a great doctor skip washing their hands after sticking a finger up your butthole.
The Interview Polygraph
Remember that polygraph scene from Ocean's 13? One of the heist members has to pass a polygraph test, so they place a needle in his toe that he's supposed to press on to give him excruciating pain when telling the truth, so that it matches his same physiological indicators when he's lying.

Turns out that’s not a real way to trick a polygraph, but the same method can be applied to any interview to detecting strange behavior. Start with a coding question, then at the end of the interview, suddenly ask them what they had for breakfast that morning. Watch as their eyes dart frantically to the corner of their screen where the breakfast menu is definitely not being displayed. "Uh... toast? No wait, oatmeal! I mean eggs! Definitely eggs. With... toast?" Congratulations, you've caught someone who can't even remember what they ate without AI assistance.
The best candidates that you interview, potentially even using AI, would make this process seamless. In which they've merged with it, almost like a mental API to AI. These people can take a problem, instantly know what to ask AI, and seamlessly blend the answer with their own thinking. They're already living in 2030 while your interview process is stuck in 2022. These candidates are worth hiring immediately.
But for everyone else - you should probably screen them out.
What gets you up in the morning?
Some people believe recruiters will be completely automated—the texting, nagging, rejections, even phone screens. And sure, the worst recruiters will be replaced immediately. But they weren't doing their jobs anyway. The best recruiters are your first human filter, gauging capabilities and motivations so you don't hire a senior Google engineer expecting nap pods and "20% time" at your scrappy startup.
I would argue that one of the simplest yet most effective tests is to compare what candidates say motivates them at different stages of the process. Ask them in the application form or during the recruiter screen: 'What motivates you to get up in the morning?' Record their answer. Then, during the interview, ask the exact same question again without warning.
Watch their face carefully. Are they searching for the answer they previously wrote down? Are they frantically trying to remember what they said before? Or do they light up with genuine enthusiasm, giving you basically the same answer but with more energy and personal details because it's actually true?
If their spontaneous answer matches their prepared one in spirit but with more authentic emotion, you've found someone honest. If they stammer through an obviously different response while their eyes dart around like they're playing 'Find the Previous Answer' on their mental desktop, then you’ve gotten your answer.
But if you find them passing this motivational interview exercise but failing another, now you're in Better Call Saul-level territory. At this point, they're so deep into the grift that they’re not just candidates, they're professional fraudsters who decided tech pays better than credit card scams. You were warned.
The Timer Test
I had this post on LinkedIn go semi-viral because I contested that LeetCode interviews are just Big Tech's clever way of identifying which candidates have been properly conditioned through grinding algorithms to actually follow orders. Soldiers who, once hired, won't complain when their Stanford CS degree is used exclusively to move buttons 0.5 pixels at a time on a billion-dollar company's interface. It's not really about problem-solving; it's about finding people who can efficiently perform mind-numbing tasks while maintaining the illusion that they're doing something meaningful.
And so maybe if you work at this kind of company or if you're hiring for a position that requires someone to be an efficient code monkey, you might as well test for exactly that.
Ask them one simple question in the screener: "What work task can you perform efficiently in under 5 minutes?" Then, during the interview, give them 15 minutes to do exactly that task. Over and over and over again. You be the judge of their efficiency and their will to live after the twelfth repetition.
For example, if they say "data cleaning," hand them a stack of horrifically formatted CSVs and time how long it takes them to transform this digital garbage into something usable. Award bonus points if they maintain a smile while their soul visibly leaves their body around minute 13.
Putting their hands over their eyes while answering a question
This one's embarrassing—like a cop asking you to recite the alphabet backward during a DUI check. They already suspect you're drunk from your breath. If they're wrong, it's just a small misunderstanding with no penalties.
But giving the candidate the chance to fly out for an onsite visit in the office, only for them to get screamed out by every member on your team is far worse. But hey, you might actually inspire change if they cry from incompetence on the interview. Maybe they'll never want to face that humiliation again and will finally commit to genuine learning. Who knows?