What to actually test
Stop testing whether a candidate can use ChatGPT. Everyone can. Start testing whether they can review AI output, catch hallucinations, evaluate quality, integrate the result into a real workflow, and explain the risk to a stakeholder.
These are the skills that separate AI-capable candidates from AI-curious ones. They are also the skills most current screens fail to surface.
How to design the screen
Give candidates a generated artefact: a draft document, a draft analysis, a draft piece of code. Ask them to review it, critique it, improve it, and explain where it fails. Strong candidates will catch real issues. Weak candidates will accept it largely unchanged. This is much closer to the actual job than asking them to write a prompt.
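For technical roles, the artefact can be a short "AI-drafted" function with planted flaws. Here is a hypothetical sketch of what you might hand a candidate; the function, its name, and the flaws are invented for illustration, and the point is that each flaw is catchable on careful review:

```python
# Hypothetical screening artefact: an AI-drafted helper with planted flaws.
# A strong candidate should catch at least these three:
#   1. the mutable default argument, which leaks state across calls
#   2. missing scores silently counted as 0, skewing the mean
#   3. ZeroDivisionError when called with no rows

def average_scores(rows, seen=[]):          # flaw 1: mutable default argument
    """Return the mean 'score' across rows, skipping duplicate ids."""
    total = 0
    for row in rows:
        if row["id"] in seen:
            continue
        seen.append(row["id"])
        total += row.get("score", 0)        # flaw 2: missing score treated as 0
    return total / len(seen)                # flaw 3: divides by zero on empty input
```

Calling the function twice with the same data makes flaw 1 visible: the second call returns a different answer because `seen` persists between calls. Asking the candidate to demonstrate a flaw like this, not just name it, is a useful extension of the exercise.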
Structured AI interviews on UnoJobs handle this format natively across tech and non-tech roles. Same questions, scored output, replayable for panel review.
Where speed wins
The best AI candidates are in active conversations with multiple companies. A two-week loop loses them. A two-day loop wins them.
Compress the screening layer with AI shortlisting and structured AI interviews. Spend senior hours only on candidates who already passed a defensible bar. The time saved goes back into closing.