Building Entervio: From Mock Interview Frustration to AI-Powered Platform
The story of how dissatisfaction with existing interview preparation tools led to building a comprehensive AI-powered voice interview platform
I’m a junior software engineer in France. I finished my degree and apprenticeship in September 2025 and have been looking for a full-time software engineering position since then. Thankfully, I qualify for unemployment benefits in France, which comes with certain requirements, including visits to France Travail for accompaniment. They also signed me up for another organization called APEC, where I attended a session presenting their various services.
One service that caught my attention was interview coaching. They organized a day where people could come learn interview techniques and participate in mock interviews. The mock interview part intrigued me, but honestly, I didn't feel like dressing up and leaving my house just to do mock interviews that might or might not actually benefit me. I thought it would be nicer if there were a website or app I could use to practice on my own.
The sheet APEC passed around had a link to exactly that. I opened the website, but the “interview practice” was basically a multiple choice questionnaire where you select the “most appropriate” answer from a list. I checked France Travail’s offering—same thing. I did a basic search online, and honestly felt most results were mediocre.
And as any self-respecting software engineer would do after seeing these results, I decided it was time to build my own interview practice platform: Entervio.
The Stack #
For the tech stack, I wanted something simple and efficient, with a large ecosystem that would let me move fast and offered plenty of third-party libraries. As much as I love Go and find it one of the best server languages ever, I decided to give Python and FastAPI a shot for this one. I wanted something that would integrate easily with LLM providers for generating interview questions, as well as with Supabase for auth, among other things.
For the frontend, I decided to go with React using React Router v7. I store all the data in a Postgres instance. I’ve been thinking of adding caching through Redis or another key-value database, but I may leave that for a future sprint.
The Progression #
First Version: Getting Voice Working #
Immediately after I initialized the projects in the repo, I wanted to have at least a working voice chatbot by the end of the day. Then I could work on making that chatbot actually conduct interviews. With Claude’s help, I set up a very simple Python FastAPI WebSocket server and a React frontend. You could activate your microphone, talk to the chatbot, and it would stream its response back to you.
I used Gemini for the LLM, Edge-TTS for text-to-speech, and Groq Whisper for speech-to-text. It sort of worked, but the voice was too choppy as I streamed the response to the frontend word by word. I also felt like WebSockets weren’t as robust as I had hoped.
After some experimentation with chunk sizes, I scrapped the WebSocket approach entirely and fell back to good old REST, with an endpoint that generates the audio and sends it in one batch. While it's somewhat slow, it's honestly not that bad and perfectly usable.
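The batch flow is simple enough to sketch. This is a minimal illustration of the single-request round trip, with the STT, LLM, and TTS steps injected as plain callables; the names and wiring here are my own illustration, not Entervio's actual code:

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class VoiceTurn:
    """One REST round trip: user audio in, full interviewer audio out."""

    stt: Callable[[bytes], str]   # speech-to-text, e.g. Groq Whisper
    llm: Callable[[str], str]     # reply generation, e.g. Gemini
    tts: Callable[[str], bytes]   # text-to-speech, e.g. Edge-TTS

    def respond(self, audio_in: bytes) -> bytes:
        transcript = self.stt(audio_in)  # transcribe the user's answer
        reply = self.llm(transcript)     # generate the complete reply text
        return self.tts(reply)           # synthesize it in one batch, no streaming
```

Because the TTS runs on the complete reply, there is no word-by-word choppiness; the trade-off is added latency before playback starts.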
Introducing Interviews #
After getting a working voice chatbot, I wanted to work on prompting the LLM to actually conduct interviews. Again with Claude’s help, I set up three routes: to start interviews, to respond, and to end interviews. The LLM would ask general interview questions, then the TTS would read them out loud so I could answer vocally.
Once that was working, I finally started setting up data persistence. I set up a quick SQLite database with SQLAlchemy and created my interview tables. Then I added interviewer personalities—you can practice against a nice, a neutral, or a mean interviewer:
- The nice interviewer is very helpful
- The neutral interviewer is professional
- The mean interviewer tries to throw you off your game with subtle comments or by disregarding your answers
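Mechanically, a personality system like this can boil down to swapping one block of the system prompt. A minimal sketch, assuming illustrative prompt snippets (the real ones are surely more elaborate):

```python
# Illustrative personality snippets; not the actual production prompts.
PERSONALITIES = {
    "nice": "Be warm and helpful; offer gentle hints when the candidate struggles.",
    "neutral": "Be professional and even-toned; neither help nor hinder.",
    "mean": "Throw the candidate off with subtle remarks and by disregarding answers.",
}

BASE_PROMPT = "You are conducting a job interview. Ask one question at a time."


def build_system_prompt(personality: str) -> str:
    """Compose the interviewer system prompt for a given personality."""
    if personality not in PERSONALITIES:
        raise ValueError(f"unknown personality: {personality!r}")
    return f"{BASE_PROMPT}\n\nPersonality: {PERSONALITIES[personality]}"
```

Keeping the base prompt and the personality block separate means new personalities are one dictionary entry away.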
After getting all that to work, it was time to actually make the website do something with more value than just prompting an LLM and reading questions out loud.
Introducing Feedback #
You’ve probably heard the phrase “practice makes perfect,” but if you don’t know what you’re doing wrong while practicing, you can’t really make the jump to being “perfect.” That’s why I believe that perfect practice makes perfect. If we apply that logic to Entervio, after each interview you should get feedback on how you performed and how you answered each question. This allows you to understand what you’re doing wrong and how to improve your interviewing skills—because at the end of the day, interviewing is just a skill.
That’s what I implemented next. After each question, I prompt an LLM with the question and your response and tell it to grade the answer on a scale of 1 to 10, providing feedback on your answer and how to improve it. All of this is stored in the database. After your interview, you can see all questions, all answers, your grade on each answer, as well as feedback on the interview globally and on each individual answer. I made the global grade just an average of question grades for the sake of simplicity.
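The two pieces of that loop—the grading prompt and the average—can be sketched like this (function names and prompt wording are my own illustration):

```python
def grade_prompt(question: str, answer: str) -> str:
    """Build the grading prompt sent to the LLM after each question."""
    return (
        "Grade the following interview answer on a scale of 1-10 and give "
        "concrete feedback on how to improve it.\n"
        f"Question: {question}\nAnswer: {answer}"
    )


def global_grade(question_grades: list[int]) -> float:
    """Overall interview grade: a plain average of per-question grades."""
    if not question_grades:
        return 0.0
    return round(sum(question_grades) / len(question_grades), 1)
```

A plain average weights every question equally; weighting by question difficulty would be a natural refinement later.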
Customizing Interviews #
At this point, my colleague Jamal El Betioui joined me on the project. The first thing we worked on after finishing the feedback system was implementing auth so that multiple users could use the site and only access their own interviews and feedback.
Then, to customize the interviews, we added two more context sources:
1. Uploading Resumes #
First, we added the possibility for users to upload their resumes. We parse it and store all relevant information in the database—their experiences, education, skills, etc.—and use that as context when conducting interviews. This way, the AI actually knows a little about you and can ask direct questions about how you used technologies mentioned in your resume or challenges you encountered in past experiences. This made the interviews more personal and less generic.
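The parsed resume is essentially a structured record that gets flattened into prompt context. A hedged sketch using dataclasses—the field names are my guess at the shape, not the actual schema:

```python
from dataclasses import dataclass, field


@dataclass
class Experience:
    title: str
    company: str
    highlights: list[str] = field(default_factory=list)


@dataclass
class ParsedResume:
    skills: list[str] = field(default_factory=list)
    experiences: list[Experience] = field(default_factory=list)

    def to_context(self) -> str:
        """Flatten the resume into a text block for the interviewer prompt."""
        lines = ["Candidate skills: " + ", ".join(self.skills)]
        for exp in self.experiences:
            lines.append(f"- {exp.title} at {exp.company}: " + "; ".join(exp.highlights))
        return "\n".join(lines)
```

Injecting this block into the system prompt is what lets the interviewer ask pointed questions about specific technologies and past roles.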
2. Adding Job Descriptions #
We then added a job description field to send as context. If you want to prepare for a specific job posting that you have an interview for soon, you can paste the job description, and the interview becomes even more specific, asking you questions relevant to that position.
I personally use this feature a lot before interviews. I like to paste the job description and try out interviews with all three personalities before going into the real interview.
Adding a Job Board #
After we had the interview system working, Jamal suggested adding an integrated job board that uses the parsed resume data to perform smart searches through France Travail’s API, surfacing only jobs relevant to your profile. There’s also a field where you can describe your dream job in natural language; we parse that too and feed it into the MCP server so the results can be tailored to you.
After that, we thought it might be nice to also adapt the resume to the job posting and generate a cover letter automatically. We added two buttons:
- One generates an adapted resume and returns a ready-to-use PDF with an ATS-friendly adapted resume for that job posting
- The other automatically generates a cover letter for the job posting
What’s Next? #
For now, we’re gathering feedback from alpha testers and improving the project. We launched a landing page at Entervio, and we’re slowly adding requested features and doing optimization work in anticipation of launching the first beta.
The journey from a simple frustration with multiple-choice questionnaires to a comprehensive AI-powered interview preparation platform has been incredible. What started as a personal need turned into something that could genuinely help other job seekers prepare better for their interviews. And working with Jamal has made the whole process not just faster, but more enjoyable and sustainable in the long run.