Roleplay chatbots for interviews

Context

FQD36306 - Food Fraud and Mitigation: See the study handbook for learning outcomes and further course details

  • Master
  • Around 50 students
  • No assumed AI knowledge

Rationale

In the course Food Fraud and Mitigation, students have to interview industry stakeholders to assess a company's vulnerability to food fraud. Over the past five years, enrolment has grown to around 50-60 students per year, which made it unfeasible for our teachers to find enough professionals willing to be interviewed by all our students. In practice, students therefore read previously conducted interviews and analyzed that data instead of interviewing professionals themselves. As a result, they could not formulate and ask their own questions to find out how vulnerable companies are to food fraud. The AI roleplay chatbots allowed students to gain some interviewing experience and made gathering data a more interactive exercise.

Educational design

A step-by-step guide to building a roleplay chatbot for your course is provided in the PDF document below:

In summary, each of the four roleplay chatbots was built around a clear persona with its own personality, unique expertise, and specific company knowledge. The chatbots were made as GPTs in OpenAI's ChatGPT. At the time, this was the only commercial AI tool that could be easily customized, allowing not only custom instructions but also file attachments. Moreover, OpenAI's GPTs can easily be shared with students. For the assignment, students had to ask the chatbots around 40 questions. This required paid premium accounts, because free accounts allow only a limited number of prompts. For that reason, students were given the opportunity to interact with the chatbots on a university computer with a paid account during an assigned time slot.
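
For readers who want a concrete picture of what "a persona plus company knowledge" looks like in practice, the sketch below expresses the same idea programmatically. This is not the course's actual GPT configuration: it assumes the OpenAI Python SDK's Chat Completions endpoint, and the persona text, company details, and model name are invented for illustration. In ChatGPT's GPT builder, the equivalent persona text goes into the GPT's instructions field and company documents are attached as knowledge files.

# Illustrative sketch only: the course built its chatbots in ChatGPT's GPT builder,
# but the same persona idea can be expressed with the OpenAI Python SDK.
# The persona text, company details, and model name below are made-up examples.
from openai import OpenAI

client = OpenAI()  # expects the OPENAI_API_KEY environment variable

# The "custom instructions": a persona with a personality, expertise, and company knowledge.
PERSONA = (
    "You are a quality assurance manager at a mid-sized olive oil bottler (fictional). "
    "You are friendly but guarded, and you only reveal details about suppliers, audits, "
    "and past incidents when the interviewer asks a specific, well-phrased question. "
    "Stay in character; do not break role or mention that you are an AI model."
)

def ask(question: str, history: list[dict]) -> str:
    """Send one interview question and return the persona's answer."""
    history.append({"role": "user", "content": question})
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model name
        messages=[{"role": "system", "content": PERSONA}] + history,
    )
    answer = response.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer

# Example interview turn
history: list[dict] = []
print(ask("How do you verify the origin of the olive oil you buy?", history))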

Currently, Google Gemini's Gems offer the same capabilities as ChatGPT's GPTs.

Students may opt not to use AI because of environmental, privacy or other ethical reasons. These considerations were openly discussed and respected.

Evaluation

After rigorous testing by the team and a pilot, the four chatbots were used by the students. The interview session was followed by an evaluation form, and this feedback was used to better understand the effectiveness of the chatbot implementation. Overall, students said they preferred this approach over the previous method of reviewing pre-conducted interviews.

Tips / ideas for use in other courses

  • Start chatbot projects with thorough character development, to make sure each virtual actor has a distinct personality.
  • Make sure to rigorously test AI chatbots for vulnerabilities such as unauthorized access to the knowledge files.
  • The AI landscape is continuously changing, which might pose practical challenges during development; instructions that work one day might be worthless the next.
  • There are many more concrete tips for building a GPT in chapter 2.3 of the Manual.

Contacts

[email protected] (course coordinator) and [email protected]