Person interacting with a conversational assessment

How we transformed an exhausting form into a dynamic experience

How Caramelo AI replaced a static form with a smoother conversational journey, including automatic average scoring and low-score insights.

Result: a smoother self-assessment journey and zero manual calculations

The Problem

The client had an important step in the student journey: a performance self-assessment applied both before and after the course, so each person could identify improvement points and understand their progress over time.

The idea was solid, but the user experience was poor. Users received a Google Form with around 10 statement-based questions and had to assign a score from 0 to 10 based on how strongly they identified with each statement.

The friction was not only in answering. After finishing, the student still had to manually calculate the average score and write down which statements had the lowest ratings. Since this happened both before and after the course, the experience became tiring, repetitive, and far from fluid.

In practice, users spent energy on an operational step that was not the core purpose of the activity. Instead of focusing on reflection about behavior, priorities, discipline, goal clarity, and execution capacity, they also had to do math, review answers, and organize results manually.

In short, the main pain points were:

  • A tiring completion flow, because users had to answer a sequence of questions in a static form.
  • Extra effort during and after responses, since students had to manually calculate the average.
  • Manual identification of attention points, because users had to note low-score statements on their own.
  • Repetition across two journey moments (before and after the course), making the experience even more exhausting.
  • Part of the user’s cognitive energy was diverted from self-reflection to low-value operational tasks.

Understanding the Agent

To solve this problem, the agent needed to turn an impersonal form into a guided conversation while preserving the evaluation objective and removing the operational burden from users. The agent had to lead the experience naturally, record responses, process the data automatically, and deliver a clear final result.

Specifically, the agent needed to:

  • Conduct the assessment as a conversation, asking one question at a time.
  • Make the interaction lighter and more natural than a static form.
  • Record each score assigned by the user throughout the conversation.
  • Automatically calculate the average score at the end.
  • Identify which statements received the lowest scores.
  • Deliver a clear feedback summary without requiring manual interpretation.
  • Keep the process consistent both before and after the course.
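The post-conversation processing the agent performs can be sketched in a few lines. This is a minimal illustration, not Caramelo AI's actual implementation: the function name, the statement texts, and the choice to surface the three lowest-scoring statements are all assumptions for the example.

```python
from statistics import mean

def summarize_assessment(scores: dict[str, int], low_count: int = 3) -> dict:
    """Summarize a finished self-assessment.

    `scores` maps each statement to the 0-10 score the user assigned
    during the conversation. Returns the average score and the
    `low_count` lowest-scoring statements, which become the user's
    suggested attention points. (Hypothetical helper for illustration.)
    """
    average = round(mean(scores.values()), 1)
    # Sort statements by their score, ascending, and keep the lowest ones.
    lowest = sorted(scores, key=scores.get)[:low_count]
    return {"average": average, "attention_points": lowest}

# Example with three of the ~10 statements (scores invented for the demo):
summary = summarize_assessment({
    "I set clear goals": 8,
    "I maintain discipline": 4,
    "I prioritize effectively": 6,
}, low_count=2)
# summary["average"] -> 6.0
# summary["attention_points"] -> ["I maintain discipline", "I prioritize effectively"]
```

Running the same summary after both the pre-course and post-course conversations keeps the two moments directly comparable, which is what lets students perceive their evolution without doing any math themselves.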

From form to guided experience

With this solution, we turned a bureaucratic task into a more dynamic, clear, and useful experience. Instead of filling out the form, calculating scores, and interpreting results on their own, students only had to take part in the conversation while the agent handled processing and final feedback.

The result was a smoother self-assessment journey, with less friction and stronger focus on what truly matters: helping each person recognize improvement points and perceive their evolution throughout the course.