CASE STUDY

INTERNATIONAL FOOD CORPORATION

Rubrica's Assessment Program Exceeds AI-scored Test Results and Lowers Overall Costs

Hiring Challenges

  • AI-scored Test for Spanish Speaking

  • Retesting of Passing Candidates

  • Increased Review of Current Employees

  • Manual Intervention Required at Every Step

Rubrica Difference

  • Practice Tests and Listening Screeners

  • Human-rated Online Exam for Speaking

  • Customized Writing Addendum

  • Reliable Results within 24 Hours

OVERVIEW

Our client, an international corporation that demands superior customer service, was using an AI-scored language proficiency exam to evaluate the bilingual ability of incoming applicants. A lack of quality and reliability in the scores forced the team to manually review and reevaluate employees who had already passed the exam. The need to retest created a backlog in the hiring process. Their goal was to eliminate this overly manual process while achieving superior results in the form of a more skilled candidate pool.

SOLUTIONS

  • Rubrica offered Vocia, our online speaking exam, which minimizes costs by limiting the speaking exam to those respondents who first pass a listening screener.

  • The speaking assessments of these screener-passing candidates are evaluated by a language rating expert, and results are available within 24 hours.

  • Rubrica’s testing experts also worked with this new client to create a custom writing component.

IMPLEMENTATION

  • Rubrica worked closely with the company to define a reliable implementation plan.

  • The company first tested the newly designed assessment on a smaller sample drawn from a single department.

  • The assessment results far exceeded their prior exam in terms of both the reliability of the scores and the quality of the candidates.

  • After legal review, they elected to replace all of their AI-scored Spanish testing with Rubrica's assessment.

RESULTS

  • Increased Hiring Confidence

    The client was unable to trust the results of the AI-scored exam. They now feel confident in Rubrica's more reliable results.

  • Reduction in Post-Hire Costs

    After implementing Rubrica's assessments, the company has been able to significantly decrease the volume of post-hire manual employee reviews.

  • Lower Candidate Screening Costs

    The increase in assessment reliability and candidate quality has virtually eliminated manual intervention in the language evaluation portion of hiring, ultimately leading to faster final assessment results.

CASE STUDY

BUSINESS PROCESS OUTSOURCING

How Rubrica's Exams Help BPOs and Contact Centers Reduce Costs and Improve Quality

Key Metrics

Our client has realized vast improvements in their overall hiring process. Additionally, they have seen financial savings and improved customer and agent satisfaction.

  • $120k saved on training costs annually

  • 20% filtered out through screeners

  • 12% identified with superior technical skills

OVERVIEW

Rubrica’s client, a BPO in Southeast Asia, was struggling to hire candidates with strong enough communicative English language skills to work with their UK-based clients. They needed a fast and reliable way to evaluate language proficiency for their agents. Before using Rubrica, they employed a manual interview process that produced inconsistent results.

SOLUTIONS

Rubrica is helping BPOs and contact centers hire better-quality candidates, reduce training costs and attrition, and improve CSAT and speed to competency. All of this can be achieved with quality pre-hire skill testing.

The BPO recruitment team replaced their internal oral proficiency interview with Rubrica’s more reliable and consistent hands-off testing approach. They can now easily identify qualified candidates and make better hiring decisions. The solution consisted of:

  • Reading and Listening Screeners

  • Online Voice Assessments

  • Online Chat Assessments

RESULTS

  • Reduced Training Costs

    The client found that 30% of the candidates they would have hired before using Rubrica's exams were not hirable. This saved them over $100K in training costs.

  • Reliable, Consistent Results

    The team no longer needs to spend hours interviewing each candidate for language proficiency. They trust the results they receive from Rubrica's human rating team.

  • Increased Speed to Competency

    The detailed scores and personalized feedback each candidate receives are used to place agents with the right accounts from the start and reduce time spent on training.

Questions to Ask a Testing Provider

  1. What kinds of tasks will my candidates perform on the test?

    • Is there a clear relationship between the tasks on the test and the tasks my candidates will be performing on the job?

  2. What types of communicative language skills are being assessed? (e.g., listening, speaking, reading, writing, interaction, mediation, empathy, mechanics, fluency, social appropriateness, etc.)

  3. How is the assessment scored?

    • If scored by humans:

      • How were the raters trained?

      • How do you ensure consistent scoring?

    • If scored by AI:

      • What data was used to train the scoring algorithm?

      • Will the results of this exam be legally defensible if a candidate contests their hiring decision?

  4. What type of results will I receive after the test is completed?

    • What does the score mean? How does it relate to the candidate’s ability?

    • How can the results be used to support candidate training?

  5. What features are available on the test platform to prevent cheating?

  6. Can the testing process be integrated into our workforce management system via API? (if needed)