3. QA Evaluations
Critical Insights AI features two types of QA evaluations: manual and scheduled. In a manual QA evaluation, agency supervisors select a 911 call to evaluate “on-the-spot” and can view results within seconds. They can then validate the call by referencing QA Assistant’s evidence-based criteria. Supervisors can also create scheduled QA evaluations for agent shifts; once scheduled, QA Assistant completes the evaluations in the background.
3.1. Manual QA Evaluations
Manual QA evaluations are carried out “on-the-spot”, as distinguished from Scheduled QA Evaluations. Multiple calls can be evaluated manually only by using a QA Form that has QA Assistant configured.
For supervisors and other administrative personnel, running a manual QA evaluation is a simple two-step procedure. The following steps describe the workflow and assume that you are signed in with your credentials to the CI AI user interface from the URL you were provided, using a compatible browser.
To run a manual QA evaluation:
Run a search query using the AI Research Assistant to locate and return recorded 911 calls for evaluation.
Run a manual QA evaluation by selecting a call-handling evaluation form appropriate to the call type.
3.1.1. Searching for Calls or Incidents
The first step is to run a search query for calls or incidents by using the AI Research Assistant, which is docked and visible at the top right-hand side of the user interface.
Fig. 3.1 Searching for Calls with AI Research Assistant
To search for calls with AI Research Assistant:
Open the AI Research Assistant (if not visible) by selecting either the wand icon or call-taker icon at the top right of the user interface.
Alternatively, you can use the Mini AI Assistant for quick searches. If not visible, turn it on by selecting Show Mini AI Assistant from the View Icons and Markers menu.
Fig. 3.2 AI Research Assistant Icons
In the AI Research Assistant text field, compose your search query text. (For search tips, refer to the AI Research Assistant User Guide).
Select the Send button. Search query results display in the top right pane.
Fig. 3.3 Manual QA Evaluations: Running a Search Query
3.1.2. Running Manual QA Evaluations
Once AI Research Assistant’s call results are displayed, the next step is to select and run a QA evaluation.
To run a manual QA evaluation:
From the call results, highlight any calls you wish to evaluate and right-click them.
Select an evaluation form. From the menu, select Evaluate Records, and then select the fly-out menu option for the appropriate question form; in this example, Call-taking for EMS Incidents, for a call involving an accident.
Fig. 3.4 Selecting a QA Evaluation Form
By default, QA Assistant assumes the role of “Evaluator” under the title “SuperEval” and begins evaluating the call automatically in the background. It moves sequentially through the form questions, cross-referencing each question against the call transcript to determine its evidence-based findings.
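Conceptually, this background pass is a loop over the form: for each question, search the transcript for supporting evidence and record a status with a confidence value. The following Python sketch illustrates the idea only; the helper names and the naive keyword matching are hypothetical stand-ins, not the product’s implementation.

    from dataclasses import dataclass

    @dataclass
    class QuestionResult:
        question: str
        status: str | None    # "Yes", "No", "N/A", or None if unanswered
        evidence: str | None  # transcript excerpt supporting the status
        confidence: float     # 0.0 (no confidence) to 1.0 (fully confident)

    def find_evidence(question: str, transcript: str):
        """Hypothetical stand-in for the AI step: scan the transcript for a
        passage relevant to the question. A naive keyword match keeps the
        sketch runnable; the real system reasons over the whole transcript."""
        keyword = question.split()[-1].strip("?").lower()
        for line in transcript.splitlines():
            if keyword in line.lower():
                return "Yes", line, 0.9
        return None, None, 0.2

    def evaluate_call(form_questions: list[str], transcript: str) -> list[QuestionResult]:
        """Move sequentially through the form, cross-referencing each question
        against the call transcript to produce evidence-based findings."""
        return [QuestionResult(q, *find_evidence(q, transcript)) for q in form_questions]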
The left pane displays the call evaluation form. The top right and bottom panes display the recording transcript and the QA Assistant, respectively. QA Assistant offers a short summary of the call.
Fig. 3.5 Automated QA Call Evaluations
As each question is addressed, a color-coded progress bar appears, indicating QA Assistant’s “confidence level” in its responses. If the progress bar is green, QA Assistant is confident that it has correctly evaluated the agent’s action and assigns one of three statuses: “Yes”, “No”, or “N/A”.
If the progress bar turns red, QA Assistant has low confidence in verifying that the question was adequately addressed and may return a status of “No”, “N/A”, or even no status. Partially answered questions result in a yellow progress bar and may receive a “Yes”, “No”, or “N/A” status.
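The color coding can be pictured as a simple mapping from confidence and completeness to a color. In this illustrative sketch, the 0.5 threshold is an assumed value; the actual cut-offs are internal to QA Assistant.

    def progress_bar_color(confidence: float, partially_answered: bool) -> str:
        """Map QA Assistant's confidence in a response to a progress-bar color.
        The 0.5 threshold is an assumed, illustrative value."""
        if partially_answered:
            return "yellow"  # may receive "Yes", "No", or "N/A"
        if confidence >= 0.5:
            return "green"   # confident; status is "Yes", "No", or "N/A"
        return "red"         # low confidence; "No", "N/A", or no status at all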
The background evaluation can take a minute to complete. Next, a supervisor performs the validation and call scoring process: whatever the status, the supervisor validates QA Assistant’s determinations, accepts or modifies the status of each question, and adds comments grounded in the call transcript evidence to score the call.
Fig. 3.6 Validation and Call Scoring Process
Fig. 3.7 Evaluation Statuses
Fig. 3.8 Viewing Responses to QA Assistant Questions
In a similar example, QA Assistant sets its answer status for question 3 to “N/A” because the question was deemed irrelevant to the call.
Fig. 3.9 Evaluating Responses to QA Assistant Questions
Select the hyperlink. In the call transcript, the evidence is now highlighted.
Fig. 3.10 Viewing Transcript Evidence
When evaluating more than one call at once, multiple hyperlinks are listed, each with a different record ID representing an individual call.
Fig. 3.11 Transcript Evidence Of Multiple Calls
Optional step. Double-click the highlighted text to play the call audio content at the exact location of the transcript evidence.
At the bottom right of the evaluation form, in the text field provided, enter your call review comments.
Fig. 3.12 Entering Review Comments
The final step is to submit the evaluation.
Select the Submit Evaluation button.
Fig. 3.13 Submitting an Evaluation
Additional evaluation menu options are also available.
Select one of the following:
Save as in Progress. Use this option to save an incomplete evaluation that you wish to return to later for completion.
Fig. 3.15 Save As in-Progress
Escalate and Submit Evaluation. Use this option to submit an evaluation as an escalation.
Fig. 3.16 Escalate and Submit Evaluation
Discard. Use this option to discard a call evaluation.
Fig. 3.17 Discard a Call Evaluation
Once the QA evaluation is submitted, the call score page is displayed. The passing score for calls is 80%.
Fig. 3.18 Call Evaluation Score Page
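To see how question statuses relate to the 80% pass mark, consider a minimal scoring sketch. It assumes “Yes” earns credit, “No” earns none, and “N/A” questions are excluded from the denominator; the actual per-form weighting may differ.

    PASSING_SCORE = 0.80  # the passing score for calls is 80%

    def score_call(statuses: list[str]) -> float:
        """Percentage score over the scoreable questions. Assumes "Yes" earns
        credit, "No" earns none, and "N/A" is excluded from the denominator."""
        scored = [s for s in statuses if s in ("Yes", "No")]
        return scored.count("Yes") / len(scored) if scored else 1.0

    statuses = ["Yes", "Yes", "N/A", "Yes", "No", "Yes"]
    score = score_call(statuses)  # 4 of 5 scoreable questions -> 0.8
    print(f"{score:.0%}", "PASS" if score >= PASSING_SCORE else "FAIL")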
Note
Not every question necessarily needs to be answered in every scenario before an evaluation can be submitted. QA Assistant may automatically designate a question response with a default status of “N/A”, depending on the context. The default status of the response to specific questions may be determined by mutual agreement between Eventide Communications and the customer. A status of “N/A” beside a question may still require evaluation by a supervisor. Every evaluation response must display a radio button status before the evaluation can be submitted; if one is missing, you must set the radio button status manually.
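The submission rule in the note above amounts to a simple pre-submission check: every question must carry a status. A minimal sketch, with hypothetical names:

    def missing_statuses(responses: dict[str, str | None]) -> list[str]:
        """Return the questions that still lack a radio button status.
        An evaluation can be submitted only when this list is empty."""
        return [q for q, s in responses.items() if s not in ("Yes", "No", "N/A")]

    responses = {"Q1": "Yes", "Q2": "N/A", "Q3": None}
    missing = missing_statuses(responses)
    if missing:
        print("Set a status manually for:", ", ".join(missing))  # -> Q3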
On the score page, in the Comments and Actions section, further menu options include:
Comment Only: <Add summary comments>.
Lock: <Locks the evaluation, preventing it from being modified after it is signed. Can also unlock>.
Protect: <Protects the evaluation from deletion. Can also un-protect>.
Re-open: <Re-opens the submitted evaluation to allow modification of answers or comments and re-submission>.
Fig. 3.19 Comments and Actions Options
If QA Assistant does not answer a question or assign it a status, it may lack sufficient information, context, or criteria to provide an answer. If you believe a question was evaluated incorrectly, you can disagree with QA Assistant and change its answer.
QA Assistant may ask if you want to improve how it answers a particular question. You can instruct it by providing new conditional rules or guidelines.
Compose a new rule for QA Assistant to learn.
Fig. 3.20 Composing New Rules for QA Assistant
You can use natural language to instruct QA Assistant in the form of conditional if/then logic. For example, for a call involving an accident and the question “Did the call-taker determine why an ambulance was needed?”, you might say: “If the caller did not state that anyone was injured, or if there is no evidence in the call transcript that anyone was injured, then automatically set the evaluation status radio button to ‘N/A’.”
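To picture what such a rule amounts to once learned, the following sketch encodes the example as a conditional check. The phrase list is a naive, hypothetical stand-in for QA Assistant’s semantic matching of the transcript.

    def apply_injury_rule(transcript: str, responses: dict[str, str | None]) -> None:
        """Illustrative encoding of the rule above: if there is no evidence
        in the transcript that anyone was injured, default the ambulance
        question to "N/A". The phrase list is a naive stand-in for QA
        Assistant's semantic matching."""
        injury_terms = ("injured", "injury", "hurt", "bleeding")
        if not any(term in transcript.lower() for term in injury_terms):
            responses["Did the call-taker determine why an ambulance was needed?"] = "N/A"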
3.2. Scheduled QA Evaluations
If the Scheduled QA Evaluations feature is activated, supervisors can create scheduled QA evaluations for agent shifts. The supervisor selects a group of calls for evaluation and chooses an appropriate call-handling QA evaluation form. Various evaluation forms are available, each containing a question set specific to the call type (EMS, Police, or Fire).
Once scheduled, QA evaluations are initiated automatically at the specified date and time, and QA Assistant begins evaluating calls in the background. QA Assistant displays a “confidence level” that the telecommunicator adequately attempted to respond to each form question and awards a score to the agent’s responses using a color-coded system based on the identified call transcript evidence.
QA Assistant selects calls shortly after they terminate, enabling Scheduled QA Evaluations to complete within minutes. Supervisors then have a group of pre-evaluated calls waiting to be validated at their convenience. In a Reviewer role, supervisors need only review QA Assistant’s responses to each form question to score a group of calls before submitting evaluations.
These calls can be viewed from the Scheduled Evaluations tab.
3.2.1. Running Scheduled QA Evaluations with QA Assistant
You can configure QA Assistant to run Scheduled QA Evaluations automatically in the background on a set schedule.
To run Scheduled QA Evaluations:
In the NexLog DX-Series recorder’s Web Configuration Manager, navigate to AI ‣ Assistants.
On the AI Assistants page, select the row corresponding to the QA Assistant form you wish to use.
Fig. 3.21 Running Scheduled QA Evaluations
Select the Edit Assistant button, which is now activated.
Fig. 3.22 Selecting Edit Assistant
The Edit Assistant page presents various options, both configurable and non-configurable, as shown in the following screenshot:
Fig. 3.23 Editing QA Assistant
Type: <Refers to the type of Assistant. Pre-populated. Non-editable>.
Description: <Describes what the Assistant does. Pre-populated. Non-editable>.
Version: <Refers to the version of the QA Assistant. Pre-populated. Non-editable>.
Name: <Name of the QA Assistant form selected. Pre-populated. Editable text field>.
Service: <Refers to the type of AI model selected>.
Select an AI Service model from the drop-down menu.
Fig. 3.24 Selecting an AI Service Model
Preamble: <Refers to the QA Assistant text description field that instructs the Assistant on what its role is and what it should do>.
Enabled: <Selecting this checkbox enables QA Assistant to start analyzing calls with the tasks that correspond to that Assistant. You must first add a task to enable the Assistant>.
Select this checkbox to enable QA Assistant to start analyzing calls for Scheduled Evaluations.
Evaluation Form: <Refers to the type of QA Assistant form currently selected>.
In the Evaluation Form drop-down menu, select from the range of specific QA Assistant forms provided, from either the call-taking or dispatch category.
Enable Automatic Evaluations: <Selecting this checkbox enables Scheduled Evaluations>.
Select Enable Automatic Evaluations to enable Scheduled QA Evaluations, if not already selected. This action activates the central (previously inactive) Scheduled Evaluations configuration options. Automatic Evaluations is a separate setting and remains disabled until QA Assistant itself is enabled.
Fig. 3.25 Enabling Automatic Evaluations/Setting Date and Time
Call Pointer Date/Time: <Refers to the start date and time from which calls will be selected for Scheduled Evaluations>.
Select the calendar icon to the right of the date and choose a start date.
To the right of the start date, select a start time.
Fig. 3.26 Selecting the Calendar Icon
Enter the channel number or channel range for which you want to evaluate calls. Select either a specific Agent or All Agents.
Resource Group: <Select a resource group (also known as a channel group) to evaluate the calls belonging to that group>.
Fig. 3.27 Selecting a Resource Group
Agent Group: <If Agent Groups have been configured, you can select an Agent Group so that only calls tagged with that group’s name are selected for evaluation>.
In this field, select an Agent Group for which you want to run a Scheduled QA Evaluation. Optionally, to evaluate all calls instead, select “All Agents”.
Evaluate one per X calls for each agent: <Instructs QA Assistant to select one call to evaluate out of every X calls per agent, where X is the sampling interval>.
In this field, enter a numeric value.
Minimum call duration to be evaluated (secs): <Defines the minimum required call duration before a call will be evaluated>.
In this field, enter the call duration value in seconds.
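Taken together, the two preceding fields define a simple per-agent sampling rule, sketched below. The sketch assumes that calls shorter than the minimum duration do not advance the sampling counter; the product may count them differently.

    from collections import defaultdict

    EVALUATE_ONE_PER = 5    # "Evaluate one per X calls for each agent"
    MIN_DURATION_SECS = 30  # "Minimum call duration to be evaluated (secs)"

    eligible_counts: dict[str, int] = defaultdict(int)

    def should_evaluate(agent: str, duration_secs: float) -> bool:
        """Select one call out of every X eligible calls per agent, skipping
        calls shorter than the configured minimum duration."""
        if duration_secs < MIN_DURATION_SECS:
            return False  # too short; does not advance the sampling counter
        eligible_counts[agent] += 1
        return (eligible_counts[agent] - 1) % EVALUATE_ONE_PER == 0  # 1st, 6th, ...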
Completion State for AI Evaluations: <Defines how Scheduled Evaluations will be saved, either as in-progress or complete>.
Choose one of the following drop-down menu options:
Always Save as In Progress
Always Save as In Progress and Assign Reviewers
Save as In Progress only when AI cannot complete
Filter: <Filters scheduled evaluations to specific call types, for example, calltype = 'AUDIO' only or calltype = 'TEXT'>.
For example, you could restrict evaluation to medical calls that mention the phrase ‘cardiac arrest’.
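Conceptually, the Filter acts as a predicate applied to each candidate call. The sketch below expresses such a predicate in Python for illustration only; the actual field uses the recorder’s own filter syntax, as shown above.

    def matches_filter(call: dict) -> bool:
        """Conceptual equivalent of a Filter such as calltype = 'AUDIO'
        combined with a phrase condition."""
        return (call["calltype"] == "AUDIO"
                and "cardiac arrest" in call["transcript"].lower())

    calls = [
        {"calltype": "AUDIO", "transcript": "Caller reports a cardiac arrest."},
        {"calltype": "TEXT", "transcript": "Noise complaint."},
    ]
    to_evaluate = [c for c in calls if matches_filter(c)]  # first call only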
Reset All State and Re-Evaluate from Beginning: <Resets the current Scheduled Evaluations schedule>.
Select this checkbox to re-evaluate calls that have already been evaluated using this form. If you do not select it, already evaluated calls are skipped, and QA Assistant continues evaluating from the last call evaluated.
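In effect, the checkbox controls a resume pointer over the call list. A minimal sketch, with hypothetical field names:

    def next_batch(calls: list[dict], last_evaluated_id: int | None, reset: bool) -> list[dict]:
        """With the checkbox selected (reset=True), start from the beginning;
        otherwise skip already evaluated calls and resume after the last
        call evaluated. Field names are hypothetical."""
        if reset or last_evaluated_id is None:
            return calls
        return [c for c in calls if c["id"] > last_evaluated_id]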
Select Update to save your settings. This last step completes the configuration of Scheduled Evaluations.
3.2.2. Viewing Scheduled QA Evaluations
Once you have configured Scheduled Evaluations, sign in to the CI AI user interface to see QA Assistant automatically completing them.
To view scheduled QA evaluations:
Navigate to the Scheduled Evaluations tab.
3.2.3. Sending In-Progress Emails for QA Evaluations
From the Web Configuration Manager interface, you can configure QA Assistant to send emails to reviewers or evaluators for in-progress Scheduled QA Evaluations.
To configure and send distribution emails for in-progress QA evaluations:
In the left navigation menu, select Alerts and Logs ‣ Email to open the Email Settings page.
Enter the required settings, including credentials, host, and port information to set up and enable your distribution email.
Fig. 3.28 Enabling Distribution Emails for In-Progress QA Evaluations
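The Email Settings page gathers the same pieces any standard SMTP client needs: host, port, and credentials. For orientation, here is a generic Python smtplib sketch with placeholder values; it is not the recorder’s implementation.

    import smtplib
    from email.message import EmailMessage

    # Placeholder values; on the Email Settings page, enter the host, port,
    # and credentials for your own mail server.
    SMTP_HOST, SMTP_PORT = "mail.example.org", 587
    SMTP_USER, SMTP_PASS = "qa-reports", "********"

    msg = EmailMessage()
    msg["Subject"] = "QA evaluations in progress"
    msg["From"] = "qa-reports@example.org"
    msg["To"] = "reviewers@example.org"
    msg.set_content("You have in-progress Scheduled QA Evaluations awaiting review.")

    with smtplib.SMTP(SMTP_HOST, SMTP_PORT) as server:
        server.starttls()  # upgrade the connection to TLS before authenticating
        server.login(SMTP_USER, SMTP_PASS)
        server.send_message(msg)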
Navigate to Utilities ‣ Schedules.
Select the Enabled checkbox.
Type a name for the email.
From the Action Type drop-down menu, choose QA in Progress Email.
Fig. 3.29 Sending Distribution Emails for In-Progress QA Evaluations
From the Frequency drop-down menu, choose the frequency of the email (for example, daily or weekly).
Select the day(s) of the week you want to schedule the email.
Select the Save button. Your ‘QA in Progress’ email entry now appears listed on the Action page.
Fig. 3.30 QA “In-Progress” email entry listed in the Action Column
Select the ACTIVATION/EXPIRATION tab.
Under Enable Schedule, select the calendar icon to choose a start date.
To the right of Enable Schedule, choose a start time.
If you do not want to specify an expiry date or time, select the Never expires checkbox; otherwise, under Disable Schedule, select the calendar icon to choose an expiry date.
Select the Activate now button.
Fig. 3.31 Activating Distribution Emails for In-Progress QA Evaluations
Finally, select the Save button to activate and schedule your QA In Progress email.