
Part 1 of our Auto-verification Rule Testing Hesitancy Blog Series: Validate Your Rules with Confidence

The Obstacle

Acute lab personnel shortages continue to motivate our industry to replace manual processes with automated ones. One such process is the implementation of auto-verification decision rules, which filter and streamline result review and eliminate unnecessary manual reviews. Auto-verification has another benefit: auto-verified results are delivered faster and more reliably than manually validated results. We believe maximizing the use of auto-verification is paramount for sustaining and improving lab operations in an increasingly complex testing environment that continues to suffer from depleted resources.

Yet despite the measurable return on investment, labs continue to delay implementing auto-verification rules, even while they are paying for software or middleware that includes this yet-to-be-implemented capability. One area of hesitancy (even confusion) is: ‘If I set up auto-validation rules, how will I validate or test them? How much testing do I need to perform? I’m worried it is a lot of work. Will it be worth it?’ These are just some of the questions that paralyze labs, keeping them from evolving out of their highly manual processes and into a more automated and ultimately more sustainable operation.

Verifying rules becomes easier when it is broken into bite-sized chunks: a ‘dry phase’ and a ‘wet phase’. In this 3-part blog series, we will give you a framework to validate your rules efficiently and without apprehension as part of the ‘dry phase’. For further discussion of ‘wet testing’, read our 3-Part Blog Series: Strategies for Testing.

Why Dry Test?

Dry testing your rules before you move to clinical (‘wet phase’) testing can save you time, money, and re-work. It lets you evaluate that things are working in a relatively isolated environment, without the distraction of too many variables. Ultimately, you will want to verify that your rules operate as expected when the various systems and equipment are connected and actual specimens are used, but those added variables make it very difficult to find and isolate rule issues. Verifying that your rules operate in an isolated (dry) environment helps you identify issues before integrating the rules into the larger web of connected equipment and systems.

Quality-Based Dry Testing

To ensure that your dry rule testing is effective, you will want to:

  • Define goals and objectives for the performance of your rule system

What do you want to accomplish with your auto-validation rule system, and how will you measure success? Are you after a certain turnaround time goal? Are you trying to increase your auto-validation rate? Or maybe you want to avoid hiring more people by offloading human decisions to software rules. What are your goals and measurable objectives?

  • Decide what needs to be tested, set your scope

Design your testing based on your risk assessment (see next section). Of course, you will want to perform positive and negative testing, including complex workflow scenarios. Select as many different variables in each rule category as time and risk assessment allow.

  • Create test scripts that line up with your intended workflow

Base test cases on your intended workflow and its relationship to the auto-validation rules. For example, if you are implementing critical call rules, design testing that challenges the boundaries of the critical ranges, as well as any documentation of the critical call (date/time of call, read-back, and/or required comments) that meets your clinical objectives. The testing should follow your clinical workflow and procedural policies.
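To make this concrete, here is a minimal sketch of boundary testing for a hypothetical potassium critical-call rule. The thresholds and the `hold_for_critical_call` helper are illustrative assumptions for this example, not a specific middleware’s API:

```python
# Minimal sketch of boundary testing for a hypothetical potassium
# critical-call rule. Thresholds are examples only.

CRITICAL_LOW = 2.8   # mmol/L, example critical-low threshold
CRITICAL_HIGH = 6.0  # mmol/L, example critical-high threshold

def hold_for_critical_call(value: float) -> bool:
    """Return True when a result must be held for a critical phone call."""
    return value <= CRITICAL_LOW or value >= CRITICAL_HIGH

# Challenge the boundaries: one value on each side of each threshold,
# plus the threshold itself, so an off-by-one (<= vs <) is caught.
boundary_cases = [
    (2.7, True), (2.8, True), (2.9, False),   # around the critical-low limit
    (5.9, False), (6.0, True), (6.1, True),   # around the critical-high limit
]

for value, expected in boundary_cases:
    assert hold_for_critical_call(value) == expected, f"failed at {value}"
print("All critical-range boundary cases passed.")
```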

  • Use your test scripts to uncover the ‘unknowns’

Test complex clinical scenarios and decision points. If your rules include branching decisions, design test cases that cover each possibility. For example, if a rule ‘loops’, that is, it includes look-back decisions against previous results, be sure to create test scenarios that cover the different combinations of operators, e.g., percentage change, absolute change, and the presence or absence of a past test value, each of which could trigger a rule or prevent it from executing.
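As an illustration, here is a sketch of a delta-check (look-back) rule combining a percentage-change and an absolute-change operator, with explicit handling of a missing prior result. The function shape, thresholds, and test values are assumptions for this example, not any vendor’s rule syntax:

```python
from typing import Optional

def delta_check_fails(current: float,
                      previous: Optional[float],
                      pct_limit: float = 20.0,
                      abs_limit: float = 1.0) -> bool:
    """Return True when the result should be held for manual review."""
    if previous is None:
        # No look-back value: the delta rule cannot fire. Your test
        # scripts should cover this branch explicitly.
        return False
    abs_change = abs(current - previous)
    pct_change = abs_change / abs(previous) * 100 if previous != 0 else float("inf")
    return abs_change > abs_limit or pct_change > pct_limit

# One test case per branch combination: no prior, percentage trigger,
# absolute trigger, and neither trigger.
cases = [
    ((4.0, None), False),   # no previous result
    ((4.0, 3.0), True),     # ~33% change trips the percentage operator
    ((10.5, 9.4), True),    # 1.1 absolute change trips the absolute operator
    ((4.1, 4.0), False),    # small change, neither operator fires
]
for (current, previous), expected in cases:
    assert delta_check_fails(current, previous) == expected
```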

  • Uncover workflow gaps or scenarios that require additional technical review

Design dry testing scenarios that will test the triggering of an additional rule or the nullification of a rule execution. These may include ‘sample quantity not sufficient’ circumstances, sample quality issues (lipemia, hemolysis, icterus), or instrument flags indicating an issue with either the sample or the instrument.
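A minimal sketch of such a nullification pre-check follows; the flag codes are illustrative placeholders, not a specific analyzer’s codes:

```python
# Pre-check that nullifies auto-verification when sample-quality or
# instrument flags are present. Flag codes below are examples only.

BLOCKING_FLAGS = {
    "QNS",                 # quantity not sufficient
    "HEM", "LIP", "ICT",   # hemolysis, lipemia, icterus indices out of range
    "INST_ERR",            # instrument flag: sample or instrument issue
}

def eligible_for_autoverification(result_flags: set[str]) -> bool:
    """Auto-verification is nullified when any blocking flag is attached."""
    return not (result_flags & BLOCKING_FLAGS)

assert eligible_for_autoverification(set()) is True
assert eligible_for_autoverification({"HEM"}) is False
assert eligible_for_autoverification({"COMMENT"}) is True  # benign flag passes
```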

Quality – Assessing Project Risk

The backbone of your dry testing program is a risk analysis. A risk analysis is not a ‘one-size-fits-all’ activity; it is different for each lab. It is a documented exercise that helps you determine how much testing is sufficient. A risk analysis is performed before dry testing begins and before resources are allocated, and it should be reviewed, even modified, throughout the dry testing project. By evaluating risk before dry testing begins, you will be able to make decisions quickly as issues crop up, and you’ll be in a better position to avoid unnecessary work or re-work.

For any risk assessment, here are some typical questions to help you complete your analysis:

  • Will any auto-validation rule be triggered by or dependent on other information? Will those triggers or dependencies increase the complexity of the rule?

  • Will any auto-validation rule cross disciplines?

  • Will any auto-validation rule include multi-variable logic that requires input from other software systems? Will those other systems also require additional verification? For example, specimen source, collection time, and collection volume may be provided by the EHR or EMR.

  • Is your team comfortable defining simple to complex rules without vendor support?

  • What training is needed for your team members to become self-sufficient in defining, supporting, and testing auto-validation rules?

  • What test verification methodologies are available to validate your rules? Is test automation available that would reduce testing effort and increase accuracy?

  • What criteria will you establish for approving your auto-validation rule testing program?

  • Do you have a process for evaluating rule changes or additions? How will you establish test cases to evaluate changes to your rules set? How will you determine if those changes will impact other rules in the set?

The next step is, for each area of risk, to identify its severity based on how that risk could impact your testing program. Rate risks using a simple scale of low, medium, and high. Once you have rated each risk, decide on an appropriate mitigation for it. Then you can start prioritizing the testing in terms of the following (a sketch of such a risk register follows this list):

  • How much effort is required

  • What testing can be automated

  • What testing resources and skills will you need and are they available

  • What type of software is available to assist you in the testing process

  • What is the timeline allocated to testing to meet your go-live objectives
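Here is a minimal sketch of a risk register that supports this prioritization. The risks, ratings, and mitigations shown are examples drawn from the questions above, not a prescribed list:

```python
# Rate each risk low/medium/high, attach a mitigation, and sort so
# high-impact items drive test prioritization. Entries are examples.

RATING_ORDER = {"high": 0, "medium": 1, "low": 2}

risks = [
    {"risk": "Multi-variable rules depend on EHR-supplied fields",
     "rating": "high",
     "mitigation": "Verify EHR inputs in dedicated test cases before rule tests"},
    {"risk": "Team untrained on vendor rule editor",
     "rating": "medium",
     "mitigation": "Schedule vendor training before dry testing begins"},
    {"risk": "Simple range rules on a single analyte",
     "rating": "low",
     "mitigation": "Sample-based testing of representative ranges"},
]

for item in sorted(risks, key=lambda r: RATING_ORDER[r["rating"]]):
    print(f'{item["rating"]:>6}: {item["risk"]} -> {item["mitigation"]}')
```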

Your risk analysis will help you define where to concentrate the dry testing effort. It might be beneficial, for example, to test only a sampling of your simple auto-verification rules and allocate more of your time to verifying the complex rules that include multiple inputs and trigger points.

Often, the inputs to your rules can trip up dry testing. For example, if a location or a physician ID ‘drives’ a rule, make sure you test the location codes or physician IDs with high-volume ordering patterns. When a rule includes more than one decision ‘branch’, create test cases that challenge each possible variation; for example, delta rules can have both a percentage and an absolute-value definition for both the low and the high threshold. Your risk analysis should identify these workflow possibilities.
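As a sketch, a few lines of code can enumerate the branch combinations that a delta rule like this needs covered; the direction, operator, and boundary labels are illustrative:

```python
from itertools import product

directions = ["decrease", "increase"]
operators = ["percentage", "absolute"]
outcomes = ["just inside limit", "at limit", "just beyond limit"]

# 2 directions x 2 operators x 3 boundary positions = 12 targeted cases,
# each of which should appear in the dry-test script with its expected result.
for case_id, (direction, operator, outcome) in enumerate(
        product(directions, operators, outcomes), start=1):
    print(f"case {case_id:02d}: {direction} / {operator} limit / {outcome}")
```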

Quality – Dry Test Best Practices

Building a sustainable dry testing program also depends on following best practices. These recommendations will help you as you continue to dry test future rule changes and maintain your rules program.

Keep your rule maintenance up-to-date and stable during a testing period

Whether you are testing new or existing auto-verification rules, freeze your rule maintenance during the testing phases. If you identify logic or construction issues, record the changes, re-test, and document your maintenance update.

Break down the testing into achievable blocks to make progress.

Start your rule testing with one rule type and verify its performance outcomes, ensuring your workflow performs as intended, before you proceed to more complicated rule types. For example, a good practice is to test simple range rules first to establish that there are no errors or omissions in the ranges. Verify the smallest units of data, e.g., decimal places, units, and range endpoints, before you move to more complicated rule testing.
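A minimal sketch of such smallest-unit checks follows, assuming a hypothetical build specification (`range_spec`) that your rules must match:

```python
# Verify decimal places, units, and range endpoints for each analyte
# against the build specification before any rule-logic testing.

range_spec = {
    # analyte: (low, high, decimal_places, units) -- illustrative values
    "K":  (3.5, 5.1, 1, "mmol/L"),
    "NA": (136, 145, 0, "mmol/L"),
}

def check_spec(analyte: str, low: float, high: float,
               decimals: int, units: str) -> list[str]:
    """Compare a built rule's range definition against the specification."""
    exp_low, exp_high, exp_dec, exp_units = range_spec[analyte]
    errors = []
    if (low, high) != (exp_low, exp_high):
        errors.append(f"{analyte}: range {low}-{high}, expected {exp_low}-{exp_high}")
    if decimals != exp_dec:
        errors.append(f"{analyte}: {decimals} decimals, expected {exp_dec}")
    if units != exp_units:
        errors.append(f"{analyte}: units {units}, expected {exp_units}")
    return errors

# A transposed range or wrong unit surfaces here, before it can
# masquerade as a rule-logic failure in later testing.
print(check_spec("K", 3.5, 5.1, 1, "mmol/L"))   # [] -> matches specification
print(check_spec("NA", 135, 145, 0, "mEq/L"))   # two discrepancies reported
```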

Identify test scripts that move from simple to more complex variables – progression in testing is important to maximize your rule base coverage

Proceed to testing more complex rules that may include repeat or reflex actions. Verify these actions in a stepwise fashion, starting with a simple repeat action and moving on to reflex and comment actions that build on each other. This low-to-high-complexity progression will challenge system performance as you move through validation testing.
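A sketch of this stepwise approach, using hypothetical action names and thresholds to show how each test builds on the previous one:

```python
def evaluate(result: float, high_limit: float = 5.1) -> list[str]:
    """Return the ordered list of actions a high result should trigger."""
    actions = []
    if result > high_limit:
        actions.append("REPEAT")            # step 1: verify repeat alone
        if result > high_limit * 1.2:
            actions.append("REFLEX_TEST")   # step 2: reflex builds on repeat
            actions.append("ADD_COMMENT")   # step 3: comment builds on reflex
    return actions

# Test the simplest action first, then the compounding ones.
assert evaluate(4.0) == []
assert evaluate(5.5) == ["REPEAT"]
assert evaluate(6.5) == ["REPEAT", "REFLEX_TEST", "ADD_COMMENT"]
```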

Resolve failing rule test cases quickly – failures that result from missing or invalid test data can impede downstream testing and derail your testing process

If you don’t resolve issues quickly, your testing could be flawed from that point forward. Prompt investigation and correction of a rule system, database, or design issue reduces the re-work and testing effort that would be required if the issue were found later in the testing process or, even worse, in a production environment.

Evaluate your clinical objectives – do the rules represent the laboratory clinical workflow and do they meet your program objectives?

As each stage or milestone of your dry testing project is completed, take the time to re-evaluate how your rules are expected to perform. Your rule design and testing outcomes should represent your clinical practices and intended workflow for result qualification. A rule system is not useful if it requires manual or off-system workarounds.

Quality – Making a Plan

As the saying goes, if you don’t know where you are going, you might end up somewhere else. Planning is the key to a quality project outcome, and creating a high-level dry testing validation plan is one of the first steps in the planning process. The effort to create a validation plan that can be re-used from project to project is minimal compared to the cost of undesirable output and faulty rules that could endanger patient care. Take the time to document the following to prevent errors, decrease the testing cycle time, and improve the quality of your rules.

A formal validation planning document should include the following:

  • Project and quality assumptions

  • Project background information

  • Resources

  • Schedule and timeline

  • Entry and exit criteria

  • Test milestones

  • Tests to be performed

  • Use cases and/or test cases

Quality - Setting Performance Objectives

Quality is the absence of defects in a rule system that meets your intended requirements and needs.

An important quality measure of an auto-verification rule system is your auto-verification rate, measured at baseline (go-live) and then again at regular intervals. Setting an auto-verification goal at the initial launch of your rule system gives you an important measure of its success, and continuously tracking this metric is a key determinant of rule system health. A dip in your auto-verification rate should be acted upon to determine the source of the deterioration: rule additions, modifications, and/or retirements may result in unintended rule collisions or exclusions, or may otherwise degrade system performance. Ongoing evaluation of your auto-verification rate will help you catch these issues before they become endemic within your rule system.
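A minimal sketch of such tracking follows, using made-up counts and a 5-percentage-point tolerance purely for illustration; pull real numbers from your LIS or middleware reports:

```python
def autoverification_rate(auto_verified: int, total_results: int) -> float:
    """Percentage of results released without manual review."""
    return auto_verified / total_results * 100 if total_results else 0.0

baseline = autoverification_rate(auto_verified=8200, total_results=10000)  # 82.0%

# (auto-verified, total) per interval -- illustrative monthly counts
monthly = [(8150, 10050), (8300, 10200), (7400, 10100)]
for month, (auto, total) in enumerate(monthly, start=1):
    rate = autoverification_rate(auto, total)
    flag = "  <-- investigate: rate dipped" if baseline - rate > 5.0 else ""
    print(f"month {month}: {rate:.1f}% (baseline {baseline:.1f}%){flag}")
```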

Another important measure of a rule system’s effectiveness is tracking your resource allocation and productivity for each discipline. Your auto-verification system should have enough capacity to adjust to increases in sample volume, and you will want to understand at what point it will require changes to system capacity, that is, when it will require additional instruments, technology, or rules to meet your production requirements.

Right-Size Your Testing

Coming up in the next two installments of this 3-part series, we will discuss how to design your testing program to ensure you cover your rule base without over- or under-testing. We will give you tips on how to optimize your testing regimen using automated and manual testing tools. And we’ll discuss how to design testing that uncovers hidden rule issues while minimizing your testing effort, giving you the confidence to move to the next phase of rule qualification using patient samples.