written by Maree Stuart
I had a conversation with someone recently where we discussed what was meant by the concept “Quality”. The discussion quickly reached the subject of quality control and it was clear that there was a lack of understanding of what this term meant.
We recently wrote about Quality Basics and included some definitions for Quality Control and Quality Assurance. To recap:
Quality control is the part of quality management focused on fulfilling quality requirements.
Quality assurance is the part of quality management focused on providing confidence that quality requirements will be fulfilled.
You get mountains of different definitions of these terms when you do an internet search and, of course, ChatGPT can also give you an answer.
But what do these concepts actually mean for your lab?
If you look at something like ISO/IEC 17025, you’ll find there are no definitive requirements on what to do. QC falls under section 7.7 of the Standard, which presents a series of activities like a smorgasbord for consideration.
One thing is for sure, it’s more than throwing the occasional standard into your testing or calibration regime!
Why is QC necessary?
Imagine you’re a pilot charged with the task of flying a plane from Melbourne to Sydney. Now imagine that you have no control dashboard and the airspace 200 km around Melbourne airport is surrounded by clouds causing poor visibility. You take off and somehow end up in Port Augusta.
This kind of outcome is like operating without any quality control. Those observations from dashboards and visual images of the landscape are the pilot’s QC system. They tell the pilot if the plane is heading in the right direction and whether there are any alarming conditions coming up.
What’s involved in lab QC?
QC in a lab is no different. The things we want to measure and view on our dashboard are attributes or performance characteristics of a method. These include aspects such as:
- Precision, including repeatability, within-laboratory reproducibility and inter-laboratory reproducibility
- Limit of detection
Labs need to make sure that the day-to-day application of the method continues to conform to the criteria set for these attributes.
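To make that concrete, two of those attributes can be estimated from replicate data. The numbers below are made up, and the "mean blank plus three standard deviations" rule is just one common convention for estimating a limit of detection; your validation protocol may prescribe a different approach.

```python
from statistics import mean, stdev

# Hypothetical replicate results for one sample (same analyst, same day)
replicates = [10.2, 10.4, 10.1, 10.3, 10.2, 10.5]

# Repeatability expressed as relative standard deviation (RSD %)
rsd_percent = stdev(replicates) / mean(replicates) * 100

# Hypothetical replicate measurements of a blank
blanks = [0.05, 0.08, 0.06, 0.04, 0.07, 0.05, 0.06]

# One common convention: LOD = mean blank + 3 x standard deviation of the blank
lod = mean(blanks) + 3 * stdev(blanks)

print(f"Repeatability (RSD): {rsd_percent:.1f}%")
print(f"Estimated limit of detection: {lod:.3f}")
```

Day-to-day QC then checks that results like these stay within the criteria set during method validation.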
But there is a trade-off between the efficiency and effectiveness of laboratory processes. If we were always to include a quality control measure for every one of these attributes, for every sample or even every batch of samples tested or calibrated, it would be a very expensive exercise, and one the lab’s clients could not afford!
So, we compromise.
A typical regime for quality control for a batch of samples might include:
- Periodic retesting of a calibration standard as a measure of drift
- Periodic testing of duplicates as a measure of repeatability
- Weekly testing of a standard or reference item, whether this is a reference material or a certified reference material, as a measure of accuracy and reproducibility
- For some tests, the lab might do spike recoveries, which are also a measure of accuracy.
Depending on the method, you might also run a blank sample, which helps labs understand bias and, where appropriate, correct for it.
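A batch regime like the one above boils down to a few simple calculations. The values and acceptance limits here are hypothetical; each lab sets its own criteria during method validation.

```python
def drift_percent(standard_result: float, assigned_value: float) -> float:
    """Drift of a re-run calibration standard from its assigned value."""
    return (standard_result - assigned_value) / assigned_value * 100

def duplicate_rpd(result_a: float, result_b: float) -> float:
    """Relative percent difference between duplicates, a repeatability measure."""
    return abs(result_a - result_b) / ((result_a + result_b) / 2) * 100

def spike_recovery(spiked: float, unspiked: float, spike_added: float) -> float:
    """Percent recovery of a known spike, a measure of accuracy."""
    return (spiked - unspiked) / spike_added * 100

# Hypothetical batch QC results checked against hypothetical limits
print(f"Drift: {drift_percent(10.15, 10.00):+.1f}% (limit \u00b15%)")
print(f"Duplicate RPD: {duplicate_rpd(4.8, 5.1):.1f}% (limit 10%)")
print(f"Spike recovery: {spike_recovery(14.6, 5.0, 10.0):.0f}% (limit 80-120%)")
```

If any of these falls outside its acceptance limit, that is the QC system flashing a warning light on the dashboard.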
That’s all great, but what about all the other attributes of the method? How do labs know that the method can deliver the limit of detection for all types of samples?
There is a saying, “you cannot manage what you cannot measure”. And if you don’t measure it, how can you know how you’re doing?
This means that lab people need to give some thought to the QC activities employed, including the type of QC sample, the target quantitative value or attribute of that sample, and the frequency of QC. Performing a qualitative method won’t let you off the QC hook either!
How frequently do you have to do QC?
The question of QC frequency comes down to a question of risk. What would happen if you had no QC, or the wrong kind of QC, and the results you reported were wrong? Like the pilot, you might end up in Port Augusta, or somewhere even worse, when you are meant to be in Sydney.
Hopefully, though, you already knew all that. If not, you’re welcome!
I suppose the QC regime in most accredited labs has developed through multiple assessments over time. You know, where the technical assessor says you really need to do “X”.
These really helpful suggestions can sometimes turn into a logistical and costly nightmare, especially when there is the added bonus conversation around metrological traceability.
Let’s strip it back to basics.
Think about what a risk management system is meant to do, besides giving you a headache! It’s there to help us navigate successfully through life, preventing us from suffering losses by putting things in place to catch us before we fall (mitigation of risks).
It could be a big loss if the result that went out of the lab is wrong. In the worst case, people could die. Hopefully, that’s not the kind of business your lab is in.
It might be a small loss, such as the loss of a client who only spends a little bit of money with you. But what about all the people they tell about the loss they suffered?
Remember, QC is there to flag to labs that a result is wrong, or that something in the process is not working as well as it should. And risk management processes help labs to identify and prevent those wrong results walking out the door to their unsuspecting clients. In other words, risk management helps QC do its job.
The work done in activities like method validation and verification should help to identify the weak points in the method. Consider those weak points also as risks, a place in the process where something could go wrong. Run those identified weak points through your risk management tool of choice to determine if they are something you can live with, or if it is something that needs some mitigation or elimination.
From that point you can work out what you need to do to mitigate or eliminate the risk. That covers not only what you will measure and how, but also how often you will monitor for the identified risk.
That’s the makings of a good QC program.
But the NATA Assessor said…
True, the NATA assessor could come back to you and insist you need to do more. After all, they are the “expert”.
If you have spent the time and effort digging into the question of QC for your lab’s methods and developed a risk-based program for performing QC, that can be a more robust, contextual demonstration of competence than simply doing something a technical assessor said. Insisting on a particular design because “we’ve always done it this way” doesn’t necessarily stack up. Perhaps we’ve always done it wrong!
In future weeks we’ll be covering the twin of QC, QA (Quality Assurance).
Need some help designing a QC program that makes sense to you and passes the NATA test? Why not give Maree a call on 0411 540 709 or email email@example.com for some ideas on where to start.
Download the article The Essential Lab Guide to Quality Control