Food fraud is the manipulation of a commodity or product, intentionally or unintentionally, in a manner not known to the consumer. Typically, it involves diluting or replacing a high-end, expensive product with a lower-end, cheaper one. This practice is on the rise, as premium ingredients become more expensive but remain in high demand. Economically motivated food fraud is estimated at more than $10 billion annually in the U.S. alone. In addition, manipulating or misstating ingredients in a food product may have health consequences for some consumers, such as when allergens are present. Laboratories need to test for food authenticity because consumers want confidence in the products they buy.
One approach to food authenticity testing is to monitor the molecular composition of the foodstuff with liquid or gas chromatography coupled with mass spectrometry (LC/MS or GC/MS). Traditionally, food authenticity testing has been performed by searching for one or more adulterants or impurities, which are then quantified to determine fraud. However, this only works if the adulterants are known a priori, and fraudsters can always find new adulterants to add. An increasingly common alternative is to profile small molecules, or features, in a commodity with high-resolution MS and use many of those features to indicate whether a product is adulterated. A statistical model is built from authentic food samples; when a new sample is tested, its features are compared with the model and the sample is classified into a group. Because this method does not rely on information from specific adulterants and does not even need to identify the features, it is nearly impossible for fraudsters to circumvent.
Although profiling, model building, and classification of samples sound complicated, they are becoming increasingly routine, with user-friendly workflows and software available for laboratories to begin this type of testing. But before you start your analysis, there are some concepts and best practices you should understand.
Well-defined and verified samples of food products grouped by type are critical for building statistical models. Each group should include as many individual samples as possible to capture enough sources of variability and to reduce potential non-measurement biases in the model. For example, each type of honey grouped in the model could be represented by different lots from different production sites. This reduces bias in the model toward a specific lot of honey or a specific manufacturing line and keeps the focus on the features that separate the different types of honey. You should also plan to acquire an additional set of authentic and adulterated samples to withhold from model creation and use later to test, or validate, the model.
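To make the withholding concrete, here is a minimal sketch in Python with scikit-learn, assuming placeholder sample counts, lot identifiers, and a random feature matrix; it holds out whole production lots so that none of their samples influence model building.

```python
# Hypothetical illustration: withhold validation samples by production lot
# so that no lot appears in both the model-building and validation sets.
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

rng = np.random.default_rng(0)
n_samples = 60
features = rng.normal(size=(n_samples, 500))      # placeholder feature matrix
honey_type = rng.integers(0, 3, size=n_samples)   # class label per sample (3 honey types)
lot_id = rng.integers(0, 12, size=n_samples)      # production lot per sample

# Hold out roughly 20% of the lots entirely for later validation of the model.
splitter = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
model_idx, validation_idx = next(splitter.split(features, honey_type, groups=lot_id))

print(f"{len(model_idx)} samples for model building, "
      f"{len(validation_idx)} withheld for validation")
```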
Samples must be extracted in a manner that is reproducible for the endogenous metabolites that are of interest. Try to maintain simple protocols, if possible. For example, a liquid extraction of a homogenized sample with an organic solvent is a good protocol to begin with, as this will extract the compounds of interest with few steps, avoiding the introduction of potential contamination and error. However, the complexity of some samples may require additional sample preparation. If a liquid extract is still too high in matrix for routine analysis, try altering the pH or temperature of the extraction to produce a cleaner extract before testing a solid phase extraction (SPE) approach. SPE protocols may inadvertently remove analytes of interest for the analysis or introduce too much sample handling variation for a robust model to be built.
Although other platforms can be desirable for authenticity testing, when beginning research for a model, consider a high-resolution instrument such as a quadrupole time-of-flight (Q-TOF) to ensure enough resolution to differentiate analytes and increase the specificity of the model. This instrument also allows for untargeted models that are harder to cheat than targeted models. A Q-TOF has an extended dynamic range, which is important for analyzing complex samples across a range of concentrations in a heavy sample matrix because it allows you to detect small amounts of analytes that coelute with highly abundant analytes. Also, try to avoid instruments that rely on ion-trapping capabilities; limitations in their dynamic range and ion capacity can leave critical analytes undetected in complex food matrices. Ultimately, in complex food matrices, a Q-TOF will generate the most reliable and robust data for model building and subsequent authenticity screening.
External and internal standards should be used to monitor instrument performance and help troubleshoot any acquisition issues that arise. These standards are not intended for peak area correction, but rather to monitor peak area and retention time reproducibility. During method development, mass accuracy, area counts, and retention time should be tracked and shown to be stable. Incoming data that does not meet quality standards may need to be discarded. If reliable quality characteristics are not initially achieved, sample preparation, acquisition parameters, or instrument maintenance should be reevaluated to achieve stable data acquisition.
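As a rough illustration of that tracking, the sketch below uses made-up internal-standard values to compute retention time and area %RSD and mass error in ppm, and flags the batch when assumed tolerances are exceeded; the standard, its m/z, and the limits are all hypothetical.

```python
# A minimal sketch (hypothetical values) of tracking internal-standard stability
# across injections: retention time and area %RSD, plus mass accuracy in ppm.
import numpy as np

theoretical_mz = 609.2812                              # assumed internal-standard m/z
observed_mz   = np.array([609.2815, 609.2810, 609.2818, 609.2809])
retention_min = np.array([5.42, 5.41, 5.43, 5.42])     # retention time per injection
peak_area     = np.array([1.02e6, 0.98e6, 1.05e6, 1.00e6])

def percent_rsd(x):
    """Relative standard deviation as a percentage."""
    return 100.0 * np.std(x, ddof=1) / np.mean(x)

mass_error_ppm = 1e6 * (observed_mz - theoretical_mz) / theoretical_mz

print(f"RT %RSD:   {percent_rsd(retention_min):.2f}")
print(f"Area %RSD: {percent_rsd(peak_area):.2f}")
print(f"Mass error (ppm): {np.round(mass_error_ppm, 2)}")

# Flag the batch for investigation if any metric drifts past an assumed tolerance.
if percent_rsd(peak_area) > 20 or np.any(np.abs(mass_error_ppm) > 5):
    print("QC warning: reevaluate sample prep, acquisition, or instrument maintenance.")
```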
Quality control (QC) samples need to be created from the model samples. These are pools of samples from the different groups in the model (e.g., types of honey) and a matrix pool of all the samples (e.g., all honey samples). The samples should be pooled before sample preparation, and the QCs should undergo the same sample preparation as the model samples. An adulterated QC can also be made by mixing the group QCs in known proportions. Injecting the same pooled QC sample multiple times at the beginning of development, and periodically throughout development, is advised to ensure that reproducible retention times, mass accuracies, and area counts are achieved. If they are not, it is appropriate to adjust the methodology at this stage to make those values as reproducible as possible.
Consistent and reliable methods are required to produce robust measurements for use with a model. For this purpose, MS-only data acquisition is sufficient when using high-resolution mass spectrometers. Compound identifications generally aren’t required for food authenticity modeling, but if identification is needed, MS/MS experiments can be done with a Q-TOF. The most important parameter to optimize is the acquisition rate, or scan speed, so that enough data points are collected across the chromatographic peak widths for robust integration.
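The arithmetic behind that optimization is simple; the sketch below, with an assumed peak width and scan rate and a rule-of-thumb target of roughly 10 to 15 points per peak, checks whether the acquisition rate is fast enough.

```python
# Back-of-the-envelope check (assumed numbers) that the scan speed yields enough
# data points across a chromatographic peak for robust integration.
peak_width_s = 6.0     # typical baseline peak width in seconds (assumption)
scan_rate_hz = 3.0     # MS-only acquisition rate in spectra per second (assumption)
points_target = 12     # rule-of-thumb minimum points across a peak (assumption)

points_across_peak = peak_width_s * scan_rate_hz
print(f"{points_across_peak:.0f} points across a {peak_width_s:.0f} s peak")

if points_across_peak < points_target:
    required_rate = points_target / peak_width_s
    print(f"Increase the scan rate to at least {required_rate:.1f} Hz")
```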
Diverting the flow from the mass spectrometer to the waste line is an important aspect of an acquisition method that is often overlooked. In reversed-phase LC, the first 0.5 min of the run, the high-percent-organic wash, and the equilibration portions are the dirtiest, least reproducible fractions. Diverting them to waste can go a long way toward maintaining the performance of the mass spectrometer. Moreover, features eluting at these time points can be inconsistent and are not desirable for building the model.
Capturing variation during method development is crucial to building a good model. Not only does variation in the model samples need to be captured, but so does variation in the sample preparation and data acquisition. This is accomplished by acquiring your model samples in different batches processed on different days. Additionally, if you use more than one mass spectrometer, it is important to analyze the model sample set on each system.
The model is built by evaluating the relative intensities of only those features that prove to be significantly different between the classes in the statistical analysis. Feature extraction, statistics, and model building need to be done to develop the full method before moving on to validation. The model samples go through this process as a batch of data, while unknown samples are processed individually using the developed method and the routine software MassHunter Classifier.
Features in the model samples should be extracted using a recursive extraction methodology, such as the one in Profinder, to ensure high-quality feature extraction. All the discovered features are then moved into chemometric software such as Mass Profiler Professional (MPP), where they are filtered down and a model is built. The statistics performed should result in very robust features that can resist instrument or method drift over time. Often, simple statistical methods such as a t-test and fold change are all that is needed to determine which features are significant to the groups. Using a high threshold at the fold-change step is important to remove low-abundance features, as these will likely be the least reproducible over time. Models then use only these features in a supervised fashion, based on the groups of samples known to the model. Varying the filtering and statistical analysis parameters is suggested to optimize the separation of the classes. These strategies will help you build robust, longer-lasting models, but they cannot overcome variability from experimental design, sample preparation, and data acquisition, as discussed above. Once a statistical workflow is established in MPP, these steps can all be automated and easily shared with colleagues and collaborators, which also lets you build and run models more easily, more frequently, and with fewer errors.
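For readers who want to see the underlying arithmetic of the t-test and fold-change filter outside of MPP, here is a minimal sketch with simulated data; the thresholds, sample sizes, and feature counts are arbitrary placeholders, not recommendations.

```python
# A minimal sketch (simulated data) of a per-feature t-test plus fold-change filter.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
group_a = rng.lognormal(mean=10, sigma=0.3, size=(20, 500))   # 20 samples x 500 features
group_b = rng.lognormal(mean=10, sigma=0.3, size=(20, 500))
group_b[:, :25] *= 4.0                                        # pretend 25 features truly differ

# Welch's t-test per feature on log-transformed abundances.
t_stat, p_val = stats.ttest_ind(np.log2(group_a), np.log2(group_b),
                                axis=0, equal_var=False)

# Fold change per feature (ratio of group means).
fold_change = group_b.mean(axis=0) / group_a.mean(axis=0)

# Keep only features passing both filters; the fold-change cutoff is an assumed value.
significant = (p_val < 0.01) & (np.abs(np.log2(fold_change)) > 1.0)
print(f"{significant.sum()} features retained for the model")
```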
While many model types are available, principal components analysis (PCA), partial least squares discriminant analysis (PLSDA), soft independent modelling of class analogy (SIMCA), and various types of decision trees are commonly used. In principle, the model type should be selected based on your experimental design and the desired validation outcome. Decision trees, among the simplest types of models, make a series of “if/then” statements about sample class and feature abundance. A PLSDA model gives one prediction per sample, assigned as authentic or non-authentic, along with a confidence score for the prediction. A SIMCA model gives each sample a distance score for each group rather than a confidence score; the lower the distance score, the more closely the sample resembles that group. The distance scores can also indicate whether the sample is pure or may be adulterated, and if adulterated, the distance scores of the other groups can indicate with what substance it has been adulterated. For routine use, it is best to calibrate confidence and distance values with known QCs or known authentic and adulterated samples.
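The sketch below shows the general idea of a PLS-DA-style classifier using scikit-learn's PLSRegression on one-hot class labels; it is not the vendor implementation, the data are simulated, and the confidence value is a crude, assumed proxy derived from the predicted class scores.

```python
# A hedged sketch of a PLS-DA-style classifier: PLS regression onto one-hot labels,
# then a class prediction and a rough confidence value per unknown sample.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(2)
X_model = rng.normal(size=(40, 50))          # 40 model samples x 50 features (simulated)
y_model = np.repeat([0, 1], 20)              # 0 = authentic, 1 = non-authentic
X_model[y_model == 1, :5] += 2.0             # pretend 5 features separate the classes

Y_onehot = np.eye(2)[y_model]                # one-hot encode the two classes
plsda = PLSRegression(n_components=2).fit(X_model, Y_onehot)

X_test = rng.normal(size=(3, 50))            # three unknown samples (simulated)
scores = plsda.predict(X_test)               # continuous score per class
predicted = scores.argmax(axis=1)            # one prediction per sample

# Softmax of the class scores as an assumed, illustrative confidence proxy.
exp_scores = np.exp(scores - scores.max(axis=1, keepdims=True))
confidence = exp_scores.max(axis=1) / exp_scores.sum(axis=1)

for i, (cls, conf) in enumerate(zip(predicted, confidence)):
    label = "authentic" if cls == 0 else "non-authentic"
    print(f"sample {i}: {label} (confidence ~{conf:.2f})")
```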
Validation, or rigorous testing of the model, is important for understanding the sensitivity and specificity of the model. The QC samples for each group should be processed several times and treated as test samples. Additionally, if a new set of authentic samples is procured, those can be used as test samples in the model. These pure test samples should be used to determine the confidence value of a pure sample. Similarly, adulterated QCs or authentic adulterated samples should be used to determine the confidence of classifying a sample as adulterated. Running several of these samples after your model samples will allow you to set the confidence or distance value for your model and provide a way to calculate the sensitivity and specificity of the method.
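Once the validation samples have been classified, sensitivity and specificity follow directly from the confusion counts, as in this small sketch with made-up results.

```python
# Sensitivity and specificity from validation runs (made-up classification results).
import numpy as np

# 1 = adulterated, 0 = authentic
truth      = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0])   # known status of test samples
prediction = np.array([1, 1, 1, 0, 1, 0, 0, 1, 0, 0])   # model output

true_pos  = np.sum((prediction == 1) & (truth == 1))
false_neg = np.sum((prediction == 0) & (truth == 1))
true_neg  = np.sum((prediction == 0) & (truth == 0))
false_pos = np.sum((prediction == 1) & (truth == 0))

sensitivity = true_pos / (true_pos + false_neg)   # fraction of adulterated samples caught
specificity = true_neg / (true_neg + false_pos)   # fraction of authentic samples passed
print(f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")
```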
When using the model routinely to run unknown samples, it is important to use the same acquisition method and the same feature extraction steps. An analyst can simply be given the acquisition method and analysis model and use routine software, like MassHunter Classifier, to produce the adulteration results. There is no need for the analyst to do any statistics, feature extraction, model building, or plot interpretation; the answer is given by the class label and the confidence or distance value reported in the software (see Figure 1).
Rebuilding a model is common practice in classification, and model longevity will vary from project to project as new data is gathered and new components used for adulteration are discovered. Over time, the model needs to be tested to determine whether it is still working by running pure QC samples and adulterated QC samples along with any unknown samples. If the QC samples are classified correctly, then the model is still working for known sample groups. If there is a discrepancy in the QC classification, or the confidence is out of bounds, then an investigation into the data needs to occur. In this case, the internal standards in your samples can be interrogated easily to see if a data acquisition error took place. If the internal standards look good, the model may need to be rebuilt to account for other variables in the data. Injecting authentic samples regularly throughout the batches of unknowns is a strategic move, because the model can then be rebuilt quickly and efficiently using these new model samples. The analysis likely remains the same, and the automation in MPP allows the initial analysis to be reproduced quickly on the new data.
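A routine QC check of this kind can be as simple as the hypothetical sketch below, which verifies that each QC sample is classified into its expected group with a confidence above an assumed acceptance limit set during validation; the sample names, results, and limit are illustrative only.

```python
# Hypothetical routine check: are the QC samples classified as expected,
# with confidence above an assumed acceptance limit set during validation?
qc_results = [
    {"name": "pure honey QC",        "expected": "authentic",   "predicted": "authentic",   "confidence": 0.94},
    {"name": "adulterated honey QC", "expected": "adulterated", "predicted": "adulterated", "confidence": 0.71},
]
CONFIDENCE_FLOOR = 0.80   # assumed acceptance limit (placeholder)

for qc in qc_results:
    in_bounds = qc["predicted"] == qc["expected"] and qc["confidence"] >= CONFIDENCE_FLOOR
    if in_bounds:
        print(f"{qc['name']}: within bounds.")
    else:
        print(f"{qc['name']}: out of bounds; check the internal standards first, "
              f"then consider rebuilding the model with recent authentic samples.")
```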
The need for food authenticity testing will continue to grow as adulteration becomes more prevalent and manufacturers need to protect their brands from consumer safety issues and the cost of fraud. For any lab considering getting into food authenticity testing, models must be built with an experimental design that maximizes longevity and robustness. Key components of that design are software that is not only easy to use but makes authenticity testing routine, and LC/Q-TOF instrumentation that performs reliably and robustly in difficult food matrices.
Source: www.foodqualityandsafety.com