Abstract
An approach for implementing statistical process control and other statistical methods as a cost-savings measure in the treated-wood industries is outlined. The purpose of the study is to use industry data to improve understanding of the application of continuous improvement methods. Variation in wood treatment is a cost when higher-than-necessary chemical retention targets are required to meet specifications. The data for this study were obtained in confidence from the American Lumber Standard Committee and were paired, normalized assay retentions for charges inspected by both the treating facility and auditing agencies. Capability analyses were developed from these data for three use categories established by the American Wood Protection Association (AWPA): UC3B (above ground, exterior), UC4A (ground contact, freshwater, general use), and UC4B (ground contact, freshwater, critical structures, or high decay hazard zones). Agency and industry data indicate that between 4.45 and 9.82 percent of the charges were below the lower confidence limit of the passing standard (LCLAWPA), depending on use category. A Taguchi loss function (TLF), which is quadratic and decomposes the monetary loss into shift and variation components, was developed to estimate the additional cost due to process variation. For example, if a treatment input cost of $1.00/ft³ is assumed for UC3B, reducing the variation in total retention allows lowering treatment targets, e.g., from 1.45 to 1.38, reducing costs to $0.76/ft³. The study provides some important continuous improvement tools for this industry, such as control charts, the Cpk and Cpm capability indices, and the one-sided TLF.
Currently, wood treatment facilities must meet minimum passing standards for wood preservative penetration and retention of treated wood (as defined by the American Wood Protection Association [AWPA] and governed by the American Lumber Standard Committee [ALSC]). This article focuses only on the preservative retention aspect of quality control for treated lumber, not penetration. Treating facilities determine the retention of each charge by removing 20 or more increment cores from different pieces in the charge ("batch") and combining them to obtain a single composite assay sample. The preservative content (retention) value of this sample must be equal to or greater than the minimum for that product, as stated in the relevant standard. Third-party agencies sample treatment charges in the same way, but the key metric used by third-party agencies evaluating retention compliance over a range of charges is the lower confidence limit (LCLAWPA) as described by the AWPA M22-18 standard (AWPA 2019). The LCLAWPA is the lower confidence limit of the median retention of recent charges calculated using a one-tailed 95 percent critical value. This LCL is compared with the standardized minimum retention. This standard LCL or "minimum specification" is derived as a statistical lower bound assuming the standard normal distribution and is based on the theory of parametric confidence intervals (i.e., x̄ ± zα(s/√n)). For the typical monitoring situation in the standard (AWPA M22), the previous 20 samples are considered and a small-sample adjustment is used (t(α = 0.05, n − 1 = 19) = 1.729) for a one-sided bound, which provides an indication of whether the typical, immediately preceding production is above specification. As more samples are included using this small-sample interval adjustment, the long-term behavior of the LCLAWPA standard derived using a Z-score of 1.729 should contain a cumulative probability of p(Z) = 0.9581 above the LCLAWPA, assuming a normal data distribution. LCLAWPA values derived from confidence limits used for enumerative studies will be narrower than those of prediction intervals, which are common for process or analytical studies. Prediction intervals are wider than confidence intervals because they incorporate process variation for future sampling (Deming 1975, Hahn 1995). Resampling of retention values after a failure can result in a nonnormal distribution if the resampled values are included in the original data. Resampled retention values should be maintained in a separate data file and flagged as resamples. This will avoid artificially skewing the distribution underlying the determination of the LCLAWPA.
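For readers who want to experiment with this calculation, the following is a minimal Python sketch of the one-sided bound described above, assuming the previous 20 charges are pooled; the function name and simulated data are hypothetical, and the AWPA M22 standard remains the authoritative procedure.

```python
import numpy as np
from scipy import stats

def lcl_awpa(retentions, alpha=0.05):
    """One-sided lower bound of the form x_bar - t(alpha, n-1) * s / sqrt(n),
    as described for the LCLAWPA monitoring situation (typically n = 20)."""
    x = np.asarray(retentions, dtype=float)
    n = x.size
    t = stats.t.ppf(1 - alpha, n - 1)  # t(0.05, 19) = 1.729 for n = 20
    return x.mean() - t * x.std(ddof=1) / np.sqrt(n)

# Hypothetical normalized assay retentions for the previous 20 charges
rng = np.random.default_rng(7)
charges = rng.normal(loc=1.45, scale=0.28, size=20)
print(f"LCL_AWPA = {lcl_awpa(charges):.3f}")  # compare with the normalized minimum
```

The resulting bound is then compared with the standardized minimum retention; values below it would trigger the compliance review described above.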
There are approximately 140 plants and roughly 700 active production categories in the treated-wood industry that are monitored by inspection agencies using this LCLAWPA standard (Vlosky 2009). The LCLAWPA standard and producer metrics of performance are used as quality-control techniques for adhering to a conformance standard, and do not necessarily promote continuous improvement and variation reduction (e.g., statistical process control [SPC]). This makes the use of the LCLAWPA a conformance test and creates a treatment process that is reactive to problems but not preventative. This conformance test and reactive actions, such as the retreatment of charges, may result in additional costs.
SPC methods can benefit manufacturing industries by identifying common sources of variation that influence product quality and by promoting proactive actions for continuous improvement. The fundamental premise of continuous improvement is the reduction of product and process variation. The goal of this study was to quantify the natural variation (also called common-cause variation) and the special-cause variation associated with the measure of average retention for treated residential lumber. This article builds upon the study by Young et al. (2017) by providing a more detailed assessment of distribution fitting and variability analyses, and provides a monetary assessment of cost using the Taguchi loss function (TLF). The purpose of the study is also to highlight statistically based approaches in manufacturing that can help producers reduce risk from warranty claims, reduce rework, and reduce costs. These methods are necessary to improve the short-term competitiveness of the industries and are crucial to ensuring a viable, sustainable strategy for long-term success.
Historical Perspective
Improving product quality and reducing sources of process variation that lead to unnecessary costs are common goals for many companies. As Lawless et al. (1999) note, "Fundamental to the improvement process of reducing product and process variations is to first quantify variations[…]"; also see Young (2008) and Young and Winistorfer (1999). Many statistical methods exist for quality improvement through the quantification and understanding of sources of variation (Hahn 1995, Lawless et al. 1999, Woodall 2000). However, Deming (1975) urges the distinction between enumerative and analytical studies. Enumerative studies deal with characterizing an existing, finite, unchanging target population by sampling from a well-defined frame, e.g., analysis of variance, confidence intervals, etc. (Hahn 1995). In contrast, the analytical studies more frequently encountered in industrial applications focus on action that is taken on a process or system, the aim being to improve the process in the future, e.g., statistical prediction intervals, control charts, etc. (Deming 1975, Hahn 1995). SPC uses control charting and other statistical methods to define improvement of the process and final product quality (Stoumbos et al. 2000, Woodall 2000, Young 2008). As Deming (1975) articulated, "Predicting short-term process outcomes is a powerful aspect of the control chart and SPC." Shewhart control charts (Shewhart 1931) have been used extensively for over 50 years. However, as noted by Stoumbos et al. (2000), "The diffusion of research to application is sometimes slow." Following Deming's study characterization, we view this study as providing an initial evaluation of statistical analytical methods that can lead to process improvement and improved product quality.
An extensive review of the published literature did not reveal any studies that document the application of SPC and other statistical methods as improvement tools for the treated-wood industries. Even though there is considerable literature on the application of SPC for the lumber industry (Brown 1982; Maness et al. 2002, 2003; Young et al. 2007), this study addresses a noteworthy gap in the literature. The SPC Handbook for the Treated Wood Industries by Young et al. (2019) provides more detailed information about implementing SPC.
Material and Methods
Data sets
The data for this study were from the period 2014 to 2016 and were obtained in confidence from the ALSC auditing agencies accredited for treating facilities. The agencies provided paired assay retentions for charges that had been inspected and measured by both the treating facility and the auditing agencies. The assay retentions were normalized to protect confidentiality. “Paired” in this study is defined as matched charges of industry- and association-tested wood. Retention was rescaled to the AWPA standard retention to protect confidentiality but maintain variation, i.e., the analyses were performed by use category and did not reveal any source (chemical type). The three use categories with the largest N in the data set were analyzed; this includes UC3B (above ground, exterior, exposed, or poor water runoff, general use), UC4A (ground contact or freshwater, general use), and UC4B (ground contact or freshwater, used in critical structures or high decay hazard zones). The sample sizes of the three use categories were: (1) N = 4,259 records for UC3B; (2) N = 2,942 records for UC4A; (3) N = 196 records for UC4B.
Estimating the probability density functions
A probability density function (pdf) is used to specify the probability that a random observation associated with a random variable (e.g., total retention) will fall within a specified range of values. For example, for a random observation from the standard normal pdf, N(0,1), the probability of it falling between −3 and 3 is 0.9973, the probability of it falling between −2 and 2 is 0.9545, and the probability of an observation falling below (or, alternatively, falling above) a sample mean is 0.5. Information criteria can be used to compare potential pdfs that may represent a sample's underlying distribution (Anderson 2008). Two criteria, the Akaike information criterion (AIC) and the Bayesian information criterion (BIC), were used to assess the appropriateness of various pdfs as selected from those commonly seen in industrial settings. The AIC (Akaike 1974) is:
AIC = 2k − 2ln(L̂) (1)
where L̂ is the maximized value of the likelihood function of the pdf model and k is the number of free parameters to be estimated. The preferred model is the one with the minimum AIC. AIC rewards goodness of fit but it also includes a penalty that is an increasing function of the number of estimated parameters. A second-order version that adjusts for sample sizes (n = number of observations), AICc, is a common output in statistical software, and can be calculated from AIC (Anderson 2008):
AICc = AIC + 2k(k + 1)/(n − k − 1) (2)
The BIC (Findley 1991) or Schwarz criterion (also known as the SBC or SBIC) value is:
BIC = k ln(n) − 2ln(L̂) (3)
where L̂, n, and k are as defined previously. Similarly to AIC, the preferred model is the one with the minimum BIC. The information criteria and estimated parameters for each pdf were obtained as part of the maximum likelihood procedures using JMP software (JMP version 14 2019). Ranking and calculation of Akaike weights (model probabilities) from the information criteria values help differentiate the hypothesized pdfs (Anderson 2008).
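As an illustration of this model-selection workflow, the following Python sketch fits several candidate pdfs by maximum likelihood and ranks them by AICc, BIC, and Akaike weights. The article used JMP for these computations; this scipy-based version and its simulated data are assumptions for demonstration only.

```python
import numpy as np
from scipy import stats

# Simulated stand-in for normalized retention (the study's data are confidential)
rng = np.random.default_rng(42)
data = stats.fisk.rvs(c=8.0, scale=1.4, size=2000, random_state=rng)
n = data.size

candidates = {"loglogistic": stats.fisk, "logistic": stats.logistic,
              "normal": stats.norm, "lognormal": stats.lognorm}

results = []
for name, dist in candidates.items():
    params = dist.fit(data)                      # maximum likelihood estimates
    loglik = np.sum(dist.logpdf(data, *params))  # ln(L-hat)
    k = len(params)                              # free parameters (incl. loc/scale)
    aic = 2 * k - 2 * loglik                     # Eq. 1
    aicc = aic + 2 * k * (k + 1) / (n - k - 1)   # Eq. 2
    bic = k * np.log(n) - 2 * loglik             # Eq. 3
    results.append((name, aicc, bic))

results.sort(key=lambda r: r[1])                 # rank by AICc (smaller is better)
delta = np.array([r[1] for r in results]) - results[0][1]
weights = np.exp(-delta / 2) / np.exp(-delta / 2).sum()  # Akaike weights
for (name, aicc, bic), w in zip(results, weights):
    print(f"{name:12s} AICc={aicc:9.1f} BIC={bic:9.1f} weight={w:.3f}")
```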
Control charting methods
Individuals and moving range charts were used to quantify the natural variation (or common-cause variation) and detect special-cause variation (or "events") of the retention values (Shewhart 1931). The Shewhart control chart is based on the theory of the statistical prediction interval for application in processes (i.e., analytical studies). The control chart is a simple but powerful tool that not only distinguishes between the two types of variation but also provides a temporal graphic of the state of the process and is very helpful in detecting shifts in the process that may cause the manufacture of defective product. The Shewhart control chart general form is:
X̄ ± 3s (4)
where X̄ is the process average and s is the process standard deviation (Fig. 1). Ideally, assuming an underlying normal distribution, this interval would contain 99.73 percent of the process values. Given that s is a biased estimator for the population standard deviation (σ), the unbiased estimator of σ is used: σ̂ = MR̄/d₂, where MR̄ = (Σ|xᵢ − xᵢ₋₁|)/(n − 1), with the sum taken over i = 2, …, n, and d₂ = 1.128 for a subgroup size of two for estimating a moving range value. Therefore, Equation 4 reduces to:
X̄ ± 2.66(MR̄) (5)
where X̄ is the process average and MR̄ is the average moving range. The LCL = X̄ − 2.66(MR̄) and the upper control limit (UCL) = X̄ + 2.66(MR̄). The moving range chart offers further assessment of process variability. The center line is given by MR̄, and the control limits reduce to LCL = 0 and UCL = 3.267(MR̄) for subgroups of size two (Montgomery 2012).
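A minimal sketch of these individuals and moving range (I-MR) chart limits is given below with hypothetical charge data; the constants 2.66 and 3.267 follow from d₂ = 1.128 and subgroups of size two as derived above.

```python
import numpy as np

def imr_limits(x):
    """Individuals and moving-range chart limits (subgroup size two)."""
    x = np.asarray(x, dtype=float)
    mr = np.abs(np.diff(x))       # moving ranges |x_i - x_(i-1)|
    mr_bar = mr.mean()            # average moving range (MR-bar)
    x_bar = x.mean()              # process average
    return {
        "I chart":  (x_bar - 2.66 * mr_bar, x_bar, x_bar + 2.66 * mr_bar),
        "MR chart": (0.0, mr_bar, 3.267 * mr_bar),
    }

# Hypothetical normalized charge retentions
rng = np.random.default_rng(3)
retention = rng.normal(1.45, 0.28, size=50)
limits = imr_limits(retention)
for chart, (lcl, center, ucl) in limits.items():
    print(f"{chart}: LCL={lcl:.3f} CL={center:.3f} UCL={ucl:.3f}")

# Points outside the I-chart limits are candidate special-cause events
lcl, _, ucl = limits["I chart"]
print("out-of-control charges:", np.where((retention < lcl) | (retention > ucl))[0])
```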
Capability analyses
Capability analyses assess the potential for product conformance to specifications by comparing the natural tolerances (NT) of the product with the engineering tolerances (ET), i.e., ET = upper specification limit (USL) − lower specification limit (LSL) and NT = 6s, where s is the process standard deviation. Specification limits are typically established externally to the process and are not a mathematical function of the control limits, although capability analyses are most useful when the data are in a state of statistical control, i.e., data are within the upper and lower control limits as defined in Equation 5. Capability analyses are summarized by indices and indicate if a product is capable of meeting the desired specifications. The most common capability indices are:
Cp = (USL − LSL)/(6σ̂) (6)
Cpk = min(Cpl, Cpu) = min[(X̄ − LSL)/(3σ̂), (USL − X̄)/(3σ̂)] (7)
and the Taguchi index below (Taguchi et al. 2005) is recognized by many (Boyles 1991, Taguchi 1993),
Cpm = (USL − LSL)/(6√(σ̂² + (X̄ − T)²)) (8)
where T = target. Cp does not accommodate a process that is not centered between the LSL and USL; thus, the other indices were introduced to better indicate process performance in these types of situations. Cp, Cpk, and Cpm compare engineering tolerances to short-term, or within-, process variation. Note that the process performance indices (Pp, Ppk = min(Ppl, Ppu), Ppm) are similar to those of Equations 6, 7, and 8, where s is used instead of MR̄/d₂ (i.e., s represents long-term or overall variation). Only values for the Cp, Cpk, and Cpm are discussed in this article. A simulation is presented from the results of the capability analyses to estimate chemical dosing target changes necessary for 100 percent conformance to the LCLAWPA standard. Even though 100 percent conformance may not be achievable in the short term, the simulation highlights the importance of reducing variation to sustain business competitiveness.
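The indices above can be computed as in the following sketch, which distinguishes the short-term estimate σ̂ = MR̄/d₂ used for the Cp-type indices from the overall s used for the Pp-type indices; the data and the normalized one-sided specification (LSL = 1.0) are hypothetical.

```python
import numpy as np

def capability(x, lsl, usl=None, target=None):
    """Capability (short-term sigma = MR-bar/d2) and performance (overall s) indices."""
    x = np.asarray(x, dtype=float)
    xbar = x.mean()
    sigma_w = np.abs(np.diff(x)).mean() / 1.128   # within: MR-bar / d2
    s = x.std(ddof=1)                             # overall standard deviation
    out = {"Cpl": (xbar - lsl) / (3 * sigma_w),
           "Ppl": (xbar - lsl) / (3 * s)}
    if usl is not None:
        out["Cp"] = (usl - lsl) / (6 * sigma_w)                     # Eq. 6
        out["Cpk"] = min(out["Cpl"], (usl - xbar) / (3 * sigma_w))  # Eq. 7
        if target is not None:
            out["Cpm"] = (usl - lsl) / (
                6 * np.sqrt(sigma_w**2 + (xbar - target)**2))       # Eq. 8
    return out

# One-sided case mirroring the LCLAWPA situation (normalized LSL = 1.0)
rng = np.random.default_rng(11)
retention = rng.normal(1.45, 0.28, size=500)
print(capability(retention, lsl=1.0))
```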
Taguchi loss function
The TLF quantifies the monetary loss incurred by variation in the product. The economic loss for treated wood is a function of the amount of extra chemical treatment used (target treatment level above specification) and the amount of time for retreatment (if it has been determined that a charge has treated below specification). Undertreatment may represent a higher monetary loss (warranty claims) than overtreatment (additional chemical costs). Both represent direct variable costs due to poor quality and are influenced by the variability in the process and raw material. Taguchi et al. (2005) developed a two-sided loss function "nominal-the-best," where the target is centered within a specification range, which estimates economic loss for a quality attribute that has both lower and upper specifications, e.g., chemical retention (Fig. 2). Taguchi's nominal-the-best loss function is:
L(y) = k(y − m)² (9)
where L is the economic loss; k is the cost constant, k = A₀/(SL − m)²; A₀ is the cost of operating at a specification limit, SL; m is the operational target value of the quality characteristic (e.g., retention); and y₀ is the actual value at the SL (e.g., 0.15 lb/ft³). Nutek Inc. (2014) illustrates an approach to estimating A₀. Taguchi et al. (2005) also developed a one-sided loss function, "smaller-the-better," with only one lower or upper specification (e.g., the desired value of retention percentage should be as small as possible near the LCLAWPA standard; see Fig. 3). Taguchi's smaller-the-better loss function is:
L(y) = ky² (10)
Operational targets can be reduced under the smaller-the-better TLF only if the variance of the process or product (e.g., total retention) is first reduced (Young et al. 2015). Most producers run the smallest possible target to minimize cost, but they must also avoid producing product below the LCLAWPA standard, which adds lost time due to retreatment and increases the risk of warranty claims. Further asymmetric or discontinuous loss functions are possible that can address unbalanced costs associated with nonequidistant specification limits (e.g., Metzner et al. 2019), but these are not explored at this time given that the LCLAWPA is a one-sided LSL under the assumption of a smaller-the-better TLF.
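The following sketch illustrates the two loss functions and the decomposition of expected quadratic loss into shift and variation components; the cost constant, specification limit, and target values are hypothetical and are not the Table 2 inputs.

```python
import numpy as np

def loss_nominal(y, m, k):
    """Nominal-the-best TLF (Eq. 9): L(y) = k (y - m)^2, zero loss at target m."""
    return k * (np.asarray(y, dtype=float) - m) ** 2

def loss_smaller(y, k):
    """Smaller-the-better TLF (Eq. 10): L(y) = k y^2."""
    return k * np.asarray(y, dtype=float) ** 2

def expected_loss(mu, sigma, m, k):
    """For quadratic loss, E[L] = k[(mu - m)^2 + sigma^2]:
    a shift component plus a variation component."""
    return k * ((mu - m) ** 2 + sigma ** 2)

# Hypothetical inputs: A0 = cost of operating at the specification limit SL
A0, SL, m = 0.50, 1.55, 1.45
k = A0 / (SL - m) ** 2
for sigma in (0.277, 0.263):
    print(f"sigma={sigma:.3f}: E[L] = ${expected_loss(1.45, sigma, m, k):.2f}/unit")
```

Because the shift and variation components are additive, reducing either one lowers the expected loss; the variation component is the lever emphasized in this study.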
Results
Probability density functions
There were distinct differences between the industry and agency values of normalized retention: the agencies recorded fewer below-specification measurements but more measurements just above specification, and the agencies also recorded lower values at the higher end of the measurement scale, except at the extreme (Fig. 4). A quantile–quantile plot of the industry and agency values of normalized retention reveals some distinct differences in retention values >3 (Fig. 5). Industry treating plants calibrate their instruments to agency standards; however, it is plausible that additional variability is introduced at the plants that is not incorporated in measurements made by the regulating agency instruments. Each plant presumably has its own operators and instruments, whereas the agencies have fewer total operators/instruments. There could also be differences in the collection or grinding of samples in preparation for the measurement process. The differences may be due to the resampling without replacement that occurs with the industry samples when the first, second, or third sample falls below the LCLAWPA standard and additional samples are then taken from the same batch of treated wood. Young (2012) highlighted that skewed distributions can occur from resampling in the engineered-wood industries. The difference may also be due to batches of treated lumber that were retreated and whose original measurements were not maintained at the plant.
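A quantile-by-quantile comparison of the paired data sets, as summarized in Figure 5, could be computed along the following lines; the simulated industry and agency samples here are stand-ins for the confidential data.

```python
import numpy as np

# Simulated stand-ins for paired, normalized retentions
rng = np.random.default_rng(5)
industry = rng.lognormal(mean=0.37, sigma=0.19, size=3000)
agency = rng.lognormal(mean=0.36, sigma=0.17, size=3000)

# Matched quantiles; large departures from equality flag systematic
# measurement differences between the two sources
probs = np.linspace(0.01, 0.99, 99)
q_ind = np.quantile(industry, probs)
q_agy = np.quantile(agency, probs)
for p, a, b in zip(probs[::24], q_ind[::24], q_agy[::24]):
    print(f"p={p:.2f}  industry={a:.3f}  agency={b:.3f}  diff={a - b:+.3f}")
```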
On the basis of the minimum AICc and BIC values, the best pdf for total retention for each of the use categories UC3B, UC4A, and UC4B was the loglogistic (or Fisk) distribution; see Table 1. For UC4B, the loglogistic had the lowest AICc and BIC values, but on the basis of Akaike weights, there is also supporting evidence for the logistic pdf. The loglogistic tends to be skewed, whereas the logistic is symmetric but heavier tailed than the normal distribution. For each use category, the commonly assumed normal, or Gaussian, pdf was ranked low among the nine different distributions that were tested. Depending on the goal of a project, methods robust to deviations from the normality assumption may need to be considered for any statistical analyses, including grouping data into subgroups if appropriate. It should be kept in mind that the data are a mixture across manufacturers, preservatives, and product sizes, and the resulting industry-wide distribution could be a result of the amalgamation of heterogeneous distributions. This type of data mixture was noted by Zeng et al. (2016) for the wood composites industries, but as previously noted, the literature does not document this for the treated-wood industries.
The loglogistic pdf has been shown to arise as a result of mixture distributions and is useful in modeling survival data (Crowder et al. 1991); for example, it is applicable in modeling situations where the rate at which something is occurring increases initially and then after some time begins to decrease (Al-Shomrani et al. 2016). It may reflect that the left tail of the distribution drops more abruptly because of retreatment of under-retention charges. Selecting the best-fitting pdf for total retention is important when establishing useful standards, which are typically derived by applying parametric estimates and confidence intervals to an industrial data set. For example, this is illustrated by comparing the 5th percentile estimates across the pdfs. The failure probabilities at the LCLAWPA by pdf are distinctly different and illustrate the usefulness of fitting the appropriate pdf to the data; this is especially important when developing accurate standards for producers. Use categories UC4A and UC4B have higher failure probabilities relative to UC3B, which can be useful for directing continuous improvement efforts toward reducing variation to lower the failure probability at the LCLAWPA. It is important to note that for these data a normal pdf has higher failure probabilities relative to the loglogistic.
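The comparison of failure probabilities and lower percentiles across fitted pdfs can be sketched as follows; the simulated data and the normalized LCLAWPA of 1.0 are assumptions for illustration.

```python
import numpy as np
from scipy import stats

# Simulated stand-in for normalized retention; normalized LCLAWPA = 1.0
rng = np.random.default_rng(9)
data = stats.fisk.rvs(c=8.0, scale=1.4, size=3000, random_state=rng)
lcl = 1.0

for name, dist in [("loglogistic", stats.fisk),
                   ("logistic", stats.logistic),
                   ("normal", stats.norm)]:
    params = dist.fit(data)              # maximum likelihood fit
    p_fail = dist.cdf(lcl, *params)      # probability mass below the LCL
    q05 = dist.ppf(0.05, *params)        # 5th percentile estimate
    print(f"{name:12s} P(X < LCL) = {p_fail:.4f}  5th percentile = {q05:.3f}")
```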
Quantifying the process variation
Control charts were developed for use category UC3B to illustrate the different signals from control charts for long-term and short-term process variation over time (see examples in Figs. 6 and 7). Long-term variation may typify the variation experienced across the broader consumer markets, whereas short-term variation may exemplify the variation experienced by a more regionalized or local market group. Eliminating or reducing special-cause variation is typically the starting point of any continuous improvement effort, where root-cause analyses should reveal the events, e.g., shift change, startup from downtime, sensor failure, etc., that are not part of the normally expected system variation. The process in the short term is predictable, illustrating the usefulness of the control chart, whereas the process in the long term is not predictable when special-cause variations and statistical runs are occurring. Control charting is an important first step for the treated-wood industry to quantify variation, identify special-cause variation, and prevent both overtreatment and the need for retreatment.
Process capability
Capability analyses were performed for the three use categories to assess the capability of the total retention samples relative to the LCLAWPA standard. Since the LCLAWPA standard is defined in quality management as a one-sided lower specification (LSL = LCLAWPA), the Cpl index is used to determine capability; see Equation 7. A capability index value of 1 or greater indicates that the process meets specifications. For the agency data set, the indices by use category were: UC3B, Cpl = 0.807 (5.91% out-of-specification); UC4A, Cpl = 0.723 (4.45% out-of-specification); and UC4B, Cpl = 0.587 (8.26% out-of-specification). For the industry data set, the indices by use category were: UC3B, Cpl = 0.937 (5.16% out-of-specification); UC4A, Cpl = 0.802 (4.44% out-of-specification); and UC4B, Cpl = 0.694 (9.82% out-of-specification); see Figures 8, 9, and 10, respectively. Montgomery (2012) indicated that an acceptable Cpk for a one-sided limit (e.g., the LCLAWPA standard) is Cpk ≥ 1.25. Harry and Schroeder (2000) provided an example for two-sided limits (e.g., moisture content) where Cpk ≥ 1.33 is an acceptable standard to achieve "Six Sigma" quality. The Cpk indices developed for the use categories in this study illustrate a significant gap relative to the benchmark noted by Harry and Schroeder (2000).
The differences in the Cpl indices and the percent out-of-specification between the industry and agency data are due to the differences in the variance estimates. Variation displayed as StDev in Figures 8, 9, and 10 is the overall standard deviation that is used for the process performance indices, PPL (=Ppl) and PPK (=Ppk), given the large sample sizes, and is represented in the normal curve as a solid line; CPL (=Cpl) and Cpk use MR̄/d₂ to estimate short-term or within-group variation (agency or industry) and are represented in the normal curve as a dotted line.
Capability analyses are a useful tool for estimating the required shift in operating target or process mean to attain essentially 100 percent conformance. As an example, a shift in the process mean of normalized retention to 1.853 (∼30% increase from 1.435) for the normalized agency data set for use category UC3B would result in 100 percent conformance to LCLAWPA standard (Fig. 11). A shift in the process mean to 1.698 (27% increase) for the normalized agency data set for use category UC4B would result in 100 percent conformance to LCLAWPA (Fig. 12). Although this shift would ensure consistent adherence to the AWPA standard (LCLAWPA), such a shift is not an appropriate long-term strategy for business competitiveness; such an increase in the chemical additive target would greatly increase costs. An increase in target retention would also have other detrimental effects such as higher leaching amounts, increased disposal, etc. Capability analyses are an essential early step to assess NT relative to ET. As Ohno (1988) noted, “Where there is no standard there can be no Kaizen (improvement).”
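A sketch of this conformance simulation is shown below. Because an unbounded pdf never yields strictly 100 percent conformance, a small allowable fraction q below the LCL is specified; the normal assumption and parameter values are illustrative, not the fitted values behind Figures 11 and 12.

```python
import numpy as np
from scipy import stats

sigma, lcl = 0.28, 1.0   # hypothetical process sigma and normalized LCL
for mu in (1.435, 1.60, 1.853):
    p_below = stats.norm.cdf(lcl, loc=mu, scale=sigma)
    print(f"target mu = {mu:.3f}: P(below LCL) = {p_below:.5f}")

# Solve directly: the mean needed so that only a fraction q falls below the LCL
q = 1e-3
mu_req = lcl - sigma * stats.norm.ppf(q)
print(f"mean required for P(below LCL) = {q}: {mu_req:.3f}")
```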
Taguchi loss function
The additional costs from variation were estimated using the one-sided TLF for the three use categories (Table 2). The operational targets used in Table 2 were equated with the process average, and the distance from the average to the LCLAWPA standard is a function of the size of s, i.e., the larger the s, the wider the target window. The initial s and X̄ = Target in Table 2 (upper cells highlighted in bold) were calculated from the original data set values for the three use categories. The TLF costs illustrate that for all three use categories, substantial savings can be attained by focusing on variation reduction; see Metzner et al. (2019). For example, lowering the treatment target for UC3B from 1.45 to 1.38, given a variation reduction of 5 percent from s = 0.277 to s = 0.263, results in a cost savings of 24 percent. If s can be reduced further to s = 0.249, a cost savings of 44 percent occurs. The same improvement scenarios are applicable to both the UC4A and UC4B use categories (Table 2).
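The UC3B example can be approximately reproduced by assuming the loss is quadratic in the distance of the process from the normalized LCLAWPA of 1.0, so that relative cost is proportional to (μ − LCL)² + σ²; this cost model is an assumption made here for illustration and is not necessarily the exact computation behind Table 2.

```python
def relative_cost(mu, sigma, lcl=1.0):
    """Quadratic (Taguchi-type) loss measured from the normalized LCL;
    the proportionality constant cancels when comparing scenarios."""
    return (mu - lcl) ** 2 + sigma ** 2

base = relative_cost(1.45, 0.277)                  # current UC3B target and s
for mu, s in [(1.45, 0.277), (1.38, 0.263)]:
    cost = 1.00 * relative_cost(mu, s) / base      # scaled to a $1.00/ft^3 baseline
    print(f"target={mu:.2f}, s={s:.3f}: ${cost:.2f}/ft^3 "
          f"({100 * (1 - cost):.0f}% savings)")
```

Under these assumptions the second scenario prints $0.76/ft³ (approximately 24 percent savings), consistent with the figures quoted in the abstract.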
A treatment producer does not necessarily control all input costs. The costs of chemicals and wood are dictated by market conditions and a producer's volume. However, the producer's continuous improvement efforts are under management's control, and influence the variation at the plant. Some things the treater can control that might influence variation include source of supply (treatability can vary by geographic source), moisture content (verifying proper drying), grouping by similar dimension, and pressure-treatment parameters. The TLF used in this study illustrates the economic justification for dedicating resources at the plant level toward variation reduction and continuous improvement.
Conclusions
This study provides an example of applying SPC tools to wood-treatment industry data to identify strategies for increasing standard conformance and lowering production costs. A large paired data set of normalized assay retentions for charges from industry and agency samples indicated that the best-fitting distribution was the loglogistic pdf. This may have implications when using methods with strict normality assumptions and may be important for agencies when establishing accurate standards based on the lower quantiles of a distribution. The capability analyses indicated that the agency and plant data had, respectively, 4.45 to 8.26 percent and 4.44 to 9.82 percent of charges below the LCL of the passing standard (LCLAWPA), depending on use category. A TLF was used to estimate the additional costs due to process variation. The TLF illustrated that if a focus on variation reduction led to operational retention target reductions, substantial cost savings could be realized.
The study addresses a research gap in documenting the application of SPC and other statistical methods as improvement schemes for the treated-wood industries. Application of the methods outlined in this article will depend on the individual company and on strategies that include continuous improvement. Control charts are a straightforward tool for implementation at the operations level of the plant; they allow operators to monitor the stability of the process and provide useful alerts for process instability and unanticipated events. Capability indices such as Cpk and Cpm are useful methods for quality managers to assess improvement relative to specifications. The TLF is an accepted method for quality management and senior executives to quantify the cost of poor quality due to variation. A useful handbook for implementing SPC was developed as part of this study, as cited earlier, and is a possible template for the treated-wood industry. An SPC workshop was conducted at the 2018 AWPA annual meeting, and more are anticipated in the future. It is feasible that affordable customized software will be developed for this industry that includes control charting, TLF, and other continuous improvement tools.
Contributor Notes
The authors are, respectively, Professor, Dept. of Forestry, Wildlife and Fisheries, Center for Renewable Carbon, Univ. of Tennessee, Knoxville (tmyoung1@utk.edu [corresponding author]); Mathematical Statistician and Research Forest Products Technologist, USDA Forest Serv. Forest Products Lab., Madison, Wisconsin (patricia.k.lebow@usda.gov, stan.lebow@usda.gov); and Professor, Dept. of Forestry, Wildlife and Fisheries, Univ. of Tennessee, Knoxville (mtaylo29@utk.edu). This paper was received for publication in December 2019. Article no. 19-00067.