Your quick reference to statistical process control for manufacturing quality management systems.
Any member of a family of continuous probability distributions that arises when estimating the mean of a normally distributed population in situations where the sample size is small and the population standard deviation is unknown.
An action taken to compensate for variation within the control limits of a stable system. Tampering increases (rather than decreases) variation, as in the case of Over Control.
The maximum and minimum limit values a product can have and still meet customer requirements.
The graphical representation of a variable’s tendency, over time, to increase, decrease, or remain unchanged.
A control chart in which the deviation of the subgroup average, X-bar, from an expected trend in the process level is used to evaluate the stability of a process.
An incorrect decision to reject something (such as a statistical hypothesis or a lot of products) when it is acceptable.
An incorrect decision to accept something when it is unacceptable.
Count-per-unit chart.
An object for which a measurement or observation can be made. Commonly used in the sense of a unit of product or piece: the entity inspected to determine whether it is defective or non-defective.
Control limit for points above the central line in a control chart.
Measurement information. Control charts based on variable data include average (X-bar) chart, range (R) chart, and sample standard deviation (or s) chart.
In probability theory and statistics, variance is the expectation of the squared deviation of a random variable from its mean. Informally, it measures how far a set of (random) numbers is spread out from its average value.
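In symbols, for a random variable $X$ with mean $\mu$:

$$\operatorname{Var}(X) = E\left[(X - \mu)^2\right]$$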
A change in data, characteristic or function caused by one of four factors: special causes, common causes, tampering, or structural variation.
Named after Swedish mathematician Waloddi Weibull, the Weibull Distribution is a continuous probability distribution. Commonly used to assess product reliability, analyze life data, and model failure times.
A control chart used for processes in which individual measurements of the process are plotted for analysis. Also called an Individuals chart or I-chart.
A control chart used for processes in which the averages of subgroups of process data are plotted for analysis.
A management tool aimed at the reduction of defects through prevention. Directed at motivating people to prevent mistakes by developing a constant, conscious desire to do their job right the first time. Developed by quality expert Philip B. Crosby.
ANSI/ASQ Z1.4-2003 (R2013): Sampling Procedures and Tables for Inspection by Attributes is an acceptance sampling system to be used with switching rules on a continuing stream of lots for the acceptance quality limit (AQL) specified.
ANSI/ASQ Z1.9-2003 (R2013): Sampling Procedures and Tables for Inspection by Variables for Percent Nonconforming is an acceptance sampling system to be used on a continuing stream of lots for the AQL specified.
To begin evaluating the type of variation in a process, one must evaluate distributions of data, as Deming plotted the drop results in his Funnel Experiment. The best way to visualize the distribution of results coming from a process is through histograms. A histogram is a frequency distribution that graphically shows the number of times each given measured value occurs. These histograms show basic process output information, such as the central location, the width, and the shape(s) of the data spread.
There are three measures of a histogram’s central location, or tendency: the mean (the arithmetic average), the median (the middle value), and the mode (the most frequently occurring value).
When compared, these measures show how data are grouped around a center, thus describing the central tendency of the data. When a distribution is exactly symmetrical, the mean, mode and median are equal.
To estimate a population mean, use the following equation:
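$$\bar{x} = \frac{\sum_{i=1}^{n} x_i}{n}$$

where $\bar{x}$ is the sample mean, $x_i$ is each individual value, and $n$ is the sample size.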
The two basic measures of spread are the range (the difference between the highest value and the lowest value in the sample) and the standard deviation (a measure of the typical distance individual values fall from the distribution’s mean). A large range or a high standard deviation indicates more dispersion, or variation of values, within the sample set.
To estimate the standard deviation of a population, use the following equation:
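$$s = \sqrt{\frac{\sum_{i=1}^{n} \left(x_i - \bar{x}\right)^2}{n - 1}}$$

where $s$ is the sample standard deviation, $\bar{x}$ is the sample mean, and $n$ is the sample size; the $n - 1$ divisor (Bessel’s correction) is used when estimating a population value from a sample.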
Specification limits are boundaries set by a customer, engineering, or management to designate where the product must perform. Specification limits are also referred to as the “voice of the customer” because they represent the results that the customer requires. If a product is out of specification, it is nonconforming and unacceptable to the customer.
Remember: The customer might be the next department or process within your production system.
Control limits are calculated from the process itself. Because control limits show how the process is performing, they are also referred to as the “voice of the process.” Control limits show how the process is expected to perform; they show the variation within the system or the range of the product that the process creates.
Control limits have no relationship to specification limits.
If a product is outside the control limits, it simply means that the process has changed; the product might be in or out of specification. The shift could be caused by a decrease or increase in variation but has no relation to the specification limits.
Control limits are typically set to ±3 standard deviations from the mean. For variable data, two control charts are used to evaluate the characteristic: one chart to show the stability of the process mean and another to describe the stability of the variation of individual data values.
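As a rough illustration of the ±3 standard deviation rule, here is a minimal Python sketch. The data are hypothetical, and taking the standard deviation of the subgroup means directly is a simplifying assumption; SPC software normally estimates sigma from the average range or average subgroup standard deviation using control chart constants.

```python
import statistics

# Subgroup averages (X-bar values) collected from a process; illustrative data
subgroup_means = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2, 10.1, 9.7]

center_line = statistics.mean(subgroup_means)
sigma = statistics.stdev(subgroup_means)  # sample standard deviation of the means

# Control limits at +/- 3 standard deviations from the center line
ucl = center_line + 3 * sigma
lcl = center_line - 3 * sigma

print(f"CL = {center_line:.3f}, UCL = {ucl:.3f}, LCL = {lcl:.3f}")
```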
Control limits must never be calculated based on specification limits.
In acceptance sampling, one or more individual units (pieces) of product drawn from a lot for purposes of inspection to reach a decision regarding acceptance of the lot.
The number of units (pieces) in a sample.
The s chart tracks subgroup standard deviations; the plot point represents the calculated sample (n-1) standard deviation of the subgroup.
As commonly used in acceptance sampling theory, the process of selecting sample units so all units under consideration have the same probability of being selected.
Note: Equal probabilities are not necessary for random sampling; what is necessary is that the probability of selection be ascertainable. However, the stated properties of published sampling tables are based on the assumption of random sampling with equal probabilities. An acceptable method of random selection with equal probabilities is the use of a table of random numbers in a standard manner. A simple random sample is a set of n objects drawn from a population of N objects such that all possible samples are equally likely to be selected.
Example: a sample of 100 objects (n) drawn from a population of 10,000 objects (N). In acceptance sampling, the lot size combined with the AQL determines how many random samples to inspect.
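A minimal sketch of drawing a simple random sample with Python’s standard library; the population of 10,000 identifiers is hypothetical:

```python
import random

population = range(10_000)                 # N = 10,000 objects, identified by index
sample = random.sample(population, k=100)  # n = 100; every 100-object subset is equally likely

print(sample[:10])  # first few selected identifiers
```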
The probability distribution of a statistic. Common sampling distributions include t, chi-square (χ²), and F. Also known as finite-sample distribution, sampling distribution is the probability distribution of a given random-sample-based statistic. Sampling distributions are important in statistics because they provide a major simplification en route to statistical inference.
Sampling inspection in which the decision to accept or reject a lot is based on the inspection of one sample. A single sampling plan is specified by the pair of numbers (n, c): the sample size is n, and the lot is rejected if there are more than c defectives in the sample. It is referred to as single because the decision is made on one inspection (visual or measured) of one or more pieces.
Example: Lot size = 500, AQL = 0.25, sample size (n) = 50, c = 1. If more than one piece in the sample is outside specification (two or more defectives), the lot fails.
Sequential sampling inspection in which, after each unit is inspected, the decision is made to accept a lot, reject it or inspect another unit. See Single Sampling above.
Example: the term sampling unit refers to a single member within a sample. In the context of market research, a sampling unit is an individual person; if you were conducting research using a sample of university students, a single university student would be a sampling unit.
A graphical technique used to visually analyze the relationship between two variables. Two sets of data are plotted on a graph: the y-axis indicates the variable to be predicted, and the x-axis indicates the variable to make the prediction.
Adaptations made to control charts to help determine meaningful control limits when only a limited number of parts are produced, or when a limited number of services are performed. Short-run techniques usually focus on the deviation (of a quality characteristic) from a target value.
One standard deviation in a normally distributed process.
A rigorous, data-driven approach (and methodology) for analyzing and eliminating the root causes of business problems.
Also known as Lean Six Sigma Black Belt and Black Belt Six Sigma.
Certified Lean Six Sigma designation. A full-time team leader responsible for implementing process improvement projects—define, measure, analyze, improve and control (DMAIC) or define, measure, analyze, design and verify (DMADV)—within a business to drive up customer satisfaction and productivity levels.
An employee who has been trained in the Six Sigma improvement method and can lead a process improvement or quality improvement team as part of their full-time job.
Also known as Lean Six Sigma Master Black Belt.
A problem-solving subject matter expert responsible for strategic implementations in an organization. This Six Sigma pro is typically qualified to teach other facilitators the statistical and problem-solving methods, tools, and applications to use in such implementations.
The problem-solving tools used to support Six Sigma and other process improvement efforts: voice of the customer, value stream mapping, process mapping, capability analysis, Pareto charts, root cause analysis, failure mode and effects analysis, control plans, statistical process control, 5S, mistake proofing, and design of experiments.
Refers to someone who has attained Six Sigma yellow belt certification. A team member who supports and contributes to Six Sigma projects, often helping to collect data, brainstorm ideas, and review process improvements.
Asymmetry in a statistical distribution. Skewed data may affect the validity of control charts and other statistical tests based on the normal distribution.
A cause of variation that arises because of special circumstances and is not an inherent part of a process. Special cause is also referred to as assignable cause. Also see Common Cause.
A document that states the requirements to which a given product or service must conform.
Also known as dispersion, variability, or scatter.
The extent to which a distribution is stretched or squeezed.
A stable process is said to be in control. A process is considered stable if it is free from the influences of special causes.
A measure that is used to quantify the amount of variation or dispersion of a set of data values.
A single measure of some attribute of a sample—used to make inferences about the population from which the sample came. Sample mean, median, range, variance, and standard deviation are commonly calculated statistics.
An industry-standard methodology for measuring and controlling quality during the manufacturing process.
The application of statistical techniques to control quality. It includes acceptance sampling, which statistical process control does not.
A branch of mathematics dealing with the collection, organization, analysis, interpretation, and presentation of data.
Another name for a sample from the population.
Confidence that a supplier’s product or service will fulfill its customers’ needs; achieved by creating a relationship between the customer and supplier that ensures the product will be fit for use with minimal corrective action and inspection.
According to quality management guru Joseph M. Juran, nine primary activities are needed: 1) define product and program quality requirements; 2) evaluate alternative suppliers; 3) select suppliers; 4) conduct joint quality planning; 5) cooperate with the supplier during the execution of the contract; 6) obtain proof of conformance to requirements; 7) certify qualified suppliers; 8) conduct quality improvement programs as required; and 9) create and use supplier quality ratings.
A system in which supplier quality is managed using a proactive and collaborative approach. It accounts for the costs of transactions, communication, and problem resolution, the impact of switching suppliers, and overall cost, and it focuses on factors that affect supply-chain performance, such as the reliability of supplier delivery and the supplier’s internal policies regarding inventory levels.
The system of organizations, people, activities, information, and resources involved in moving a product or service from supplier to customer.
Also known as the 80-20 rule.
A graphical tool for ranking causes from most significant to least significant. It is based on the Pareto principle, named after 19th century economist Vilfredo Pareto, and suggests that most effects come from relatively few causes; that is, 80% of the effects come from 20% of the possible causes.
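A minimal sketch of the ranking and cumulative-percentage calculation behind a Pareto chart; the defect causes and counts are hypothetical:

```python
# Hypothetical tally of defects by cause
counts = {"scratches": 48, "misalignment": 27, "porosity": 12,
          "discoloration": 8, "other": 5}

total = sum(counts.values())
cumulative = 0.0
# Rank causes from most significant to least significant
for cause, count in sorted(counts.items(), key=lambda kv: kv[1], reverse=True):
    cumulative += 100 * count / total
    print(f"{cause:15s} {count:4d}  cumulative {cumulative:5.1f}%")
```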
A metric reporting the number of defects normalized to a population of one million for ease of comparison.
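The normalization is straightforward, where opportunities counts the chances for a defect across all units inspected:

$$DPMO = \frac{\text{defects observed}}{\text{units inspected} \times \text{opportunities per unit}} \times 1{,}000{,}000$$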
A ratio often used to refer to the concentration of solutes in solutions, such as salts in water (i.e., salinity).
See Percent Chart.
Also referred to as a proportion chart.
A control chart for evaluating the stability of a process in terms of the percentage of the total number of units in a sample in which an event of a given classification occurs.
Percentiles divide the ordered data into 100 equal groups. The kth percentile pk is a value such that at least k% of the observations are at or below this value and (100-k)% of the observations are at or above this value.
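A quick check of this definition with Python’s standard library; statistics.quantiles with n=100 returns the 99 cut points that divide the ordered data into 100 groups (the data values are illustrative):

```python
import statistics

data = [2, 4, 4, 5, 7, 9, 12, 15, 18, 21]      # illustrative measurements
cut_points = statistics.quantiles(data, n=100)  # 99 percentile boundaries
p90 = cut_points[89]                            # the 90th percentile
print(f"90th percentile: {p90}")
```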
Also known as PDCA Model.
A four-step process for quality improvement. In the first step (plan), a way to effect improvement is developed. In the second step (do), the plan is carried out. In the third step (check), a study takes place between what was predicted and what was observed in the previous step. In the last step (act), action should be taken to correct or improve the process.
A discrete probability distribution that expresses the probability of a number of events occurring in a fixed time period if these events occur with a known average rate and are independent of the time since the last event.
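With average rate $\lambda$ per period, the probability of observing exactly $k$ events is:

$$P(X = k) = \frac{\lambda^{k} e^{-\lambda}}{k!}, \qquad k = 0, 1, 2, \ldots$$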
The amount of variation that exists in the values of multiple measurements of the same characteristic or parameter. Greater precision means less variation between measurements.
The likelihood of occurrence of an event, action, or item.
A set of interrelated work activities that transform inputs into outputs.
Expected or average value of process quality.
A statistical measure of the inherent process variability of a given characteristic.
The value of the tolerance specified for the characteristic divided by the process capability. The several types of process capability indexes include the widely used Cpk and Cp.
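For a characteristic with specification limits $USL$ and $LSL$, process mean $\mu$, and process standard deviation $\sigma$, the two indexes are conventionally computed as:

$$C_p = \frac{USL - LSL}{6\sigma}, \qquad C_{pk} = \min\left(\frac{USL - \mu}{3\sigma}, \frac{\mu - LSL}{3\sigma}\right)$$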
The method for ensuring that a process meets specified requirements.
Actions taken to increase the effectiveness or efficiency of a process in meeting specified requirements.
A subjective term for which each person or sector has its own definition. In technical usage, quality can have two meanings: 1) the characteristics of a product or service that bear on its ability to satisfy stated or implied needs; 2) a product or service free of deficiencies. According to Joseph Juran, quality means “fitness for use;” according to Philip Crosby, it means “conformance to requirements.”
Quality assurance is all the planned and systematic activities implemented within the quality system that can be demonstrated to provide confidence that a product or service will fulfill requirements for quality. Quality control is comprised of the operational techniques and activities used to fulfill requirements for quality. Quality Assurance and Quality Control are often used interchangeably, referring to the actions performed to ensure the quality of a product, service, or process.
The ability of a product, service, or process to meet its design specifications. Design specifications are an interpretation of what the customer needs.
See First Pass Yield.
Quartiles divide the ordered data into 4 equal groups. The second quartile (Q2) is the median of the data.
A cause of variation due to chance and not assignable to any factor.
A commonly used sampling technique in which sample units are selected so all combinations of n units under consideration have an equal chance of being selected as the sample.
The measure of dispersion in a data set (the difference between the highest and lowest values).
Also known as Range Control Chart.
A control chart in which the range (R) of a subgroup is used to track instantaneous variations and to evaluate the stability of the variability within a process.
A set of statistical processes for estimating the relationships among variables.
The smallest number of defectives (or defects) in the sample or samples under consideration that will require rejection of the lot.
The variation in measurements obtained when one measurement device is used several times by the same person to measure the same characteristic on the same product.
The variation in measurements made by different people using the same measuring device to measure the same characteristic on the same product.
A factor that caused a nonconformity and should be addressed with corrective action.
The method of identifying the initiating cause of a problem, which leads to preventing it from occurring again.
A number of consecutive points consistently increasing or decreasing. A run can be evidence of the existence of special causes of variation that should be investigated.
A chart showing a line connecting numerous data points collected from a process running over time.
Also known as a Batch.
The value of percentage defective or defects per hundred units in a lot.
Also referred to as N.
The number of units in a lot.
Expressed in percentage defective, the poorest quality in an individual lot that should be accepted.
Note: LTPD is used as a basis for some inspection systems and is commonly associated with a small consumer risk.
Control limit for points below the central line in a Control Chart.
The arithmetic average of a discrete set of values in a data set.
The criteria, metric, or means to which a comparison is made with output.
The act or process of determining a value. An approximation or estimate of the value of the specific quantity subject to measurement, which is complete only when accompanied by a quantitative statement of its uncertainty.
All operations, procedures, devices, and other equipment, personnel and environment used to assign a value to the characteristic being measured.
In metrology, a non-negative parameter characterizing the dispersion of the values attributed to a measured quantity.
The center value of a set of data in which all the data are arranged in sequence.
The value occurring most frequently in a data set.
A measure used to help calculate the variance of a data population; the distance or difference between consecutive points. The moving range chart is typically used with an Individual X (IX) chart for single measurements.
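A minimal sketch of the moving-range calculation for an individuals (IX) series; the measurements are hypothetical:

```python
# Individual measurements in time order (illustrative)
x = [10.2, 10.5, 9.9, 10.1, 10.4, 9.8]

# Moving range: absolute difference between consecutive points
moving_ranges = [abs(b - a) for a, b in zip(x, x[1:])]
mr_bar = sum(moving_ranges) / len(moving_ranges)  # average moving range

print([round(mr, 2) for mr in moving_ranges], f"MR-bar = {mr_bar:.3f}")
```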
A measure used to calculate variation using the standard deviation between two consecutive points from an IX control chart. The calculations are then plotted and analyzed on a time-ordered Moving-s control chart.
A control chart for evaluating the stability of a process in terms of the levels of two or more variables or characteristics.
The number of units in a sample.
The number of units in a population.
A unit with one or more nonconformities or defects. Also called a reject.
A specified requirement that is not fulfilled. Also see Blemish, Defect, and Imperfection.
Testing and evaluation methods that do not damage or destroy the test specimen.
All tests involving ranked data (data that can be put in order). Nonparametric tests are often used in place of their parametric counterparts when certain assumptions about the underlying population are questionable.
The charting of a data set in which most of the data points are concentrated around the average (mean), thus forming a bell-shaped curve.
A control chart based on counting the number of defective units in each constant size subgroup. The np-chart is based on the binomial distribution.
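Under the binomial model, with subgroup size $n$ and average fraction defective $\bar{p}$, the conventional three-sigma limits are:

$$UCL = n\bar{p} + 3\sqrt{n\bar{p}(1 - \bar{p})}, \qquad LCL = \max\left(0,\; n\bar{p} - 3\sqrt{n\bar{p}(1 - \bar{p})}\right)$$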
Also known as Operating Curve.
A graph to determine the probability of accepting lots as a function of the lots’ or processes’ quality level when using various sampling plans. There are three types: type A curves, which give the probability of acceptance for an individual lot coming from finite production (will not continue in the future); type B curves, which give the probability of acceptance for lots coming from a continuous process; and type C curves, which (for a continuous sampling plan) give the long-run percentage of product accepted during the sampling phase.
Unusually large or small observations relative to the rest of the data.
A process in which the statistical measure being evaluated is not in a state of statistical control. In other words, the variations among the observed sampling results cannot be attributed to a constant system of chance causes. Also see In-Control Process.
A term that indicates a unit does not meet a given requirement or specification.
Used to measure manufacturing productivity; identifies the percentage of manufacturing time that is truly productive. An OEE score of 100% means you are manufacturing only Good Parts, as fast as possible, with no Stop Time. In the language of OEE that means 100% Quality (only Good Parts), 100% Performance (as fast as possible), and 100% Availability (no Stop Time).
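The score is the product of the three factors. A minimal sketch, where the shift numbers are hypothetical:

```python
# Hypothetical shift data
planned_time = 480        # minutes scheduled for production
run_time = 420            # minutes actually running (planned time minus stop time)
ideal_cycle_time = 0.5    # minutes per part at rated speed
total_parts = 760
good_parts = 740

availability = run_time / planned_time                     # no stop time -> 100%
performance = (ideal_cycle_time * total_parts) / run_time  # as fast as possible -> 100%
quality = good_parts / total_parts                         # only good parts -> 100%

oee = availability * performance * quality
print(f"OEE = {oee:.1%}")
```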
An element often introduced into a process by a well-meaning operator or controller who considers any appreciable deviation from the target value as a special cause. In this case, the operator is incorrectly viewing common-cause variation as a fault in the process. Over control of a process can actually increase the variability of the process and is viewed as a form of tampering.
A diagram consisting of rectangles whose area is proportional to the frequency of a variable and whose width is equal to the class interval. Gives a rough sense of the density of the underlying distribution of the data and is often used for density estimation—that is, estimating the probability density function of the underlying variable. The total area of a histogram used for probability density is always normalized to 1.
A procedure that is used on a sample from a population to investigate the applicability of an assertion (inference) to the entire population. Hypothesis testing can also be used to test assertions about multiple populations using multiple samples.
A quality characteristic’s departure from its intended level or state without any association to conformance to specification, requirements, or to the usability of a product or service. Also see Blemish, Defect, and Nonconformity.
A process in which the statistical measure being evaluated is in a state of statistical control; in other words, the variations among the observed sampling results can be attributed to a constant system of chance causes. Also see Out-of-Control Process.
A single unit or a single measurement of a quality characteristic, usually denoted as X. This measurement is analyzed using an Individuals Chart, CUSUM, or EWMA chart.
Also called an I-chart or X-chart. A control chart for processes in which individual measurements of the process are plotted for analysis.
A verification activity. For example, measuring, examining, testing, and gauging one or more characteristics of a product or service and comparing the results with specified requirements to determine whether conformity is achieved for each characteristic.
The cost associated with inspecting a product to ensure it meets the internal or external customer’s needs and requirements; an appraisal cost.
This is the lot or batch of product to be inspected for acceptance.
In the ANSI/ASQ and ISO Acceptance Sampling Standards there are three Inspection States (or statuses): Normal, Tightened, and Reduced. The definitions for each state are found in the applicable standard under a heading called Switching Rules.
An independent, nongovernmental international organization with a membership of 161 national standards bodies that unites experts to share knowledge and develop voluntary, consensus-based, market-relevant international standards, guidelines, and other types of documents.
A set of international standards on quality management and quality assurance developed to help organizations effectively document the quality system elements to be implemented to maintain an efficient quality system. The standards, initially published in 1987, are not specific to any particular industry, product, or service. The standards were developed by the International Organization for Standardization (ISO). The standards underwent major revision in 2000 and now include ISO 9000:2005 (definitions), ISO 9001:2008 (requirements), ISO 9004:2009 (continuous improvement), and ISO 9001:2015 (risk management).
A voluntary quality management system standard developed by the International Organization for Standardization (ISO). First released in 1987 and one of several documents in the ISO 9000 family.
Also known as Just-In-Time Production. A methodology aimed primarily at reducing flow times within a production system, as well as response times from suppliers and to customers.
A process parameter that can affect safety or compliance with regulations, fit, function, performance or subsequent processing of product.
A product characteristic that can affect safety or compliance with regulations, fit, function, performance or subsequent processing of product.
A nonparametric test for determining whether samples originate from the same distribution. It is used for comparing two or more independent samples of equal or different sample sizes. While analysis of variance tests depend on the assumption that all populations under comparison are normally distributed, the Kruskal-Wallis test places no such restriction on the comparison. It is a logical extension of the Wilcoxon Mann-Whitney Test.
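Where SciPy is available, the test is a one-liner; the three sample groups below are hypothetical:

```python
from scipy import stats

group_a = [6.1, 5.8, 6.4, 6.0]
group_b = [5.2, 5.5, 5.1, 5.7, 5.3]
group_c = [6.6, 6.9, 6.2]

# Null hypothesis: all three samples originate from the same distribution
statistic, p_value = stats.kruskal(group_a, group_b, group_c)
print(f"H = {statistic:.3f}, p = {p_value:.4f}")
```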
Collected facts. There are two basic kinds of numerical data: measured or variable data (such as 12 ounces, 10 miles, and 0.50 inches) and counted (or attribute) data (such as 112 defects).
The process to determine what data are to be collected, how the data are collected, and how the data are to be analyzed.
A set of tools that help with data collection and analysis. These tools include check sheets, spreadsheets, histograms, trend charts, and control charts.
A product’s or service’s nonfulfillment of an intended requirement or reasonable expectation for use, including safety considerations. There are four classes of defects: Class 1, very serious, leads directly to severe injury or catastrophic economic loss; Class 2, serious, leads directly to significant injury or significant economic loss; Class 3, major, is related to major problems with respect to intended normal or reasonably foreseeable use; and Class 4, minor, is related to minor problems with respect to intended normal or reasonably foreseeable use. Also see Blemish, Imperfection, and Nonconformity.
A unit of product that contains one or more quality characteristic defects.
Also known as the Plan-Do-Study-Act cycle, popularized by W. Edwards Deming. Also see Plan-Do-Check-Act Cycle.
The difference or distance of an individual observation or data value from the center point (often the mean) of the set distribution.
A mathematical model that relates the value of a variable with the probability of the occurrence of that value in the population.
Also known as Six Sigma DMAIC. Define, Measure, Analyze, Improve, and Control. A data-driven quality strategy for improving processes, and an integral part of a Six Sigma quality initiative.
Also known as EWMA Control Charts. An Exponentially Weighted Moving Average control chart uses current and historical data to detect small changes in the process. Typically, the most recent data are given the most weight, and progressively smaller weights are given to older data.
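The weighting is recursive. With smoothing constant $\lambda$ (where $0 < \lambda \le 1$) and observations $x_i$, the plotted statistic is:

$$z_i = \lambda x_i + (1 - \lambda) z_{i-1}$$

so the weights on older observations decay geometrically; a common starting value $z_0$ is the process target or overall mean.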
The F distribution is the probability distribution associated with the F statistic.
An F statistic is a value you get when you run an analysis of variance (ANOVA) test or a regression analysis to find out whether the means of two or more populations are significantly different.
The inability of an item, product, or service to perform required functions on demand due to one or more defects.
See Characteristic.
Also referred to as the quality rate. The percentage of units that complete a process and meet quality guidelines without being scrapped, rerun, retested, returned, or diverted into an offline repair area. Calculated by dividing the units entering the process minus the defective units by the total number of units entering the process.
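In formula form, following the calculation described above:

$$FPY = \frac{\text{units entering} - \text{defective units}}{\text{units entering}}$$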
Also known as First Time Quality Formula. Calculation of the percentage of good parts at the beginning of a production run.
The degree to which a product or service meets the requirements for its intended use.
W. Edwards Deming’s 14 management practices to help organizations increase their quality and productivity: 1) Create constancy of purpose for improving products and services; 2) Adopt the new philosophy; 3) Cease dependence on inspection to achieve quality; 4) End the practice of awarding business on price alone; instead, minimize total cost by working with a single supplier; 5) Improve constantly and forever every process for planning, production and service; 6) Institute training on the job; 7) Adopt and institute leadership; 8) Drive out fear; 9) Break down barriers between staff areas; 10) Eliminate slogans, exhortations, and targets for the workforce; 11) Eliminate numerical quotas for the workforce and numerical goals for management; 12) Remove barriers that rob people of pride of workmanship and eliminate the annual rating or merit system; 13) Institute a rigorous program of education and self-improvement for everyone; and 14) Put everybody in the organization to work to accomplish the transformation.
A list, table, or graph that displays the frequency of various outcomes in a sample.
A gauge R&R indicates whether the inspectors are consistent in their measurements of the same part (repeatability) and whether the variation between inspectors is consistent (reproducibility).
A language of symbols and standards designed and used by engineers and manufacturers to describe a product and facilitate communication between entities working together to produce something.
State of a unit or product. Two parameters are possible: Go (conforms to specifications) and No-Go (does not conform to specifications).
See Six Sigma Green Belt.
Have you heard about statistical process control (SPC) but aren’t quite sure what it is or how it could improve your bottom line? We’ve put together this short guide to answer some of the most common SPC manufacturing questions.
At its most basic, statistical process control (SPC) is a systematic approach of collecting and analyzing process data for prediction and improvement purposes. SPC is about understanding process behavior so that you can continuously improve results.
As you learn about SPC, you’ll encounter terms that describe central tendency: the mean, the median, and the mode.
You will also come across terms that describe the width or spread of data: the range, the standard deviation, and the variance.
Dr. Walter A. Shewhart (1891–1967), a physicist at Bell Labs who specialized in the use of statistical methods for analyzing random behavior of small particles, was responsible for the application of statistical methods to process control. Up until Shewhart, quality control methods were focused on inspecting finished goods and sorting out the nonconforming product.
As an alternative to end-of-line inspection, Shewhart introduced the concept of inspecting continuously during production and plotting the results on a time-ordered graph that we now know as a control chart. By studying the plot point patterns, Shewhart realized some levels of variation are normal, while others are anomalies.
Using known understandings of the normal distribution, Shewhart established limits to be drawn on the charts that would separate expected levels of variation from the unexpected. He later coined the terms common cause and assignable cause variation.
Dr. Shewhart concluded that every process exhibits variation: either controlled variation (common cause) or uncontrolled variation (assignable cause). He defined a process as being controlled when “through the use of past experience, we can predict, at least within limits, how the process may be expected to vary in the future.”
He went on to develop descriptive statistics to aid manufacturing, including the Shewhart Statistical Process Control Chart—now known as the X-bar and Range (Xbar-R) chart. The purpose of the Shewhart Statistical Process Control Chart is to present distributions of data over time to allow processes to be improved during production. This chart changes the focus of quality control from detecting defects after production to preventing defects during production.