Statistical process control can help manufacturers achieve continuous process improvement—when it is implemented properly. Watch out for the following obstacles, which can sideline your SPC efforts.
If management (or others within the company) believe that company circumstances are so unique that statistical process control cannot be applied to processes, they are likely to argue that even considering SPC would be a waste of time. This obstacle tends to crop up for manufacturers that experience the following:
To overcome this obstacle: Explain that if a process creates output, then SPC can be applied. The first step is to start collecting data to show how the process behaves. After metrics are defined and data are collected and plotted, it is easy to see that the process does have measurable characteristics. Educating employees in short-run process control methods is a great way to show them that they are not alone. While everyone likes to feel special, the truth is that the companies that feel too special for statistical process control are often the ones that can benefit the most from using SPC.
SPC isn’t a cure-all. If no action is taken pursuant to the knowledge gained from SPC analysis, then implementing SPC software for manufacturing or setting up dozens of control charts is not going to improve anything. A control chart can’t eliminate variation and won’t solve all your quality problems.
SPC is the foundation of an effective process-improvement methodology, but there are numerous other tools that should be used. Management teams that expect to solve all their quality problems simply by implementing SPC but doing nothing with the data typically abandon the initiative when it doesn’t miraculously solve every problem.
To overcome this obstacle: SPC education must include an understanding of what SPC does. SPC brings to light common cause and special cause variations, but other tools are needed to reduce or eliminate variation. Train employees to use other process-improvement tools to help reduce variation and create a Corrective Action or Process Improvement team to work on projects.
Before SPC implementation, many manufacturers collect product data and compare them to specification limits. If the product is within the boundaries set by the customer, the manufacturer assumes that the process is performing fine, that it is "in control." This use of data and limits is called product control, not process control.
When SPC is implemented, you use control limits that are based on process behavior to truly control the process. However, some companies keep specification limits on their control charts, base control limits on something other than true process variation, or set control limits to a standard other than ±3 sigma. If control limits do not accurately represent the process, they are useless and can cause more harm than good.
To overcome this obstacle: Ensure that employees understand that control limits are the voice of the process and show how the process is performing, whereas specification limits are the voice of the customer and are independent of process stability. Specification limits do not belong on a control chart. Control charts always use control limits, which are set at 3 sigma units on either side of the central line and are based on data. Drill into all employees that control limits are never based on any calculation using the specification limits.
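To make the distinction concrete, here is a minimal sketch in plain Python (hypothetical measurements; no SPC library assumed) showing how control limits for an individuals chart are computed from process data alone. The moving-range estimate of sigma and the 1.128 divisor (the d2 constant for subgroups of two) are standard control chart conventions; note that specification limits appear nowhere in the calculation.

```python
# Hypothetical process measurements.
measurements = [10.2, 9.8, 10.1, 10.4, 9.9, 10.0, 10.3, 9.7, 10.1, 10.2]

# Estimate process sigma from the average moving range (d2 = 1.128 for n = 2).
moving_ranges = [abs(b - a) for a, b in zip(measurements, measurements[1:])]
sigma_hat = (sum(moving_ranges) / len(moving_ranges)) / 1.128

center = sum(measurements) / len(measurements)  # central line: voice of the process
ucl = center + 3 * sigma_hat                    # upper control limit
lcl = center - 3 * sigma_hat                    # lower control limit

print(f"CL = {center:.3f}, UCL = {ucl:.3f}, LCL = {lcl:.3f}")
```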
When a process is in a state of statistical control, with primarily common cause variation present, any adjustment to the process is tampering and will only increase the variation. Operators often adjust machines that don’t need adjustment; good operators have a natural tendency to tinker with a process to try to make it perform at its best. Management can aggravate tampering by insisting that operators adjust a process when process data aren’t where management wants them.
These impulsive reactions create uncontrollable gyrations in the process. When the process deteriorates, management tends to blame the operator, resulting in distrust and damaged morale that can ruin an SPC initiative—and do irreversible harm to employee/management relations.
To overcome this obstacle: All employees, especially management, must be trained to understand variation and the dangers of tampering. Each data point on a control chart is independent of the previous one. Processes must be allowed to operate in their natural state if you are to understand the common cause variation. There is a saying in the SPC community: “Don’t just do something, stand there!” Training must include how tampering creates bias and nullifies control charts.
Employees who are expected to implement SPC without adequate training and resources will undoubtedly cause the initiative to fail. In many cases, management attempts to save money by scrimping on training, but the money saved will be outweighed by the wasted cost of an unsuccessful SPC program.
In some cases, employees get adequate training, but supervisors and management do not—and so do not support the initiative. If management is uncomfortable with SPC concepts, they will either avoid necessary actions (because they are uneasy with the changes) or recommend process changes based on a misunderstanding of process control. Either way, the SPC initiative suffers.
To overcome this obstacle: Management must provide the necessary resources to conduct thorough training for every employee and every level of the organization—including all levels of management. This training must be repeated at regular intervals, as new employees must be trained, and experienced employees need refresher courses.
Management must be involved with the SPC initiative so that employees know that management believes in and understands SPC. Management must set realistic goals for process improvement and base their analysis on solid metrics. Executive management should also involve front-line management in the selection of the areas to which to apply SPC. Doing so will increase the likelihood that front-line management will take ownership of the system and help it to gain acceptance with employees.
All managers must understand how decision-making should change after SPC is implemented. Remember Shewhart’s Fourth Foundation of Control Charts: control charts are effective only to the extent that the organization can use, in an effective manner, the knowledge gained. Management must empower employees to act on the knowledge gained from SPC analysis.
Data that lack integrity have a devastating effect on analysis and decision-making. Using “bad” data can be worse than having no data at all. Data can be biased in many ways: operators might be rounding off values before recording data; subgrouping might not be rational; a measuring instrument might not be suited for the task or might be damaged or out of calibration.
To overcome this obstacle: Before the SPC initiative, set rules for data collection and analysis. Criteria should include the minimum number of significant digits for the measurement system; how much error (including Gauge Repeatability and Reproducibility, bias, and linearity studies) is acceptable; calibration frequency for measurement instruments; rules for determining outliers; and which actions to take with outliers. Sampling practices must be evaluated to prove rationality, and the sampling frequency must be sufficient to detect shifts in the process.
All the tools you need to get the job done.
When it comes to real-time Statistical Process Control (SPC), most solutions begin—and end—with control charts. Although control charts are excellent shop-floor tools, you’ll need other analysis tools to extract maximum information from your data.
InfinityQS® takes you farther. Our sophisticated analysis tools give you the ability to view data across product codes, lines, or sites—all on one report. And that’s just the beginning. Regardless of your manufacturing process—high volume/low mix, or low volume/high mix—InfinityQS has the right analysis tools for your unique situation.
This is real-time, real-life SPC.
Get the flexibility to meet your needs now and into the future, both in terms of functionality and implementation. You can choose from an on-premises solution or a cloud-based platform. Do it yourself or allow our experts to help you maximize the return on your SPC investment.
InfinityQS serves the real-time quality needs of all industries. We have designed our SPC solutions with flexibility to meet the widest possible range of scenarios and to support a big-picture view of your entire operation.
Your real-time SPC solution shouldn’t slow down your production line. Our solutions are plant-floor friendly and provide fast setups, fast data collection, and even faster data analysis. Plus, we help you expose process improvement opportunities you never knew existed, so you can save even more time and resources.
You have enough on your plate. Your real-time SPC solution should reduce burdens—not add to them. InfinityQS quality solutions help reveal the most important information for you automatically, so you can act immediately.
At InfinityQS, all of our salespeople, engineers, and quality experts hold a Six Sigma Green Belt certification. And we employ statisticians and Six Sigma Black Belts. We are committed to providing our clients real-world experience in quality technologies, manufacturing, statistics, and process control. With nearly 30 years of expertise in the real-time SPC market, InfinityQS understands your needs and how to solve your greatest challenges.
What to Expect
Your quick reference to statistical process control for manufacturing quality management systems.
Any member of a family of continuous probability distributions that arises when estimating the mean of a normally distributed population in situations where the sample size is small and the population standard deviation is unknown.
An action taken to compensate for variation within the control limits of a stable system. Tampering increases (rather than decreases) variation, as in the case of Over Control.
The maximum and minimum limit values a product can have and still meet customer requirements.
The graphical representation of a variable’s tendency, over time, to increase, decrease, or remain unchanged.
A control chart in which the deviation of the subgroup average, X-bar, from an expected trend in the process level is used to evaluate the stability of a process.
An incorrect decision to reject something (such as a statistical hypothesis or a lot of products) when it is acceptable.
An incorrect decision to accept something when it is unacceptable.
Count-per-unit chart.
An object for which a measurement or observation can be made; commonly used in the sense of a unit of product (a piece), the entity inspected to determine whether it is defective or non-defective.
Control limit for points above the central line in a control chart.
Measurement information. Control charts based on variable data include average (X-bar) chart, range (R) chart, and sample standard deviation (or s) chart.
In probability theory and statistics, variance is the expectation of the squared deviation of a random variable from its mean. Informally, it measures how far a set of (random) numbers are spread out from their average value.
A change in data, characteristic or function caused by one of four factors: special causes, common causes, tampering, or structural variation.
Named after Swedish mathematician Waloddi Weibull, the Weibull Distribution is a continuous probability distribution commonly used to assess product reliability, analyze life data, and model failure times.
A control chart used for processes in which individual measurements of the process are plotted for analysis. Also called an Individuals chart or I-chart.
A control chart used for processes in which the averages of subgroups of process data are plotted for analysis.
A management tool aimed at the reduction of defects through prevention. Directed at motivating people to prevent mistakes by developing a constant, conscious desire to do their job right the first time. Developed by quality expert Philip B. Crosby.
ANSI/ASQ Z1.4-2003 (R2013): Sampling Procedures and Tables for Inspection by Attributes is an acceptance sampling system to be used with switching rules on a continuing stream of lots for the acceptance quality limit (AQL) specified.
ANSI/ASQ Z1.9-2003 (R2013): Sampling Procedures and Tables for Inspection by Variables for Percent Nonconforming is an acceptance sampling system to be used on a continuing stream of lots for the AQL specified.
To begin evaluating the type of variation in a process, one must evaluate distributions of data—as Deming plotted the drop results in his Funnel Experiment. The best way to visualize the distribution of results coming from a process is through histograms. A histogram is a frequency distribution that graphically shows the number of times each given measured value occurs. Histograms show basic process output information, such as the central location, the width, and the shape(s) of the data spread.
There are three measures of a histogram’s central location, or tendency: the mean, the median, and the mode.
When compared, these measures show how data are grouped around a center, thus describing the central tendency of the data. When a distribution is exactly symmetrical, the mean, mode and median are equal.
To estimate a population mean, use the following equation:

$$\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i$$
The two basic measures of spread are the range (the difference between the highest value and the lowest value in the sample) and the standard deviation (a measure of the typical distance of individual values from the distribution’s mean). A large range or a high standard deviation indicates more dispersion, or variation of values within the sample set.
To estimate the standard deviation of a population, use the following equation:

$$s = \sqrt{\frac{\sum_{i=1}^{n} (x_i - \bar{x})^2}{n-1}}$$
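As a brief illustration, the following sketch uses Python’s standard statistics module with hypothetical sample values to compute the three measures of central tendency and the two measures of spread described above.

```python
import statistics

sample = [4.1, 4.3, 4.2, 4.5, 4.2, 4.4, 4.3, 4.2]

mean = statistics.mean(sample)      # x-bar: sum of values divided by n
median = statistics.median(sample)  # middle value of the ordered data
mode = statistics.mode(sample)      # most frequently occurring value

spread_range = max(sample) - min(sample)  # highest value minus lowest value
std_dev = statistics.stdev(sample)        # sample (n - 1) standard deviation

print(mean, median, mode, spread_range, std_dev)
```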
Specification limits are boundaries set by a customer, engineering, or management to designate where the product must perform. Specification limits are also referred to as the “voice of the customer” because they represent the results that the customer requires. If a product is out of specification, it is nonconforming and unacceptable to the customer.
Remember: The customer might be the next department or process within your production system.
Control limits are calculated from the process itself. Because control limits show how the process is performing, they are also referred to as the “voice of the process.” Control limits show how the process is expected to perform; they show the variation within the system or the range of the product that the process creates.
Control limits have no relationship to specification limits.
If a product is outside the control limits, it simply means that the process has changed; the product might be in or out of specification. The shift could be caused by a decrease or increase in variation but has no relation to the specification limits.
Control limits are typically set to ±3 standard deviations from the mean. For variable data, two control charts are used to evaluate the characteristic: one chart to show the stability of the process mean and another to describe the stability of the variation of individual data values.
Control limits must never be calculated based on specification limits.
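As an illustration of the two-chart approach, the sketch below uses hypothetical subgroup data and the published control chart constants for subgroups of five (A2 = 0.577, D3 = 0, D4 = 2.114) to compute X-bar and R chart limits from process data alone.

```python
# Hypothetical data: six subgroups of size n = 5.
subgroups = [
    [5.1, 5.0, 5.2, 4.9, 5.0],
    [5.0, 5.1, 5.0, 5.2, 5.1],
    [4.9, 5.0, 5.1, 5.0, 4.8],
    [5.2, 5.1, 5.0, 5.1, 5.2],
    [5.0, 4.9, 5.0, 5.1, 5.0],
    [5.1, 5.2, 5.0, 4.9, 5.1],
]
A2, D3, D4 = 0.577, 0.0, 2.114  # standard constants for subgroup size 5

xbars = [sum(s) / len(s) for s in subgroups]
ranges = [max(s) - min(s) for s in subgroups]
xbarbar = sum(xbars) / len(xbars)  # grand average
rbar = sum(ranges) / len(ranges)   # average range

# X-bar chart: stability of the process mean.
xbar_ucl, xbar_lcl = xbarbar + A2 * rbar, xbarbar - A2 * rbar
# R chart: stability of the within-subgroup variation.
r_ucl, r_lcl = D4 * rbar, D3 * rbar

print(f"X-bar chart: LCL = {xbar_lcl:.3f}, UCL = {xbar_ucl:.3f}")
print(f"R chart:     LCL = {r_lcl:.3f}, UCL = {r_ucl:.3f}")
```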
In acceptance sampling, one or more individual units (pieces) of product drawn from a lot for purposes of inspection to reach a decision regarding acceptance of the lot.
The number of units (pieces) in a sample.
The s chart tracks subgroup standard deviations; the plot point represents the calculated sample (n-1) standard deviation of the subgroup.
As commonly used in acceptance sampling theory, the process of selecting sample units so all units under consideration have the same probability of being selected.
Note: Equal probabilities are not necessary for random sampling; what is necessary is that the probability of selection be ascertainable. However, the stated properties of published sampling tables are based on the assumption of random sampling with equal probabilities. An acceptable method of random selection with equal probabilities is the use of a table of random numbers in a standard manner. A simple random sample is a set of n objects in a population of N objects where all possible samples of size n are equally likely to be selected.
Example: a sample of 100 objects (n) drawn from a population of 10,000 objects (N). In acceptance sampling, the lot size combined with the AQL defines how many “random” samples to inspect.
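In code, random selection with equal probabilities can be sketched with Python’s standard library (the lot contents here are hypothetical):

```python
import random

lot = [f"unit-{i:05d}" for i in range(10_000)]  # population, N = 10,000
sample = random.sample(lot, 100)                # n = 100; every unit equally likely
```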
The probability distribution of a statistic. Common sampling distributions include t, chi-square (χ²), and F. Also known as a finite-sample distribution, a sampling distribution is the probability distribution of a given random-sample-based statistic. Sampling distributions are important in statistics because they provide a major simplification en route to statistical inference.
Sampling inspection in which the decision to accept or reject a lot is based on the inspection of one sample. A single sampling plan is specified by the pair of numbers (n, c). The sample size is n, and the lot is rejected if there are more than c defectives in the sample. It is referred to as single because the decision is made on one inspection (visual or measured) of one or more pieces.
Example: lot size = 500, AQL = 0.25, sample size (n) = 50, c = 1. If more than one piece is outside specification, the lot is rejected.
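A minimal sketch of the single sampling decision rule (the function name is illustrative, not from any standard):

```python
def single_sampling_decision(defectives_found: int, c: int) -> str:
    """Accept the lot if the number of defectives found in the single
    sample does not exceed the acceptance number c."""
    return "accept" if defectives_found <= c else "reject"

# Plan (n, c) = (50, 1): inspect 50 pieces; reject if more than 1 is defective.
print(single_sampling_decision(defectives_found=0, c=1))  # accept
print(single_sampling_decision(defectives_found=2, c=1))  # reject
```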
Sequential sampling inspection in which, after each unit is inspected, the decision is made to accept a lot, reject it or inspect another unit. See Single Sampling above.
A sampling unit is a single value within a sample. In the context of market research, for example, a sampling unit is an individual person: if you were conducting research using a sample of university students, a single university student would be a sampling unit.
A graphical technique used to visually analyze the relationship between two variables. Two sets of data are plotted on a graph: the y-axis indicates the variable to be predicted, and the x-axis indicates the variable to make the prediction.
Adaptations made to control charts to help determine meaningful control limits when only a limited number of parts are produced, or when a limited number of services are performed. Short-run techniques usually focus on the deviation (of a quality characteristic) from a target value.
One standard deviation in a normally distributed process.
A rigorous, data-driven approach (and methodology) for analyzing and eliminating the root causes of business problems.
Also known as Lean Six Sigma Black Belt and Black Belt Six Sigma.
Certified Lean Six Sigma designation. A full-time team leader responsible for implementing process improvement projects—define, measure, analyze, improve and control (DMAIC) or define, measure, analyze, design and verify (DMADV)—within a business to drive up customer satisfaction and productivity levels.
An employee who has been trained in the Six Sigma improvement method and can lead a process improvement or quality improvement team as part of his/her full-time job.
Also known as Lean Six Sigma Master Black Belt.
A problem-solving subject matter expert responsible for strategic implementations in an organization. This Six Sigma pro is typically qualified to teach other facilitators the statistical and problem-solving methods, tools, and applications to use in such implementations.
The problem-solving tools used to support Six Sigma and other process improvement efforts: voice of the customer, value stream mapping, process mapping, capability analysis, Pareto charts, root cause analysis, failure mode and effects analysis, control plans, statistical process control, 5S, mistake proofing, and design of experiments.
Refers to someone who has attained Six Sigma yellow belt certification. A team member who supports and contributes to Six Sigma projects, often helping to collect data, brainstorm ideas, and review process improvements.
Asymmetry in a statistical distribution. Skewed data may affect the validity of control charts and other statistical tests based on the normal distribution.
A cause of variation that arises because of special circumstances and is not an inherent part of a process. Special cause is also referred to as assignable cause. Also see Common Cause.
A document that states the requirements to which a given product or service must conform.
Also known as dispersion, variability, or scatter.
The extent to which a distribution is stretched or squeezed.
A stable process is said to be in control. A process is considered stable if it is free from the influences of special causes.
A measure that is used to quantify the amount of variation or dispersion of a set of data values.
A single measure of some attribute of a sample—used to make inferences about the population from which the sample came. Sample mean, median, range, variance, and standard deviation are commonly calculated statistics.
An industry-standard methodology for measuring and controlling quality during the manufacturing process.
The application of statistical techniques to control quality. Includes acceptance sampling, which statistical process control does not.
A branch of mathematics dealing with the collection, organization, analysis, interpretation, and presentation of data.
Another name for a sample from the population.
Confidence that a supplier’s product or service will fulfill its customers’ needs; achieved by creating a relationship between the customer and supplier that ensures the product will be fit for use with minimal corrective action and inspection.
According to quality management guru Joseph M. Juran, nine primary activities are needed: 1) define product and program quality requirements; 2) evaluate alternative suppliers; 3) select suppliers; 4) conduct joint quality planning; 5) cooperate with the supplier during the execution of the contract; 6) obtain proof of conformance to requirements; 7) certify qualified suppliers; 8) conduct quality improvement programs as required; and 9) create and use supplier quality ratings.
A system in which supplier quality is managed by using a proactive and collaborative approach. It considers the costs of transactions, communication, problem resolution, and switching suppliers, as well as overall cost. It also focuses on factors that impact supply-chain performance, such as the reliability of supplier delivery and the supplier’s internal policies regarding inventory levels.
The system of organizations, people, activities, information, and resources involved in moving a product or service from supplier to customer.
Also known as the 80-20 rule.
A graphical tool for ranking causes from most significant to least significant. It is based on the Pareto principle, named after 19th century economist Vilfredo Pareto, and suggests that most effects come from relatively few causes; that is, 80% of the effects come from 20% of the possible causes.
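The core of a Pareto chart is a ranking of causes with cumulative percentages; a minimal sketch with hypothetical defect counts:

```python
# Hypothetical defect counts by cause.
causes = {"scratches": 120, "misalignment": 45, "porosity": 18,
          "discoloration": 9, "burrs": 5, "other": 3}

total = sum(causes.values())
cumulative = 0.0
for cause, count in sorted(causes.items(), key=lambda kv: kv[1], reverse=True):
    cumulative += 100.0 * count / total
    print(f"{cause:15s} {count:4d}  cumulative: {cumulative:5.1f}%")
```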
A metric reporting the number of defects normalized to a population of one million for ease of comparison.
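A sketch of the normalization (the helper name and the figures are hypothetical):

```python
def dpmo(defects: int, units: int, opportunities_per_unit: int = 1) -> float:
    """Defects normalized to a population of one million opportunities."""
    return defects / (units * opportunities_per_unit) * 1_000_000

print(dpmo(defects=7, units=2500))  # 2800.0 defects per million
```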
A ratio often used to refer to the concentration of solutes in solutions, such as salts in water (i.e., salinity).
See Percent Chart.
Also referred to as a proportion chart.
A control chart for evaluating the stability of a process in terms of the percentage of the total number of units in a sample in which an event of a given classification occurs.
Percentiles divide the ordered data into 100 equal groups. The kth percentile pk is a value such that at least k% of the observations are at or below this value and (100-k)% of the observations are at or above this value.
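A sketch using the nearest-rank method, one common way to compute pk (other interpolation conventions exist):

```python
import math

def percentile(data, k):
    """Smallest value such that at least k% of observations are at or below it."""
    ordered = sorted(data)
    rank = math.ceil(k / 100 * len(ordered))  # nearest-rank position
    return ordered[max(rank, 1) - 1]

data = list(range(1, 101))   # the values 1..100
print(percentile(data, 90))  # 90
```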
Also known as PDCA Model.
A four-step process for quality improvement. In the first step (plan), a way to effect improvement is developed. In the second step (do), the plan is carried out. In the third step (check), what was observed in the previous step is compared with what was predicted. In the last step (act), action is taken to correct or improve the process.
A discrete probability distribution that expresses the probability of a number of events occurring in a fixed time period if these events occur with a known average rate and are independent of the time since the last event.
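The Poisson probability mass function is P(X = k) = λ^k e^(−λ) / k!. A minimal sketch with a hypothetical defect rate:

```python
import math

def poisson_pmf(k: int, lam: float) -> float:
    """Probability of exactly k events given an average rate lam."""
    return lam**k * math.exp(-lam) / math.factorial(k)

# With an average of 2 defects per shift, probability of exactly 3 defects:
print(poisson_pmf(3, 2.0))  # ~0.180
```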
The amount of variation that exists in the values of multiple measurements of the same characteristic or parameter. Greater precision means less variation between measurements.
The likelihood of occurrence of an event, action, or item.
A set of interrelated work activities that transform inputs into outputs.
Expected or average value of process quality.
A statistical measure of the inherent process variability of a given characteristic.
The value of the tolerance specified for the characteristic divided by the process capability. The several types of process capability indexes include the widely used Cpk and Cp.
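A sketch of the two indexes using their standard formulas, with hypothetical specification and process values:

```python
def cp(usl: float, lsl: float, sigma: float) -> float:
    """Potential capability: tolerance width over the 6-sigma process spread."""
    return (usl - lsl) / (6 * sigma)

def cpk(usl: float, lsl: float, mean: float, sigma: float) -> float:
    """Actual capability: accounts for how well the process is centered."""
    return min(usl - mean, mean - lsl) / (3 * sigma)

print(cp(usl=10.6, lsl=9.4, sigma=0.1))              # 2.0
print(cpk(usl=10.6, lsl=9.4, mean=10.1, sigma=0.1))  # ~1.667
```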
The method for ensuring that a process meets specified requirements.
Actions taken to increase the effectiveness or efficiency of a process in meeting specified requirements.
A subjective term for which each person or sector has its own definition. In technical usage, quality can have two meanings: 1) the characteristics of a product or service that bear on its ability to satisfy stated or implied needs; 2) a product or service free of deficiencies. According to Joseph Juran, quality means “fitness for use;” according to Philip Crosby, it means “conformance to requirements.”
Quality assurance is all the planned and systematic activities implemented within the quality system that can be demonstrated to provide confidence that a product or service will fulfill requirements for quality. Quality control is comprised of the operational techniques and activities used to fulfill requirements for quality. Quality Assurance and Quality Control are often used interchangeably, referring to the actions performed to ensure the quality of a product, service, or process.
The ability of a product, service, or process to meet its design specifications. Design specifications are an interpretation of what the customer needs.
See First Pass Yield.
Quartiles divide the ordered data into 4 equal groups. The second quartile (Q2) is the median of the data.
A cause of variation due to chance and not assignable to any factor.
A commonly used sampling technique in which sample units are selected so all combinations of n units under consideration have an equal chance of being selected as the sample.
The measure of dispersion in a data set (the difference between the highest and lowest values).
Also known as Range Control Chart.
A control chart in which the range (R) of a subgroup is used to track instantaneous variations and to evaluate the stability of the variability within a process.
A set of statistical processes for estimating the relationships among variables.
The smallest number of defectives (or defects) in the sample or samples under consideration that will require rejection of the lot.
The variation in measurements obtained when one measurement device is used several times by the same person to measure the same characteristic on the same product.
The variation in measurements made by different people using the same measuring device to measure the same characteristic on the same product.
A factor that caused a nonconformity and should be addressed with corrective action.
The method of identifying the initiating cause of a problem, which leads to preventing it from occurring again.
A number of consecutive points consistently increasing or decreasing. A run can be evidence of the existence of special causes of variation that should be investigated.
A chart showing a line connecting numerous data points collected from a process running over time.
Also known as a Batch.
The value of percentage defective or defects per hundred units in a lot.
Also referred to as N.
The number of units in a lot.
Expressed in percentage defective, the poorest quality in an individual lot that should be accepted.
Note: LTPD is used as a basis for some inspection systems and is commonly associated with a small consumer risk.
Control limit for points below the central line in a control chart.
The arithmetic average of a discrete set of values in a data set.
The criteria, metric, or means to which a comparison is made with output.
The act or process of determining a value. An approximation or estimate of the value of the specific quantity subject to measurement, which is complete only when accompanied by a quantitative statement of its uncertainty.
All operations, procedures, devices, and other equipment, personnel and environment used to assign a value to the characteristic being measured.
In metrology, a non-negative parameter characterizing the dispersion of the values attributed to a measured quantity.
The center value of a set of data when all the data are arranged in sequence.
The value occurring most frequently in a data set.
A measure used to help calculate the variance of a data population; the distance or difference between consecutive points. The moving range chart is typically used with an Individual X (IX) chart for single measurements.
A measure used to calculate variation using the standard deviation between two consecutive points from an IX control chart. The calculations are then plotted and analyzed on a time-ordered Moving-s control chart.
A control chart for evaluating the stability of a process in terms of the levels of two or more variables or characteristics.
The number of units in a sample.
The number of units in a population.
A unit with one or more nonconformities or defects. Also called a reject.
A specified requirement that is not fulfilled. Also see Blemish, Defect, and Imperfection.
Testing and evaluation methods that do not damage or destroy the test specimen.
All tests involving ranked data (data that can be put in order). Nonparametric tests are often used in place of their parametric counterparts when certain assumptions about the underlying population are questionable.
The charting of a data set in which most of the data points are concentrated around the average (mean), thus forming a bell-shaped curve.
A control chart based on counting the number of defective units in each constant size subgroup. The np-chart is based on the binomial distribution.
Also known as Operating Curve.
A graph to determine the probability of accepting lots as a function of the lots’ or processes’ quality level when using various sampling plans. There are three types: type A curves, which give the probability of acceptance for an individual lot coming from finite production (will not continue in the future); type B curves, which give the probability of acceptance for lots coming from a continuous process; and type C curves, which (for a continuous sampling plan) give the long-run percentage of product accepted during the sampling phase.
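For a type B curve, the probability of acceptance at each quality level p follows from the binomial distribution. A sketch for the single sampling plan (n, c) = (50, 1) used in the earlier example:

```python
from math import comb

def prob_accept(n: int, c: int, p: float) -> float:
    """Probability of finding c or fewer defectives in a sample of n
    when the process fraction defective is p (binomial model)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))

for p in (0.005, 0.01, 0.02, 0.05, 0.10):  # sweep quality levels
    print(f"p = {p:.3f}  Pa = {prob_accept(50, 1, p):.3f}")
```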
Unusually large or small observations relative to the rest of the data.
A process in which the statistical measure being evaluated is not in a state of statistical control. In other words, the variations among the observed sampling results cannot be attributed to a constant system of chance causes. Also see In-Control Process.
A term that indicates a unit does not meet a given requirement or specification.
Used to measure manufacturing productivity; identifies the percentage of manufacturing time that is truly productive. An OEE score of 100% means you are manufacturing only Good Parts, as fast as possible, with no Stop Time. In the language of OEE that means 100% Quality (only Good Parts), 100% Performance (as fast as possible), and 100% Availability (no Stop Time).
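OEE is conventionally the product of the three factors named above; a minimal sketch with hypothetical shift values:

```python
def oee(availability: float, performance: float, quality: float) -> float:
    """OEE = Availability x Performance x Quality, each expressed as a fraction."""
    return availability * performance * quality

# Hypothetical shift: 90% availability, 95% performance, 99% quality.
print(f"OEE = {oee(0.90, 0.95, 0.99):.1%}")  # OEE = 84.6%
```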
An element often introduced into a process by a well-meaning operator or controller who considers any appreciable deviation from the target value as a special cause. In this case, the operator is incorrectly viewing common-cause variation as a fault in the process. Over control of a process can actually increase the variability of the process and is viewed as a form of tampering.
A diagram consisting of rectangles whose area is proportional to the frequency of a variable and whose width is equal to the class interval. Gives a rough sense of the density of the underlying distribution of the data and is often used for density estimation—that is, estimating the probability density function of the underlying variable. The total area of a histogram used for probability density is always normalized to 1.
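A sketch of the density normalization with hypothetical data and bin choices; dividing each bin count by (total observations × bin width) makes the bar areas sum to 1:

```python
data = [2.1, 2.3, 2.2, 2.8, 2.5, 2.4, 2.6, 2.2, 2.3, 2.7]
bin_width, low, n_bins = 0.2, min(data), 5

counts = [0] * n_bins
for x in data:
    idx = min(int((x - low) / bin_width), n_bins - 1)  # assign value to a bin
    counts[idx] += 1

# Density = count / (total observations * bin width), so areas sum to 1.
densities = [c / (len(data) * bin_width) for c in counts]
print(sum(d * bin_width for d in densities))  # 1.0
```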
A procedure that is used on a sample from a population to investigate the applicability of an assertion (inference) to the entire population. Hypothesis testing can also be used to test assertions about multiple populations using multiple samples.
A quality characteristic’s departure from its intended level or state without any association to conformance to specification, requirements, or to the usability of a product or service. Also see Blemish, Defect, and Nonconformity.
A process in which the statistical measure being evaluated is in a state of statistical control; in other words, the variations among the observed sampling results can be attributed to a constant system of chance causes. Also see Out-of-Control Process.
A single unit or a single measurement of a quality characteristic, usually denoted as X. This measurement is analyzed using an Individuals Chart, CUSUM, or EWMA chart.
Also called an I-chart or X-chart. A control chart for processes in which individual measurements of the process are plotted for analysis.
A verification activity. For example, measuring, examining, testing, and gauging one or more characteristics of a product or service and comparing the results with specified requirements to determine whether conformity is achieved for each characteristic.
The cost associated with inspecting a product to ensure it meets the internal or external customer’s needs and requirements; an appraisal cost.
The lot or batch of product to be inspected for acceptance.
In the ANSI/ASQ and ISO Acceptance Sampling Standards, there are three inspection states (or statuses): Normal, Tightened, and Reduced. The definitions for each state are found in the applicable standard under a heading called Switching Rules.
An independent, nongovernmental international organization with a membership of 161 national standards bodies that unites experts to share knowledge and develop voluntary, consensus-based, market-relevant international standards, guidelines, and other types of documents.
A set of international standards on quality management and quality assurance developed to help organizations effectively document the quality system elements to be implemented to maintain an efficient quality system. The standards, initially published in 1987, are not specific to any particular industry, product, or service. The standards were developed by the International Organization for Standardization (ISO). The standards underwent major revision in 2000 and now include ISO 9000:2005 (definitions), ISO 9001:2008 (requirements), ISO 9004:2009 (continuous improvement), and ISO 9001:2015 (risk management).
A voluntary quality management system standard developed by the International Organization for Standardization (ISO). First released in 1987 and one of several documents in the ISO 9000 family.
Also known as Just-In-Time Production. A methodology aimed primarily at reducing flow times within a production system, as well as response times from suppliers and to customers.
A process parameter that can affect safety or compliance with regulations, fit, function, performance or subsequent processing of product.
A product characteristic that can affect safety or compliance with regulations, fit, function, performance or subsequent processing of product.
A non-parametric test for determining whether samples originate from the same distribution. It is used for comparing two or more independent samples of equal or different sample sizes. While analysis of variance tests depend on the assumption that all populations under comparison are normally distributed, the Kruskal-Wallis test places no such restriction on the comparison. It is a logical extension of the Wilcoxon-Mann-Whitney test.
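A minimal sketch using SciPy’s implementation (hypothetical sample data; the scipy package is assumed to be available):

```python
from scipy import stats

# Three independent samples, not assumed to be normally distributed.
a = [1.1, 2.3, 1.9, 2.0, 1.6]
b = [2.8, 3.1, 2.6, 3.0, 2.9]
c = [1.5, 1.7, 2.0, 1.8, 1.9]

h_stat, p_value = stats.kruskal(a, b, c)
print(h_stat, p_value)  # reject "same distribution" if p_value is small
```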