Lecture 11
Statistical Process Control (SPC)
(Continuation)
Components of Shewhart's model under discussion:
Previous steps:
- Step 1: Process observation (sampling selected feature(s))
- Step 2: Data evaluation (data preprocessing and analysis via X, R diagrams)
Next steps:
- Step 3: Diagnostics (identification of failure origins)
Diagnostics – basic approaches and methods
- Typical application of simple but efficient graphical methods
- Main purpose: assistance in the search for failure origins, usually by combining all of the methods:
(1) Scatter diagrams
(2) Pareto diagrams
(3) Cause-and-effect diagrams
1. Scatter diagrams
Provide support in identifying dependencies between parameters (2) that characterize the quality or efficiency at a process output and factors (1) that influence this output,
where:
(1) stands for a process input parameter (material properties, manpower, internal process parameters, etc.)
(2) denotes a process output (quality level, process throughput, process cost per product unit, etc.)
- Scatter diagrams provide quantitative measures of input/output dependencies
Ex. 1: Aspects of the correlation between x and y (see the sketch below):
- Weak or strong? (given by the ratio of the hull axes)
- Direction? (given by the direction of the main hull axis)
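A minimal numerical sketch of these two aspects, assuming entirely hypothetical paired measurements of one input x and one output y: the Pearson coefficient quantifies how weak or strong the dependency is, and the principal axes of the covariance matrix of the point cloud play the role of the hull axes (their ratio and the slope of the main axis).

```python
import numpy as np

# Hypothetical paired measurements: process input x and output quality feature y
rng = np.random.default_rng(0)
x = rng.normal(50.0, 5.0, 200)               # e.g. an input parameter (temperature)
y = 0.8 * x + rng.normal(0.0, 3.0, 200)      # output depends on x plus random noise

r = np.corrcoef(x, y)[0, 1]                  # strength of the linear dependency
cov = np.cov(x, y)                           # 2x2 covariance matrix of the point cloud
eigvals, eigvecs = np.linalg.eigh(cov)       # principal axes of the "hull" (scatter ellipse)

axis_ratio = np.sqrt(eigvals[0] / eigvals[1])    # minor/major axis: close to 1 ~ weak
main_axis = eigvecs[:, 1]                        # direction of the main hull axis
slope = main_axis[1] / main_axis[0]              # sign gives the direction of the dependency

print(f"Pearson r = {r:.2f}, axis ratio = {axis_ratio:.2f}, main-axis slope = {slope:.2f}")
```

An axis ratio close to 1 indicates a weak (nearly circular) dependency; a ratio close to 0 together with a nonzero slope indicates a strong correlation in the corresponding direction.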
Ex. 2: (a negative case)
- Assume X, R diagrams describing the surface smoothness of a product (the observed quality feature)
- Also assume a situation without SPC, so that the X values (the smoothness) may run out of the admissible interval (there is no feedback keeping X within the given bounds)
The value R (range), however, can still satisfy the requirements, irrespective of what happens with X, because no SPC loop is closed
What may cause this sort of situation?
Explanation: Suppose the surface smoothness depends on the state of a particular tool used in production (e.g. its sharpness); the situation may then look as follows:
- The smoothness range R is relatively low for both "poor sharpness" and "good sharpness" (the requirements on R are satisfied)
- The smoothness value X is large for both possible tool states and therefore not compliant with the requirements (the mean of X has deviated)
Conclusion: It is always necessary to examine the dependency of the observed parameter on the overall system status
(system status ~ the combination of other input parameters that can influence the system behavior) □
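A small simulation of this situation, with invented smoothness numbers and an assumed spec of 2.0 ± 0.5: both tool states produce a small within-sample range R, yet the sample mean X is shifted far outside the admissible interval.

```python
import numpy as np

rng = np.random.default_rng(1)
target, tolerance = 2.0, 0.5                   # assumed smoothness spec: 2.0 ± 0.5

# Two tool states: both produce a small spread (R is fine),
# but the mean smoothness is shifted well above the admissible interval.
sharp_tool = rng.normal(2.9, 0.05, (10, 4))    # "good sharpness"
dull_tool = rng.normal(3.4, 0.05, (10, 4))     # "poor sharpness"

for name, samples in [("good sharpness", sharp_tool), ("poor sharpness", dull_tool)]:
    xbar = samples.mean(axis=1)                        # sample means (X)
    r = samples.max(axis=1) - samples.min(axis=1)      # sample ranges (R)
    print(f"{name}: mean X = {xbar.mean():.2f} (spec {target} ± {tolerance}), "
          f"mean R = {r.mean():.2f}")
```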
2. Pareto diagrams
- Support the selection of events that occur more often than others → these are likely to have more influence on the system behavior
- Help to identify priorities for problem-solving: a targeted selection of the problems with the highest occurrence
- Based on the so-called Pareto principle: if the occurrences of a particular event originate from multiple sources, it is highly probable that the majority of the occurrences come from a relatively small number of these sources (possibly even from a single one)
- Applying this idea to the identification of failure sources provides an efficient way of eliminating failures:
- Localize the process with the highest failure rate (failure #1)
- Eliminate failure #1, which also brings an improvement with respect to the occurrence of the other failure types
- Localize the next failure (according to its rate), proceed as in the previous case, etc.
Ex.: Failure rates and application of the Pareto principle (a numerical sketch follows the example)
□
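A sketch of the Pareto evaluation on an invented failure log (all categories and counts are hypothetical): the causes are sorted by occurrence and the cumulative percentage shows how few of them account for most of the failures.

```python
from collections import Counter

# Hypothetical failure log: each entry is the category of one observed defect
failures = (["scratch"] * 48 + ["misalignment"] * 27 + ["porosity"] * 11 +
            ["discoloration"] * 8 + ["burr"] * 4 + ["other"] * 2)

counts = Counter(failures).most_common()     # sort causes by occurrence, highest first
total = sum(n for _, n in counts)

cumulative = 0
print(f"{'cause':<15}{'count':>6}{'cum. %':>9}")
for cause, n in counts:
    cumulative += n
    print(f"{cause:<15}{n:>6}{100 * cumulative / total:>8.1f}%")
# The first one or two causes cover most of the occurrences (Pareto principle),
# so they are the ones to localize and eliminate first.
```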
3. Cause-and-effect diagrams
- While the Pareto diagrams help to indicate priorities for problem-solving, the cause-and-effect diagrams support the search for the origins of failures
- Give a basic framework for a systematic search for the causes of various effects (mainly failures)
- Similar to state-space search; consists in a systematic (graphical) build-up of a path to the origin of the problem
- Backward approach, building the graph from effect → cause
Ex.:
Process: vegetable storage (potatoes)
Problem identified: decay of the stored product (a backward-search sketch follows the example)
□
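A possible way to encode such a diagram and search it backward from the effect toward candidate root causes; the tree below is a hypothetical illustration for the potato-storage example, not an exhaustive analysis.

```python
# Hypothetical cause-and-effect tree for the "stored product decay" effect:
# each effect maps to the candidate causes it may originate from.
diagram = {
    "product decay": ["storage climate", "product condition", "handling"],
    "storage climate": ["temperature too high", "humidity too high"],
    "product condition": ["mechanical damage", "infected batch"],
    "handling": ["long transport time"],
}

def root_causes(effect, tree, path=()):
    """Backward search (effect -> cause): enumerate all paths to leaf causes."""
    path = path + (effect,)
    if effect not in tree:          # a leaf is a candidate root cause
        yield path
        return
    for cause in tree[effect]:
        yield from root_causes(cause, tree, path)

for chain in root_causes("product decay", diagram):
    print(" <- ".join(chain))
```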
The process of sampling
- The standard approach is sampling that satisfies Shannon's theorem (not very efficient in this context)
- An often-used alternative called "rational sampling" takes into account the following features:
- Sample size
- Sampling rate (frequency)
- Sample gathering method
- Examples of improper (wrong) sampling in real production processes:
- Too dense or sparse sampling rate
- Periodic sampling or sample gathering at extraordinary time points
- Sampling from multiple sources (sample stratification & sample mixing)
- The choice of a correct sampling method is the key issue for proper operation of Shewhart's model of SPC.
What is considered correct sampling of a particular process?
The concept of rational sampling: a rational subgroup of measurements (a sample) is a set of measurements whose variance originates purely from random variations of a system in a balanced state (~ no extraordinary events occur, e.g. transient phenomena, technology crashes, etc.)
General criteria for sample choice
Basic motivations:
- To gain the maximum of information within a single sample (derived e.g. from the sample scatter or the sample range)
- To maximize the differences between subsequent samples in case an extraordinary event occurred in the meantime (to provide an opportunity for detecting this event)
The basic rules for rational sampling selection are:
- The sample has to represent only the standard (random) variation in the system
- This implies a requirement for a "small" sample (more sensitive to fast changes of the observed features in the system).
vs.
- A large sample also includes extraordinary effects, e.g. a mean value shift (a large sample reduces the sensitivity of the SPC model to fast or extraordinary changes).
- The samples should guarantee a normal distribution of the observed feature's mean value
- As the build-up of an X diagram relies on a normal distribution of mean(X), a larger sample provides a better approximation of the distribution of mean(X).
- A recommended sample size ≥ 4 is typically satisfactory for most practical cases (see the sketch after this list).
- The samples should be capable of detecting "extraordinary effects": the larger the sample → the better the detection of an extraordinary effect (e.g. a mean value drift).
- The samples should be small enough:
- To allow the sample gathering at all
- To satisfy economic requirements (sample price)
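A minimal sketch of how the recommended sample size n = 4 enters the X, R diagram construction, using the standard tabulated chart constants for n = 4 (A2 = 0.729, D3 = 0, D4 = 2.282) and simulated in-control data.

```python
import numpy as np

# Tabulated X, R chart constants for sample size n = 4
A2, D3, D4 = 0.729, 0.0, 2.282

# Simulated in-control process sampled in 25 subgroups of n = 4
rng = np.random.default_rng(2)
samples = rng.normal(10.0, 0.2, (25, 4))

xbar = samples.mean(axis=1)                       # sample means
r = samples.max(axis=1) - samples.min(axis=1)     # sample ranges
xbar_bar, r_bar = xbar.mean(), r.mean()

print(f"X diagram: CL = {xbar_bar:.3f}, "
      f"UCL = {xbar_bar + A2 * r_bar:.3f}, LCL = {xbar_bar - A2 * r_bar:.3f}")
print(f"R diagram: CL = {r_bar:.3f}, UCL = {D4 * r_bar:.3f}, LCL = {D3 * r_bar:.3f}")
```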
Sample subset choice
- How to sample at all (with respect to Shannon's theorem and its limits)
- Optimal sampling selection (multiple possibilities):
- Periodic sample gathering at a certain time point or within a very short time interval (all the measurements forming one sample)
- Random choice of the time instant for the sample gathering
Consequences:
- Minimized within-sample variance and a good possibility of detecting an extraordinary effect between the samples
- Minimized influence of random process events (variations), e.g. drifts in the mean value, tool swapping, tool lifetime, etc.
Ex.: Sampling rate choice: 1 hour ± 15 minutes □
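A sketch of such a schedule, assuming an 8-hour shift and the "1 hour ± 15 minutes" rule from the example; the function name and parameters are illustrative only.

```python
import random

def sampling_times(shift_hours=8, base_min=60, jitter_min=15, seed=3):
    """Sampling instants (minutes from shift start): nominally one sample per hour,
    each instant shifted randomly by up to ±15 minutes."""
    random.seed(seed)
    times, t = [], 0.0
    while True:
        t += base_min + random.uniform(-jitter_min, jitter_min)
        if t > shift_hours * 60:
            break
        times.append(round(t))
    return times

print(sampling_times())    # list of sampling instants in minutes
```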
Sampling rate (frequency)
- A frequent mistake (!): undersampling causes the X, R diagrams to lose their meaning (this kind of fundamental mistake is hard to identify)
Rules for sampling rate choice:
- Sampling rate vs. process stability – a procedure to set an optimal sampling rate and ensure SPC stability.
Initial situation: a process without previous SPC (open loop) or an unstable process (oscillations or chaotic behavior):
- Begin with a high sampling rate
- Close the SPC loop and adjust the system's behavior (bring the system to stable performance)
- Decrease the sampling rate while the SPC is operating and observe the responses of the system and of the SPC.
- Frequencies of particular events in the process – take into account the possible need to observe the occurrences of such events.
- Event ~ new material delivery, shift change, technology adjustment, startup, etc.
- Has to comply with the constraints set by Shannon's theorem (see the sketch after this list)
- Costs of sampling:
- Cost of the samples (proportional to the number of samples)
vs.
- Losses from process failures (or insufficient quality); too sparse sampling leads to losing control
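A trivial sketch of the Shannon/Nyquist constraint mentioned above: the sampling interval must be at most half the period of the fastest disturbance that should still be detectable (the 4-hour disturbance period below is an invented figure).

```python
def max_sampling_interval(fastest_disturbance_period_min):
    """Shannon/Nyquist constraint: sample at least twice per period of the
    fastest disturbance that should still be detectable."""
    return fastest_disturbance_period_min / 2.0

# Assumed example: the fastest relevant disturbance repeats about every 4 hours,
# so samples must be taken at most every 2 hours.
print(max_sampling_interval(240))    # -> 120.0 minutes
```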
Accumulated vs. distributed sampling
The two approaches behave differently when drifts in the observed feature level appear over a short or a long time interval (the interval length is related to the interval over which a sample is taken); see the sketch after the examples.
- Level variation over a long time interval → accumulated sampling is advantageous
- Level variation over a short time interval → better to apply the distributed sampling approach
Ex. 1:
Equidistant sampling vs. mean variation → good detection of slow variations:
□
Ex. 2:
Random sampling vs. mean variation → good detection of rapid variations:
□
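A sketch contrasting the two approaches on an invented slow mean drift: an accumulated sample (n consecutive measurements at one time point) keeps the drift out of the within-sample range, so the drift shows up as a shift of X between samples, whereas a sample distributed over the whole interval pulls the drift into R.

```python
import numpy as np

rng = np.random.default_rng(4)
t = np.arange(200)                                          # time in minutes
process = 10.0 + 0.01 * t + rng.normal(0.0, 0.1, t.size)    # slow drift of the mean

n = 4
accumulated = process[100:100 + n]            # n consecutive measurements at one instant
distributed = process[[25, 75, 125, 175]]     # n measurements spread over the interval

# Accumulated: the drift stays between samples, so R is small and the drift
# appears as a shift of X from sample to sample.
# Distributed: the drift is folded into the sample and inflates R instead.
print("accumulated: R =", round(float(np.ptp(accumulated)), 3))
print("distributed: R =", round(float(np.ptp(distributed)), 3))
```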
Some other negative sampling effects
- Some sampling setups combine the outputs of parallel processes (as these are assumed to be identical)
Problems of:
- Sample stratification (the setup of the situation is known)
- Sample mixing (the setup of the situation is hidden)
Sample stratification:
- Grouping of sample values into level layers (strata); note the shaping of the X, R diagrams
- Origin of each sample measurement is known and can be tracked
Situation:
The obtained sample offers the possibility of detecting a deviation in the process mean (processes 1–4) via an increase of the expected sample range.
Nevertheless, the stratified sample follows the setup below, with all its consequences:
Moreover, sample stratification produces untypical, oscillation-like patterns observable in the X, R diagrams (a small simulation sketch follows):
□
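A small simulation of stratified sampling over four hypothetical parallel lines with slightly different means; all numbers are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)
line_means = [9.8, 9.9, 10.1, 10.2]    # four parallel lines, slightly different means

# Stratified sample: one measurement taken from each line (origin known/trackable)
samples = np.array([[rng.normal(m, 0.05) for m in line_means] for _ in range(25)])

xbar = samples.mean(axis=1)
r = samples.max(axis=1) - samples.min(axis=1)

# R is dominated by the fixed spread between the lines rather than by random
# variation, so R-bar (and the X limits derived from it) is inflated while X
# itself barely moves -- one typical fingerprint of stratification in the X, R diagrams.
print(f"std of X = {xbar.std():.3f}, mean R = {r.mean():.3f}")
```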
Sample mixing:
- Sampling of measurements after previous aggregation
- Similar consequences as for sample stratification
- Sample mixing is a hidden process (not apparent when it occurs); the main difference from stratification is that it is not possible to track the origin of particular measurements
Situation:
□