Reading

Evans, J. R., & Lindsay, W. M. (2013). Managing for quality and performance excellence (9th ed.). Mason, OH: South-Western Cengage Learning. ISBN: 9781285069463

Read

  • Chapter 6

W5 Lecture 1 "Statistical Methods Part 1"

Content

  • Performance Management

Statistical Methods Part 1

  • This lecture will discuss statistical methods in quality. Statistics is a science concerned with the collection, organization, analysis, interpretation, and presentation of data and has extensive applications in quality assurance.
  • Statistical methods are fundamental to Six Sigma practice. In fact, Six Sigma has led to a renaissance of statistics in business: workers at all organizational levels are receiving statistical training, something that has never been done before.
  • An experiment is a process that results in some outcome. The outcome of an experiment is a result that we observe. The collection of all possible outcomes of an experiment is called the sample space. A sample space may consist of a small number of discrete outcomes or an infinite number of countable outcomes or real numbers.
  • Probability is the likelihood that an outcome occurs. An event is a collection of one or more outcomes from a sample space, such as finding 2 or fewer defectives in the sample of 10, or having a bulb burn for more than 1000 hours. If A is any event, the complement of A, denoted as A^c, consists of all outcomes in the sample space not in A. For example, if A is the event of finding 2 or fewer defectives in a sample of 10, then A^c is the event of finding 3 or more defectives. Two events are mutually exclusive if they have no outcomes in common. For example, if A is the event “2 or fewer defects in a sample” and B is the event “5 or more defects,” then clearly A and B are mutually exclusive.
  • Conditional probability is the probability that one event A occurs, given that another event B is known to be true or to have already occurred. In general, the conditional probability of event A given that event B has occurred is: P(A | B) = P(A and B) / P(B)
  • The multiplication rule of probability is derived from this formula:
  • P(A and B) = P(A | B) P(B) = P(B | A) P(A)
  • Two events A and B are independent if P(A | B) = P(A). If two events are independent, then we can simplify the multiplication rule of probability by substituting P(A) for P(A | B): P(A and B) = P(B) P(A) = P(A) P(B).
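As a sketch of these ideas, conditional probability and the multiplication rule can be checked by enumerating a small sample space. The two-dice experiment below is a hypothetical illustration (not from the text), and the events A and B are chosen so that they happen to be independent:

```python
from fractions import Fraction

# Sample space: all ordered outcomes of rolling two fair dice.
space = [(i, j) for i in range(1, 7) for j in range(1, 7)]

def prob(event):
    """Probability of an event, given as a predicate over outcomes."""
    hits = [o for o in space if event(o)]
    return Fraction(len(hits), len(space))

A = lambda o: o[0] + o[1] == 7   # event: the dice sum to 7
B = lambda o: o[0] == 3          # event: the first die shows 3

p_b = prob(B)
p_a_and_b = prob(lambda o: A(o) and B(o))
p_a_given_b = p_a_and_b / p_b    # conditional probability P(A | B)

print(p_a_given_b)                      # 1/6
# Multiplication rule: P(A and B) = P(A | B) P(B)
print(p_a_and_b == p_a_given_b * p_b)   # True
# Here A and B are independent: P(A | B) equals P(A).
print(p_a_given_b == prob(A))           # True
```

Enumerating the sample space directly like this is only feasible for small experiments, but it makes the definitions concrete before moving to formulas.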
  • A random variable is a numerical description of the outcome of an experiment. A probability distribution is a characterization of the possible values that a random variable may assume along with the probability of assuming these values. The cumulative distribution function, F(x), specifies the probability that the random variable X will assume a value less than or equal to a specified value, x.
  • A probability distribution can be either discrete or continuous, depending on the nature of the random variable it models. Several common distributions, such as the binomial, Poisson, normal, and exponential, are widely used in quality assurance applications.
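As an illustration, the binomial distribution can be computed directly with Python's standard library. The defect probability of 0.1 below is an assumed value chosen for the example, echoing the earlier "2 or fewer defectives in a sample of 10" event:

```python
from math import comb

def binomial_pmf(k, n, p):
    """P(X = k) for a binomial random variable: n trials, success probability p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Assumed example: probability of 2 or fewer defectives in a sample of 10
# when each item is defective with probability 0.1.
p_two_or_fewer = sum(binomial_pmf(k, 10, 0.1) for k in range(3))
print(round(p_two_or_fewer, 4))  # 0.9298
```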
  • Several types of sampling techniques exist. They include simple random sampling, stratified sampling, systematic sampling, cluster sampling, and judgment sampling. Understanding the purpose of sampling and the statistical issues underlying each technique is important in choosing the appropriate one to use.
  • A continuous random variable is defined over one or more intervals of real numbers, and therefore, has an infinite number of possible outcomes. A curve that characterizes outcomes of a continuous random variable is called a probability density function, and is described by a mathematical function f(x).
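For instance, the exponential distribution is a continuous distribution often used to model lifetimes, such as the bulb example above; its cumulative distribution function is F(x) = 1 − e^(−x/μ). A minimal sketch, assuming a mean lifetime of 1,000 hours (an illustrative value, not from the text):

```python
from math import exp

def exponential_cdf(x, mean):
    """F(x) = P(X <= x) for an exponential random variable with the given mean."""
    return 1 - exp(-x / mean)

# Assumed example: bulb lifetimes exponentially distributed with mean 1000 hours.
# P(bulb burns for more than 1000 hours) = 1 - F(1000).
p_over_1000 = 1 - exponential_cdf(1000, mean=1000)
print(round(p_over_1000, 4))  # 0.3679
```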
  • Lecture Summary:
  • This lecture describes concepts of statistics, statistical thinking, statistical methodology, sampling, experimental design, and process capability. You are encouraged to take a “big picture” perspective on this framework, rather than the approach of: “How do I get the right answer?”

W5 Lecture 2 "Statistical Methods Part 2"

Content

Performance Management

Statistical Methods Part 2

Part 2 of this week's lecture will focus on a few more key topics for statistical methods in quality management.

Anyone who has researched continuous improvement or quality assurance programs, such as Six Sigma or Lean Manufacturing, understands the need for statistics. Statistics provides the means to measure and control production processes so as to minimize variation, which leads to best practices.

You need to know what to measure and how to access the numbers; do not let the numbers do the managing for you. Before using statistics, know exactly what you are looking for in the data. Understand what each statistical tool can and cannot measure, and use several tools that complement one another.

I would like to start with histograms as an example. A histogram is a bar graph of raw data that creates a picture of the data distribution. The bars represent the frequency of occurrence by classes of data. A histogram shows basic information about the data set, such as central location, width of spread, and shape.

Use histograms to assess the system’s current situation and to study results of improvement actions. The histogram’s shape and statistical information help you decide how to improve the system. If the system is stable, you can make predictions about the future performance of the system. After improvement action has been carried out, continue collecting data and making histograms to see if the theory has worked.

Please see the histogram example below:
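As a rough sketch of the idea (using made-up measurement data, since no dataset accompanies this lecture), a simple text histogram can be built in Python by binning the data into classes and counting frequencies:

```python
from collections import Counter

# Hypothetical measurements (e.g., part diameters in mm).
data = [4.9, 5.0, 5.1, 5.0, 5.2, 4.8, 5.0, 5.1, 4.9, 5.0,
        5.3, 5.1, 5.0, 4.9, 5.2, 5.0, 5.1, 5.0, 4.8, 5.0]

# Bin the data into classes of width 0.1 and count the frequency of each class.
counts = Counter(round(x, 1) for x in data)

# Print one bar per class; bar length = frequency of occurrence.
for value in sorted(counts):
    print(f"{value:>4}: {'#' * counts[value]}")
```

Even this crude chart reveals the central location (around 5.0), the spread, and the roughly symmetric shape of the distribution.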

Next, we will focus on some statistical methodologies. The three basic components of statistical methodology are descriptive statistics, statistical inference, and predictive statistics. Statistical methods are used in many areas of quality assurance.

The most common types of sampling schemes are:

  1. Simple random sampling: Every item in the population has an equal probability of being selected.
  2. Stratified sampling: The population is partitioned into groups, or strata, and a sample is selected from each group.
  3. Systematic sampling: Every nth (4th, 5th, etc.) item is selected.
  4. Cluster sampling: A population is partitioned into groups (clusters) and a sample of clusters is selected. Either all elements in the chosen clusters are included in the sample or a random sample is taken from each of them.
  5. Judgment sampling: Expert opinion is used to determine the sample.
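The first three schemes above can be sketched with Python's random module. The population of 100 item IDs and the two strata are assumptions made purely for illustration:

```python
import random

random.seed(42)  # seeded only so the illustration is reproducible
population = list(range(1, 101))  # hypothetical item IDs 1..100

# 1. Simple random sampling: every item has an equal probability of selection.
simple = random.sample(population, 10)

# 2. Stratified sampling: partition into strata, then sample from each stratum.
strata = [population[:50], population[50:]]  # two assumed strata
stratified = [x for s in strata for x in random.sample(s, 5)]

# 3. Systematic sampling: every 10th item after a random starting point.
start = random.randrange(10)
systematic = population[start::10]

print(len(simple), len(stratified), len(systematic))  # 10 10 10
```

Judgment sampling has no algorithmic analogue, since the sample is chosen by expert opinion rather than by a chance mechanism.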

Sampling error and systematic (non-sampling) error are the two major sources of errors in statistics. Every effort should be made to reduce these errors through careful planning and administration of statistical studies.

A population is a complete set or collection of objects of interest; a sample is a subset of objects taken from the population.

Common measures of location include the mean, median, and mode. Common measures of dispersion are the range, variance, and standard deviation. The proportion is used as a key statistic for categorical data.
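These measures can all be computed with Python's statistics module; the data set below is hypothetical:

```python
import statistics

# Hypothetical measurements from a process.
data = [5, 7, 7, 8, 9, 10, 12]

# Measures of location.
print(statistics.mean(data))     # ~8.2857
print(statistics.median(data))   # 8
print(statistics.mode(data))     # 7

# Measures of dispersion.
print(max(data) - min(data))     # range: 7
print(statistics.variance(data)) # sample variance
print(statistics.stdev(data))    # sample standard deviation
```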

Lecture Summary:

Although this lecture reviews many of the basic concepts and techniques of statistics relevant to the technical areas of statistical process control (SPC), it is by no means comprehensive; it covers only the basics. You are encouraged to consult a statistics textbook for further insight into the topics in this week and chapter. These topics are typically covered in business statistics courses, and you should have had prior exposure to this material before taking a course using this text.

Key objectives for this lecture include:

  • To introduce the use of charts and histograms as a tool for drawing conclusions regarding controllable process factors and/or comparing methods for process development or improvement.
  • To help you to understand the concept of process capability and its effects on quality and conformance to specifications.