
Decision Criteria Consolidation:

A Theoretical Foundation of Pareto Principle to

Michael Porter’s Competitive Forces

Jason C.H. Chen, P. Pete Chong, and Y.S. Chen

Journal of Organizational Computing and Electronic Commerce, Vol. 11, No. 1, 2001, pp. 1-14

Abstract

Also known as the 80/20 rule, the Pareto Principle separates a class of significant few from the trivial many. With this classification, the Pareto Principle has managerial and strategic implications in many disciplines. Recent mathematical modeling of the Pareto Principle identifies two important factors that cause such separation: the probability of new entry (which can be viewed as the “entry barrier”) and the recency of usage. Because the probability of new entry determines the upper bound of the usage concentration, it is deemed the more important factor. Since Porter’s five competitive forces are all closely related to the barrier of entry, the theoretical model of the Pareto Principle can serve as a theoretical foundation for Porter’s five competitive forces. Furthermore, we argue that, as in microeconomics, the barrier of entry is the most important factor determining whether the market structure is a monopoly or pure competition. Thus, the decision criteria in strategic planning can be greatly simplified to their effect on the barrier of entry. We also argue that the recency of usage (i.e., a product not recently in use may be forgotten by customers, thus reducing its future usage), though not emphasized in Porter’s theory, should be part of strategy formation.

1. Introduction

Business competitive strategy development is vital to the success of both large corporations and small businesses. In his well-known book on competitive strategy, Michael Porter (1980) proposes a five-force model for business competition strategies: (1) bargaining power of buyers, (2) bargaining power of suppliers, (3) rivalry among existing competitors, (4) threat of new entrants, and (5) threat of substitute products or services.

------

Insert Figure 1 Here

------

Porter’s proposal is essentially a strategy of making one’s own product a monopoly in its class while making the company’s suppliers’ market a purely competitive one. Conceptually, since monopolies are price makers and firms in pure competition are price takers (Lipsey and Courant 1996), a firm in such a position has better control over its revenues and expenses and can greatly increase its profit margin (Carlton and Perloff 1994). The four market structures of monopoly, oligopoly, monopolistic competition, and pure competition are classified according to the number of participants and the level of concentration of business transactions, and their formation has long been attributed to the barrier of entry. For example, monopoly is characterized by a high barrier of entry, whereas pure competition requires “perfect information” and open access to the market for all (Lipsey and Courant 1996). Thus, Porter’s competitive forces can be probed further by studying the formation of market structures, which, in turn, rests on the study of market concentrations.

The simplest way to describe a concentration pattern is to assign some quantitative measurement to it. Vilfredo Pareto (1909) first reports that in Italy about 80% of the wealth is in the hands of about 20% of the population. Since then, many other sociological, economic, political, and natural phenomena have been observed to follow a similar pattern. J. M. Juran claims credit for coining the term Pareto Principle (Sanders 1987), better known as the 80/20 Rule. The Pareto Principle has wide applications (see Table 1), but its importance lies in its separation of the significant few from the trivial many (Chen and Chong 1998, Chen et al. 1994, 1993). For example, in ABC inventory control, we concentrate our efforts on the significant 10 to 20 percent of high-value items that typically account for 70 to 80 percent of the total dollar value (Monks 1977).

------

Insert Table 1 Here

------

When the Pareto Principle is used to describe firm-size distributions, we find that while there are only a few very large firms, numerous small firms exist (80% of business assets are held by the top 20% of firms). Common sense tells us that when customers prefer a firm’s products, the firm is more likely to grow; this implies that firm size is determined by the way customers allocate their resources among products, which in turn translates into business assets. Therefore, the customers’ product usage pattern (which we call usage concentration in this paper) determines the concentration of assets among firms, and consequently the firm-size distribution and the market structure. Ijiri and Simon’s 1977 book, Skew Distributions and the Sizes of Business Firms, collects many studies of business concentrations, including a theory that models how such concentrations are formed. Since Herbert Simon is at the center of this collective effort, in this paper we refer to this theory as Simon’s Model. Simon’s original model has two assumptions: (1) there is a constant probability of new entrants into the system, and (2) the more an item is used, the more likely it is to be used again. Based on these two assumptions, Simon and Von Wormer (1963) also provide an algorithm that successfully simulates this usage pattern.

The purpose of this paper is to provide a theoretical foundation, in terms of the Pareto Principle, for Porter’s five competitive forces. Section 2 gives a brief overview of the Pareto Principle (the 80/20 rule). Section 3 describes the mathematical model of the Pareto Principle by Chen et al. (1994, 1993), indicating that the probability of new entry is the primary factor in determining the level of concentration. Section 4 discusses the assumptions used in Simon’s two models of usage concentration and the interpretation of their parameters in usage analysis, making the connection between usage concentration and different market structures. Section 5 shows how these findings can be used to simplify the decision criteria in strategic planning, supporting the validity of Porter’s approach in which the main goal is to control the barrier to new entrants – or, in Simon’s terms, the probability of new entry. Section 6 goes beyond Porter’s five forces and describes the other significant factor of usage concentration in Simon’s later model, the decay rate, and its implications for strategic planning. Finally, Section 7 concludes.

2. Pareto Principle and Market Concentration

Recently a mathematical model (Chen et al. 1994, 1993) has been developed to describe the behavior of the Pareto Principle. This model uses slope and distance to fully describe the usage concentration curve demonstrated by the Pareto Principle (we call it the Pareto Curve in this paper). It shows that the upper bound of the usage concentration is determined by the slope formed by the group of least-used items (the trivial many). Furthermore, this slope is the inverse of the usage per item, which can serve as a proxy for the probability of a new entrant joining the selection process. A simulation model based on Simon and Von Wormer’s algorithm has been used to verify this mathematical model.

As described in Section 1, the Pareto Principle has very wide applications across many disciplines. We follow the tradition of such studies and use Kendall’s (1960) study of 1,763 papers published in operations research (Table 2) to describe the Pareto Curve.

------

Insert Table 2 Here

------

If we tabulate the number of authors who have published n papers and arrange the list in ascending order of n, we find that the values of n do not run consecutively in places, especially when n is large. We also find that there are m different clusters of authors who publish the same number of papers, with m ≤ max{n}. To take into account the scatter of the larger values of n, Chen and Leimkuhler (1987) introduced an index i = 1, 2, ..., m for the m successive observed values of n, letting n_i denote the i-th nonzero value of n, where n_i < n_{i+1}. Using this index approach, we define

f(n_i) = the number of authors with n_i papers,

T = \sum_{i=1}^{m} f(n_i) = total number of authors,

R = \sum_{i=1}^{m} n_i f(n_i) = total number of papers,

\bar{n} = R/T = the average number of published papers per author.

Note that m is the maximum index, so f(n_m) is the number of authors in the most productive cluster and n_m is that cluster’s productivity. As in Kendall’s data, there is typically only one author in this cluster, so f(n_m) is usually 1. For each index level, let x_i be the cumulative fraction of total authors and θ_i the cumulative fraction of total papers, accumulated from the most productive cluster downward; then

x_i = \frac{1}{T} \sum_{k=m-i+1}^{m} f(n_k)    (1)

and

\theta_i = \frac{1}{R} \sum_{k=m-i+1}^{m} n_k f(n_k).    (2)

Plotting x_i on the x-axis and θ_i on the y-axis, we obtain a Pareto Curve. Figure 2 shows the Pareto Curve based on Kendall’s data. In more general terms, we can substitute “item” or “company” for “author” and “usage” or “business” for “paper,” and the curve then shows the usage or market concentration.
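
As an illustration of how the curve can be computed from tabulated data, here is a minimal Python sketch of our own (not code from Chen et al.; the function name pareto_curve and the toy data are ours):

def pareto_curve(clusters):
    """Compute Pareto Curve points (x_i, theta_i) from a list of clusters,
    where each cluster is a pair (n, f): n papers per author, f such authors.
    Points are accumulated from the most productive cluster first, matching
    the convention used in the text."""
    T = sum(f for _, f in clusters)        # total number of authors (items)
    R = sum(n * f for n, f in clusters)    # total number of papers (usages)
    points = []
    cum_f = cum_nf = 0
    for n, f in sorted(clusters, reverse=True):   # largest n first
        cum_f += f
        cum_nf += n * f
        points.append((cum_f / T, cum_nf / R))
    return points

# Toy usage: 1 author with 10 papers, 4 authors with 3, 20 authors with 1.
for x, theta in pareto_curve([(1, 20), (3, 4), (10, 1)]):
    print(f"x = {x:.3f}, theta = {theta:.3f}")

Fed with Kendall’s data or the library data as (n_i, f(n_i)) pairs, the same routine would reproduce the curves in Figures 2 and 3.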

------

Insert Figure 2 Here

------

Using the notation above, if we define the curve formed by (x_i, θ_i), i = 1, 2, ..., m, to be the Pareto Curve, then the Pareto Principle (the 80/20 rule) states that there exists some i such that (x_i, θ_i) ≈ (0.20, 0.80). Table 2 shows that the top 22.7% of authors (84 authors) published 77.4% of the papers (1,365 papers). This Pareto Curve is therefore closer to 77/23 than to 80/20. As another example, Table 3 contains transaction data collected from a state university library. It shows 103 different usage groups (m = 103), ranging from 31,113 books checked out once each to 1 book checked out 619 times. The total number of books checked out was T = 61,606, and the total number of transactions was R = 154,703. Thus, the average number of times a book was checked out was n̄ = 2.511. Figure 3 is the resulting Pareto Curve, plotting (x_i, θ_i), i = 1, 2, ..., m. Note that the curve shows a concentration of approximately 68/32, not 80/20.
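
As a quick arithmetic check using only the totals reported above (the reciprocal shown here reappears in Section 3 as the terminal slope of the library curve):

\bar{n} = \frac{R}{T} = \frac{154{,}703}{61{,}606} \approx 2.511,
\qquad
\frac{1}{\bar{n}} \approx 0.398 .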

------

Insert Table 3 and Figure 3 Here

------

3. Theoretical Foundation of the Pareto Principle

By rearranging equations (1) and (2) from Section 2, Chen et al. obtained

\theta_i = \frac{\bar{n}_i}{\bar{n}} x_i,    (3)

where n̄_i is the average usage per item accumulated up to that point and n̄ is the overall average.

Chen et al. defined s_i and d_i to be the slope and distance, respectively, of the line segment between (x_{i-1}, θ_{i-1}) and (x_i, θ_i), i = 1, 2, ..., m, with (x_0, θ_0) = (0, 0), and they derived

s_i = \frac{\theta_i - \theta_{i-1}}{x_i - x_{i-1}} = \frac{n_{m-i+1}}{\bar{n}}    (4)

and

d_i = \sqrt{(x_i - x_{i-1})^2 + (\theta_i - \theta_{i-1})^2}.

For now we discuss only the slope s_i. First, consider the starting segment, the one from the origin. Since the data are accumulated from the most productive cluster first, this segment contains n_m and f(n_m) = 1 (the single most productive author); we designate its slope s_1. Because different data sets have different m, there is no obvious application of s_1 at this point. The second observation is more important. The “terminal” segment that leads to (100%, 100%) contains the data of the trivial many (the many authors with one paper each, i.e., n_1 = 1), and we designate its slope s_m. Since n_1 has the unique value of 1, s_m = 1/n̄, the inverse of the usage per item. For verification, in Kendall’s data s_m = (1 − 0.885)/(1 − 0.451) = 0.21, which equals 1/n̄ (1/4.77 = 0.21). In the library example this slope is (1 − 0.79889)/(1 − 0.49497) = 0.398, which equals 1/n̄ (1/2.511 = 0.398).
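
These numerical checks follow directly from the definitions in Section 2; for the terminal segment (a sketch of the algebra, since it contains only the n_1 = 1 cluster):

s_m = \frac{\theta_m - \theta_{m-1}}{x_m - x_{m-1}}
    = \frac{n_1 f(n_1)/R}{f(n_1)/T}
    = \frac{n_1 T}{R}
    = \frac{1}{\bar{n}} .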

This inverse of the average, T/R, may be viewed as the constant success rate in a binomial distribution when the population is large. In terms of usage, it is the probability that the next item selected will be one that has not been used before. In terms of market structure, it is the probability that the next business transaction will involve a new company. Viewed from the other side, this “probability of new entry” is called the “entry barrier” – the center of discussion in market structures. Geometrically, if the terminal segment is extended to intersect the y-axis, the y-intercept indicates the greatest concentration attainable in the distribution, implying that the barrier of entry dictates the market concentration. The next section ties this probability of new entry to Simon’s model to examine the plausibility of the theory.
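
The geometric claim can be made concrete (our sketch, using the notation of Section 2 rather than a result quoted from Chen et al.): the terminal segment lies on the line through (1, 1) with slope s_m = 1/n̄, so extending it to the y-axis gives

\theta = 1 - \frac{1}{\bar{n}}(1 - x),
\qquad
\theta\big|_{x=0} = 1 - \frac{1}{\bar{n}} .

Because the Pareto Curve is concave and its final slope is 1/n̄, the curve lies below this extended line, so no subset of items can account for more than 1 − 1/n̄ of total usage; a higher entry barrier (a smaller 1/n̄) therefore raises this ceiling on concentration.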

4. Simon’s Models: The Entry Barrier and the Rate of Decay

The Basic Model and the Probability of New Entry

The work by Chen et al. only describes the properties of the Pareto Curve; we address the reason behind the formation of this usage concentration through Simon’s model. As a result of studying firm sizes, Simon (1955) proposed that the selection process, whether choosing a firm for a business transaction or a word to write, is a stochastic process governed by two assumptions: (1) there is a constant probability that the k-th selection will be a new item (an item not selected in the first k − 1 selections), and (2) the probability that the k-th selection is an item that has already been selected is proportional to that item’s previous usage. Subsequently, Simon and Von Wormer (1963) developed an algorithm that generates distributions to simulate Pareto Curves.
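
A minimal Python sketch of the basic process, in the spirit of (but not taken from) Simon and Von Wormer’s algorithm; the function name simulate_basic and the parameter alpha (the probability of new entry) are our own labels:

import random

def simulate_basic(num_selections, alpha, seed=0):
    """Simon's basic model: with probability alpha the next selection is a new
    item; otherwise an old item is chosen with probability proportional to its
    previous usage. Returns the usage count of every item that ever entered."""
    rng = random.Random(seed)
    usage = []       # usage[j] = number of times item j has been selected
    history = []     # one entry per past selection; sampling uniformly from it
                     # is equivalent to sampling items proportionally to usage
    for _ in range(num_selections):
        if not usage or rng.random() < alpha:
            usage.append(1)                  # a new item enters the system
            history.append(len(usage) - 1)
        else:
            j = rng.choice(history)          # old item, proportional to usage
            usage[j] += 1
            history.append(j)
    return usage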

Recall that the terminal slope s_m equals the inverse of the average usage. Using the expected value of a binomial distribution, we can see that this slope equals the probability of new entry in Simon’s first assumption. Chen et al. (1994) generated a series of Pareto Curves varying only the probability of new entry, ranging from 0.01 to 0.99. Figure 4 summarizes the results.
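
In symbols (a sketch of the argument; we write α for the constant probability of new entry in assumption 1): each of the R selections introduces a new item with probability α, so

\mathbb{E}[T] \approx \alpha R
\;\Rightarrow\;
\alpha \approx \frac{T}{R} = \frac{1}{\bar{n}} = s_m .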

------

Insert Figure 4 Here

------

Figure 4 shows that the probability of new entry has an inverse relationship with the level of usage concentration. When the probability is very low (0.01), the usage concentration is very high (95% of usage is concentrated in about 1% of items); when the probability of new entry is high, the usage concentration disappears (50% of usage involves 50% of all items). We may extend this observation to market concentration. When the barrier of entry is high (a low probability of new entry), a highly concentrated market structure is formed (near monopoly), whereas a lower entry barrier (a high probability of new entry) spreads the usage more evenly among participants – that is, a purely competitive market forms.
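
The qualitative pattern in Figure 4 can be explored with the two sketches above (illustrative only; simulate_basic and pareto_curve are the hypothetical helpers introduced earlier):

from collections import Counter

for alpha in (0.01, 0.20, 0.50, 0.99):
    usage = simulate_basic(num_selections=50000, alpha=alpha, seed=1)
    clusters = sorted(Counter(usage).items())          # [(n_i, f(n_i)), ...]
    # Report the curve point closest to the top 20% of items.
    x, theta = min(pareto_curve(clusters), key=lambda p: abs(p[0] - 0.20))
    print(f"alpha = {alpha:.2f}: top {x:.0%} of items carry {theta:.0%} of usage")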

Studies by Chen et al. (1994, 1993) show that the probability of new entry is the primary factor determining the usage concentration, just as economists have claimed all along. The size of the trivial many determines how competitive a market can be, and thus whether the market tends toward monopoly or pure competition. While the entry barrier has long been cited as a characteristic of market structure in microeconomics, Simon’s model explains how that structure is formed.

The Autoregressive Model and the Decay Rate

In Simon’s basic model, the probability of using an “old” item is assumed to be proportional to its previous usage. However, rarely used items tend to be “forgotten”: the probability of usage decreases with time if a book has not been checked out or an author has not published lately. Even popular library books can be neglected for years once they fall out of fashion. Ijiri and Simon (1977) refined the second assumption of Simon’s model to take this phenomenon into account: the probability that an already-accessed item will be used again decreases geometrically at a certain “decay rate.”
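
A sketch of how the basic-model simulation above might be extended to include such decay (our own illustration of the idea rather than Ijiri and Simon’s algorithm; the geometric weighting scheme and the parameter name decay are ours):

import random

def simulate_autoregressive(num_selections, alpha, decay, seed=0):
    """Autoregressive variant: an old item is chosen with probability
    proportional to its past usage, discounted geometrically by a factor of
    (1 - decay) in every period. With decay = 0 the weights equal the raw
    usage counts, so this reduces to the basic model."""
    rng = random.Random(seed)
    weights = []     # decayed attractiveness of each item
    usage = []       # raw usage counts of each item
    for _ in range(num_selections):
        if not weights or rng.random() < alpha:
            weights.append(1.0)              # new item enters with unit weight
            usage.append(1)
        else:
            # old item chosen with probability proportional to decayed usage
            j = rng.choices(range(len(weights)), weights=weights, k=1)[0]
            weights[j] += 1.0
            usage[j] += 1
        weights = [w * (1.0 - decay) for w in weights]   # all weights decay each period
    return usage

Holding alpha fixed and sweeping decay upward from zero in this sketch yields a family of curves with a common terminal slope but differing initial slopes, consistent with the behavior reported for Figure 5 below.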

While the terminal slope s_m helps us assess the probability of new entry and sets the upper bound for concentration, the leading slope s_1 also helps determine the level of concentration within that bound. Based on Simon’s algorithm, Chen et al. (1994, 1993) find that s_1 is determined by the level of Simon’s decay rate. When the probability of new entry is held constant, say at 0.20, varying the decay rate generates Pareto Curves like those in Figure 5. Note that the terminal slopes of all these curves are the same, though the lengths of the segments vary. The initial slopes, however, differ, and the curve with no decay has the highest concentration. It is logical that a higher decay rate reduces concentration, because the environment then allows previously low-usage items not to be overwhelmed by items enjoying their fifteen minutes of fame; on the other hand, unused items may quickly lose their potential of being used again, no matter how active they have been historically. When there is no decay at all, the results of the autoregressive model are identical to those of the basic model.

Chen et al. further determine that while changing the decay rate affects the shape of the usage concentration (and thus the Pareto Curve), it does not affect the probability of new entry. Therefore, these two rates can be assessed independently of each other. Unfortunately, unlike the probability of new entry, there is no easy method for assessing the decay rate.

------

Insert Figure 5 Here

------

5. The Role of the Entry Barrier in Porter’s Five Competitive Forces

Porter (1980) suggests that “the goal of competitive strategy for a business unit in an industry is to find a position in the industry where the company can best defend itself against [the five] competitive forces or can influence them in its favor.” Furthermore, “the crucial question in determining profitability is whether firms can capture the value they create for buyers, or whether this value is competed away to others” (Porter 1985). The following is a brief description of these five forces and their connections with the probability of new entry.

  • “Bargaining Power of Buyers” refers to the ability of customers to force down prices, reduce product delivery cycle time, demand higher quality, and require better service. Porter identifies seven factors, which together suggest that whenever many alternatives are available, customers have high bargaining power. Thus, a lower barrier of entry for new products (more alternatives, or a high probability of new entry) increases the Bargaining Power of Buyers. Conversely, a higher entry barrier – which may result in higher market concentration – leads to lower Bargaining Power of Buyers.
  • “Bargaining Power of Suppliers” refers to the ability of suppliers to increase input material prices, lengthen product delivery cycle times, and reduce the quality of goods supplied without losing customers. Whenever alternative suppliers are available in an area, competition lowers supplier power. Thus, a lower entry barrier for new suppliers (more alternatives, or a high probability of new entry) leads to lower Bargaining Power of Suppliers.
  • “Rivalry among Existing Competitors” is the degree to which companies respond to the competitive moves of other companies in the same industry, e.g., price cutting, new product introductions, and advertising slugfests. This may be viewed as an extension of the Bargaining Power of Suppliers. Although the focus here is the “degree” of fierceness, with more suppliers competing in a market, rivalry tends to become fiercer. We discuss this force further in the next section.
  • “Threat of New Entrants” refers to the number and quality of potential competitors that may enter the industry. Obviously, a higher barrier of entry reduces the number of new entrants and thus affects the bargaining power of customers and suppliers.
  • “Threat of Substitute Products” refers to other products that can be used to satisfy the same need. Since substitute products have the same effect as direct competition in taking business away (thereby reducing usage), the principle of entry barrier applies equally. In terms of the usage of a company’s product, the threat of substitute products should be treated the same as the threat of new entrants.

Therefore, Michael Porter’s five forces can be summarized as a strategy that: