SIZE ADJUSTMENTS: GETTING WITH THE PROGRAM

VERSION 7.04 11/94

Gene Dilmore

Why Do We Need Size Adjustments?

When a purchaser buys a 40-acre tract of land, he usually pays a lower price per acre than does the purchaser of a 20-acre tract, everything else being equal. Putting the principle in the more formal terms of the land economist, size adjustments are an expression of the principle of marginal utility, defined in the DICTIONARY OF REAL ESTATE APPRAISAL, 2nd ed., Chicago: American Institute of Real Estate Appraisers, 1989, as follows: "The addition to total utility by the last unit of a good at any given point of consumption. In general, the greater the number of items, the lower the marginal utility; i.e., a greater supply of an item or product lowers the value of each item." The IAAO publication PROPERTY APPRAISAL AND ASSESSMENT ADMINISTRATION, Chicago: IAAO, 1990, pp. 41-43, discusses marginal utility theory as an explanation of the demand side of the market. It is this declining marginal value assigned to each additional square foot of land that we seek to quantify with size adjustments.

Limiting Conditions

First off, let us acknowledge that size adjustments do not apply to every sample of comparable sales. Some specific instances:

  1. We are comparing two apartment sites; one has 20,000 square feet, and the second has 19,800 square feet. In most markets, the primary pricing unit for apartment sites is the number of units which can be built on the lot. In this case, the zoning allows one unit per 1,000 square feet of land, and the apartments are typically 2-story, in order to maximize parking area. Thus 18 units would be feasible on the 19,800 square foot lot, and 20 on the 20,000 square foot site. The 19,800 square foot parcel would not sell for a higher price per square foot. In fact, from the apartment developer's point of view, he is pricing a site for 20 units as compared with one for 18 units. The additional 200 square feet, therefore, are worth the same amount as 2,000 square feet, since he can build two more apartment units on the larger site. So the 19,800 square foot lot would be worth only a little more than a similar 18,000 square foot site.
  2. In the analysis of timberland values, we find that most markets will not adjust a smaller property upward on a per-acre basis; often, in fact, a premium might be paid (on a unit basis) for the larger tract, since the operation of the timber enterprise can take advantage of economies of scale.

  3. In analyzing land sales in the central business district of my home city, and testing with both the Size Adjustment program and regression analysis using "size" as one of the variables, I have never (so far) found a size adjustment to be applied in the market. This may or may not be the case for your downtown area; you can find out by running the size adjustment program, to see if the adjustments reduce the dispersion of the data, as represented by the coefficient of variation (COV), the standard deviation as a percentage of the mean of the adjusted sale prices.

There may well be other exceptions to the declining marginal price concept in land sales. I have found that in some areas with a relatively unsophisticated market, the concept does not apply, not for any arcane technical reasons, but simply because buyers in the local market do not, even subconsciously, analyze prospective purchases to this extent.

In any event, if size adjustments are not appropriate for the data set, the program will tell you so.

Birth of the Size Adjustment Tables

It had always bothered me that I so often had to discard potential comparable sales simply because I did not know how to account systematically and consistently for the price differential generated by a size differential in land sales. In reviewing numerous appraisals by others, I often encountered "plus 10% for size" or "minus 5% for size," obviously reflecting completely arbitrary adjustments that could not pass the test of consistency. Discussion with other appraisers indicated that this was a universally frustrating problem; the appraisers were in some cases reluctantly applying arbitrary adjustments, and in many cases even more reluctantly discarding land sales that they knew would have been usable had they only known what the pattern of differential was.

One day in 1975, a pair of sales showed up, side by side on a major highway, with the only major difference being that one was a 5-acre parcel and the other a 10-acre parcel, the 10-acre parcel selling for about 90% of the per-acre price of the 5-acre parcel. (To the best of my recollection, excepting inside versus adjoining corner lot sales, this was the first and last instance I've encountered of a set of real-world "paired sales," as opposed to the sales with 12 variables identical and only one variable different, as described in many appraisal textbooks and articles.)

The teasing puzzle was too tempting to resist. I set to work over the weekend to find an underlying relation between the two sales that could be generalized to other sales. That is, what form of curve could make a consistent change when the size was doubled, regardless of its absolute size? I tried a number of trial-and-error approaches: square roots, various adaptations of the standard depth table formulas, logarithmic curves, and so on.

The differential is obviously not directly proportionate to the size, since that would result in every parcel selling for the same total price: if a 40-acre tract sold for half the per-acre price of a 20-acre tract, and every doubling of size halved the unit price, then the total price would never change. Thus we are obviously looking for a curve that constantly decreases in steepness, and that begins with something less than a 50% adjustment for a doubling of size.
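In symbols: if the unit price of an A-acre tract were simply k/A for some constant k, the total price would be A x (k/A) = k, the same for every tract regardless of size; so the curve we are looking for must fall off more slowly than that.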

There were obviously a number of curves that could be fitted to the sales; we could even fit a different curve to each batch of sales, but the objective was the simplest, most elegant type of curve that would be totally consistent from one appraisal to the next.

The shape that made the most sense to me was an adaptation of the Airframe Learning Curve. During WW2, Stanford Research Institute made a study of airframe production efficiency. They found that, generally speaking, the 200th production item cost approximately 80% as much as the 100th item; that is, a doubling of the number of airframes (plane fuselages) produced resulted in a unit cost reduction of 20%. Since this reduction in cost ratio was a result of learning from a repeated process, it was called a learning curve, and it was soon discovered that many phenomena followed this pattern.

The formula for this curve, adapted to the land price problem, is:

Y = [As^0.67808 / (0.67808 × As)] / [Ac^0.67808 / (0.67808 × Ac)]

where Y = the size adjustment factor

Ac = area of the comparable

As = area of the subject

The constant 0.67808 cancels out of both terms, so the original formula can be simplified to:

Y = As^(0.67808 - 1) / Ac^(0.67808 - 1) = As^(-0.32192) / Ac^(-0.32192)

And simplifying one more step, we raise the area of the comparable to the 0.32192 power and divide it by the area of the subject, also raised to the 0.32192 power:

Y = Ac^0.32192 / As^0.32192

This is the formula for the "80%" curve, meaning that, if this curve fits the data, a parcel twice the size of another parcel will, other things being equal, sell for about 80% of the unit price of the smaller parcel.

An example: Our subject is a 200-acre tract, and the first comparable is a 150-acre tract. Applying the formula:

150^0.32192 / 200^0.32192 = 0.91

Thus, if Comparable No. 1 sold for $10,000 per acre, the sale (assuming other factors have been adjusted for) indicates a price for subject of $9,100 per acre.
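For readers who want to check the arithmetic, here is a minimal sketch of the calculation in Python. It is an illustration only; the function name and the default exponent are mine, not the program's.

    # Minimal sketch of the size adjustment factor described above
    # (illustration only, not the program's original source code).

    def size_adjustment(area_comp, area_subj, exponent=0.32192):
        """Factor for the '80%' curve by default; multiply the comparable's
        unit price by this factor to get the indicated price for the subject."""
        return (area_comp ** exponent) / (area_subj ** exponent)

    factor = size_adjustment(150, 200)   # the worked example above
    print(round(factor, 2))              # 0.91
    print(round(10000 * factor))         # 9116, i.e., roughly the $9,100 per acre in the text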

The obvious next step was to adjust the formula to produce a set of curves of varying steepness, covering the likely range. The adjustment factors for steeper or less steep curves are calculated in the same way; all that changes is the value of the exponent. Thus a "90% curve," meaning that a parcel twice as large as another would sell for about 90% of the unit price of the smaller parcel, follows the same calculation but with an exponent of 0.152 rather than the 0.32192 used for the 80% curve; an 85% curve uses an exponent of 0.23445, and so on.
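The exponents themselves follow directly from the definition of the curves: for a p-percent curve, doubling the size multiplies the unit price by p, so the exponent is -log2(p). A short illustrative calculation (not part of the program) reproduces the values quoted above:

    import math

    # For a p% curve, doubling the size multiplies the unit price by p,
    # so p = 2 ** -b and the exponent is b = -log2(p).
    def curve_exponent(p):
        return -math.log2(p)

    for p in (0.80, 0.85, 0.90):
        print(f"{p:.0%} curve: exponent = {curve_exponent(p):.4f}")
    # 80% curve: exponent = 0.3219
    # 85% curve: exponent = 0.2345
    # 90% curve: exponent = 0.1520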

The Size Adjustment Program

You don't need to memorize these numbers; the Size Adjustment program, which was the next logical step, asks for the input data, consisting of (1) the size of subject, (2) the number of sales, then (3) the size and unit price of each sale. The unit price is the selling price adjusted for all factors except size. The program calculates the potential size adjustment factors using fourteen curves, from 65% to 97.5%, in 2.5% steps.

It then prints out, for each of the fourteen curves, a set of adjustment factors, followed by the mean of the adjusted selling prices for each curve, the standard deviation, and the coefficient of variation, which is the standard deviation as a percentage of the mean. A coefficient of variation of 20%, for example, means that in a normally distributed set of adjusted sale prices, about 68% of the adjusted prices would lie within plus or minus 20% of the mean, or average, of the adjusted prices.

Next the program selects the curve that results in the smallest coefficient of variation, and applies the adjustment factors for that curve to the comparable sales. The rationale here is that a major purpose of adjusting sales is to reduce the dispersion in the raw data. In other words, if all of your sellers and purchasers were perfectly knowledgeable and rational, and all of your adjustments perfectly reflected every possible difference in price, all of your adjusted sale prices would be identical.
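To make the procedure concrete, here is a compact sketch of that selection logic in Python. The function name, the data layout, and the use of the sample standard deviation are assumptions made for illustration; this is not the program's own source code.

    import math
    import statistics

    def best_size_curve(subject_size, sales):
        """Pick the curve (65% to 97.5% in 2.5% steps) whose factors give the
        smallest coefficient of variation.  `sales` is a list of (size, unit
        price) pairs, with prices already adjusted for everything but size."""
        best = None
        for curve in [0.650 + 0.025 * i for i in range(14)]:   # the 14 curves
            b = -math.log2(curve)                              # exponent for this curve
            adjusted = [price * (size / subject_size) ** b     # size-adjusted prices
                        for size, price in sales]
            mean = statistics.mean(adjusted)
            cov = statistics.stdev(adjusted) / mean * 100      # std dev as % of mean
            if best is None or cov < best[1]:
                best = (curve, cov, adjusted)
        return best                                            # (curve, COV, adjusted prices)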

So much for the appraiser's fantasy life; now back to the real world: In that world, the real estate market itself is far from perfect, to say nothing of appraisers. We generally assume, though, that a reduction in the dispersion of our data represents a closing in on the indicated value of our subject property.

The reduction in the coefficient of variation by application of the final optimum curve represents more reduction in dispersion than may appear at first glance. For example, reducing the standard deviation from 45% of the mean to 35% of the mean is not a 10% reduction, but a 22% reduction in the dispersion of your data.
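(The arithmetic: the drop from 45 points to 35 points is 10 points, and 10/45 is about 0.22, so the relative dispersion has shrunk by roughly 22 percent.)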

The program next calculates a preliminary value indication based on the mean of the adjusted prices, then a second indication based on the median of the adjusted prices. If the remaining dispersion is still somewhat larger than you would like, you may feel that the median has more appeal than the mean, since it could be considered a better measure of centrality.

The next step calculates a modal value indication. Clearly, with our comparable sales constituting a quite small sample in most cases, and being in the form of continuous values rather than discrete values, a directly observable mode will very rarely occur. We therefore find an equivalent calculated mode, by simply taking the average of the two observations that are closest to each other. The mode is obviously not a highly reliable value indicator; the purpose of including it is to give a more complete representation of the distribution of adjusted sales.

Along with the three measures of centrality, we also have measures of dispersion for each. The coefficient of variation, which is the standard deviation as a percentage of the mean, is used in the calculations rather than the raw standard deviation, since the absolute amount of the standard deviation varies in accordance with the magnitude of the mean. Thus, a standard deviation of 2.34 doesn't really tell us much, but if we know that this deviation is 35% of the mean, we do know something, namely that we have a problem.

The coefficient of dispersion of the median is a simpler calculation: We subtract the value of each observation from the median, sum the absolute differences, and divide by the number of observations less one. The dispersion of the mode is calculated the same way.
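As an illustration of these calculations, here is a short Python sketch of the calculated mode and of the dispersion measure just described; the sale prices shown are hypothetical, and the N - 1 divisor simply follows the description above.

    import statistics

    def calculated_mode(prices):
        """Average of the two adjusted prices that lie closest together."""
        ordered = sorted(prices)
        gap, low, high = min((b - a, a, b) for a, b in zip(ordered, ordered[1:]))
        return (low + high) / 2

    def dispersion_about(center, prices):
        """Sum of absolute deviations from `center`, divided by N - 1."""
        return sum(abs(p - center) for p in prices) / (len(prices) - 1)

    prices = [8800, 8900, 9100, 9150, 9250, 9600, 9700]   # hypothetical adjusted prices
    print(calculated_mode(prices))                         # 9125.0 (9100 and 9150 lie closest)
    print(dispersion_about(statistics.median(prices), prices))
    print(dispersion_about(calculated_mode(prices), prices))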

The program also calculates two other characteristics of the distribution of adjusted sales: the skewness and kurtosis. The skewness reflects a preponderance of values on either side of the midpoint of the distribution.

Skewness may be calculated several ways; the method I am using is one that is intuitively comprehensible: we measure the number of standard deviations that the mean lies to the right of the median, or midpoint, of the distribution. Thus, a skewness of 0.85 means that the mean is 85% of a standard deviation to the right of the median. A negative skewness, such as -0.85, says that the mean is 85% of a standard deviation to the left of the median.

Kurtosis describes the peakedness or flatness of the distribution. An ideal, perfectly normal curve is given the value of three. A kurtosis figure above three means the distribution is more peaked than normal. An acceptable range is roughly 2.20 to 3.80: a kurtosis measure higher than 3.80 reflects an extremely peaked distribution, and one below 2.20 is considered to be excessively flat.

The formula used in the program is

Alpha 4 = m4 / (m2)^2

where Alpha 4 is the name for the measure of kurtosis, and m2 and m4 represent the second and fourth moments of the x's about their mean. Moments are the means of the powers of the deviations of items in the distribution. The x's are the values of the items (that is, adjusted sale prices) minus their mean. The formula for the calculation of kurtosis that I am using is therefore

(Sum of x^4 / N) / (Sum of x^2 / N)^2

Geary's Ratio is an alternative kurtosis measure: the average deviation divided by the standard deviation. As N approaches infinity, this ratio approaches the square root of 2/pi, or approximately 0.80.
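For illustration, a short Python sketch of these shape measures (not the program's source). The moments use the N divisor from the kurtosis formula above, and the same standard deviation serves for both the skewness and Geary's Ratio.

    import math
    import statistics

    def shape_measures(prices):
        n = len(prices)
        mean = statistics.mean(prices)
        median = statistics.median(prices)
        devs = [p - mean for p in prices]
        m2 = sum(d ** 2 for d in devs) / n            # second moment about the mean
        m4 = sum(d ** 4 for d in devs) / n            # fourth moment about the mean
        sd = math.sqrt(m2)
        skewness = (mean - median) / sd               # std deviations the mean lies right of the median
        kurtosis = m4 / m2 ** 2                       # Alpha 4; about 3.0 for a normal curve
        geary = (sum(abs(d) for d in devs) / n) / sd  # approaches sqrt(2/pi), about 0.80
        return skewness, kurtosis, geary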

The Quartile Deviation is defined as one-half of the interquartile range, the difference between the 75th percentile (the third quartile) and the 25th percentile (the first quartile). Its coefficient of variation is its percentage of the median.

The Standard Error, when multiplied by the proper t-value, gives us the number needed to calculate a confidence interval. In this program, a 90% confidence interval is automatically calculated. The standard error is also printed, in case you want to apply some other confidence interval. To do this, find in the t-tables the t-value for the number of degrees of freedom (N-1; in this case, 6), and multiply it by the standard error. These measures help to give us a snapshot of the distribution of the adjusted sales after the size adjustments are applied.
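A short Python sketch of the quartile deviation and the 90% confidence interval, again for illustration only; the quartile method and the hard-coded t-value (1.943, for the 6 degrees of freedom mentioned above) are assumptions and would change with the number of sales.

    import math
    import statistics

    def quartile_deviation(prices):
        """Half the spread between the 25th and 75th percentiles; its coefficient
        of variation is this value as a percentage of the median."""
        q1, _, q3 = statistics.quantiles(prices, n=4)
        return (q3 - q1) / 2

    def confidence_interval_90(prices, t_value=1.943):   # t for 6 degrees of freedom
        mean = statistics.mean(prices)
        std_err = statistics.stdev(prices) / math.sqrt(len(prices))
        return mean - t_value * std_err, mean + t_value * std_err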

The amount of reduction in the dispersion of the prices that the size adjustments account for is also shown.

One fruitful application of the program at this point, entirely aside from the derivation of size adjustment factors, is to spot sales whose adjusted prices are inconsistent with the other adjusted prices. Of course, we should not automatically discard an outlier just because it is an outlier. On the other hand, bearing in mind that one of the assumptions of the size adjustment program is that we have already adjusted for all other relevant factors, we may want to check whether these sales involved property attributes or variables we had not previously adjusted for, whether a sale may not have been an arm's-length transaction, whether we made some arithmetic error in analyzing the sale, or whether we simply did not know all the conditions of the sale.

Remember, too, that the three preliminary value indications from the mean, median, and mode implicitly assume that each sale carries equal weight. In a particular valuation you may or may not feel that this assumption is appropriate.

In case you want to insert the final size adjustment factors into a spreadsheet containing the other adjustments, the program automatically saves a temporary ASCII file called <sizeadj.out>, for "size adjustment output," which contains nothing but the final adjustment factors, and can be read into a "Size Adjustment" row in a spreadsheet.

Size7.04: Screens

The following are the three main screens for the Size Adjustment program:

Opening Screen:

**********************************************************************
Size Adjustment Program Version 7.04 11/23/94 GENE DILMORE

Input consists of:
Size of Subject;
Number of sales (MUST be more than 3; MIN. OF 5 SUGGESTED)
Size & unit price of each comparable

Output consists of:
application of a set of 14 modified learning curves to the data,
with resulting mean, std dev, & coefficient of variation.
The best fitting curve is selected and the program then applies the best set
of adjustment factors to the sales and prints out the indicated
adjusted mean, median, and mode. [Compressed print suggested.]

A file called SIZEADJ.OUT will also contain the final
adjustment factors for reading into a 123 file.

Best results are obtained if all other adjustments are applied first. Then use
the semi-adjusted unit sale prices as the input prices.

References: GD: Appraisal Journal 4/81, Right of Way 5/78, R E Appraiser 5-6/76.