Error Analysis in Science Experimentation

There are many ways to think about error in a scientific experiment. I think that the easiest way (at our level) is to divide possible errors into TWO general types:

  1. Accuracy
  2. Precision

Accuracy

Accuracy is a measure of how close your experimental value is to an accepted value (or the value your experiment should generate). An error in accuracy comes from a systematic problem in your experimental design or technique. If your experiment consistently gives you values that are too high, or too low, you have a problem with accuracy. In the following picture there are three bull’s-eyes (concentric circles). The centre of each bull’s-eye represents the “accepted value”. The X’s represent values you would have generated with repeated experiments. Please note that in order to determine accuracy you only need one “throw” at the bull’s-eye. The picture below represents three sets of results, each set representing an experiment that has been repeated five or six times:

Here, the left bull’s-eye represents fairly accurate results. All five repetitions of the experiment seem to be close to the result we are looking for (the centre of the bull’s-eye). The middle bull’s-eye represents results that are consistently too low. The right bull’s-eye represents results that are consistently too high.

Note again that your accuracy can be determined using only one repetition (if you can call it a repetition) of your experiment (i.e. going through the procedure and getting a result once), but it is always better to do multiple trials to be sure your value isn’t a fluke (i.e. repeat the procedure and see if you get a different answer).

We will measure accuracy using % error. The formula is:

% error = (experimental value – accepted value) / (accepted value) x 100%

For the figure above we can get a % error for each “X”, that is, for each experimentally determined number. We can also average all the experimentally determined values generated in one set of experiments and get one overall % error. We’ll talk more about averages when we talk about precision.

In this class we will always have an “accepted” value with which to compare our experimental value. Obviously this isn’t always the case in “real life”; otherwise, how would we ever find out new things? If you come up with a new way of doing something, however, you must always prove that your method works before using it to find out new things, and the only way to do that is to use your experiment to generate a result that someone else has already generated. So, what we are learning here applies outside of class as well.

There are a couple of things to note about the % error you will calculate using this formula.

  1. If the value of your % error is negative, then the absolute value of your number is too small (like the middle bull’s-eye above). Depending on the sign of the accepted value, this could mean that your number isn’t negative enough, or isn’t positive enough.
  2. If the value of your % error is positive, then the absolute value of your number is too large (like the right-hand bull’s-eye above). Depending on the sign of the accepted value, this could mean that your number is too negative, or too positive. (See the worked example below.)
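
For example (the numbers here are made up purely for illustration): suppose the accepted value is -10 and your experimental value is -8. Then % error = (-8 – (-10))/(-10) x 100% = -20%. The % error is negative because -8 isn’t negative enough: its absolute value (8) is smaller than that of the accepted value (10).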

It will be up to you to determine what that means in the context of your experiment. It is also important for you to determine how an error in accuracy could have arisen based on what you actually did in the experiment. For instance, if you calculate a molar enthalpy of solution and your experimental error is negative, that value could have arisen from accidentally dropping some material on the floor between when you measured the mass using the balance and when you got to your station. If you didn’t actually drop any, though, this observation is pointless (for you). The point is to do an experiment and then determine what you can actually do to improve either your design or your technique.

You report accuracy using only the % error.

Ex. 1 If the actual value of an experiment is supposed to be 5, and your value is 4, your % error is: (4 – 5)/5 x 100% = -20%. You would say you got a result of 4 with a % error of -20%.
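
If you’d like to check this kind of arithmetic by computer, here is a minimal Python sketch of the same calculation (the function name percent_error is just an illustration, not from any library):

    def percent_error(experimental, accepted):
        # % error = (experimental - accepted) / accepted x 100%
        return (experimental - accepted) / accepted * 100

    print(percent_error(4, 5))  # prints -20.0, matching Ex. 1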

Precision

Precision is a measure of how close together your results are. For instance, if you are aiming at a bull’s-eye with a dart, you may not come close to the centre, but all your throws might be clustered together. This would mean that you are consistent (precise). On the other hand, if your throws are all over the place, your results would not be precise. Obviously you need to have tried to generate the same result more than once in order to see how close together all the results are, so unlike % error, you can only talk about precision if you have repeated your experiment. The diagram below may help:

In this picture the black X’s represent individual data points, each one having been generated by performing the experiment once. The white X’s represent the average of the repeated experiments in each set. The bull’s-eye on the left represents data that is not precise (even though the average comes close to the accepted value). The middle and right bull’s-eyes represent more precise results.

Precision is reported using the average value of all your trials, and the standard deviation of the data used to generate that average. The standard deviation is a measure of how spread out your data are, given your design and technique. The average of your results may end up being pretty close to the target, but if the precision is very bad, errors still occurred.

We report precision errors using an average value and a standard deviation. The formulas for each of these are below (but most calculators can calculate them automatically):

Average: x̄ = (Σx)/n, where x̄ is the average value, Σx is the sum of all the values, and n is the number of values.

Standard deviation: s = √[Σ(x – x̄)² / (n – 1)], where the sum runs over all n values.

You report every experimental value you generate as an average number +/- the standard deviation.

Ex. 2 An experiment generates the following values: 2, 3, 4, 5, 6, 2, 7.

Using a calculator in “STAT” mode (ask me how to do this, and/or how to calculate it by hand), the average of these is 4.14 and the standard deviation is 1.95. You would report this number as 4 +/- 2. Note here that your precision determines the number of significant figures your experimental result should have. Also note that your pre-determined number of significant figures, based on the accuracy of your measurements, represents a maximum number of sig figs. If your S.D. ends up telling you that you weren’t as precise as your measuring devices would have allowed, then the S.D. is what you have to go with.
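
If you have a computer handy instead of a STAT-mode calculator, Python’s built-in statistics module does the same job (a minimal sketch; note that statistics.stdev uses the n – 1 formula given above):

    import statistics

    values = [2, 3, 4, 5, 6, 2, 7]
    avg = statistics.mean(values)   # 4.142857..., rounds to 4.14
    sd = statistics.stdev(values)   # 1.9518... (sample standard deviation), rounds to 1.95
    print(avg, sd)                  # rounding to match the S.D., report as 4 +/- 2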

Putting it together and reporting experimental data

For every experiment you do, you should do repeated trials. Each one of these trials may be close to or far away from the expected result. Your trials may also be precise or close to random. In order to analyse your error you need to think about two things: your accuracy and your precision. Different kinds of errors will lead to differences in either accuracy or precision. Generally speaking, sloppiness results in errors in precision. Something systematically wrong with your experiment (an error that happens the same way, every time) results in errors in accuracy. We will talk in class about which kinds of mistakes will show up in each of the two types of error.

The above figure shows three sets of results. The first set gives an average value very close to the actual value we are looking for, but the individual trials aren’t very precise. This would mean that the kinds of errors you need to focus on in your report are those having to do with precision. The middle set of results is precise, but would have given a large, negative % error, meaning that you need to focus on errors in accuracy in your report. The right-hand set of results is both accurate and precise.

So, for each set of values you generate (if possible), you report the overall result like this:

Average value +/- S.D. with a % error of _____%.

Ex. 3 Using the values from Ex. 2 above, and assuming that the actual value the experiment should generate is 3.95, you’d report your value like this:

4 +/- 2 with a +5% error. In your report you would have to focus on both errors in accuracy and errors in precision, since there are obviously problems with each. Remember that the errors you talk about must pertain to what you actually did. You must also always show how correcting those errors would lead to improved results (showing the improvement numerically and tracing that new, improved value through your error calculations is a good way to do this).
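
Putting all of the above in one place, here is a minimal Python sketch that builds the whole report statement (the report_result function and its formatting are just an illustration, not a required format):

    import statistics

    def report_result(values, accepted):
        # Build the "average +/- S.D. with a % error" statement from raw trial values.
        avg = statistics.mean(values)
        sd = statistics.stdev(values)            # sample S.D. (the n - 1 formula)
        pct = (avg - accepted) / accepted * 100  # % error of the averaged result
        return f"{avg:.0f} +/- {sd:.0f} with a {pct:+.0f}% error"

    print(report_result([2, 3, 4, 5, 6, 2, 7], 3.95))  # 4 +/- 2 with a +5% error

Note that the formatting above rounds everything to whole numbers to match Ex. 2 and Ex. 3; in general you would round to match the size of your standard deviation.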