Accuracy and Precision

Consider a tilt sensor connected to a data logger at the end of a long wire. Such a system, including the environment it sits in, can be treated as an experimental system for measuring changes in the tilt of an object.

The manufacturer recommends taking “100 quick samples each measurement cycle to increase the accuracy of the set”. As we spoke, the support tech and I both kept using the words accuracy and precision interchangeably. What were we actually talking about? In my head I started throwing around a new expression, perceived accuracy: how your idea of the accuracy of your system is affected by the precision of the data collected. Then, after some reading, I stumbled across the ISO 5725 standard, which introduces the concept of trueness. I like how the following fits together.

Let’s start with some (interpreted) definitions from ISO 5725 and many other online sources:

  • Precision: the closeness of agreement of data points within a data set. Imperfections in this manifest as spread and are concerned with random errors. Strongly linked to standard deviation, tempered by resolution.
  • Trueness: the closeness of the mean of a data set to the true value we are seeking. Imperfections in this manifest as bias away from the true value and are concerned with systematic errors.
  • Accuracy: the closeness to the true value as a combination of the two concepts above. To be truly accurate you would want to maximize both precision and trueness, and I’ve started thinking of accuracy as the ability to get close to the correct value with as small a sample as possible.

In summary, at any point in time, the accuracy of a single measurement is contingent upon the precision and the trueness of the measurement.
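
To make those definitions concrete, here is a minimal Python sketch. The readings and the “true” tilt of 1.500° are invented for illustration: precision is the standard deviation of the set, trueness is the bias of the mean, and accuracy (here taken as RMSE) rolls spread and bias into one number.

```python
from statistics import mean, stdev

true_tilt = 1.500                       # assumed "true value", for illustration only
readings = [1.52, 1.49, 1.51, 1.47, 1.53, 1.50, 1.48, 1.52]   # invented sample

precision = stdev(readings)             # spread within the set (random error)
trueness  = mean(readings) - true_tilt  # bias of the mean (systematic error)
accuracy  = (sum((r - true_tilt) ** 2 for r in readings) / len(readings)) ** 0.5

print(f"precision (std dev): {precision:.4f} deg")
print(f"trueness  (bias):    {trueness:+.4f} deg")
print(f"accuracy  (RMSE):    {accuracy:.4f} deg")   # spread and bias combined
```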

Back to my tilt sensors: I believe we’ve got a system which, at any point in time, has good trueness, but whose precision is degraded by random electrical noise. For a single observation we expect that to flow on to the accuracy of the data. Taking more measurements and averaging them improves the precision of the sample mean: the standard deviation of the mean of n samples falls as σ/√n, so a 100-sample average should theoretically be 10 times more precise than a single reading. Under this scenario, averaging away the random noise improves the overall accuracy of the system.
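
Here’s a quick simulation of that averaging effect (the 0.05° noise figure and the seed are invented): the standard deviation of the mean of n samples shrinks by a factor of √n, so 100 samples land 10 times tighter than one.

```python
import random
from statistics import mean, stdev

random.seed(42)
true_tilt, noise_sd = 1.500, 0.05       # invented noise level, degrees

def mean_of_n(n):
    """Average n noisy samples into one reported measurement."""
    return mean(random.gauss(true_tilt, noise_sd) for _ in range(n))

for n in (1, 10, 100):
    # Repeat the n-sample measurement many times and see how the means scatter.
    means = [mean_of_n(n) for _ in range(5000)]
    print(f"n = {n:>3}: std of the mean = {stdev(means):.4f}"
          f"  (theory: sigma/sqrt(n) = {noise_sd / n ** 0.5:.4f})")
```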

What does the client actually want?
I’m trying to weigh up the expectations of my client, who wants to manage safety on their site. A ten-minute delay (the time to take all those measurements) when something starts to move is not acceptable. False alarms from highly variable (imprecise) data are hard to digest too. I’m also sensitive to the fact that as soon as the data starts being disseminated around the project, people will start charting it in Excel with auto-ranging axes, and I’ll start receiving emails about the data being ‘all over the place’.

So we have an interest in how frequently data is collected, but also in maintaining the long-term record for the site. Ultimately, my client wants data that gives confidence AND is reliable.

So what to do about Project X?

  • I’ll be doing some field testing to see how the sensors at the end of the longest cables actually perform in the context of the entire measurement system – cable, data logger, sensor, environment, mounting, etc. I’ll do some detailed logging of those sensors at maximum scan rates; then I can get a feel for what we are dealing with on site.
  • I’ve already started coding the system up with a settable number of samples, capped at a ceiling of 100. Initially I’ll set that quite low – maybe 10 samples – and adjust it upwards if necessary. Ten samples will take one-tenth the time of the full 100, and the mean will theoretically be only about three times more scattered (√100/√10 ≈ 3.2). The sketch after this list shows roughly how I’m structuring it.
  • My other strategy for balancing responsiveness against accuracy is to run a detailed daily measurement set when site conditions are most stable and no one is on site – say 2 am. I’ll let the system run the full 100-sample measurement cycle and collect a cleaner, more precise data set so that long-term analysis and reporting for the site is not compromised.
  • I’ve also been thinking about what techniques could be employed when an alarm is triggered. I’m leaning towards gathering 100-sample data for the particular sensors that have triggered, while continuing at normal sample sizes for all the others. I’ve handled plenty of alarm situations before, and there is nothing worse than waiting for the next set of data to come through while asking yourself: will the new data clear the alarm, confirm it, or show something worse?!
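
To tie those strategies together, here’s a rough sketch of the sampling logic. The TiltSensor class, sample counts, threshold and function names are placeholders I’ve made up, not the real logger code:

```python
import random
from statistics import mean

NORMAL_SAMPLES = 10       # settable, to be tuned upwards if needed
DETAILED_SAMPLES = 100    # ceiling value; used nightly and on alarm checks
ALARM_THRESHOLD = 0.10    # invented tilt-change threshold, degrees

class TiltSensor:
    """Stand-in for the real sensor: returns a noisy reading around a set tilt."""
    def __init__(self, sensor_id, tilt, noise_sd=0.05):
        self.id, self.tilt, self.noise_sd = sensor_id, tilt, noise_sd

    def read(self):
        return random.gauss(self.tilt, self.noise_sd)

def measure(sensor, n_samples):
    """Average n quick samples into one reported reading."""
    return mean(sensor.read() for _ in range(n_samples))

def measurement_cycle(sensors, baselines, nightly=False):
    """One pass over all sensors; triggered sensors get the full detailed set."""
    n = DETAILED_SAMPLES if nightly else NORMAL_SAMPLES
    for sensor in sensors:
        reading = measure(sensor, n)
        if abs(reading - baselines[sensor.id]) > ALARM_THRESHOLD:
            # Re-measure this sensor at full resolution before raising the alarm,
            # rather than waiting a whole cycle for confirmation.
            reading = measure(sensor, DETAILED_SAMPLES)
        yield sensor.id, reading

sensors = [TiltSensor("T1", 1.500), TiltSensor("T2", 0.800)]
baselines = {"T1": 1.500, "T2": 0.800}
for sensor_id, value in measurement_cycle(sensors, baselines):
    print(sensor_id, round(value, 4))
```

The idea is simply that a triggered sensor earns an immediate full-resolution re-measurement, while everything else stays on the fast schedule.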

I’d love to hear what suggestions you readers might have. Please like, share and follow – I look forward to your comments.


Until next time, KODA