The degree to which repeated measurements of the same quantity differ describes the precision of the measurement. A measurement is reported as best value ± uncertainty, where the best value is your best estimate of the exact value and the uncertainty is the maximum amount by which you think your measured value might differ from the exact value.
In metrology, measurement uncertainty is a non-negative parameter characterizing the dispersion of the values attributed to a measured quantity. By international agreement, this uncertainty has a probabilistic basis and reflects incomplete knowledge of the quantity value.
In physical experiments, it is important to quantify this uncertainty. The standard deviation of repeated measurements provides one way to check the results: a very large standard deviation can mean the experiment is faulty, either because there is too much noise from outside or because there is a fault in the measuring instrument.
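As a concrete illustration, here is a minimal Python sketch (the readings are made-up values) that turns a set of repeated measurements into a best value ± uncertainty report, using the sample standard deviation as the uncertainty:

```python
import statistics

# Repeated readings of the same length (hypothetical values, in cm).
readings = [5.1, 5.2, 5.0, 5.2, 5.1]

best_value = statistics.mean(readings)  # best estimate of the exact value
spread = statistics.stdev(readings)     # sample standard deviation

print(f"{best_value:.2f} ± {spread:.2f} cm")  # 5.12 ± 0.08 cm
```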
This is the best estimate ± uncertainty form: when scientists make a measurement or calculate some quantity from their data, they generally assume that some exact or "true value" exists based on how they define what is being measured (or calculated).
Figure 2.2 Metric Rulers for Measuring Length. On Ruler A, each division is 1 cm, so Ruler A has an uncertainty of ±0.1 cm; Ruler B has an uncertainty of ±0.05 cm. Thus, (a) Ruler A can give the measurements 2.0 cm and 2.5 cm, and (b) Ruler B can give the measurements 3.35 cm and 3.50 cm.
Chemists describe the estimated degree of error in a measurement as the uncertainty of the measurement, and they are careful to report all measured values using only significant figures, numbers that describe the value without exaggerating the degree to which it is known to be accurate.
The absolute uncertainty (usually called absolute error - but "error" connotes "mistake", and these are NOT mistakes) is the size of the range of values in which the "true value" of the measurement probably lies. If a measurement is given as x ± 0.1 cm, the absolute uncertainty is 0.1 cm.
In metrology, physics, and engineering, the uncertainty or margin of error of a measurement, when explicitly stated, is given by a range of values likely to enclose the true value. This may be denoted by error bars on a graph, or by the notation: measured value ± uncertainty.
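In code, this notation is straightforward to produce; the small helper below (report and its arguments are hypothetical names chosen for this sketch) renders a measurement in the measured value ± uncertainty form:

```python
def report(value, uncertainty, unit=""):
    # Render a measurement in "measured value ± uncertainty" notation.
    return f"{value} ± {uncertainty} {unit}".strip()

print(report(3.35, 0.05, "cm"))  # 3.35 ± 0.05 cm
```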
In some cases, the foot is denoted by a prime, which is often marked by an apostrophe, and the inch by a double prime; for example, 2 feet 4 inches is sometimes denoted as 2′−4″, 2′ 4″ or 2′4″.
Steps to Calculate the Percent Error
- Subtract the accepted value from the experimental value.
- Take the absolute value of step 1.
- Divide that answer by the accepted value.
- Multiply that answer by 100 and add the % symbol to express the answer as a percentage.
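These four steps translate directly into code; the sketch below is one minimal version (the experimental and accepted values are illustrative, not from any particular experiment):

```python
def percent_error(experimental, accepted):
    # Steps 1-4: subtract, take the absolute value, divide by the
    # accepted value, and multiply by 100.
    return abs(experimental - accepted) / accepted * 100

# Example: a measured density of 2.64 g/cm^3 vs. an accepted 2.70 g/cm^3.
print(f"{percent_error(2.64, 2.70):.1f}%")  # 2.2%
```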
While absolute error carries the same units as the measurement, relative error has no units or else is expressed as a percent. The importance of relative uncertainty is that it puts error in measurements into perspective.
Relative error is a measure of the uncertainty of a measurement compared to the size of the measurement. For example, three weights are measured at 5.05 g, 5.00 g, and 4.95 g. The absolute error is ±0.05 g, and the relative error is 0.05 g / 5.00 g = 0.01, or 1%.
In words, the absolute error is the magnitude of the difference between the exact value and the approximation. The relative error is the absolute error divided by the magnitude of the exact value. The percent error is the relative error expressed in terms of per 100.
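The weights example above makes all three definitions concrete in a few lines of Python:

```python
exact = 5.00     # accepted mass in grams
measured = 5.05  # one of the measured values

absolute_error = abs(exact - measured)        # same units as the measurement
relative_error = absolute_error / abs(exact)  # unitless
percent = relative_error * 100                # per 100

print(f"{absolute_error:.2f} g, {relative_error:.4f}, {percent:.0f}%")
# 0.05 g, 0.0100, 1%
```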
A round-off error, also called rounding error, is the difference between the calculated approximation of a number and its exact mathematical value due to rounding. This is a form of quantization error.
Rounding means making a number simpler but keeping its value close to what it was. The result is less accurate, but easier to use. Example: 73 rounded to the nearest ten is 70, because 73 is closer to 70 than to 80.
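In Python, for example, the built-in round with a negative digit count does exactly this:

```python
print(round(73, -1))  # 70: 73 is closer to 70 than to 80
print(round(78, -1))  # 80
```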
Roundoff error is the difference between an approximation of a number used in computation and its exact (correct) value. In certain types of computation, roundoff error can be magnified as any initial errors are carried through one or more intermediate steps.
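A classic illustration is adding 0.1 ten times: 0.1 has no exact binary floating-point representation, and the tiny representation error is carried through every step of the sum:

```python
total = 0.0
for _ in range(10):
    total += 0.1      # each addition carries a small representation error

print(total)          # 0.9999999999999999
print(total == 1.0)   # False: round-off error survived the computation
```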
Loss of significance occurs when an operation on two numbers increases relative error substantially more than it increases absolute error, for example when subtracting two nearly equal numbers (known as catastrophic cancellation). The effect is that the number of significant digits in the result is reduced unacceptably.
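The sketch below shows the effect for 1 - cos(x) at small x: the subtraction of two nearly equal numbers wipes out the significant digits, while the algebraically equivalent form 2 sin²(x/2) avoids the cancellation entirely:

```python
import math

x = 1e-8
naive = 1.0 - math.cos(x)          # cos(x) rounds to exactly 1.0, so everything cancels
stable = 2.0 * math.sin(x / 2)**2  # algebraically identical, no near-equal subtraction

print(naive)   # 0.0: all significant digits lost
print(stable)  # 5e-17, correct to full double precision
```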
Discretization error is the principal source of error in finite-difference methods and in the pseudo-spectral method of computational physics. When we define the derivative of f(x) as f′(x) = lim(h→0) [f(x + h) − f(x)]/h and approximate it as f′(x) ≈ [f(x + h) − f(x)]/h, where h is a finitely small number, the difference between the exact limit and this approximation is known as discretization error.
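To see this concretely, take f(x) = sin(x), whose exact derivative is cos(x). The sketch below compares the forward-difference approximation against the exact derivative for several step sizes; note that as h gets very small, round-off error overtakes discretization error, so shrinking h does not help indefinitely:

```python
import math

def forward_diff(f, x, h):
    # Forward-difference approximation of f'(x); its discretization
    # error shrinks roughly in proportion to h.
    return (f(x + h) - f(x)) / h

x = 1.0
exact = math.cos(x)  # d/dx sin(x) = cos(x)
for h in (1e-1, 1e-4, 1e-8, 1e-12):
    error = abs(forward_diff(math.sin, x, h) - exact)
    print(f"h = {h:.0e}   error = {error:.1e}")
```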