Discretization error
In numerical analysis, computational physics, and simulation, discretization error is the error resulting from the fact that a function of a continuous variable is represented in the computer by a finite number of evaluations, for example, on a lattice. Discretization error can usually be reduced by using a more finely spaced lattice, with an increased computational cost.
Examples
Discretization error is the principal source of error in methods of finite differences and the pseudo-spectral method of computational physics.
When we define the derivative of \(f(x)\) as \(f'(x) = \lim_{h \to 0} \frac{f(x+h) - f(x)}{h}\) or \(f'(x) \approx \frac{f(x+h) - f(x)}{h}\), where \(h\) is a finitely small number, the difference between the first formula and this approximation is known as discretization error.
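For illustration, the following Python sketch (a minimal example; the choice of \(f(x) = \sin x\), the evaluation point, and the step sizes are assumptions made for demonstration) measures the error of the forward-difference approximation for several values of \(h\):

```python
import numpy as np

def forward_difference(f, x, h):
    """Forward-difference approximation of f'(x): (f(x + h) - f(x)) / h."""
    return (f(x + h) - f(x)) / h

x = 1.0
exact = np.cos(x)  # exact derivative of sin at x = 1

for h in (1e-1, 1e-2, 1e-4, 1e-8, 1e-12):
    approx = forward_difference(np.sin, x, h)
    print(f"h = {h:.0e}   |error| = {abs(approx - exact):.2e}")
```

For moderate step sizes the error decreases roughly in proportion to \(h\), reflecting the finer spacing; for very small \(h\) (around \(10^{-8}\) and below) floating-point round-off dominates instead, a separate effect discussed under Related phenomena.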
Related phenomena
In signal processing, the analog of discretization is sampling; no information is lost if the conditions of the sampling theorem are satisfied, otherwise the resulting error is called aliasing.
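As a brief illustration (the 9 Hz tone, 10 Hz sampling rate, and variable names below are assumptions chosen for the example), a sine wave sampled below its Nyquist rate produces exactly the same samples as a lower-frequency alias:

```python
import numpy as np

fs = 10.0                              # sampling rate (Hz); the Nyquist rate for a 9 Hz tone is 18 Hz
t = np.arange(0.0, 2.0, 1.0 / fs)      # two seconds of sample instants
x_true = np.sin(2 * np.pi * 9 * t)     # 9 Hz signal, undersampled
x_alias = -np.sin(2 * np.pi * 1 * t)   # its alias: 9 Hz folds to |9 - 10| = 1 Hz (with a sign flip)

# At the sample instants the two signals agree to machine precision.
print(np.max(np.abs(x_true - x_alias)))
```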
Discretization error, which arises from finite resolution in the domain, should not be confused with quantization error, which arises from finite resolution in the range (values), nor with round-off error arising from floating-point arithmetic. Discretization error would occur even if it were possible to represent the values exactly and use exact arithmetic – it is the error from representing a function by its values at a discrete set of points, not an error in these values.[1]
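This distinction can be made concrete with exact arithmetic. The sketch below (an illustrative example using Python's fractions module and the assumed test function \(f(x) = x^2\)) evaluates the forward difference with exact rational numbers, so no round-off or quantization occurs, yet the discretization error remains and equals \(h\) exactly:

```python
from fractions import Fraction

def f(x):
    return x * x                          # f(x) = x^2, whose exact derivative is 2x

x = Fraction(1)
for h in (Fraction(1, 10), Fraction(1, 100), Fraction(1, 1000)):
    approx = (f(x + h) - f(x)) / h        # exact rational arithmetic: no round-off error
    print(f"h = {h}: error = {approx - 2 * x}")   # the error is exactly h: pure discretization error
```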
References
1. Higham, Nicholas (2002). Accuracy and Stability of Numerical Algorithms (2nd ed.). SIAM. p. 5.