Le Cam's theorem
In probability theory, Le Cam's theorem, named after Lucien Le Cam (1924–2000), states the following.[1][2][3]
Suppose:
- X1, ..., Xn are independent random variables, each with a Bernoulli distribution (i.e., equal to either 0 or 1), not necessarily identically distributed.
- $\Pr(X_i = 1) = p_i$ for $i = 1, 2, 3, \ldots$
- $\lambda_n = p_1 + \cdots + p_n.$
- $S_n = X_1 + \cdots + X_n$ (i.e. $S_n$ follows a Poisson binomial distribution).
Then

$$\sum_{k=0}^{\infty} \left| \Pr(S_n = k) - \frac{\lambda_n^k e^{-\lambda_n}}{k!} \right| < 2 \sum_{i=1}^{n} p_i^2.$$
In other words, the sum $S_n$ has approximately a Poisson distribution with mean $\lambda_n$, and the above inequality bounds the approximation error in terms of the total variation distance.
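The inequality can be checked directly on small examples. The sketch below is an illustration only; the helper functions and the particular probabilities are assumptions of this example, not part of the cited sources. It computes the exact Poisson binomial distribution of $S_n$ by dynamic programming, evaluates the left-hand side of the inequality, and compares it with the bound $2\sum_i p_i^2$.

```python
import math

def poisson_binomial_pmf(ps):
    """Exact pmf of S_n = X_1 + ... + X_n for independent Bernoulli(p_i),
    built up one variable at a time by dynamic programming."""
    pmf = [1.0]  # distribution of the empty sum: Pr(S_0 = 0) = 1
    for p in ps:
        new = [0.0] * (len(pmf) + 1)
        for k, q in enumerate(pmf):
            new[k] += q * (1 - p)   # case X_i = 0
            new[k + 1] += q * p     # case X_i = 1
        pmf = new
    return pmf

def poisson_pmf(lam, k):
    """Poisson(lam) probability mass at k."""
    return math.exp(-lam) * lam ** k / math.factorial(k)

# Hand-picked success probabilities, chosen only for illustration.
ps = [0.1, 0.05, 0.2, 0.02, 0.15]
lam = sum(ps)                      # lambda_n = p_1 + ... + p_n

pmf = poisson_binomial_pmf(ps)
# Left-hand side of the inequality; the Poisson tail beyond k = 50 is negligible here.
lhs = sum(abs((pmf[k] if k < len(pmf) else 0.0) - poisson_pmf(lam, k))
          for k in range(50))
rhs = 2 * sum(p * p for p in ps)   # Le Cam's bound

print(f"lambda_n = {lam:.3f}")
print(f"sum_k |Pr(S_n = k) - Poisson pmf(k)| = {lhs:.6f}")
print(f"2 * sum_i p_i^2                      = {rhs:.6f}")
assert lhs < rhs  # the bound holds for this example
```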
By setting $p_i = \lambda_n/n$ for a fixed $\lambda_n$, the right-hand side becomes $2\lambda_n^2/n$, which tends to $0$ as $n \to \infty$; the theorem therefore generalizes the usual Poisson limit theorem.
When $\lambda_n$ is large a better bound is possible:[4]

$$\sum_{k=0}^{\infty} \left| \Pr(S_n = k) - \frac{\lambda_n^k e^{-\lambda_n}}{k!} \right| < 2 \left( 1 \wedge \frac{1}{\lambda_n} \right) \sum_{i=1}^{n} p_i^2.$$
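As an illustration of why the refinement matters (an example chosen here, not taken from the cited sources), take $n$ identically distributed variables with $p_i = c$ for a fixed $c \in (0, 1)$. Then

$$\lambda_n = nc, \qquad 2\sum_{i=1}^{n} p_i^2 = 2nc^2 \xrightarrow{\,n \to \infty\,} \infty, \qquad 2\left(1 \wedge \frac{1}{\lambda_n}\right)\sum_{i=1}^{n} p_i^2 = 2c \quad \text{once } nc \ge 1,$$

so the basic bound eventually exceeds the trivial value $2$ of the left-hand side, while the refined bound stays at $2c$.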
It is also possible to weaken the independence requirement.[4]
References
- ↑ Le Cam, L. (1960). "An Approximation Theorem for the Poisson Binomial Distribution". Pacific Journal of Mathematics. 10 (4): 1181–1197. doi:10.2140/pjm.1960.10.1181. MR 0142174. Zbl 0118.33601. Retrieved 2009-05-13.
- ↑ Le Cam, L. (1963). "On the Distribution of Sums of Independent Random Variables". In Jerzy Neyman; Lucien le Cam. Bernoulli, Bayes, Laplace: Proceedings of an International Research Seminar. New York: Springer-Verlag. pp. 179–202. MR 0199871.
- ↑ Steele, J. M. (1994). "Le Cam's Inequality and Poisson Approximations". The American Mathematical Monthly. 101 (1): 48–54. doi:10.2307/2325124. JSTOR 2325124.
- ↑ den Hollander, Frank. Probability Theory: The Coupling Method.