The Proof of Wallis' formula
A question on Zhihu, "Is there a result containing π that is completely unrelated to the circle?", reminded me of Wallis' formula, proposed by John Wallis (1616-1703). Its main application today lies in the proof of Stirling's formula. Although Wallis' formula itself looks like an ordinary infinite product, its value involves the circular constant π. If the higher-mathematics textbook you used is the Tongji University edition, you have in fact already seen the proof: one of its worked examples on integration by parts for definite integrals is precisely the proof of Wallis' formula, and all that is missing is the final remark that the result is a named formula.
The Wallis formula can be expressed in this more intuitive form:
$$$ \lim_{n \to +\infty} \frac{2 \cdot 2 \cdot 4 \cdot 4 \cdots (2n) \cdot (2n)}{1 \cdot 3 \cdot 3 \cdot 5 \cdot 5 \cdots (2n-1) \cdot (2n+1)} = \frac{\pi}{2} $$$
Or in a relatively abstract form:
$$$ \prod_{n=1}^\infty \frac{4n^2}{4n^2-1} = \prod_{n=1}^\infty \frac{2n}{2n-1} \cdot \frac{2n}{2n+1} = \frac{\pi}{2} $$$
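As a quick numeric sanity check, the partial products of this infinite product can be computed directly. The sketch below is in Python, and the helper name `wallis_partial` is my own:

```python
import math

def wallis_partial(n):
    """Partial product  prod_{k=1}^{n} (2k)^2 / ((2k-1)(2k+1))."""
    p = 1.0
    for k in range(1, n + 1):
        p *= (2 * k) ** 2 / ((2 * k - 1) * (2 * k + 1))
    return p

for n in (10, 100, 1000, 10000):
    print(n, wallis_partial(n))
print("pi/2 =", math.pi / 2)
```

Convergence is slow (the error shrinks roughly like $1/n$), but the trend toward $\frac{\pi}{2} \approx 1.5708$ is clear.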
Where there is π, there is a circle; exactly where the circle hides will become clear from the proof below.
Proof of Wallis formula
The proof of Wallis' formula requires the Wallis integral, which has the following form:
$$$ I_n = \int_{0}^{\frac{\pi}{2}} \sin^n{x} dx $$$
The integral of $\sin^n{x}$ is not easy to compute directly, but the factor $\sin{x} dx$ is easy to integrate, so we split the integrand as $\sin{x} \cdot \sin^{n-1}{x}$, i.e.:
$$$ I_n = \int_{0}^{\frac{\pi}{2}} \sin{x} \cdot \sin^{n-1}{x} dx $$$
We can apply integration by parts:
$$$ \int_a^b u dv = uv \Big|_a^b - \int_a^b v du $$$
We choose $u = \sin^{n-1}{x}$ and $dv = \sin{x}dx$, then we get:
\begin{align*} du &= (n-1)\sin^{n-2}{x}\cos{x}dx \\ v &= - \cos{x} \end{align*}
Combining the above, we can make the following derivation:
\begin{align*} I_n &= \int_{0}^{\frac{\pi}{2}} \sin^n{x} dx \\ &= \int_{0}^{\frac{\pi}{2}} \sin^{n-1}{x} \cdot \sin{x} dx \\ &= \int_{0}^{\frac{\pi}{2}} u dv \\ &= uv \Big|^{\frac{\pi}{2}}_{0} - \int_{0}^{\frac{\pi}{2}} v du \\ &= \sin^{n-1}{x}\cdot(-\cos{x}) \Big|^{\frac{\pi}{2}}_{0} - \int_{0}^{\frac{\pi}{2}} - \cos{x}(n-1)\sin^{n-2}{x}\cos{x}dx \\ &= 0 +(n-1)\int_{0}^{\frac{\pi}{2}} \cos^2{x}\sin^{n-2}{x}dx \\ &= (n-1)\int_{0}^{\frac{\pi}{2}} (1 - \sin^2{x})\sin^{n-2}{x}dx \\ &= (n-1)\int_{0}^{\frac{\pi}{2}} ( \sin^{n-2}{x} - \sin^{n}{x} ) dx \\ &= (n-1)\int_{0}^{\frac{\pi}{2}} \sin^{n-2}{x} dx - (n-1)\int_{0}^{\frac{\pi}{2}} \sin^{n}{x} dx \\ &= (n-1) I_{n-2} - (n-1)I_n \end{align*}
Moving the $(n-1)I_n$ term to the left-hand side gives $nI_n = (n-1)I_{n-2}$, which yields the recurrence relation between $I_n$ and $I_{n-2}$:
$$$ I_n = \frac{n-1}{n} I_{n-2} $$$
Since the recurrence links the $n$th term to the $(n-2)$th term, we consider the even and odd terms separately:
$$$ I_{2n} = \frac{2n-1}{2n} I_{2n-2} $$$
$$$ I_{2n+1} = \frac{2n}{2n+1} I_{2n-1} $$$
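Both recurrences can be checked numerically against direct integration. This is only a sanity-check sketch: `wallis_integral` is a helper name I am introducing, with Simpson's rule standing in for exact integration:

```python
import math

def wallis_integral(n, steps=2000):
    """I_n = integral of sin^n(x) over [0, pi/2], via composite Simpson's rule (steps must be even)."""
    a, b = 0.0, math.pi / 2
    h = (b - a) / steps
    total = math.sin(a) ** n + math.sin(b) ** n
    for i in range(1, steps):
        total += (4 if i % 2 else 2) * math.sin(a + i * h) ** n
    return total * h / 3

# The recurrence I_n = (n-1)/n * I_{n-2} should hold for every n >= 2.
for n in (2, 3, 7, 10):
    print(n, wallis_integral(n), (n - 1) / n * wallis_integral(n - 2))
```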
Now that we have the recurrence formulas, let us look at the initial cases. When $n = 0$:
$$$ I_0 = \int_{0}^{\frac{\pi}{2}} dx = x \Big|_0^{\frac{\pi}{2}} = \frac{\pi}{2} $$$
when $n = 1$:
$$$ I_1 = \int_{0}^{\frac{\pi}{2}} \sin{x} dx = - \cos{x} \Big|_0^{\frac{\pi}{2}} = 1 $$$
With the initial values and the recurrence formulas in hand, let us compute $I_{2n}$ and $I_{2n+1}$ separately.
\begin{align*} I_{2n} &= \frac{2n-1}{2n} \cdot I_{2n-2} \\ &= \frac{2n-1}{2n} \cdot \frac{2n-3}{2n-2} \cdot I_{2n-4} \\ &= \frac{2n-1}{2n} \cdot \frac{2n-3}{2n-2} \cdots \frac{1}{2} \cdot I_0 \\ &= I_0 \cdot \prod_{k=1}^{n} \frac{2k-1}{2k} \\ &= \frac{\pi}{2} \prod_{k=1}^{n} \frac{2k-1}{2k} \end{align*}
\begin{align*} I_{2n+1} &= \frac{2n}{2n+1} \cdot I_{2n-1} \\ &= \frac{2n}{2n+1} \cdot \frac{2n-2}{2n-1} \cdot I_{2n-3} \\ &= \frac{2n}{2n+1} \cdot \frac{2n-2}{2n-1} \cdots \frac{2}{3} \cdot I_1 \\ &= I_1 \cdot \prod_{k=1}^n \frac{2k}{2k+1} \\ &= \prod_{k=1}^n \frac{2k}{2k+1} \end{align*}
Note the form of these two results:
$$$ I_{2n} = \frac{\pi}{2} \prod_{k=1}^{n} \frac{2k-1}{2k} $$$
$$$ I_{2n+1} = \prod_{k=1}^n \frac{2k}{2k+1} $$$
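The two closed forms can be verified against the recurrence itself. In the Python sketch below, `I_even`, `I_odd`, and `I_rec` are names of my own choosing:

```python
import math

def I_even(n):
    """Closed form I_{2n} = (pi/2) * prod_{k=1}^{n} (2k-1)/(2k)."""
    p = math.pi / 2
    for k in range(1, n + 1):
        p *= (2 * k - 1) / (2 * k)
    return p

def I_odd(n):
    """Closed form I_{2n+1} = prod_{k=1}^{n} (2k)/(2k+1)."""
    p = 1.0
    for k in range(1, n + 1):
        p *= (2 * k) / (2 * k + 1)
    return p

def I_rec(m):
    """I_m computed directly from the recurrence and the initial values I_0, I_1."""
    val = math.pi / 2 if m % 2 == 0 else 1.0
    for j in range(2 if m % 2 == 0 else 3, m + 1, 2):
        val *= (j - 1) / j
    return val

for n in (1, 2, 5):
    print(I_even(n), I_rec(2 * n))
    print(I_odd(n), I_rec(2 * n + 1))
```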
Forming the ratio $ \frac{I_{2n}}{I_{2n+1}} $, we find that it contains the Wallis product:
$$$ \frac{I_{2n}}{I_{2n+1}} = \frac{\pi}{2} \prod_{k=1}^{n} \frac{2k-1}{2k} \cdot \frac{2k+1}{2k} $$$
which is exactly $\frac{\pi}{2}$ divided by the partial product $\prod_{k=1}^n \frac{2k}{2k-1} \cdot \frac{2k}{2k+1}$.
At this point we are very close to the result: it remains to find the limit of $ \frac{I_{2n}}{I_{2n+1}} $ as $n \to \infty$. Here we can apply the squeeze theorem, but how do we construct the two bounding sequences so that their limits are easy to find?
We return to the function $\sin{x}$. The limits of integration in the Wallis integral are $x \in [0, \frac{\pi}{2}]$, and on this interval $ \sin{x} \in [0, 1] $, so the powers of $\sin{x}$ form a non-increasing sequence in the exponent, i.e.:
$$$ \sin^{2n-1}{x} \geq \sin^{2n}{x} \geq \sin^{2n+1}{x} $$$
Since the integrands form a non-increasing sequence, their definite integrals over $[0, \frac{\pi}{2}]$ are ordered the same way, so we have: $$$ I_{2n-1} \geq I_{2n} \geq I_{2n+1} $$$
Dividing each term of the inequality above by $ I_{2n+1} $ (which is positive), we get
$$$ \frac{I_{2n-1}}{I_{2n+1}} \geq \frac{I_{2n}}{I_{2n+1}} \geq 1 $$$
And by the recurrence formula for the odd terms, we know that
$$$ \frac{I_{2n-1}}{I_{2n+1}} = \frac{2n+1}{2n} $$$
So when $n \to \infty$, it has the following limit:
$$$ \lim_{n \to +\infty} \frac{I_{2n-1}}{I_{2n+1}} = \lim_{n \to +\infty} \frac{2n+1}{2n} = 1 $$$
By the squeeze theorem, we then know that
$$$ \lim_{n \to +\infty} \frac{I_{2n}}{I_{2n+1}} = 1 $$$
That is:
$$$ \lim_{n \to +\infty} \frac{\pi}{2} \prod_{k=1}^{n} \frac{2k-1}{2k} \prod_{k=1}^n \frac{2k+1}{2k} = 1 $$$
Taking the reciprocal of both sides of the above equation and combining the products, we get:
$$$ \lim_{n \to +\infty} \prod_{k=1}^{n} \frac{2k}{2k-1} \cdot \frac{2k}{2k+1} = \frac{\pi}{2} $$$
At this point, we have completed the proof of Wallis' formula.
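As a numeric illustration of the squeeze step, the ratio $\frac{I_{2n}}{I_{2n+1}}$, written out via the closed-form products, can be seen to approach 1 (the helper name `ratio` is my own):

```python
import math

def ratio(n):
    """I_{2n} / I_{2n+1} written as (pi/2) * prod_{k=1}^{n} (2k-1)(2k+1)/(2k)^2."""
    r = math.pi / 2
    for k in range(1, n + 1):
        r *= (2 * k - 1) * (2 * k + 1) / (2 * k) ** 2
    return r

for n in (10, 100, 1000, 10000):
    print(n, ratio(n))  # decreases toward 1
```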
The background of Wallis' formula
John Wallis studied the following integral in 1655:
$$$ \int_0^1 (1 - x^{\frac{1}{p}})^{q} dx $$$
Wallis found that when $p$ and $q$ are positive integers:
$$$ \int_0^1 (1 - x^{\frac{1}{p}})^{q} dx = \frac{p! q!}{(p+q)!} $$$
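This identity is easy to spot-check numerically. In the sketch below, `wallis_beta` is a helper name of my own, and the midpoint rule approximates the integral:

```python
import math

def wallis_beta(p, q, steps=100000):
    """Integral of (1 - x^(1/p))^q over [0, 1], by the midpoint rule."""
    h = 1.0 / steps
    return sum((1.0 - (h * (i + 0.5)) ** (1.0 / p)) ** q for i in range(steps)) * h

for p, q in [(1, 1), (2, 3), (4, 2)]:
    exact = math.factorial(p) * math.factorial(q) / math.factorial(p + q)
    print(p, q, wallis_beta(p, q), exact)
```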
What happens if $ p = q = \frac{1}{2}$? At this point the above integral is written in this form:
$$$ \int_0^1 \sqrt{1-x^2} dx $$$
This is actually a quarter of a unit circle, and by its geometric meaning we know that its value is $\frac{\pi}{4}$, i.e:
$$$ \int_0^1 \sqrt{1-x^2} dx = \frac{\pi}{4} $$$
If we put it together with the previous factorial form, we get:
$$$ (\frac{1}{2})! \cdot (\frac{1}{2})! = \frac{\pi}{4} $$$
That is, we get a non-integer factorial of the form:
$$$ (\frac{1}{2})! = \frac{\sqrt{\pi}}{2} $$$
The value of this non-integer factorial is, in the modern notation of the Gamma function, $\Gamma(\frac{3}{2}) = \frac{\sqrt{\pi}}{2}$, and it is this value that appears in the formula's later applications.
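In modern terms this is a value of the Gamma function, which generalizes the factorial via $n! = \Gamma(n+1)$; Python's standard library can confirm it:

```python
import math

# Gamma generalizes the factorial: n! = Gamma(n+1), so (1/2)! corresponds to Gamma(3/2).
print(math.gamma(1.5))          # Gamma(3/2)
print(math.sqrt(math.pi) / 2)   # sqrt(pi) / 2
```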
The proof of Wallis' formula also originated from this integral that Wallis was working on at the time:
$$$ \int_0^1 (1-x^2) ^{\frac{n}{2}} dx $$$
Let $ x = \cos{\theta}$ and change variables, noting the corresponding change in the limits of integration:
\begin{align*} \int_0^1 (1-x^2) ^{\frac{n}{2}} dx &= \int_{\frac{\pi}{2}}^0 (1-\cos^2{\theta})^{\frac{n}{2}} (- \sin{\theta}) d\theta \\ &= \int_{\frac{\pi}{2}}^0 ( \sin^{2}{\theta} )^{\frac{n}{2}} (- \sin{\theta}) d\theta \\ &= \int_0^{\frac{\pi}{2}} \sin^{n+1}{\theta} d\theta \end{align*}
In this way, we obtain the starting point of the proof of Wallis' formula.
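The change of variables can also be confirmed numerically for a few values of $n$; `midpoint` below is a helper of my own implementing the midpoint rule:

```python
import math

def midpoint(f, a, b, steps=100000):
    """Midpoint-rule approximation of the integral of f over [a, b]."""
    h = (b - a) / steps
    return sum(f(a + h * (i + 0.5)) for i in range(steps)) * h

# Both sides of the substitution should agree: (1-x^2)^(n/2) on [0,1] vs sin^(n+1) on [0, pi/2].
for n in (1, 2, 3):
    left = midpoint(lambda x: (1.0 - x * x) ** (n / 2), 0.0, 1.0)
    right = midpoint(lambda t: math.sin(t) ** (n + 1), 0.0, math.pi / 2)
    print(n, left, right)
```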
Furthermore, in this article we are using the following integral for the proof:
$$$ \int_{0}^{\frac{\pi}{2}} \sin^n{x} dx $$$
But in fact, it has some equivalent forms, all of which can be used to prove Wallis' formula, such as
$$$ \int_{0}^{\frac{\pi}{2}} \cos^n{x} dx $$$
$$$ \int_{0}^{\pi} \sin^n{x} dx $$$
Using $\pi$ as the upper limit of integration, or using $\cos{x}$ as the integrand, gives the same result.
On the properties of definite integrals
In proving Wallis' formula with the squeeze theorem, we made use of the following monotonicity property of the definite integral:
$$$ \sin^{n}{x} \geq \sin^{n+1}{x} \implies \int_{0}^{\frac{\pi}{2}} \sin^{n}{x} dx \geq \int_{0}^{\frac{\pi}{2}} \sin^{n+1}{x} dx $$$
Since I have not used calculus in a long time, I only vaguely remembered that such a property exists, so let us prove it.
First, on the interval $[a, b]$, with $ f(x) \geq 0 $, we have
$$$ \int_a^b f(x) dx \geq 0 $$$
This is easy to see from the graph of the function (its geometric meaning), and also easy to prove from the definition of the definite integral.
If $ f(x) \geq g(x) $ on the interval $[a, b]$, how do $\int_a^b f(x) dx$ and $\int_a^b g(x) dx$ compare?
According to the conditions, we know that on the interval $[a, b]$, there is
$$$ f(x) - g(x) \geq 0 $$$
Taking $f(x)-g(x)$ as a whole, it is natural to see that
$$$ \int_a^b [f(x) - g(x)] dx \geq 0 $$$
In this way, we can then find that
$$$ \int_a^b f(x) dx \geq \int_a^b g(x) dx $$$
Namely:
$$$ f(x) \geq g(x) \implies \int_a^b f(x) dx \geq \int_a^b g(x) dx $$$
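A small numeric illustration of this monotonicity property, using two of the Wallis integrals (the `midpoint` helper is my own):

```python
import math

def midpoint(f, a, b, steps=10000):
    """Midpoint-rule approximation of the integral of f over [a, b]."""
    h = (b - a) / steps
    return sum(f(a + h * (i + 0.5)) for i in range(steps)) * h

# On [0, pi/2], sin^2(x) >= sin^3(x) pointwise, so the integrals are ordered the same way.
a, b = 0.0, math.pi / 2
I2 = midpoint(lambda x: math.sin(x) ** 2, a, b)  # exact value: pi/4
I3 = midpoint(lambda x: math.sin(x) ** 3, a, b)  # exact value: 2/3
print(I2, I3, I2 >= I3)
```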