We’re going to use Taylor series a bit more here, and we’ll look at two interesting problems: one a historical relic, and the other a modern statistical result.
This result was incredible at the time of its proof for two reasons:
This was a longstanding question in mathematics! Mathematicians knew that this infinite series converged, but were stumped about what the value could be.
We’re going to follow along this proof, and for most of the time it might seem like we’re fiddling with unrelated results and functions. By the end, we’ll tie what we’re doing together to show that:
We’re going to consider this function again, but in a different form: instead of writing the infinite polynomial term by term, we’ll try to build it out of the factors we expect to see, based on the zeros of the function.
So first, note that the zeros of this function, \(\dfrac{\sin(x)}{x}\text{,}\) must occur when \(\sin(x)=0\text{.}\) What are the \(x\)-values that make \(\sin(x)=0\text{?}\)
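As a quick numerical sanity check (a sketch of our own, not part of the activity), we can confirm that \(\dfrac{\sin(x)}{x}\) really does vanish at the nonzero integer multiples of \(\pi\text{:}\)

```python
import math

# The zeros of sin(x)/x occur exactly where sin(x) = 0 with x != 0,
# i.e. at the nonzero integer multiples of pi: pi, -pi, 2*pi, -2*pi, ...
values = [math.sin(n * math.pi) / (n * math.pi) for n in (1, -1, 2, -2, 3)]
print(values)  # every entry is ~0, up to floating-point rounding
```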
We’ll use these zeros to write out the function’s factors, but we’re also going to make use of the fact that \(f(0)=1\text{.}\) This means that if \(x=c\) is a zero of the function, then the corresponding factor should be written as \(\left(1-\dfrac{x}{c}\right)\text{.}\) So, if \(c_1, c_2, c_3, c_4, ...\) are all zeros, then we should write the function as:
Note that we have pairs of factors that are in the form \(\left(1-\dfrac{x}{c}\right)\left(1+\dfrac{x}{c}\right)\text{.}\) This is hopefully a very recognizable factoring pattern! It’s the difference of squares! What do you get when you multiply these?
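If you want to check your multiplication, expanding one such pair gives:

```latex
\left(1-\frac{x}{c}\right)\left(1+\frac{x}{c}\right)
  = 1 - \frac{x^2}{c^2}
```

Notice that the linear terms cancel, leaving only a constant and a quadratic term.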
This should allow you to write your factored function from above in a slightly different way, by combining the pairs of factors using this multiplication. What does this look like?
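Once you have the combined product, a quick numerical sketch (our own Python check, with the zeros at \(\pm k\pi\) plugged in) shows that its partial products really do approach \(\dfrac{\sin(x)}{x}\text{:}\)

```python
import math

# Partial products of prod_{k=1}^{N} (1 - x^2 / (k^2 pi^2)) should
# approach sin(x)/x as N grows.
x = 1.0
target = math.sin(x) / x
product = 1.0
for k in range(1, 10001):
    product *= 1 - x**2 / (k**2 * math.pi**2)
print(product, target)  # the two agree to about four decimal places
```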
We know that the quadratic term should be \(-\frac{x^2}{6}\) (from the infinite sum). Can you find what the quadratic term from the infinite product would be?
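Matching the two quadratic coefficients is exactly where the famous value comes from, so (spoiler warning) here is a numerical sketch checking that the partial sums of \(\sum \frac{1}{k^2}\) creep toward \(\frac{\pi^2}{6}\text{:}\)

```python
import math

# Matching quadratic coefficients forces sum_k 1/(k^2 pi^2) = 1/6,
# or equivalently sum_k 1/k^2 = pi^2 / 6. Check the partial sums:
partial_sum = sum(1 / k**2 for k in range(1, 100001))
print(partial_sum, math.pi**2 / 6)  # agree to about four decimal places
```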
We say that a random variable follows the standard normal distribution when the mean or expected value of the random variable is 0, the variance of the random variable is 1, and the probability density function is:
There are a lot of technical details behind some of these terms, but we won’t concern ourselves with them too much. For now, we’ll state two facts:
We use an integral to represent probability. For instance, the probability that a random variable will take on a value that is up to 1 standard deviation above the mean is:
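To get a feel for the size of this probability, here is a rough numerical sketch (a simple midpoint Riemann sum in Python, with the limits 0 and 1 as in the integral above):

```python
import math

# Approximate (1 / sqrt(2*pi)) * integral from 0 to 1 of e^(-x^2 / 2) dx
# with a midpoint Riemann sum: the probability of landing within one
# standard deviation above the mean.
n = 10000
total = 0.0
for i in range(n):
    x = (i + 0.5) / n          # midpoint of the i-th subinterval of [0, 1]
    total += math.exp(-x**2 / 2) / n
prob = total / math.sqrt(2 * math.pi)
print(prob)  # roughly 0.3413
```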
Write an integral that represents the probability of a random variable with a normal distribution taking on a value up to 2 standard deviations above the mean.
In order to estimate your integral, we will need to antidifferentiate \(e^{-\frac{x^2}{2}}\text{.}\) This function does not have an elementary antiderivative, meaning we cannot express an antiderivative in terms of the traditional functions we have named and the traditional operations we have defined. Antidifferentiate your Taylor series representation of \(e^{-\frac{x^2}{2}}\text{,}\) and call this \(F(x)\text{.}\)
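Antidifferentiating the series term by term can be sketched numerically as well. Here is one way to code \(F(x)\) (the number of terms is our choice; your series should match term by term):

```python
import math

# Taylor series: e^(-x^2 / 2) = sum_n (-1)^n x^(2n) / (2^n n!)
# Antidifferentiating term by term (with constant 0, so F(0) = 0):
#   F(x) = sum_n (-1)^n x^(2n + 1) / ((2n + 1) 2^n n!)
def F(x, terms=10):
    return sum((-1)**n * x**(2*n + 1) / ((2*n + 1) * 2**n * math.factorial(n))
               for n in range(terms))

# F(1) approximates the integral from 0 to 1 of e^(-t^2 / 2) dt.
print(F(1.0))  # roughly 0.8556
```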
Construct, evaluate, and approximate the integral representing the probability that a normally distributed random variable takes on a value between 1 and 1.5 standard deviations above the mean (using the same number of terms as above).
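If you want to check your answer, the same series antiderivative \(F\) from before can be evaluated at both limits (again, the function name and term count here are our own choices):

```python
import math

# Series antiderivative of e^(-x^2 / 2), as in the previous part:
def F(x, terms=10):
    return sum((-1)**n * x**(2*n + 1) / ((2*n + 1) * 2**n * math.factorial(n))
               for n in range(terms))

# Probability of landing between 1 and 1.5 standard deviations above
# the mean: (1 / sqrt(2*pi)) * (F(1.5) - F(1)).
prob = (F(1.5) - F(1.0)) / math.sqrt(2 * math.pi)
print(prob)  # roughly 0.0918
```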