# Intel Pentium P5 floating point unit error (1994) — an RMO problem !!!

Problem:

Two number theorists, bored in a chemistry lab, played a game with a large flask containing 2 litres of a colourful chemical solution and an ultra-accurate pipette. The game was that they would take turns to recall a prime number $p$ such that $p+2$ is also a prime number. Then, the first number theorist would pipette out $\frac{1}{p}$ litres of the chemical and the second $\frac{1}{p+2}$ litres. How many times do they have to play this game to empty the flask completely?

Hint:

A bit of real analysis is required.

Reference:

I will publish the reference when I post the solution, so that all students/readers can curb their impulse to see the solution immediately!!!

I hope you will be hooked on the problem in a second! Here is a beautiful application of pure math! 🙂

Cheers,

Nalin Pithwa

PS: I do not know if the above problem did (or will?) appear as an RMO question. It is just my wild fun with math to kindle the intellect of students in analysis! 🙂

# Euler Series question and solution

Question:

Mengoli had posed the following series to be evaluated:

$1+ \frac{1}{2^{2}} + \frac{1}{3^{2}} + \frac{1}{4^{2}} + \ldots$.

Some great mathematicians, including Leibniz, Johann Bernoulli and d’Alembert, failed to evaluate this infinite series. Euler established himself as the best mathematician of Europe (in fact, one of the greatest mathematicians in history) by evaluating this series, initially by a not-so-rigorous method. Later on, he gave alternative and more rigorous ways of getting the same result.

Prove that the series converges and find an upper bound for it. Then, try to evaluate the series.

Proof:

Due to Nicolas Oresme:

Consider the following infinite series: $\phi(s)=1 + \frac{1}{2^{s}} + \frac{1}{3^{s}} + \frac{1}{4^{s}} + \ldots$

We can re-write the preceding series as follows: $\phi(s) = 1+ (\frac{1}{2^{s}}+\frac{1}{3^{s}}) + (\frac{1}{4^{s}} + \frac{1}{5^{s}} + \frac{1}{6^{s}} + \frac{1}{7^{s}}) + \ldots$, which in turn is less than

$1 + (\frac{2}{2^{s}}) + (\frac{4}{4^{s}}) + \ldots$. Now, the RHS of this can be re-written as

$1+(\frac{2}{2^{s}}) + (\frac{4}{4^{s}}) + \ldots=1 + \frac{1}{2^{(s-1)}}+ (\frac{1}{2^{(s-1)}})^{2} + \ldots$, which is a geometric series and it is given by

$\frac{1}{1-\frac{1}{2^{(s-1)}}}$.

Now, we can say that $\phi(s)$ will converge if $\frac{1}{2^{(s-1)}}<1 \Longrightarrow s >1$.

In order to prove what is asked, we start with $\phi(s)=1 + \frac{1}{2^{s}}+ \frac{1}{3^{s}}+ \frac{1}{4^{s}}+\ldots$

And, then multiply both sides by $\frac{1}{2^{s}}$ and then subtract the resulting equation from the preceding equation to get

$(1-\frac{1}{2^{s}})\phi(s)=1+\frac{1}{3^{s}}+\frac{1}{5^{s}}+\ldots$

where all the terms containing the reciprocals of the sth power of even numbers vanished.

Repeating this procedure with $\frac{1}{3^{s}}$ gives

$(1-\frac{1}{2^{s}})(1-\frac{1}{3^{s}})\phi(s)=1+\frac{1}{5^{s}}+ \ldots$

where all terms containing the reciprocals of the sth power of multiples of 3 vanished.

By continuing this with all prime numbers, we get

$\prod_{p}(1-\frac{1}{p^{s}})\phi(s)=1$, where p represents all prime numbers. Thus, we get

$\phi(s)=1 + \frac{1}{2^{s}} + \frac{1}{3^{s}} + \frac{1}{4^{s}} + \ldots = \frac{1}{\prod_{p}(1-\frac{1}{p^{s}})}$

This is a remarkable result because the LHS is concerned with only positive integers, whereas the RHS is concerned with only primes. This result is known as the “Golden Key of Euler”.
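The Golden Key invites a quick numerical check. The sketch below (my own illustration, not from the reference) compares the partial sum of $\sum \frac{1}{n^{s}}$ with the partial Euler product over primes for $s=2$; both approach the same value, which Euler's evaluation later in this post identifies as $\frac{\pi^{2}}{6} \approx 1.6449$.

```python
# Numerical check of the Golden Key at s = 2 (illustrative sketch):
# sum over integers on the left, product over primes on the right.

def primes_up_to(limit):
    """Sieve of Eratosthenes."""
    sieve = [True] * (limit + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(limit ** 0.5) + 1):
        if sieve[p]:
            for multiple in range(p * p, limit + 1, p):
                sieve[multiple] = False
    return [p for p, is_prime in enumerate(sieve) if is_prime]

s = 2
N = 100_000
zeta_sum = sum(1 / n ** s for n in range(1, N + 1))

euler_product = 1.0
for p in primes_up_to(N):
    euler_product *= 1 / (1 - 1 / p ** s)

print(zeta_sum, euler_product)  # both close to 1.6449
```

The two numbers agree to several decimal places, and the agreement tightens as N grows, exactly as the Golden Key predicts.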

Riemann created his famous $\zeta$-function by extending the variable s to the entire complex plane, except $s=1$, with

$\zeta(s)=1+ \frac{1}{2^{s}} + \frac{1}{3^{s}} + \ldots$.

This function is now very famous as the Riemann zeta function.

How can we apply the Golden Key of Euler to Mengoli’s question that we started with?

Ans. In the Golden Key of Euler, substitute $s=2$.

Hence, we get that 2 is an upper bound of the given series.

Euler’s proof (1735):

The proof ran as follows:

It is a little roundabout way of arriving at the correct answer from a known result. Consider Maclaurin’s series expansion of $\sin x$:

$\sin{(x)}=x - \frac{x^{3}}{3!} + \frac{x^{5}}{5!} -\frac{x^{7}}{7!} + \frac{x^{9}}{9!} + \ldots$

By dividing both sides by x and then substituting $y=x^{2}$ on the right side, we get the following:

$\frac{\sin{(x)}}{x} = 1-\frac{y}{3!} + \frac{y^{2}}{5!} - \frac{y^{3}}{7!} + \ldots$

By taking a special value of $x=n\pi$ (and, hence $y=n^{2}\pi^{2}$), we get the following:

$\frac{\sin (n\pi)}{n\pi}=0=1-\frac{y}{3!} + \frac{y^{2}}{5!} - \frac{y^{3}}{7!}+ \ldots$

Note that the preceding equation is not a polynomial, but an infinite series. But Euler still treated it as a polynomial (that is why it was not accepted as a rigorous result) and observed that this “infinite polynomial” has roots $y_{n}=n^{2}\pi^{2}$. Then, Euler used the fact that, when the constant term is made unity, the sum of the reciprocals of the roots equals the negative of the coefficient of the linear term (here, the y-term). (As a homework quiz, check this for a quadratic to be convinced.) So, Euler arrived at the following result:

$\sum_{n=1}^{\infty}\frac{1}{y_{n}}=\frac{1}{3!}=\frac{1}{6}$. With $y_{n}=n^{2}\pi^{2}$, we get the following:

$\sum_{n=1}^{\infty}\frac{1}{n^{2}\pi^{2}}=\frac{1}{6}$, or $\sum_{n=1}^{\infty}\frac{1}{n^{2}}=\frac{\pi^{2}}{6}$.

Another proof, also attributed to Euler, that uses the series expansion of $\sin(x)$ goes as follows:

$\sin {(x)}$ has roots given by 0, $\pm \pi$, $\pm 2\pi$, $\pm 3\pi$, … So does the polynomial that Euler reportedly constructed:

$x(1-\frac{x^{2}}{(\pi)^{2}})(1-\frac{x^{2}}{(2\pi)^{2}})(1-\frac{x^{2}}{(3\pi)^{2}})\ldots=0$

So, Euler considered the preceding equation to be equivalent to:

$\sin{(x)}=x - \frac{x^{3}}{3!} + \frac{x^{5}}{5!} - \frac{x^{7}}{7!} + \ldots=0$

Then, he equated the coefficients of $x^{3}$ in both to get the result:

$\sum_{n=1}^{\infty}\frac{1}{n^{2}(\pi)^{2}} = \frac{1}{3!} = \frac{1}{6}$.

Thus, $\sum_{n=1}^{\infty}\frac{1}{n^{2}}=\frac{(\pi)^{2}}{6}$.
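As a numerical aside (my own sketch, not from the reference), the partial sums of $\sum \frac{1}{n^{2}}$ do creep up to $\frac{\pi^{2}}{6}$, though slowly: the error after N terms is roughly $\frac{1}{N}$.

```python
# Partial sums of 1/n^2 versus the limit pi^2/6 (illustrative sketch).
import math

target = math.pi ** 2 / 6
for N in (10, 100, 1000, 10000):
    partial = sum(1 / n ** 2 for n in range(1, N + 1))
    print(N, partial, target - partial)  # the error shrinks roughly like 1/N
```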

Later on, Euler provided a few more alternative and rigorous proofs of this result.

Reference: Popular Problems and Puzzles in Mathematics by Asok Kumar Mallik, IISc Press, Foundation Books.

Hope you all enjoyed it — learning to think like Euler!! By the way, it did take a long time for analysis to become as rigorous as it is now… You might like this observation a lot. 🙂 🙂 🙂

Nalin Pithwa.

# Huygens’ problem to Leibniz: solution

In the Feb 23 2018 blog problem, we posed the following question:

Sum the following infinite series:

$1+\frac{1}{3} + \frac{1}{6} + \frac{1}{10} + \frac{1}{15}+ \ldots$.

Solution:

The sum can be written as:

$S=\sum_{n=1}^{\infty}P_{n}$, where $P_{n}=\frac{2}{n(n+1)}=2(\frac{1}{n}-\frac{1}{n+1})$ is the reciprocal of the nth triangular number.

Thus, $S=2(1-\frac{1}{2} + \frac{1}{2} - \frac{1}{3} + \frac{1}{3} - \frac{1}{4} + \ldots)=2$, as the series telescopes. This is the answer.
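The telescoping is easy to confirm numerically (a sketch of mine, not part of the solution): the partial sum of the first N terms is exactly $2-\frac{2}{N+1}$, which approaches 2.

```python
# Partial sums of the reciprocal triangular numbers (illustrative sketch).
N = 1000
S = sum(2 / (n * (n + 1)) for n in range(1, N + 1))
print(S)  # equals 2 - 2/(N + 1), up to float rounding
```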

If you think deeper, this also calls for some discussion about rearrangements of infinite series. For now, we consider it outside our scope.

Cheers,

Nalin Pithwa.

# Huygens’ Problem to Leibniz

The young philosopher Leibniz went to Huygens for training in mathematics. Huygens asked Leibniz to find the sum of the following infinite series:

Question: $1 + \frac{1}{3} + \frac{1}{6} + \frac{1}{10} + \frac{1}{15}+ \ldots$ Calculate the sum.

Kindly share your ideas, even partial solutions are welcome.

Nalin Pithwa.

# Famous harmonic series — solutions

Solutions to previous blog questions on harmonic series are presented below:

Basic Reference: Popular Problems and Puzzles in Mathematics by Asok Kumar Mallik, IISc Press, Foundation Books; Amazon India link:

https://www.amazon.in/Popular-Problems-Puzzles-Mathematics-Mallik/dp/938299386X/ref=sr_1_2?ie=UTF8&qid=1505622919&sr=8-2&keywords=popular+problems+and+puzzles+in+mathematics

Detailed Reference:

Mallik, A. K. 2007: “Curious consequences of simple sequences,” Resonance, (January), 23-37.

Personal opinion only: Resonance is one of the best Indian magazines/journals for elementary/higher math and physics. It behooves you to subscribe to it. It will help in RMO, INMO and Madhava Mathematics Competition of India.

http://www.ias.ac.in/Journals/Resonance_–_Journal_of_Science_Education/

Solutions:

1. The fourteenth century French polymath Nicolas Oresme proved that the harmonic series $1+ \frac{1}{2} + \frac{1}{3} + \frac{1}{4} + \ldots$ does not converge. Prove this result.

Solution 1:

Nicolas Oresme provided a simple proof; it involves mere grouping of terms, noticing patterns and making comparisons:

$\lim_{n \rightarrow \infty}H_{n}=H_{\infty}=1+\frac{1}{2} + \frac{1}{3} + \frac{1}{4} + \frac{1}{5} + \frac{1}{6} + \frac{1}{7} + \frac{1}{8} + \ldots$

$> 1 + \frac{1}{2} + \frac{1}{4} + \frac{1}{4} + \frac{1}{8} + \frac{1}{8} + \frac{1}{8} + \frac{1}{8} + \ldots$

$> 1+ \frac{1}{2} + \frac{1}{2} + \frac{1}{2} + \ldots$

Therefore, $H_{\infty}$ diverges as we go on adding one half indefinitely. Here is another way to prove this:

Consider $\lim_{n \rightarrow \infty}H_{n}=H_{\infty}=1 + \frac{1}{2} + \frac{1}{3} + \frac{1}{4} + \ldots$

By multiplying and dividing both sides by 2 and then by regrouping the terms, we get:

$H_{\infty}=\frac{2}{2} + \frac{2}{4} + \frac{2}{6} + \frac{2}{8} + \ldots$

$H_{\infty}= \frac{1+1}{2} + \frac{1+1}{4} + \frac{1+1}{6} + \frac{1+1}{8} + \ldots$

$H_{\infty}= (\frac{1}{2} + \frac{1}{2}) + (\frac{1}{4} + \frac{1}{4}) + (\frac{1}{6} + \frac{1}{6}) + (\frac{1}{8} + \frac{1}{8}) + \ldots$

$H_{\infty}<1+ \frac{1}{2} + \frac{1}{3} + \frac{1}{4} + \frac{1}{5} + \frac{1}{6} + \frac{1}{7} + \ldots$

leading to the contradiction that $H_{\infty}<H_{\infty}$ (compare each bracketed pair $\frac{1}{2k}+\frac{1}{2k}$ with the corresponding pair $\frac{1}{2k-1}+\frac{1}{2k}$). The contradiction arose because only finite numbers remain unaltered when multiplied and divided by 2. So, $H_{\infty}$ is not a finite number; that is, the series diverges.
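As an aside (my own numerical sketch, not part of the solution): the divergence is extremely slow. The partial sums $H_{n}$ grow like $\ln n$, so a million terms reach only about 14.4, and $H_{n}-\ln n$ settles near the Euler-Mascheroni constant.

```python
# Growth of the harmonic partial sums (illustrative sketch).
import math

H = 0.0
for n in range(1, 1_000_001):
    H += 1 / n

print(H)                         # about 14.39 after a million terms
print(H - math.log(1_000_000))   # about 0.5772, the Euler-Mascheroni constant
```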

2. Prove that $\frac{1}{2} + \frac{1}{4} + \frac{1}{6} + \ldots$ does not converge.

Solution 2:

$H_{E}=\frac{1}{2} + \frac{1}{4} + \frac{1}{6} + \frac{1}{8} + \ldots=\frac{1}{2}(1+\frac{1}{2} + \frac{1}{3} + \frac{1}{4} +\ldots)=\frac{1}{2}H_{\infty}$.

Since $H_{\infty}$ diverges, so does $H_{E}$.

3. Prove that $1+ \frac{1}{3} + \frac{1}{5} + \frac{1}{7} + \ldots$ does not converge.

Solution 3:

$H_{O}=\frac{1}{1}+ \frac{1}{3} +\frac{1}{5}+ \frac{1}{7}+ \ldots$ diverges as each term in this series is greater than the corresponding term of $H_{E}$, which we have just shown to diverge.

Cheers,

Nalin Pithwa.

# Famous harmonic series questions

Question 1:

The fourteenth century French polymath Nicolas Oresme proved that the harmonic series $1 + \frac{1}{2} + \frac{1}{3} + \frac{1}{4} + \frac{1}{5} + \ldots$ does not converge. Prove this result.

Question 2:

Prove that $\frac{1}{2} + \frac{1}{4} + \frac{1}{6} + \ldots$ does not converge.

Question 3:

Prove that $1 + \frac{1}{3} + \frac{1}{5} + \frac{1}{7} + \ldots$ does not converge.

Even if you solve all of these on your own, you won’t achieve the glory of that French polymath, but you will have “re-discovered” some “elements of truth of analysis”… You can give yourself a pat on the back!

Cheers,

Nalin Pithwa.

# Real Numbers, Sequences and Series: part 9

Definition.

We call a sequence $(a_{n})_{n=1}^{\infty}$ a Cauchy sequence if for all $\varepsilon >0$ there exists an $n_{0}$ such that $|a_{m}-a_{n}|<\varepsilon$ for all $m, n > n_{0}$.

Theorem:

Every Cauchy sequence is a bounded sequence and is convergent.

Proof.

By definition, for all $\varepsilon >0$ there is an $n_{0}$ such that

$|a_{m}-a_{n}|<\varepsilon$ for all $m, n>n_{0}$.

So, in particular, $|a_{n_{0}}-a_{n}|<\varepsilon$ for all $n > n_{0}$, that is,

$a_{n_{0}+1}-\varepsilon < a_{n} < a_{n_{0}+1}+\varepsilon$ for all $n>n_{0}$.

Let $M=\max \{ a_{1}, \ldots, a_{n_{0}}, a_{n_{0}+1}+\varepsilon\}$ and $m=\min \{ a_{1}, \ldots, a_{n_{0}}, a_{n_{0}+1}-\varepsilon\}$.

It is clear that $m \leq a_{n} \leq M$, for all $n \geq 1$.

We now prove that such a sequence is convergent. Let $\overline {\lim} a_{n}=L$ and $\underline{\lim}a_{n}=l$. Since any Cauchy sequence is bounded,

$-\infty < l \leq L < \infty$.

But since $(a_{n})_{n=1}^{\infty}$ is Cauchy, for every $\varepsilon >0$ there is an $n_{0}=n_{0}(\varepsilon)$ such that

$a_{n_{0}+1}-\varepsilon < a_{n} < a_{n_{0}+1}+\varepsilon$ for all $n>n_{0}$,

which implies that $a_{n_{0}+1}-\varepsilon \leq \underline{\lim}a_{n} =l \leq \overline{\lim}a_{n}=L \leq a_{n_{0}+1}+\varepsilon$. Thus, $L-l \leq 2\varepsilon$ for all $\varepsilon>0$. This is possible only if $L=l$.

QED.

Thus, we have established that the Cauchy criterion is both a necessary and sufficient criterion of convergence of a sequence. We state a few more results without proofs (exercises).
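A numerical illustration of the criterion (my own sketch, assuming only the definition above): for the convergent sequence of partial sums of $\sum \frac{1}{k^{2}}$, the tail differences $|a_{m}-a_{n}|$ shrink toward zero, while for the harmonic partial sums the gap $H_{2n}-H_{n}$ stays near $\ln 2$, so the Cauchy criterion fails.

```python
# Tail differences for a convergent and a divergent sequence of partial
# sums (illustrative sketch of the Cauchy criterion).

def partial_sums(term, N):
    s, out = 0.0, []
    for k in range(1, N + 1):
        s += term(k)
        out.append(s)
    return out

a = partial_sums(lambda k: 1 / k ** 2, 2000)   # partial sums of a convergent series
h = partial_sums(lambda k: 1 / k, 2000)        # harmonic partial sums

gap_a = a[1999] - a[999]   # a_2000 - a_1000: tiny, as Cauchy demands
gap_h = h[1999] - h[999]   # H_2000 - H_1000: near ln 2, refuses to shrink
print(gap_a, gap_h)
```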

Theorem:

For sequences $(a_{n})_{n=1}^{\infty}$ and $(b_{n})_{n=1}^{\infty}$:

(i) If $l \leq a_{n} \leq b_{n}$ and $\lim_{n \rightarrow \infty}b_{n}=l$, then $(a_{n})_{n=1}^{\infty}$ too is convergent and $\lim_{n \rightarrow \infty}a_{n}=l$.

(ii) If $a_{n} \leq b_{n}$, then $\overline{\lim}a_{n} \leq \overline{\lim}b_{n}$, $\underline{\lim}a_{n} \leq \underline{\lim}b_{n}$.

(iii) $\underline{\lim}(a_{n}+b_{n}) \geq \underline{\lim}a_{n}+\underline{\lim}b_{n}$

(iv) $\overline{\lim}(a_{n}+b_{n}) \leq \overline{\lim}{a_{n}}+ \overline{\lim}{b_{n}}$

(v) If $(a_{n})_{n=1}^{\infty}$ and $(b_{n})_{n=1}^{\infty}$ are both convergent, then $(a_{n}+b_{n})_{n=1}^{\infty}$, $(a_{n}-b_{n})_{n=1}^{\infty}$, and $(a_{n}b_{n})_{n=1}^{\infty}$ are convergent and we have $\lim{(a_{n} \pm b_{n})}=\lim{a_{n}} \pm \lim{b_{n}}$ and $\lim{a_{n}b_{n}}=\lim {a_{n}} \cdot \lim {b_{n}}$.

(vi) If $(a_{n})_{n=1}^{\infty}$, $(b_{n})_{n=1}^{\infty}$ are convergent and $\lim_{n \rightarrow \infty}b_{n}=l \neq 0$, then $(\frac{a_{n}}{b_{n}})_{n=1}^{\infty}$ is convergent and $\lim_{n \rightarrow \infty}\frac{a_{n}}{b_{n}}= \frac{\lim {a_{n}}}{\lim{b_{n}}}$.

Reference: Understanding Mathematics by Sinha, Karandikar et al. I have used this reference for all the previous articles on series and sequences.

More later,

Nalin Pithwa

# Real Numbers, Sequences and Series: Part 6

Theorem:

Given $x \in \Re_{+}$ and $n \in N$, we can find $y \in \Re$ such that $x=y^{n}$.

Proof:

Let $A=\{ u \in \Re_{+}: u^{n}<x\}$. If $x<1$, then $x^{n}<x$ and hence $x \in A$. On the other hand, if $x \geq 1$, then $1/2 \in A$. Thus, A is non-empty. Next, observe that if $v^{n}>x$, then v is an upper bound of A. In particular, $1+x$ is an upper bound of A.

By the least upper bound property, A admits a least upper bound. Let us denote it by y. We will rule out the possibilities $y^{n}>x$ and $y^{n}<x$, implying that $x=y^{n}$.

If $y^{n}<x$, let $a=\frac{x-y^{n}}{nx}$ and $z=y(1+a)$. It can be checked that $z^{n}<x$, so that $z \in A$. But $y<z$, contradicting the fact that y is the least upper bound of A. (We have used inequalities 7 and 8 in the previous blog.)

On the other hand, if $y^{n}>x$, let $b=\frac{y^{n}-x}{ny^{n}}$ and $w=y(1-b)$. Again, it can be verified that $w^{n}>x$ and hence w is an upper bound of A. But $w<y$, contradicting the fact that y is the least upper bound of A. Hence, $y^{n}=x$. QED.
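The least-upper-bound argument suggests a concrete procedure (my own illustration, not part of the reference): keep an interval whose left end lies in A and whose right end is an upper bound of A, and bisect it repeatedly.

```python
# Bisection sketch of extracting the nth root of a positive real x.
def nth_root(x, n, iterations=60):
    lo, hi = 0.0, max(1.0, x)  # hi ** n >= x, so hi is an upper bound of A
    for _ in range(iterations):
        mid = (lo + hi) / 2
        if mid ** n < x:
            lo = mid           # mid lies in A: the root is above it
        else:
            hi = mid           # mid is an upper bound of A
    return (lo + hi) / 2

print(nth_root(2.0, 2))   # about 1.41421356, the square root of 2
```

After 60 halvings the interval is narrower than floating-point precision, so the midpoint is the best available approximation to the root.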

In particular, we see that there is an element $\alpha \in \Re$ such that $\alpha^{2}=2$, and hence also $(-\alpha)^{2}=2$, which means that the equation $x^{2}=2$ has two solutions. The positive one of those two solutions is $\sqrt{2}$. In fact, the above theorem guarantees the extraction of the square root, cube root, indeed the nth root, of any positive number. You could ask at this stage whether this guarantees the extraction of the square root of a negative number. The answer is clearly no. Indeed, we have

$x^{2} \geq 0$ for $x \in \Re$.

Remark.

We can further extend $\Re$ to include numbers whose squares are negative, which you know leads to the idea of complex numbers.

We have shown that Q is a subset of $\Re$. One can also show that between any two distinct real numbers there is a rational number. This naturally leads to the decimal representation of a real number: given any real number x and any $q \in N$, we can get a unique $a_{0} \in Z$ and unique $a_{1},a_{2}, \ldots, a_{q} \in \{0, 1, \ldots, 9\}$ such that

$|x-(a_{0}+a_{1}/10+a_{2}/100+\ldots + \frac{a_{q}}{10^{q}})|<\frac{1}{10^{q}}$

You are invited to try to prove this familiar decimal representation.
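A small computational sketch (mine, and limited by floating-point precision) of how the digits can be produced: take the integer part for $a_{0}$, then repeatedly multiply the fractional part by 10. The truncation error stays below $10^{-q}$, as the statement requires.

```python
# Generating the first q decimal digits of a real number (illustrative sketch).
import math

def decimal_digits(x, q):
    a0 = math.floor(x)
    digits, frac = [], x - a0
    for _ in range(q):
        frac *= 10
        d = math.floor(frac)
        digits.append(d)
        frac -= d
    return a0, digits

x = math.sqrt(2)
a0, digits = decimal_digits(x, 6)
approx = a0 + sum(d / 10 ** (i + 1) for i, d in enumerate(digits))
print(a0, digits)              # 1 [4, 1, 4, 2, 1, 3]
print(abs(x - approx) < 1e-6)  # True: error below 10**(-q)
```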

If we have a terminating decimal representation of a real number, then surely it is rational. But we know that rationals like 1/3, 1/7, 1/11 do not have a terminating decimal expansion.

It is clear that the decimal representation of $\sqrt{2}$ cannot terminate as it is not rational. There are many elements of $\Re$ which are not in Q. Any element of $\Re$ which is not in Q is called an irrational number, and irrational numbers cannot have terminating decimal representations.

More later,

Nalin Pithwa