Intel Pentium P5 floating point unit error (1994) — an RMO problem !!!


Two number theorists, bored in a chemistry lab, played a game with a large flask containing 2 litres of a colourful chemical solution and an ultra-accurate pipette. They took turns recalling a prime number p such that p+2 is also a prime number; the first number theorist would then pipette out \frac{1}{p} litres of the solution and the second \frac{1}{p+2} litres. How many times do they have to play this game to empty the flask completely?
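For readers who want to experiment before seeing the solution, here is a small numerical sketch (Python; the function name is my own, not from any reference) that tallies how much liquid leaves the flask once every qualifying prime up to a cutoff has been used. The title's mention of the Pentium FDIV bug is itself a hint: that bug was discovered in 1994 by Thomas Nicely while computing sums over exactly such prime pairs.

```python
def litres_removed(limit):
    """Total volume pipetted out using every prime pair (p, p+2) with p <= limit."""
    # Sieve of Eratosthenes up to limit + 2, so that p + 2 can be tested too.
    is_prime = bytearray([1]) * (limit + 3)
    is_prime[0] = is_prime[1] = 0
    for i in range(2, int((limit + 2) ** 0.5) + 1):
        if is_prime[i]:
            is_prime[i * i :: i] = bytearray(len(is_prime[i * i :: i]))
    total = 0.0
    for p in range(3, limit + 1):
        if is_prime[p] and is_prime[p + 2]:
            total += 1.0 / p + 1.0 / (p + 2)
    return total

print(litres_removed(100_000))  # watch how slowly this grows with the cutoff
```

Raising the cutoff and watching the total is instructive; I will say no more, so as not to spoil the problem.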


A bit of real analysis is required.


I will publish the reference when I post the solution, so that all students/readers can curb their impulse to see the solution immediately!

I hope you will be hooked by the problem in a second! Here is a beautiful application of pure math! 🙂


Nalin Pithwa

PS: I do not know if the above problem did (or will) appear as an RMO question. It is just my wild fun with math, to kindle students' interest in analysis! 🙂

Euler Series question and solution


Mengoli had posed the following series to be evaluated:

1+ \frac{1}{2^{2}} + \frac{1}{3^{2}} + \frac{1}{4^{2}} + \ldots.

Some great mathematicians, including Leibniz, Johann Bernoulli and d'Alembert, failed to evaluate this infinite series. Euler established himself as the best mathematician of Europe (in fact, one of the greatest mathematicians in history) by evaluating this series, initially by a not-so-rigorous method. Later on, he gave alternative, more rigorous ways of getting the same result.

Prove that the series converges and is bounded above. Then, try to evaluate the series.


Due to Nicole Oresme:

Consider the following infinite series: \phi(s)=1 + \frac{1}{2^{s}} + \frac{1}{3^{s}} + \frac{1}{4^{s}} + \ldots

We can re-write the preceding series as follows: \phi(s) = 1+ (\frac{1}{2^{s}}+\frac{1}{3^{s}}) + (\frac{1}{4^{s}} + \frac{1}{5^{s}} + \frac{1}{6^{s}} + \frac{1}{7^{s}}) + \ldots, which in turn is less than

1 + (\frac{2}{2^{s}}) + (\frac{4}{4^{s}}) + \ldots. Now, the RHS of this can be re-written as

1+(\frac{2}{2^{s}}) + (\frac{4}{4^{s}}) + \ldots=1 + \frac{1}{2^{(s-1)}}+ (\frac{1}{2^{(s-1)}})^{2} + \ldots, which is a geometric series with common ratio \frac{1}{2^{(s-1)}}, and its sum is given by

\frac{1}{1-\frac{1}{2^{(s-1)}}}.

Now, we can say that \phi(s) will converge if \frac{1}{2^{(s-1)}}<1 \Longrightarrow s >1.
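As a quick numerical sanity check (Python; a sketch of my own, not from the reference), the partial sums of \phi(s) do stay below the sum \frac{1}{1-2^{1-s}} of the geometric series just described, for any s>1:

```python
def phi_partial(s, terms):
    """Partial sum of phi(s) = sum over n of 1/n^s."""
    return sum(1.0 / n ** s for n in range(1, terms + 1))

for s in (1.5, 2.0, 3.0):
    # Sum of the geometric series 1 + 1/2^(s-1) + (1/2^(s-1))^2 + ...
    geometric_bound = 1.0 / (1.0 - 2.0 ** (1.0 - s))
    print(s, phi_partial(s, 100_000), geometric_bound)
```

For s=2 the bound equals exactly 2.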

In order to prove what is asked, we start with \phi(s)=1 + \frac{1}{2^{s}}+ \frac{1}{3^{s}}+ \frac{1}{4^{s}}+\ldots

Then, we multiply both sides by \frac{1}{2^{s}} and subtract the resulting equation from the preceding one to get

(1-\frac{1}{2^{s}})\phi(s)=1+\frac{1}{3^{s}}+\frac{1}{5^{s}}+\frac{1}{7^{s}}+\ldots

where all the terms containing the reciprocals of the sth power of even numbers vanished.

Repeating this procedure with \frac{1}{3^{s}} gives

(1-\frac{1}{2^{s}})(1-\frac{1}{3^{s}})\phi(s)=1+\frac{1}{5^{s}}+ \frac{1}{7^{s}} + \ldots

where all terms containing the reciprocals of the sth power of multiples of 3 vanished.

By continuing this with all prime numbers, we get

\prod_{p}(1-\frac{1}{p^{s}})\phi(s)=1, where p represents all prime numbers. Thus, we get

\phi(s)=1 + \frac{1}{2^{s}} + \frac{1}{3^{s}} + \frac{1}{4^{s}} + \ldots =\frac{1}{\prod_{p}(1-\frac{1}{p^{s}})}

This is a remarkable result because the LHS is concerned with only positive integers, whereas the RHS is concerned with only primes. This result is known as the “Golden Key of Euler”.
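The Golden Key can be tested numerically; the following small sketch (Python, my own illustration, not from the reference) compares a truncated sum over the integers with a truncated product over the primes for s=2:

```python
def primes_up_to(n):
    """Sieve of Eratosthenes returning all primes <= n."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(sieve[i * i :: i]))
    return [p for p in range(2, n + 1) if sieve[p]]

s = 2
lhs = sum(1.0 / n ** s for n in range(1, 200_001))  # sum over positive integers
rhs = 1.0
for p in primes_up_to(1000):                        # product over primes
    rhs *= 1.0 / (1.0 - p ** (-s))
print(lhs, rhs)  # both close to pi^2/6 ~ 1.645
```

Both truncations agree to roughly three decimal places; raising the two cutoffs moves both toward the same limit.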

Riemann created his famous \zeta-function by extending the variable s to the entire complex plane, except s=1, with

\zeta(s)=1+ \frac{1}{2^{s}} + \frac{1}{3^{s}} + \ldots .

This function is now very famous as the Riemann zeta function.

How can we apply the Golden Key of Euler to Mengoli’s question that we started with?

Ans. Substitute s=2. The Golden Key then expresses the sum as \frac{1}{\prod_{p}(1-\frac{1}{p^{2}})}, while the geometric-series estimate above gives \phi(2)<\frac{1}{1-\frac{1}{2}}=2.

Hence, the given series converges, with upper bound 2.

Euler’s proof (1735):

The proof ran as follows:

It is a little roundabout way of arriving at the correct answer from a known result. Consider Maclaurin's series expansion of \sin{(x)}:

\sin{(x)}=x - \frac{x^{3}}{3!} + \frac{x^{5}}{5!} -\frac{x^{7}}{7!} + \frac{x^{9}}{9!} - \ldots

By dividing both sides by x and then substituting y=x^{2} on the right side, we get the following:

\frac{\sin{(x)}}{x} = 1-\frac{y}{3!} + \frac{y^{2}}{5!} - \frac{y^{3}}{7!} + \ldots

By taking a special value of x=n\pi (and, hence y=n^{2}\pi^{2}), we get the following:

\frac{\sin (n\pi)}{(n\pi)}=0=1-\frac{y}{3!} + \frac{y^{2}}{5!} - \frac{y^{3}}{7!}+ \ldots

Note that the preceding equation is not a polynomial, but an infinite series. But Euler still treated it as a polynomial (that is why it was not accepted as a rigorous result) and observed that this "infinite" polynomial has roots y_{n}=n^{2}\pi^{2}. Then, Euler used the fact that, when the constant term is made unity, the sum of the reciprocals of the roots equals the negative of the coefficient of the linear term (here, the y-term). (Check this for a quadratic, as a homework quiz, to be convinced.) So, Euler arrived at the following result:

\sum_{n=1}^{\infty}\frac{1}{y_{n}}=\frac{1}{3!}=\frac{1}{6}. With y_{n}=n^{2}\pi^{2}, we get the following:

\sum_{n=1}^{\infty}\frac{1}{n^{2}\pi^{2}}=\frac{1}{6}, or \sum_{n=1}^{\infty}\frac{1}{n^{2}}=\frac{\pi^{2}}{6}.
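The homework quiz for the quadratic case is a two-line Vieta's-formulas computation, included here for completeness: if

1 + c_{1}y + c_{2}y^{2} = c_{2}(y-r_{1})(y-r_{2}),

then r_{1}r_{2}=\frac{1}{c_{2}} and r_{1}+r_{2}=-\frac{c_{1}}{c_{2}}, so

\frac{1}{r_{1}} + \frac{1}{r_{2}} = \frac{r_{1}+r_{2}}{r_{1}r_{2}} = -c_{1}.

For 1-\frac{y}{3!}+\frac{y^{2}}{5!}-\ldots, the coefficient of y is -\frac{1}{3!}, so the sum of the reciprocals of the roots is \frac{1}{3!}=\frac{1}{6}, exactly as Euler used.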

Another proof, also attributed to Euler, that uses the series expansion of \sin{(x)}, runs as follows:

\sin {(x)} has roots given by 0, \pm \pi, \pm 2\pi, \pm 3\pi, \ldots. So does the infinite product that Euler reportedly constructed:

\sin{(x)} = x(1-\frac{x^{2}}{\pi^{2}})(1-\frac{x^{2}}{4\pi^{2}})(1-\frac{x^{2}}{9\pi^{2}})\cdots = x\prod_{n=1}^{\infty}(1-\frac{x^{2}}{n^{2}\pi^{2}})

So, Euler considered the preceding equation to be equivalent to:

\sin{(x)}=x - \frac{x^{3}}{3!} + \frac{x^{5}}{5!} - \frac{x^{7}}{7!} + \ldots

Then, he equated the coefficients of x^{3} in the two expressions to get the result:

\sum_{n=1}^{\infty}\frac{1}{n^{2}(\pi)^{2}} = \frac{1}{3!} = \frac{1}{6}.

Thus, \sum_{n=1}^{\infty}\frac{1}{n^{2}}=\frac{(\pi)^{2}}{6}.
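Euler's value is easy to test numerically; a quick sketch (Python, my own check):

```python
import math

n_terms = 100_000
partial = sum(1.0 / n ** 2 for n in range(1, n_terms + 1))
# The tail sum_{n > N} 1/n^2 is about 1/N, so the partial sum
# undershoots pi^2/6 by roughly 1e-5 here.
print(partial, math.pi ** 2 / 6)
```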

Later on, Euler provided a few more alternative and rigorous proofs of this result.

Reference: Popular Problems and Puzzles in Mathematics by Asok Kumar Mallik, IISc Press, Foundation Books.


Hope you all enjoyed it — learning to think like Euler! By the way, it took a long time for even analysis to become as rigorous as it is now… You might like this observation a lot. 🙂

Nalin Pithwa.

Huygens’ problem to Leibniz: solution

In the Feb 23 2018 blog problem, we posed the following question:

Sum the following infinite series:

1+\frac{1}{3} + \frac{1}{6} + \frac{1}{10} + \frac{1}{15}+ \ldots.


The sum can be written as:

S=\sum_{n=1}^{\infty}P_{n}, where P_{n}=\frac{2}{n(n+1)}=2(\frac{1}{n}-\frac{1}{n+1}).

Thus, S=2(1-\frac{1}{2} + \frac{1}{2} - \frac{1}{3} + \frac{1}{3} - \frac{1}{4} + \ldots)=2, since the series telescopes. This is the answer.
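The telescoping can be verified exactly with rational arithmetic; a short sketch (Python): the N-th partial sum should equal 2-\frac{2}{N+1}.

```python
from fractions import Fraction

N = 1000
partial = sum(Fraction(2, n * (n + 1)) for n in range(1, N + 1))
# Telescoping predicts exactly 2(1 - 1/(N+1)).
print(partial == 2 - Fraction(2, N + 1))
```

Because `Fraction` is exact, the telescoped closed form matches the partial sum with no rounding error at all.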

If you think deeper, this also calls for some discussion of rearrangements of infinite series. For the time being, we consider that outside our scope.


Nalin Pithwa.

Famous harmonic series — solutions

Solutions to previous blog questions on harmonic series are presented below:

Basic Reference: Popular Problems and Puzzles in Mathematics by Asok Kumar Mallik, IISc Press, Foundation Books.

Detailed Reference:

Mallik, A. K. 2007: “Curious consequences of simple sequences,” Resonance, (January), 23-37.

Personal opinion only: Resonance is one of the best Indian magazines/journals for elementary and higher math and physics. It behooves you to subscribe to it. It will help in the RMO, the INMO and the Madhava Mathematics Competition of India.


  1. The fourteenth-century French polymath Nicole Oresme proved that the harmonic series 1+ \frac{1}{2} + \frac{1}{3} + \frac{1}{4} + \ldots does not converge. Prove this result.

Solution 1:

Nicole Oresme provided a simple proof; it involves mere grouping of terms, noticing patterns and making comparisons:

\lim_{n \rightarrow \infty}H_{n}=H_{\infty}=1+\frac{1}{2} + \frac{1}{3} + \frac{1}{4} + \frac{1}{5} + \frac{1}{6} + \frac{1}{7} + \frac{1}{8} + \ldots

> 1 + \frac{1}{2} + \frac{1}{4} + \frac{1}{4} + \frac{1}{8} + \frac{1}{8} + \frac{1}{8} + \frac{1}{8} + \ldots

> 1+ \frac{1}{2} + \frac{1}{2} + \frac{1}{2} + \ldots

Therefore, H_{\infty} diverges as we go on adding one half indefinitely. Here is another way to prove this:

Consider \lim_{n \rightarrow \infty}H_{n}=H_{\infty}=1 + \frac{1}{2} + \frac{1}{3} + \frac{1}{4} + \ldots

By multiplying and dividing both sides by 2 and then by regrouping the terms, we get:

H_{\infty}=\frac{2}{2} + \frac{2}{4} + \frac{2}{6} + \frac{2}{8} + \ldots

H_{\infty}= \frac{1+1}{2} + \frac{1+1}{4} + \frac{1+1}{6} + \frac{1+1}{8} + \ldots

H_{\infty}= (\frac{1}{2} + \frac{1}{2}) + (\frac{1}{4} + \frac{1}{4}) + (\frac{1}{6} + \frac{1}{6}) + (\frac{1}{8} + \frac{1}{8}) + \ldots

Comparing term by term with the harmonic series (since \frac{1}{2}<1, \frac{1}{4}<\frac{1}{3}, \frac{1}{6}<\frac{1}{5}, and so on, while the remaining terms match), we get H_{\infty}<1+ \frac{1}{2} + \frac{1}{3} + \frac{1}{4} + \frac{1}{5} + \frac{1}{6} + \frac{1}{7} + \ldots

leading to the contradiction that H_{\infty}<H_{\infty}! The contradiction arose because only finite numbers remain unaltered when multiplied and divided by 2. So, H_{\infty} is not a finite number; that is, the series diverges.
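Oresme's grouping actually gives a quantitative rate: H_{2^{k}} \geq 1 + \frac{k}{2}, so the harmonic sum eventually passes any bound, but extremely slowly. A numerical sketch (Python, my own illustration):

```python
H = 0.0
n = 0
for k in range(15):
    while n < 2 ** k:
        n += 1
        H += 1.0 / n
    # Oresme's grouping guarantees H_{2^k} >= 1 + k/2.
    assert H >= 1 + k / 2 - 1e-12
print(H)  # H at n = 16384 is still only about 10.3
```

Despite diverging, the sum of the first 16384 terms has not yet reached 11; divergence here is glacial.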

2. Prove that \frac{1}{2} + \frac{1}{4} + \frac{1}{6} + \ldots does not converge.

Solution 2:

H_{E}=\frac{1}{2} + \frac{1}{4} + \frac{1}{6} + \frac{1}{8} + \ldots=\frac{1}{2}(1+\frac{1}{2} + \frac{1}{3} + \frac{1}{4} +\ldots)=\frac{1}{2}H_{\infty}.

Since H_{\infty} diverges, so does H_{E}.

3. Prove that 1+ \frac{1}{3} + \frac{1}{5} + \frac{1}{7} + \ldots does not converge.

Solution 3:

H_{O}=\frac{1}{1}+ \frac{1}{3} +\frac{1}{5}+ \frac{1}{7}+ \ldots diverges, as each term in this series is greater than the corresponding term of H_{E}, which we have just shown to diverge.


Nalin Pithwa.

Famous harmonic series questions

Question 1:

The fourteenth-century French polymath Nicole Oresme proved that the harmonic series 1 + \frac{1}{2} + \frac{1}{3} + \frac{1}{4} + \frac{1}{5} + \ldots does not converge. Prove this result.

Question 2:

Prove that \frac{1}{2} + \frac{1}{4} + \frac{1}{6} + \ldots does not converge.

Question 3:

Prove that 1 + \frac{1}{3} + \frac{1}{5} + \frac{1}{7} + \ldots does not converge.

Even if you solve all of these on your own, you won’t achieve the glory of that French polymath, but you will have “re-discovered” some elements of the truth of analysis… You can give yourself a pat on the back!


Nalin Pithwa.

Real Numbers, Sequences and Series: part 9


We call a sequence (a_{n})_{n=1}^{\infty} a Cauchy sequence if for all \varepsilon >0 there exists an n_{0} such that |a_{m}-a_{n}|<\varepsilon for all m, n > n_{0}.


Every Cauchy sequence is a bounded sequence and is convergent.


By definition, for all \varepsilon >0 there is an n_{0} such that

|a_{m}-a_{n}|<\varepsilon for all m, n>n_{0}.

So, in particular, |a_{n_{0}+1}-a_{n}|<\varepsilon for all n > n_{0}, that is,

a_{n_{0}+1}-\varepsilon<a_{n}<a_{n_{0}+1}+\varepsilon for all n>n_{0}.

Let M=\max \{ a_{1}, \ldots, a_{n_{0}}, a_{n_{0}+1}+\varepsilon\} and m=\min \{ a_{1}, \ldots, a_{n_{0}}, a_{n_{0}+1}-\varepsilon\}.

It is clear that m \leq a_{n} \leq M, for all n \geq 1.

We now prove that such a sequence is convergent. Let \overline {\lim} a_{n}=L and \underline{\lim}a_{n}=l. Since any Cauchy sequence is bounded,

-\infty < l \leq L < \infty.

But since (a_{n})_{n=1}^{\infty} is Cauchy, for every \varepsilon >0 there is an n_{0}=n_{0}(\varepsilon) such that

a_{n_{0}+1}-\varepsilon<a_{n}<a_{n_{0}+1}+\varepsilon for all n>n_{0}.

which implies that a_{n_{0}+1}-\varepsilon \leq \underline{\lim}a_{n} =l \leq \overline{\lim}a_{n}=L \leq a_{n_{0}+1}+\varepsilon. Thus, L-l \leq 2\varepsilon for all \varepsilon>0. This is possible only if L=l.
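As a concrete illustration (Python; my own example, not from the text), take a_{n}=\sum_{k=1}^{n}\frac{1}{k^{2}}. Since the tail beyond index n is below \frac{1}{n}, choosing n_{0}=\lceil 1/\varepsilon \rceil witnesses the Cauchy condition:

```python
import math

# a[n] = partial sum of 1/k^2 up to k = n
a = [0.0]
for k in range(1, 20_001):
    a.append(a[-1] + 1.0 / k ** 2)

eps = 1e-3
n0 = math.ceil(1.0 / eps)  # tail bound: |a_m - a_n| <= 1/n < eps for m > n > n0
# Check the Cauchy condition on a sample of index pairs beyond n0.
for n in (n0 + 1, 2 * n0, 5 * n0):
    for m in (n + 1, 10 * n0, 20_000):
        assert abs(a[m] - a[n]) < eps
print("Cauchy condition verified on sample pairs")
```

Of course, a finite check is only an illustration; the point is that the explicit tail estimate \frac{1}{n} supplies the n_{0} demanded by the definition.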


Thus, we have established that the Cauchy criterion is both a necessary and sufficient condition for the convergence of a sequence. We state a few more results without proofs (left as exercises).


For sequences (a_{n})_{n=1}^{\infty} and (b_{n})_{n=1}^{\infty}:

(i) If l \leq a_{n} \leq b_{n} and \lim_{n \rightarrow \infty}b_{n}=l, then (a_{n})_{n=1}^{\infty} too is convergent and \lim_{n \rightarrow \infty}a_{n}=l.

(ii) If a_{n} \leq b_{n}, then \overline{\lim}a_{n} \leq \overline{\lim}b_{n}, \underline{\lim}a_{n} \leq \underline{\lim}b_{n}.

(iii) \underline{\lim}(a_{n}+b_{n}) \geq \underline{\lim}a_{n}+\underline{\lim}b_{n}

(iv) \overline{\lim}(a_{n}+b_{n}) \leq \overline{\lim}{a_{n}}+ \overline{\lim}{b_{n}}

(v) If (a_{n})_{n=1}^{\infty} and (b_{n})_{n=1}^{\infty} are both convergent, then (a_{n}+b_{n})_{n=1}^{\infty}, (a_{n}-b_{n})_{n=1}^{\infty}, and (a_{n}b_{n})_{n=1}^{\infty} are convergent, and we have \lim{(a_{n} \pm b_{n})}=\lim{a_{n}} \pm \lim{b_{n}} and \lim{a_{n}b_{n}}=\lim{a_{n}} \cdot \lim{b_{n}}.

(vi) If (a_{n})_{n=1}^{\infty}, (b_{n})_{n=1}^{\infty} are convergent and \lim_{n \rightarrow \infty}b_{n}=l \neq 0, then (\frac{a_{n}}{b_{n}})_{n=1}^{\infty} is convergent and \lim_{n \rightarrow \infty}\frac{a_{n}}{b_{n}}= \frac{\lim {a_{n}}}{\lim{b_{n}}}.
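The arithmetic rules in (v) and (vi) are easy to sanity-check numerically; a sketch (Python, my own example) with a_{n}=1+\frac{1}{n} \rightarrow 1 and b_{n}=2-\frac{1}{n} \rightarrow 2:

```python
def a(n):
    return 1.0 + 1.0 / n

def b(n):
    return 2.0 - 1.0 / n

N = 10 ** 6  # a large index standing in for the limit
assert abs((a(N) + b(N)) - (1.0 + 2.0)) < 1e-5  # limit of a sum
assert abs((a(N) - b(N)) - (1.0 - 2.0)) < 1e-5  # limit of a difference
assert abs(a(N) * b(N) - (1.0 * 2.0)) < 1e-5    # limit of a product
assert abs(a(N) / b(N) - (1.0 / 2.0)) < 1e-5    # limit of a quotient (lim b != 0)
print("limit laws verified at N =", N)
```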

Reference: Understanding Mathematics by Sinha, Karandikar et al. I have used this reference for all the previous articles on series and sequences.

More later,

Nalin Pithwa