Real Numbers, Sequences and Series: part 9


We call a sequence (a_{n})_{n=1}^{\infty} a Cauchy sequence if for all \varepsilon >0 there exists an n_{0} such that |a_{m}-a_{n}|<\varepsilon for all m, n > n_{0}.


Every Cauchy sequence is a bounded sequence and is convergent.


By definition, for all \varepsilon >0 there is an n_{0} such that

|a_{m}-a_{n}|<\varepsilon for all m, n>n_{0}.

So, in particular, taking m=n_{0}+1, we get |a_{n_{0}+1}-a_{n}|<\varepsilon for all n > n_{0}, that is,

a_{n_{0}+1}-\varepsilon<a_{n}<a_{n_{0}+1}+\varepsilon for all n>n_{0}.

Let M=\max \{ a_{1}, \ldots, a_{n_{0}}, a_{n_{0}+1}+\varepsilon\} and m=\min \{ a_{1}, \ldots, a_{n_{0}}, a_{n_{0}+1}-\varepsilon\}.

It is clear that m \leq a_{n} \leq M, for all n \geq 1.

We now prove that such a sequence is convergent. Let \overline {\lim} a_{n}=L and \underline{\lim}a_{n}=l. Since any Cauchy sequence is bounded,

-\infty < l \leq L < \infty.

But since (a_{n})_{n=1}^{\infty} is Cauchy, for every \varepsilon >0 there is an n_{0}=n_{0}(\varepsilon) such that

a_{n_{0}+1}-\varepsilon<a_{n}<a_{n_{0}+1}+\varepsilon for all n>n_{0},

which implies that a_{n_{0}+1}-\varepsilon \leq \underline{\lim}a_{n} =l \leq \overline{\lim}a_{n}=L \leq a_{n_{0}+1}+\varepsilon. Thus, L-l \leq 2\varepsilon for all \varepsilon>0. This is possible only if L=l.
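As a numerical aside (a Python sketch, not part of the proof; the series and the cutoffs below are arbitrary illustrative choices), one can watch the Cauchy condition hold for the partial sums of \sum \frac{1}{k^{2}} and fail, at the same scale, for the harmonic partial sums:

```python
def partial_sums(term, n_terms):
    """Return the first n_terms partial sums of the series sum term(k)."""
    sums, s = [], 0.0
    for k in range(1, n_terms + 1):
        s += term(k)
        sums.append(s)
    return sums

def max_gap(seq, n0):
    """max |a_m - a_n| over all m, n > n0 -- the quantity the Cauchy
    condition requires to be < epsilon for n0 large enough."""
    tail = seq[n0:]
    return max(tail) - min(tail)

a = partial_sums(lambda k: 1.0 / k**2, 5000)  # convergent series
b = partial_sums(lambda k: 1.0 / k, 5000)     # harmonic series

print(max_gap(a, 1000))  # tiny: the tail has settled down
print(max_gap(b, 1000))  # order 1: the harmonic partial sums are not Cauchy
```

Of course, a finite computation can only suggest the behaviour; the proof above is what actually establishes it.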


Thus, we have established that the Cauchy criterion is both a necessary and sufficient criterion of convergence of a sequence. We state a few more results without proofs (exercises).


For sequences (a_{n})_{n=1}^{\infty} and (b_{n})_{n=1}^{\infty}, the following hold:

(i) If l \leq a_{n} \leq b_{n} and \lim_{n \rightarrow \infty}b_{n}=l, then (a_{n})_{n=1}^{\infty} too is convergent and \lim_{n \rightarrow \infty}a_{n}=l.

(ii) If a_{n} \leq b_{n}, then \overline{\lim}a_{n} \leq \overline{\lim}b_{n}, \underline{\lim}a_{n} \leq \underline{\lim}b_{n}.

(iii) \underline{\lim}(a_{n}+b_{n}) \geq \underline{\lim}a_{n}+\underline{\lim}b_{n}

(iv) \overline{\lim}(a_{n}+b_{n}) \leq \overline{\lim}{a_{n}}+ \overline{\lim}{b_{n}}

(v) If (a_{n})_{n=1}^{\infty} and (b_{n})_{n=1}^{\infty} are both convergent, then (a_{n}+b_{n})_{n=1}^{\infty}, (a_{n}-b_{n})_{n=1}^{\infty}, and (a_{n}b_{n})_{n=1}^{\infty} are convergent and we have \lim{(a_{n} \pm b_{n})}=\lim{a_{n}} \pm \lim{b_{n}}, and \lim{a_{n}b_{n}}=\lim {a_{n}} \cdot \lim {b_{n}}.

(vi) If (a_{n})_{n=1}^{\infty}, (b_{n})_{n=1}^{\infty} are convergent and \lim_{n \rightarrow \infty}b_{n}=l \neq 0, then (\frac{a_{n}}{b_{n}})_{n=1}^{\infty} is convergent and \lim_{n \rightarrow \infty}\frac{a_{n}}{b_{n}}= \frac{\lim {a_{n}}}{\lim{b_{n}}}.
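A small Python sketch of how the inequality in (iii) can be strict (the sequences a_{n}=(-1)^{n}, b_{n}=(-1)^{n+1} and the tail length are arbitrary choices): the sum is identically zero, while each liminf alone is -1. For these periodic sequences the liminf is just the minimum over a long tail:

```python
# a_n = (-1)^n and b_n = (-1)^(n+1): each oscillates between -1 and 1,
# but their sum is identically zero.
a = [(-1) ** n for n in range(1, 2001)]
b = [(-1) ** (n + 1) for n in range(1, 2001)]
s = [x + y for x, y in zip(a, b)]

# For eventually periodic sequences, liminf = min over a long tail.
liminf_a = min(a[1000:])   # -1
liminf_b = min(b[1000:])   # -1
liminf_s = min(s[1000:])   #  0

print(liminf_s, liminf_a + liminf_b)  # 0 >= -2, strictly
```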

Reference: Understanding Mathematics by Sinha, Karandikar et al. I have used this reference for all the previous articles on series and sequences.

More later,

Nalin Pithwa


Real Numbers, Sequences and Series: Part 7


Discover (and justify) an essential difference between the decimal expansions of rational and irrational numbers.

Giving a decimal expansion of a real number means that given n \in N, we can find a_{0} \in Z and 0 \leq a_{1}, \ldots, a_{n} \leq 9 such that

|x-\sum_{k=0}^{n}\frac{a_{k}}{10^{k}}|< \frac{1}{10^{n}}

In other words, if we write

x_{n}=a_{0}+\frac{a_{1}}{10}+\frac{a_{2}}{10^{2}}+\ldots +\frac{a_{n}}{10^{n}}

then x_{1}, x_{2}, x_{3}, \ldots, x_{n}, \ldots are approximate values of x correct up to the first, second, third, …, nth place of decimal respectively. So when we write a real number by a non-terminating decimal expansion, we mean that we have a scheme of approximation of the real numbers by terminating decimals in such a way that if we stop after the nth place of decimal expansion, then the maximum error committed by us is 10^{-n}.
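The truncation scheme just described can be sketched in Python for a nonnegative x (the choice of \sqrt{2} below is merely an illustration):

```python
import math

def decimal_truncation(x, n):
    """Return x_n = a_0 + a_1/10 + ... + a_n/10^n for x >= 0,
    i.e. x truncated to n decimal places."""
    scale = 10 ** n
    return math.floor(x * scale) / scale

x = math.sqrt(2)
for n in range(1, 8):
    x_n = decimal_truncation(x, n)
    # the error after truncating at the nth place is below 10^(-n)
    assert abs(x - x_n) < 10 ** (-n)
    print(n, x_n)
```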

This brings us to the question of successive approximations of a number. It is obvious that when we have some approximation we ought to have some notion of the error committed. Often we try to reach a number through its approximate values, and the context determines the maximum error admissible. Now, if the error admissible is \varepsilon >0, and x_{1}, x_{2}, x_{3}, \ldots is a scheme of successive approximation of a number x, then we should be able to tell at which stage the desired accuracy is achieved. In fact, we should find an n such that |x-x_{n}|<\varepsilon. But this could be a chance event. If the error exceeds \varepsilon at a later stage, then the scheme cannot be a good approximation as it is not “stable”. Instead, it would be desirable that accuracy is achieved at a certain stage and it should not get worse after that stage. This can be realized by demanding that there is a natural number n_{0} such that |x-x_{n}|<\varepsilon for all n > n_{0}. It is clear that n_{0} will depend on \varepsilon. This leads to the notion of convergence, which is the subject of a later blog.

More later,

Nalin Pithwa

What is analysis and why do analysis — part 2 of 2

We had discussed this on Nov 17 2015 blog. We finish the article with more examples from the work of Prof. Terence Tao. (If you like it, please send a thanks to him :-))

Example 1. (Interchanging limits and integrals).

For any real number y, we have

\int_{-\infty}^{\infty}\frac {dx}{1+(x-y)^{2}}=\arctan (x-y)\mid_{x=-\infty}^{\infty} which equals \frac {\pi}{2}-(-\frac {\pi}{2})=\pi.


Taking limits as y \rightarrow \infty, we should obtain

\int_{-\infty}^{\infty}\lim_{y \rightarrow \infty}\frac {dx}{1+(x-y)^{2}}=\lim_{y \rightarrow \infty} \int_{-\infty}^{\infty}\frac {dx}{1+(x-y)^{2}}=\pi

But, for every x, we have \lim_{y \rightarrow \infty} \frac {1}{1+(x-y)^{2}}=0. So, we seem to have concluded that 0=\pi. What was the problem with the above argument? Should one abandon the (very useful) technique of interchanging limits and integrals?
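A quick numerical check in Python makes the tension concrete (the cutoff R=10^{8} and the sample points are arbitrary stand-ins for the improper integral and the limits): for every y the integral is essentially \pi, yet the integrand dies pointwise as y grows.

```python
import math

def integral(y, R):
    """Integral of dx / (1 + (x - y)^2) over (-R, R), evaluated via the
    antiderivative arctan(x - y)."""
    return math.atan(R - y) - math.atan(-R - y)

# For each fixed y, the (truncated) integral is very close to pi.
for y in (0.0, 10.0, 100.0):
    print(y, integral(y, R=1e8))

# Yet at any fixed x, the integrand tends to 0 as y -> infinity.
x = 3.0
for y in (10.0, 100.0, 1000.0):
    print(y, 1.0 / (1.0 + (x - y) ** 2))
```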

Example 2. Interchanging limits and derivatives.

Observe that if \varepsilon > 0, then

\frac {d}{dx}\frac {x^{3}}{\varepsilon^{2}+x^{2}}=\frac {3x^{2}(\varepsilon^{2}+x^{2})-2x^{4}}{(\varepsilon^{2}+x^{2})^{2}},

and in particular that

\frac {d}{dx}\frac {x^{3}}{\varepsilon^{2}+x^{2}}\mid_{x=0}=0.

Taking limits as \varepsilon \rightarrow 0, one might then expect that

\frac {d}{dx}\frac {x^{3}}{0+x^{2}}\mid_{x=0}=0.

But, the right hand side is \frac {dx}{dx}=1. Does this mean that it is always illegitimate to interchange limits and derivatives?
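A small Python sketch shows the same discontinuity in the difference quotients (the step size h=10^{-6} and the value \varepsilon=0.01 are arbitrary choices):

```python
def f(x, eps):
    """The function x^3 / (eps^2 + x^2) from Example 2."""
    return x ** 3 / (eps ** 2 + x ** 2)

def central_diff_at_zero(eps, h=1e-6):
    """Central difference quotient for the derivative at x = 0."""
    return (f(h, eps) - f(-h, eps)) / (2 * h)

# For eps > 0 the derivative at 0 is genuinely 0, and the quotient is tiny.
print(central_diff_at_zero(0.01))
# At eps = 0 the function is just x, so the quotient is 1.
print(central_diff_at_zero(0.0))
```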

Example 3. Interchanging derivatives.

Let f(x,y) be the function f(x,y)=\frac {xy^{3}}{x^{2}+y^{2}} for (x,y) \neq (0,0), with f(0,0)=0. A common manoeuvre in analysis is to interchange two partial derivatives, thus one expects

\frac {\partial^{2}f(0,0)}{\partial x \partial y}=\frac {\partial^{2}f(0,0)}{\partial y \partial x} .

But, from the quotient rule, we have

\frac {\partial f(x,y)}{\partial y}=\frac {3xy^{2}}{x^{2}+y^{2}}-\frac {2xy^{4}}{(x^{2}+y^{2})^{2}}

and in particular,

\frac {\partial f(x,0)}{\partial y}=\frac {0}{x^{2}}-\frac{0}{x^{4}}=0.

Thus, \frac {\partial^{2}f(0,0)}{\partial x \partial y}=0.

On the other hand, from the quotient rule again, we have

\frac {\partial f(x,y)}{\partial x}=\frac {y^{3}}{x^{2}+y^{2}} - \frac {2x^{2}y^{3}}{(x^{2}+y^{2})^{2}} and hence,

\frac {\partial f(0,y)}{\partial x}=\frac {y^{3}}{y^{2}}-\frac {0}{y^{4}}=y.

Thus, \frac {\partial^{2}f(0,0)}{\partial y \partial x}=1.

Since 1 \neq 0, we thus seem to have shown that interchange of two derivatives is untrustworthy. But, are there any other circumstances in which the interchange of derivatives is legitimate?
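One way to see this numerically (a Python sketch; the step sizes are arbitrary) is via the symmetric mixed difference quotient. For this particular f it works out exactly to \frac{h^{2}}{k^{2}+h^{2}}, so its value depends entirely on which step shrinks first, mirroring the two orders of differentiation:

```python
def f(x, y):
    """f(x, y) = x*y^3 / (x^2 + y^2), with f(0, 0) = 0."""
    return x * y ** 3 / (x ** 2 + y ** 2) if (x, y) != (0.0, 0.0) else 0.0

def mixed_diff(k, h):
    """Symmetric mixed difference quotient at (0, 0):
    [f(k,h) - f(k,-h) - f(-k,h) + f(-k,-h)] / (4*k*h).
    For this f it simplifies to h^2 / (k^2 + h^2)."""
    return (f(k, h) - f(k, -h) - f(-k, h) + f(-k, -h)) / (4 * k * h)

# h << k mimics differentiating in y first, then x: result near 0.
print(mixed_diff(k=1e-3, h=1e-6))
# k << h mimics the opposite order: result near 1.
print(mixed_diff(k=1e-6, h=1e-3))
```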

Example 4. L'H\hat{o}pital's Rule

We are familiar with the beautifully simple L'H\hat{o}pital's rule

\lim_{ x \rightarrow x_{0}} \frac {f(x)}{g(x)}=\lim_{x \rightarrow x_{0}}\frac {f^{'}(x)}{g^{'}(x)}

but one can still get led to incorrect conclusions if one applies it incorrectly. For instance, applying it to f(x)=x, g(x)=1+x and x_{0}=0 we would obtain

\lim_{x \rightarrow 0}\frac {x}{1+x}=\lim_{x \rightarrow 0} \frac {1}{1}=1.

But this is an incorrect answer since \lim_{x \rightarrow 0}\frac {x}{1+x}=\frac {0}{1+0}=0.

Of course, all that is going on here is that L'H\hat{o}pital's rule is only applicable when both f(x), g(x) go to zero as x \rightarrow x_{0}, a condition which was violated in the previous example. But, even when f(x) and g(x) do go to zero as x \rightarrow x_{0}, there is still a possibility for an incorrect conclusion. For instance, consider the limit

\lim_{x \rightarrow 0} \frac {x^{2} \sin (x^{-4})}{x}.

Both numerator and denominator go to zero as x \rightarrow 0, so it seems pretty safe to apply the rule, to obtain

\lim_{x \rightarrow 0} \frac {x^{2}\sin (x^{-4})}{x}=\lim_{x \rightarrow 0} \frac {2x \sin (x^{-4})-4x^{-3}\cos (x^{-4})}{1} which equals

\lim_{x \rightarrow 0}2x \sin (x^{-4})-\lim_{x \rightarrow 0}4x^{-3}\cos (x^{-4}).

The first limit converges to zero by the Sandwich theorem (since the function 2x\sin (x^{-4}) is bounded above by 2|x| and below by -2|x|, both of which go to zero at 0). But the second limit is divergent (because x^{-3} goes to infinity as x \rightarrow 0, and \cos (x^{-4}) does not go to zero). So the limit \lim_{x \rightarrow 0} \frac {2x \sin(x^{-4})-4x^{-3}\cos (x^{-4})}{1} diverges. One might then conclude using L'H\hat{o}pital's Rule that \lim_{x \rightarrow 0}\frac {x^{2}\sin (x^{-4})}{x} also diverges; however, we can clearly rewrite this limit as \lim_{x \rightarrow 0}x\sin(x^{-4}), which goes to zero when x \rightarrow 0 by the Sandwich Theorem again. This does not show that L'H\hat{o}pital's Rule is untrustworthy. Indeed, it is quite rigorous, but it still requires some care when applied.
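A Python sketch of the contrast (the sample points are arbitrary): the original quotient x\sin(x^{-4}) stays pinned between -|x| and |x|, while the differentiated quotient contains the term 4x^{-3}\cos(x^{-4}) and oscillates with huge amplitude:

```python
import math

for x in (1e-1, 1e-2, 1e-3):
    # Squeezed between -x and x, so it tends to 0 with x.
    original = x * math.sin(x ** -4)
    # Contains 4*x^(-3)*cos(x^(-4)): unbounded oscillation near 0.
    derivative_quotient = (2 * x * math.sin(x ** -4)
                           - 4 * x ** -3 * math.cos(x ** -4))
    print(x, original, derivative_quotient)
```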

That is all, once again, if you like this, please send a thanks note to Prof. Terence Tao.

More later,

Nalin Pithwa