Real numbers, sequences and series: part VI

No study is complete without solving problems on your own. Below are the exercises related to part V.

Exercises.

Using the properties of real numbers, show that

1) Exactly one of \alpha<0, \alpha=0, \alpha>0 holds.

2) If \alpha \neq 0, then (-\alpha)^{2}=\alpha^{2}>0.

3) If 0<a \leq b, then \frac{1}{b} \leq \frac{1}{a}.

4) If 0 \leq a,b, then (1-a)(1-b) \geq 1-a-b.

5) If 0<a<1, then a^{n}<1 for any positive integer n.

6) For a,b>0, a^{n}<b^{n} implies a<b.

7) If 0 \leq a, then (1+a)^{n}\geq 1+na; if moreover a \leq 1, then also (1-a)^{n} \geq 1-na.

8) If 0<a<\frac{1}{n}, then (1+a)^{n}<\frac{1}{(1-a)^{n}}<\frac{1}{1-na}. (A quick numerical check of 7) and 8) appears after this list.)

9) Every nonempty set A \subseteq \Re that is bounded below admits a greatest lower bound.
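
Exercises 7) and 8) lend themselves to a quick numerical sanity check before you attempt a proof. Below is a minimal plain-Python sketch (my own illustration, not part of the exercise set); note that the strict inequalities in 8) require n \geq 2, since for n=1 the second inequality becomes an equality.

```python
import random

# Spot-check exercises 7) and 8) on random inputs.
# This is a sanity check of the statements, not a proof.
for _ in range(10_000):
    n = random.randint(2, 20)        # n = 1 turns 8)'s second "<" into "="
    a = random.uniform(0.0, 5.0)
    assert (1 + a)**n >= 1 + n*a                    # 7), first part: any a >= 0
    b = random.uniform(0.0, 1.0)
    assert (1 - b)**n >= 1 - n*b                    # 7), second part: 0 <= b <= 1
    c = random.uniform(1e-3, 1.0/n - 1e-6)          # 0 < c < 1/n, kept away from 0
    assert (1 + c)**n < 1/(1 - c)**n < 1/(1 - n*c)  # 8)
print("all spot checks passed")
```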

More later,

Nalin Pithwa

What is analysis and why do analysis — part 2 of 2

We discussed this in the Nov 17, 2015 blog post. We finish the article with more examples from the work of Prof. Terence Tao. (If you like it, please send a thanks to him :-))

Example 1. (Interchanging limits and integrals).

For any real number y, we have

\int_{-\infty}^{\infty}\frac {dx}{1+(x-y)^{2}}=\arctan (x-y)\mid_{x=-\infty}^{\infty} which equals

(\pi/2)-(-\pi/2)=\pi.

Taking limits as y \rightarrow \infty, we should obtain

\int_{-\infty}^{\infty}\lim_{y \rightarrow \infty}\frac {dx}{1+(x-y)^{2}}=\lim_{y \rightarrow \infty} \int_{-\infty}^{\infty}\frac {dx}{1+(x-y)^{2}}=\pi

But, for every x, we have \lim_{y \rightarrow \infty} \frac {1}{1+(x-y)^{2}}=0. So, we seem to have concluded that 0=\pi. What was the problem with the above argument? Should one abandon the (very useful) technique of interchanging limits and integrals?
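
One way to see both halves of the paradox concretely is to hand them to a computer algebra system. A minimal sketch, assuming sympy is available; both outputs are "correct", and the contradiction lives entirely in the interchange:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
f = 1 / (1 + (x - y)**2)

# For every fixed y the integral over x is pi (translation invariance):
print(sp.integrate(f, (x, -sp.oo, sp.oo)))   # pi

# Yet for every fixed x the integrand tends to 0 as y -> oo:
print(sp.limit(f, y, sp.oo))                 # 0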

Example 2. (Interchanging limits and derivatives).

Observe that if \varepsilon > 0, then

\frac {d}{dx}\frac {x^{3}}{\varepsilon^{2}+x^{2}}=\frac {3x^{2}(\varepsilon^{2}+x^{2})-2x^{4}}{(\varepsilon^{2}+x^{2})^{2}},

and in particular that

\frac {d}{dx}\frac {x^{3}}{\varepsilon^{2}+x^{2}}\mid_{x=0}=0.

Taking limits as \varepsilon \rightarrow 0, one might then expect that

\frac {d}{dx}\frac {x^{3}}{0+x^{2}}\mid_{x=0}=0.

But, the right hand side is \frac {dx}{dx}=1. Does this mean that it is always illegitimate to interchange limits and derivatives?
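
The same disagreement can be replayed symbolically. A minimal sketch, assuming sympy is available:

```python
import sympy as sp

x = sp.Symbol('x', real=True)
eps = sp.Symbol('epsilon', positive=True)

# For each fixed epsilon > 0, the derivative at x = 0 is 0:
d = sp.diff(x**3 / (eps**2 + x**2), x)
print(d.subs(x, 0))                # 0

# But at epsilon = 0 the function is just x, whose derivative is 1:
print(sp.diff(x**3 / x**2, x))     # 1
```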

Example 3. (Interchanging derivatives).

Let f(x,y) be the function f(x,y)=\frac {xy^{3}}{x^{2}+y^{2}}, with f(0,0) defined to be 0 so that the formula makes sense at the origin. A common manoeuvre in analysis is to interchange two partial derivatives, so one expects

\frac {\partial^{2}f(0,0)}{\partial x \partial y}=\frac {\partial^{2}f(0,0)}{\partial y \partial x} .

But, from the quotient rule, we have

\frac {\partial f(x,y)}{\partial y}=\frac {3xy^{2}}{x^{2}+y^{2}}-\frac {2xy^{4}}{(x^{2}+y^{2})^{2}}

and in particular,

\frac {\partial f(x,0)}{\partial y}=\frac {0}{x^{2}}-\frac{0}{x^{4}}=0.

Thus, \frac {\partial^{2}f(0,0)}{\partial x \partial y}=0.

On the other hand, from the quotient rule again, we have

\frac {\partial f(x,y)}{\partial x}=\frac {y^{3}}{x^{2}+y^{2}} - \frac {2x^{2}y^{3}}{(x^{2}+y^{2})^{2}} and hence,

\frac {\partial f(0,y)}{\partial x}=\frac {y^{3}}{y^{2}}-\frac {0}{y^{4}}=y.

Thus, \frac {\partial^{2}f(0,0)}{\partial y \partial x}=1.

Since 1 \neq 0, we thus seem to have shown that interchange of two derivatives is untrustworthy. But, are there any other circumstances in which the interchange of derivatives is legitimate?
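
Since the quotient-rule formulas break down at the origin, the honest computation there goes through the difference quotients, and that is exactly where the asymmetry shows up. A minimal sympy sketch:

```python
import sympy as sp

x, y, h = sp.symbols('x y h', real=True)
f = x * y**3 / (x**2 + y**2)        # with f(0,0) = 0 by definition

# f_y along the x-axis, from the difference quotient (f(x,0) = 0):
fy_on_x_axis = sp.limit(f.subs(y, h) / h, h, 0)
print(fy_on_x_axis)                 # 0, hence d/dx f_y (0,0) = 0

# f_x along the y-axis, from the difference quotient (f(0,y) = 0):
fx_on_y_axis = sp.limit(f.subs(x, h) / h, h, 0)
print(fx_on_y_axis)                 # y
print(sp.diff(fx_on_y_axis, y))     # 1, hence d/dy f_x (0,0) = 1
```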

Example 4. (L'Hôpital's Rule).

We are familiar with the beautifully simple L'Hôpital's rule

\lim_{ x \rightarrow x_{0}} \frac {f(x)}{g(x)}=\lim_{x \rightarrow x_{0}}\frac {f^{'}(x)}{g^{'}(x)}

but one can still get led to incorrect conclusions if one applies it incorrectly. For instance, applying it to f(x)=x, g(x)=1+x and x_{0}=0 we would obtain

\lim_{x \rightarrow 0}\frac {x}{1+x}=\lim_{x \rightarrow 0} \frac {1}{1}=1.

But this is an incorrect answer since \lim_{x \rightarrow 0}\frac {x}{1+x}=\frac {0}{1+0}=0.

Of course, all that is going on here is that L'Hôpital's rule is only applicable when both f(x), g(x) go to zero as x \rightarrow x_{0}, a condition which was violated in the previous example. But, even when f(x) and g(x) do go to zero as x \rightarrow x_{0}, there is still a possibility for an incorrect conclusion. For instance, consider the limit

\lim_{x \rightarrow 0} \frac {x^{2} \sin (x^{-4})}{x}.

Both numerator and denominator go to zero as x \rightarrow 0, so it seems pretty safe to apply the rule, to obtain

\lim_{x \rightarrow 0} \frac {x^{2}\sin (x^{-4})}{x}=\lim_{x \rightarrow 0} \frac {2x \sin (x^{-4})-4x^{-3}\cos (x^{-4})}{1} which equals

\lim_{x \rightarrow 0}2x \sin (x^{-4})-\lim_{x \rightarrow 0}4x^{-3}\cos (x^{-4}).

The first limit converges to zero by the Sandwich Theorem (since the function 2x\sin (x^{-4}) is bounded above by 2|x| and below by -2|x|, both of which go to zero at 0). But the second limit is divergent (because x^{-3} goes to infinity as x \rightarrow 0, and \cos (x^{-4}) does not go to zero). So the limit \lim_{x \rightarrow 0} \frac {2x \sin(x^{-4})-4x^{-3}\cos (x^{-4})}{1} diverges. One might then conclude using L'Hôpital's Rule that \lim_{x \rightarrow 0}\frac {x^{2}\sin (x^{-4})}{x} also diverges; however, we can clearly rewrite this limit as \lim_{x \rightarrow 0}x\sin(x^{-4}), which goes to zero as x \rightarrow 0 by the Sandwich Theorem again. This does not show that L'Hôpital's Rule is untrustworthy. Indeed, it is quite rigorous, but it still requires some care when applied.
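
A quick numerical experiment shows the same dichotomy: the original quotient x\sin(x^{-4}) shrinks with x, while the differentiated quotient grows like x^{-3} and oscillates in sign. A small plain-Python sketch with illustrative sample points:

```python
import math

for t in (1e-1, 1e-2, 1e-3):
    original = t * math.sin(t**-4)          # squeezed between -|t| and |t|
    differentiated = 2*t*math.sin(t**-4) - 4*t**-3*math.cos(t**-4)
    print(f"t={t:g}  original={original:+.3e}  differentiated={differentiated:+.3e}")
```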

That is all. Once again, if you like this, please send a thank-you note to Prof. Terence Tao.

More later,

Nalin Pithwa

What is analysis and why do analysis — part 1 of 2

One of my bright IITJEE math students just asked me this question: when can we interchange two integrals, or an infinite series and an integral, or two limits, or two infinite series?

Most of the time, students routinely do such things without questioning whether the operation is valid. It is much like matrix multiplication: in general, AB \neq BA.

My student’s question hits the bull’s eye: what is analysis and why do analysis? To answer these two questions, or to satisfy your curiosity, I am reproducing the answer from Prof. Terence Tao’s text Analysis Volume I. It is just for the purpose of sharing with budding minds…

What is analysis?

Real Analysis deals with the analysis of real numbers, sequences and series of real numbers, and real-valued functions. This is related to, but is distinct from, complex analysis, which concerns the analysis of the complex numbers and complex functions; harmonic analysis, which concerns the analysis of harmonics (waves) such as sine waves, and how they synthesize other functions via the Fourier transform; functional analysis, which focuses much more heavily on functions (and how they can form things like vector spaces); and so forth. Analysis is the rigorous study of such objects, with a focus on trying to pin down precisely and accurately the qualitative and quantitative behaviour of these objects. Real analysis is the theoretical foundation which underlies calculus, which is the collection of computational algorithms which one uses to manipulate functions.

In Real Analysis, we study many objects which will be familiar to you from freshman calculus: numbers, sequences, series, limits, functions, definite integrals, derivatives and so forth. You already have a great deal of experience of computing with these objects; however, in Real Analysis we focus more on the underlying theory for these objects. In Real Analysis, we are concerned with questions such as the following:

1) What is a real number? Is there a largest real number? After 0, what is the “next” real number (i.e., what is the smallest positive real number)? Can you cut a real number into pieces infinitely many times? Why does a number such as 2 have a square root, but a number such as -2 does not? If there are infinitely many reals and infinitely many rationals, how come there are “more” real numbers than rational numbers?

2) How do you take the limit of a sequence of real numbers? Which sequences have limits and which ones don’t? If you can stop a sequence from escaping to infinity, does this mean that it must eventually settle down and converge? Can you add infinitely many real numbers together and still get a finite real number? Can you add infinitely many rational numbers together and end up with a non-rational number? If you rearrange the elements of an infinite sum, is the sum still the same?

3) What is a function? What does it mean for a function to be continuous? Differentiable? Integrable? Bounded? Can you add infinitely many functions together? What about taking limits of sequences of functions? Can you differentiate an infinite series of functions? What about integrating? If a function f(x) takes the value 3 when x=0 and 5 when x=1 (that is, f(0)=3 and f(1)=5), does it have to take every intermediate value between 3 and 5 when x goes between 0 and 1? Why?

You may already know answers to some of these questions from your calculus classes, but most likely these sorts of issues were only of secondary importance to those courses; the emphasis was on getting you to perform computations, such as computing the integral of x \sin (x^{2}) from x=0 to x=1. But now that you are comfortable with these objects, real analysis goes back to the theory and investigates what is really going on.

Why do analysis?

It is a fair question to ask, “why bother?”, when it comes to analysis. There is a certain philosophical satisfaction in knowing why things work, but a pragmatic person may argue that one only needs to know how things work to solve real-life problems. The calculus training you receive in introductory classes is certainly adequate for you to begin solving many problems in physics, chemistry, biology, economics, computer science, finance, engineering, or whatever else you end up doing — and you can certainly use things like the chain rule, L'Hôpital's Rule, or integration by parts without knowing why these rules work, or whether there are any exceptions to these rules. However, one can get into trouble if one applies rules without knowing where they came from and what the limits of their applicability are. Below are some examples in which several of these familiar rules, if applied blindly without knowledge of the underlying analysis, can lead to disaster.

Example 1. (Division by zero). This is a very familiar one to you: the cancellation law ac=bc \Longrightarrow a=b does not work when c=0. For instance, the identity 1 \times 0 = 2 \times 0 is true, but if one blindly cancels the 0 then one obtains

1=2, which is false. In this case, it was obvious that one was dividing by zero; but in other cases it can be more hidden. (For example, refer to my blog article

https://mathhothouse.wordpress.com/2014/07/22/math-basics-division-by-zero-3/)
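
Computer algebra systems make the same hidden assumption. In the sketch below (assuming sympy is available), solve cancels the common factor c without being told that c \neq 0, even though at c=0 the equation holds for every a:

```python
import sympy as sp

a, b, c = sp.symbols('a b c')

# solve() happily "cancels" c and reports a = b ...
print(sp.solve(sp.Eq(a*c, b*c), a))    # [b]

# ... even though at c = 0 the equation is true for every a:
print(sp.Eq(a*c, b*c).subs(c, 0))      # True
```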

Example 2. (Divergent Series). You have probably seen geometric series such as the infinite sum

S=1+(1/2)+(1/4)+(1/8)+(1/16)+\ldots

You have probably seen the following trick to sum the series: if we call the above sum S and multiply both sides by 2, we obtain

2S=2+1+(1/2)+(1/4)+(1/8)+\ldots = 2+S

and hence, S=2, so the series sums to 2. However, if you apply the same trick to the series

S=1+2+4+8+16+\ldots, one gets the nonsensical result

2S=2+4+8+16+\ldots = S-1 \Longrightarrow S=-1.

So the same reasoning that shows that 1+(1/2)+(1/4)+(1/8)+ \ldots =2 also gives that

1+2+4+8+ \ldots =-1. Why is it that we trust the first equation but not the second? A similar example arises with the series

S=1-1+1-1+1-1+ \ldots

we can write

S=1-(1-1+1-1+\ldots)=1-S

and hence that S=1/2, or instead we can write

S=(1-1)+(1-1)+(1-1)+\ldots=0+0+0+\ldots and hence that S=0; or instead we can write

S=1+(-1+1)+(-1+1)+\ldots=1+0+0+\ldots and hence that S=1. Which one is correct?
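
The honest diagnostic, and the one the rigorous theory is built on, is to look at partial sums. A small plain-Python sketch (the helper is my own, for illustration) contrasting the three series:

```python
def partial_sums(term, count):
    """First `count` partial sums of the series sum_{k >= 0} term(k)."""
    total, sums = 0.0, []
    for k in range(count):
        total += term(k)
        sums.append(total)
    return sums

print(partial_sums(lambda k: 0.5**k, 8))   # creeps up to 2: the trick was safe
print(partial_sums(lambda k: 2.0**k, 8))   # blows up: "S = -1" was nonsense
print(partial_sums(lambda k: (-1)**k, 8))  # oscillates 1,0,1,0,...: no single sum
```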

Example 3. (Divergent Sequence) Here is a slight variation of the previous example. Let x be a real number, and let L be the limit L=\lim_{n \rightarrow \infty}x^{n}

Changing the variable n=m+1, we have

\lim_{m+1 \rightarrow \infty}x^{m+1}=\lim_{m+1 \rightarrow \infty}x \times x^{m}

which equals x\lim_{m+1 \rightarrow \infty} x^{m}.

But, if m+1 \rightarrow \infty, then m \rightarrow \infty, thus

\lim_{m+1 \rightarrow \infty}x^{m}=\lim_{m \rightarrow \infty}x^{m}=\lim_{n \rightarrow \infty}x^{n}

and thus xL=L.

At this point, we could cancel the L’s and conclude that x=1 for an arbitrary real number x, which is absurd. But, since we are already aware of the division by zero problem, we could be a little smarter and conclude instead that either x=1, or L=0. In particular, we seem to have shown that

\lim_{n \rightarrow \infty}x^{n}=0 for all x \neq 1.

But, this conclusion is absurd if we apply it to certain values of x, for instance by specializing to the case x=2 we could conclude that the sequence 1,2,4,8,… converges to zero, and by specializing to the case x=-1, we conclude that the sequence 1,-1,1,-1,1 \ldots also converges to zero. These conclusions appear to be absurd; what is the problem with the above argument?
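
Numerically, the trouble is visible at once: the limit L exists only when |x|<1 (where it is 0) or x=1 (where it is 1). A tiny plain-Python sketch with illustrative values:

```python
# The fate of x**n for large n depends entirely on where x sits:
for x in (0.5, 1.0, 2.0, -1.0):
    print(x, [x**n for n in (1, 10, 100)])
# 0.5 -> tends to 0; 1.0 -> stays 1; 2.0 -> blows up; -1.0 -> oscillates +-1
```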

Example 4. (Limiting values of functions). Start with the expression

\lim_{x \rightarrow \infty} \sin (x), make the change of variable x=y+\pi and recall that

\sin (y+\pi) = -\sin (y) to obtain

\lim_{x \rightarrow \infty} \sin (x)=\lim_{y \rightarrow \infty} \sin (y+\pi)=\lim_{y \rightarrow \infty} (-\sin (y))

which equals -\lim_{y \rightarrow \infty} \sin (y)

Since \lim_{x \rightarrow \infty}\sin (x)=\lim_{y \rightarrow \infty}\sin (y), we thus have

\lim_{x \rightarrow \infty}\sin (x)=-\lim_{x \rightarrow \infty} \sin (x) and hence

\lim_{x \rightarrow \infty}\sin (x)=0.

If we then make the change of variables x=\pi/2 - z and recall that \sin (\pi/2-z)=\cos (z), we conclude that \lim_{x \rightarrow \infty}\cos (x)=0.

Squaring both of these limits and adding, we see that

\lim_{x \rightarrow \infty}(\sin^{2}(x)+\cos^{2}(x))=0^{2}+0^{2}=0.

On the other hand, we have \sin^{2}(x) + \cos^{2}(x)=1 for all x. Thus, we have shown that 1=0! What is the difficulty here?
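
It is reassuring that a careful computer algebra system refuses to play along. In recent versions of sympy (an assumption of this sketch), the limit of \sin(x) at infinity comes back not as a number but as the set of accumulation points:

```python
import sympy as sp

x = sp.Symbol('x', real=True)

# No limit at infinity; the values cluster throughout [-1, 1]:
print(sp.limit(sp.sin(x), x, sp.oo))             # AccumBounds(-1, 1)

# whereas sin^2 + cos^2 is identically 1 for every x:
print(sp.simplify(sp.sin(x)**2 + sp.cos(x)**2))  # 1
```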

Example 5. (Interchanging integrals) The interchanging of integrals is a trick which occurs in math just as commonly as the interchanging of sums. Suppose one wants to compute the volume under a surface z=f(x,y)

(let us ignore the limits of integration for the moment). One can do it by slicing parallel to the x-axis: for each fixed value of y, we can compute an area \int f(x,y)dx, and then we integrate the area in the y variable to obtain the volume

V=\int \int f(x,y)dx dy.

Or we could slice parallel to the y-axis for each fixed x and compute an area \int f(x,y)dy, and then integrate in the x variable to obtain V=\int \int f(x,y)dydx.

This seems to suggest that one should always be able to swap integral signs:

\int \int f(x,y)dx dy=\int \int f(x,y)dy dx

And, indeed, people swap integral signs all the time, because sometimes one variable is easier to integrate in first than the other. However, just as infinite sums sometimes cannot be swapped, integrals are also sometimes dangerous to swap. An example is with the integrand e^{-xy}-xye^{-xy}. Suppose we believe that we can swap the integrals:

\int_{0}^{\infty} \int_{0}^{1}(e^{-xy} - xy e^{-xy})dydx which equals

\int_{0}^{1} \int_{0}^{\infty}(e^{-xy}-xye^{-xy})dxdy

Since \int_{0}^{1}(e^{-xy}-xye^{-xy})dy=ye^{-xy}|_{y=0}^{y=1}=e^{-x},

the left hand side is \int_{0}^{\infty}e^{-x}dx=-e^{-x}|_{0}^{\infty}=1. But, since

\int_{0}^{\infty}(e^{-xy}-xye^{-xy})dx=xe^{-xy}|_{x=0}^{x=\infty}=0,

the right hand side is \int_{0}^{1}0 dy=0. Clearly 1 \neq 0, so there is an error somewhere; but you won’t find one anywhere except in the step where we interchanged the integrals. So, how do we know when to trust the interchange of integrals?
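
Both iterated integrals are elementary, so this is a pleasant one to check with a computer algebra system. A minimal sketch, assuming sympy is available (declaring the symbols positive keeps the antiderivatives free of case splits):

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)
g = sp.exp(-x*y) - x*y*sp.exp(-x*y)

# Integrate in y first, then in x: the answer is 1.
inner_y = sp.integrate(g, (y, 0, 1))          # simplifies to exp(-x)
print(sp.integrate(inner_y, (x, 0, sp.oo)))   # 1

# Integrate in x first, then in y: the answer is 0.
inner_x = sp.integrate(g, (x, 0, sp.oo))      # 0
print(sp.integrate(inner_x, (y, 0, 1)))       # 0
```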

Example 6. (Interchanging the limits). Suppose we start with the plausible looking statement

\lim_{x \rightarrow 0} \lim_{y \rightarrow 0} \frac {x^{2}}{x^{2}+y^{2}} which equals

\lim_{y \rightarrow 0} \lim_{x \rightarrow 0}\frac {x^{2}}{x^{2}+y^{2}}.

But, we have \lim_{y \rightarrow 0}\frac {x^{2}}{x^{2}+y^{2}}=\frac {x^{2}}{x^{2}+0^{2}}=1,

so the LHS is 1; on the other hand, we have

\lim_{x \rightarrow 0}\frac {x^{2}}{x^{2}+y^{2}}=\frac {0^{2}}{0^{2}+y^{2}}=0.

so the RHS is 0. Since 1 is clearly not equal to 0, this suggests that the interchange of limits is untrustworthy. But are there any other circumstances in which the interchange of limits is legitimate?
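
sympy evaluates the two iterated limits exactly as the hand computation does, one inner limit at a time. A minimal sketch:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
g = x**2 / (x**2 + y**2)

print(sp.limit(sp.limit(g, y, 0), x, 0))   # 1: the inner y-limit is 1 for x != 0
print(sp.limit(sp.limit(g, x, 0), y, 0))   # 0: the inner x-limit is 0 for y != 0
```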

Example 7. (Interchanging limits again). Consider the plausible looking statement

\lim_{x \rightarrow 1^{-}} \lim_{n \rightarrow \infty}x^{n}

which equals

\lim_{n \rightarrow \infty} \lim_{x \rightarrow 1^{-}} x^{n}

where the notation x \rightarrow 1^{-} means that x is approaching 1 from the left. When x is to the left of 1, then \lim_{n \rightarrow \infty}x^{n}=0, and hence the left hand side is zero. But, we also have

\lim_{x \rightarrow 1^{-}}x^{n}=1 for all n, and so the right hand side limit is 1. Does this demonstrate that this type of limit interchange is always untrustworthy?
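
Numerically, the order of the two limits is everything. A tiny plain-Python sketch with illustrative values; note how x^{n} still collapses to 0 for every fixed x<1, just ever more slowly as x creeps toward 1:

```python
# n -> oo first: for each fixed x < 1, x**n collapses to 0 (more slowly as x -> 1-):
for x in (0.9, 0.99, 0.999):
    print(x, x**100, x**10_000)

# x -> 1- first: for each fixed n, x**n climbs to 1:
print([round(x**5, 4) for x in (0.9, 0.99, 0.999)])   # 0.5905, 0.951, 0.995
```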

More examples, later…

Nalin Pithwa