Some random assorted (part A) problems in algebra for RMO and INMO training

You might want to take a serious shot at each of these. In the first stage of attack, apportion 15 minutes to each problem. Do whatever you can, but write down your steps in minute detail. In the last 5 minutes, check why your method or approach does not work. You can even ask, or observe, for example, that if surds appear in an equation, the equation becomes inherently tough; so, as a child, we are tempted to think: how do we get rid of the surds? And so on. Thinking in math requires patience and introversion.

So, here are the exercises for your math gym today:

1) Prove that if x, y, z are non-zero real numbers with x+y+z=0, then

\frac{x^{2}+y^{2}}{x+y} + \frac{y^{2}+z^{2}}{y+z} + \frac{z^{2}+x^{2}}{x+z} = \frac{x^{3}}{yz} + \frac{y^{3}}{zx} + \frac{z^{3}}{xy}

2) Let a, b, c, d be complex numbers with a+b+c+d=0. Prove that

a^{3}+b^{3}+c^{3}+d^{3}=3(abc+abd+acd+bcd)

3) Let a, b, c, d be integers. Prove that a+b+c+d divides

2(a^{4}+b^{4}+c^{4}+d^{4})-(a^{2}+b^{2}+c^{2}+d^{2})^{2}+8abcd

4) Solve in complex numbers the equation:

(x+1)(x+2)(x+3)^{2}(x+4)(x+5)=360

5) Solve in real numbers the equation:

\sqrt{x} + \sqrt{y} + 2\sqrt{z-2} + \sqrt{u} + \sqrt{v} = x+y+z+u+v

6) Find the real solutions to the equation:

(x+y)^{2}=(x+1)(y-1)

7) Solve the equation:

\sqrt{x + \sqrt{4x + \sqrt{16x + \sqrt{\ldots + \sqrt{4^{n}x+3}}}}} - \sqrt{x}=1

8) Prove that if x, y, z are real numbers such that x^{3}+y^{3}+z^{3} \neq 0, then the ratio \frac{2xyz - (x+y+z)}{x^{3}+y^{3}+z^{3}} equals 2/3 if and only if x+y+z=0.

9) Solve in real numbers the equation:

\sqrt{x_{1}-1} + 2\sqrt{x_{2}-4}+ \ldots + n\sqrt{x_{n}-n^{2}}=\frac{1}{2}(x_{1}+x_{2}+ \ldots + x_{n})

10) Find the real solutions to the system of equations:

\frac{1}{x} + \frac{1}{y} = 9

(\frac{1}{\sqrt[3]{x}} + \frac{1}{\sqrt[3]{y}})(1+\frac{1}{\sqrt[3]{x}})(1+\frac{1}{\sqrt[3]{y}})=18

More later,
Nalin Pithwa

PS: if you want hints, do let me know…but you need to let me know your approach/idea first…else it is spoon-feeding…

Some Number Theory Questions for RMO and INMO

1) Let n \geq 2 and k be any positive integers. Prove that (n-1)^{2}\mid (n^{k}-1) if and only if (n-1) \mid k.

2) Prove that there are no positive integers a, b, n >1 such that (a^{n}-b^{n}) \mid (a^{n}+b^{n}).

3) If a and b are positive integers with b>2, prove that 2^{a}+1 is not divisible by 2^{b}-1.

4) The integers 1,3,6,10, \ldots, n(n+1)/2, …are called the triangular numbers because they are the numbers of dots needed to make successive triangular arrays of dots. For example, the number 10 can be perceived as the number of acrobats in a human triangle, 4 in a row at the bottom, 3 at the next level, then 2, then 1 at the top. The square numbers are 1, 4, 9, \ldots, n^{2}, \ldots The pentagonal numbers 1, 5, 12, 22, \ldots, (3n^{2}-n)/2, \ldots, can be seen in a geometric array in the following way: Start with n equally spaced dots P_{1}, P_{2}, \ldots, P_{n} on a straight line in a plane, with distance 1 between consecutive dots. Using P_{1}P_{2} as a base side, draw a regular pentagon in the plane. Similarly, draw n-2 additional regular pentagons on base sides P_{1}P_{3}, P_{1}P_{4}, \ldots, P_{1}P_{n}, all pentagons lying on the same side of the line P_{1}P_{n}. Mark dots at each vertex and at unit intervals along the sides of these pentagons. Prove that the total number of dots in the array is (3n^{2}-n)/2. In general, if regular k-gons are constructed on the sides P_{1}P_{2}, P_{1}P_{3}, …, P_{1}P_{n}, with dots marked again at unit intervals, prove that the total number of dots is 1+kn(n-1)/2 -(n-1)^{2}. This is the nth k-gonal number.

5) Prove that if m>n, then a^{2^{n}}+1 is a divisor of a^{2^{m}}-1. Show that if a, m, n are positive with m \neq n, then

(a^{2^{m}}+1, a^{2^{n}}+1) = 1 if a is even, and 2 if a is odd.

6) Show that if (a,b)=1 then (a+b, a^{2}-ab+b^{2})=1 or 3.

7) Show that if (a,b)=1 and p is an odd prime, then ( a+b, \frac{a^{p}+b^{p}}{a+b})=p or 1.

8) Suppose that 2^{n}+1=xy, where x and y are integers greater than 1 and n>0. Show that 2^{a}\mid (x-1) if and only if 2^{a}\mid (y-1).

9) Prove that (n!+1, (n+1)!+1)=1.

10) Let a and b be positive integers such that (1+ab) \mid (a^{2}+b^{2}). Show that the integer (a^{2}+b^{2})/(1+ab) must be a perfect square.

Note that in the above questions, in general, (a,b) means the gcd of a and b.

More later,
Nalin Pithwa.

A Primer: Generating Functions: Part II: for RMO/INMO 2019

We shall now complicate the situation a little bit. Let us ask for the combinations of the symbols \alpha_{1}, \alpha_{2}, \ldots, \alpha_{n} with repetitions of each symbol allowed once more in the combinations. For example, let there be only two symbols \alpha_{1}, \alpha_{2}. Let us look for combinations of the form:

\alpha_{1}, \alpha_{2}, \alpha_{1}\alpha_{2}, \alpha_{1}\alpha_{1}, \alpha_{2}\alpha_{2}, \alpha_{1}\alpha_{1}\alpha_{2}, \alpha_{1}\alpha_{2}\alpha_{2}, \alpha_{1}\alpha_{1}\alpha_{2}\alpha_{2}

where, in each combination, each symbol may occur once, twice, or not at all. The OGF for this can be constructed by reasoning as follows: the choices for \alpha_{1} are not-\alpha_{1}, \alpha_{1} once, or \alpha_{1} twice. This is represented by the factor (1+\alpha_{1}t+\alpha_{1}^{2}t^{2}). Similarly, the possible choices for \alpha_{2} correspond to the factor (1+\alpha_{2}t+\alpha_{2}^{2}t^{2}). So, the required OGF is (1+\alpha_{1}t+\alpha_{1}^{2}t^{2})(1+\alpha_{2}t+\alpha_{2}^{2}t^{2}).

On expansion, this gives : 1+(\alpha_{1}+\alpha_{2})t+(\alpha_{1}\alpha_{2}+\alpha_{1}^{2}+\alpha_{2}^{2})t^{2}+(\alpha_{1}^{2}\alpha_{2}+\alpha_{1}\alpha_{2}^{2})t^{3}+(\alpha_{1}^{2}\alpha_{2}^{2})t^{4}

Note that if we omit the term 1 (which corresponds to not choosing any \alpha), the other 8 terms correspond to the 8 different combinations listed above. Also, observe that the exponent r of t^{r} tells us that the coefficient of t^{r} carries the list, or inventory, of the r-combinations (under the required specification, in this case, with the restriction on repetitions of symbols).
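
For readers who like to double-check such expansions mechanically, here is a minimal sketch (it assumes the Python library sympy is available; the variable names are mine, not part of the text):

```python
# Minimal check of the expansion above; assumes sympy is installed.
from sympy import symbols, expand

a1, a2, t = symbols('alpha1 alpha2 t')

# OGF for combinations of two symbols, each allowed to occur at most twice
ogf = expand((1 + a1*t + a1**2*t**2) * (1 + a2*t + a2**2*t**2))

# The coefficient of t^r is the inventory of the r-combinations
for r in range(5):
    print(r, ogf.coeff(t, r))
# Expected (up to ordering of terms): 1; alpha1 + alpha2;
# alpha1*alpha2 + alpha1**2 + alpha2**2; alpha1**2*alpha2 + alpha1*alpha2**2;
# alpha1**2*alpha2**2.
```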

\bf{Illustration}

In the light of the foregoing discussion, let us now take up the following question again: in how many ways can a total of 16 be obtained by rolling 4 dice once? The contribution of each die to the total is either a “1” or a “2” or a “3” or a “4” or a “5” or a “6”. The contributions from each of the 4 dice have to be added to get the total, in this case 16. So, if we write t^{1}+t^{2}+t^{3}+t^{4}+t^{5}+t^{6}

as the factor corresponding to the first die, the factors corresponding to the other three dice are exactly the same. The product of these factors would be:

(*) (t+t^{2}+t^{3}+t^{4}+t^{5}+t^{6})^{4}

Each term in the expansion of this would be a power of t, and the exponent k of such a term t^{k} is nothing but the total of the four contributions which went into it. The number of times a term t^{k} can be obtained is exactly the number of times k can be obtained as a total on a throw of the four dice. So, if \alpha_{k} is the coefficient of t^{k} in the expansion, \alpha_{16} is the answer to the above question. Further, since (*) simplifies to (\frac{t(1-t^{6})}{1-t})^{4}, it follows that the answer to the above question tallies with the coefficient specified in the following question: calculate the coefficient of t^{12} in (\frac{(1-t^{6})}{(1-t)})^{4}.

Now, consider the following problem: express the number N(n,p) of ways of obtaining a total of n by rolling p dice as a certain coefficient in a suitable product of binomial expansions in powers of t. [This, in turn, is related to the observation that the number of ways a total of 16 can be obtained by rolling 4 dice once is the same as the coefficient of t^{12} in (\frac{1-t^{6}}{1-t})^{4}.]

So, we get that N(n,p)= coefficient of t^{n-p} in (\frac{1-t^{6}}{1-t})^{p}
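
A quick numerical cross-check of this discussion (the code and its function names are mine, not from the text): count the throws by brute force, and also read the same number off the coefficient in the generating function.

```python
# Minimal sketch: the number of ways of totalling n with p dice, computed two ways.
from itertools import product
from math import comb

def dice_total_bruteforce(n, p):
    """Count p-tuples of faces 1..6 summing to n by direct enumeration."""
    return sum(1 for faces in product(range(1, 7), repeat=p) if sum(faces) == n)

def poly_mul(f, g):
    """Multiply two polynomials given as coefficient lists (index = exponent)."""
    h = [0] * (len(f) + len(g) - 1)
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            h[i + j] += fi * gj
    return h

def dice_total_gf(n, p):
    """N(n, p) = coefficient of t^n in (t + t^2 + ... + t^6)^p."""
    die = [0, 1, 1, 1, 1, 1, 1]      # coefficients of t^0, t^1, ..., t^6 for one die
    gf = [1]
    for _ in range(p):
        gf = poly_mul(gf, die)
    return gf[n] if n < len(gf) else 0

print(dice_total_bruteforce(16, 4))   # 125
print(dice_total_gf(16, 4))           # 125 = N(16, 4)
# The same number is the coefficient of t^{16-4} = t^{12} in ((1 - t^6)/(1 - t))^4;
# expanding by the binomial theorem gives
print(comb(15, 3) - 4 * comb(9, 3) + 6 * comb(3, 3))   # 125 again
```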

Let us take an example from a graphical enumeration:

A \it {graph} G=G(V,E) is a set V of vertices a, b, c, …, together with a set E \subseteq V \times V of \it {edges} (a,b), (a,a), (b,a), (c,b), \ldots If (x,y) is considered the same as (y,x), we say the graph is \it{undirected}. Otherwise, the graph is said to be \it{directed}, and we say ‘(a,b) has a direction from a to b’. The edge (x,x) is called a loop. The graph is said to be of order |V|.

If the edge-set E is allowed to be a multiset, that is, if an edge (a,b) is allowed to occur more than once, (and, this may be called a ‘multiple edge’), we refer to the graph as a general graph.

If \phi_{5}(n) and \psi_{5}(n) denote the numbers of undirected (respectively, directed) loopless graphs of order 5, with n edges, none of them a multiple edge, find the series \sum \phi_{5}(n)t^{n} and \sum \psi_{5}(n)t^{n}.

Applying our recently developed techniques to the above question, a graph on 5 specified vertices is uniquely determined once you specify which pairs of vertices are ‘joined’. Suppose we are required to consider only graphs with 4 edges. This would need four pairs of vertices to be selected out of the total of 5 \choose 2, that is, 10 pairs that are available. So the selection of pairs of vertices can be made in 10 \choose 4 ways. Each such selection corresponds to one unique graph, with the selected pairs being considered as edges. More informally, having selected certain pairs of vertices, imagine that the vertices are represented by dots in a diagram and join the vertices of each selected pair by a running line. Then, the “graph” becomes a “visible” object. Note that the number of graphs is just the number of selections of pairs of vertices. Hence, \phi_{5}(4)={10 \choose 4}.

Or, one could approach this problem in a different way. Imagine that you have a complete graph on 5 vertices — the “completeness” here means that every possible pair of vertices has been joined by an edge. From the complete graph which has 10 edges, one has to choose 4 edges — any four, for that matter — in order to get a graph as required by the problem.

On the same lines, for a directed graph one has a universe of 10 \times 2, that is, 20 edges to choose from, for each pair x, y gives rise to two possible edges (x,y) and (y,x). Hence,

\psi_{5}(4)=20 \choose 4.

Thus, the counting series for labelled graphs on 5 vertices is 1 + \sum_{p=1}^{10} {10 \choose p}t^{p}
and the counting series for directed labelled graphs on 5 vertices is
1+ \sum_{p=1}^{20}{20 \choose p}t^{p}.
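
If you would like to verify these counts numerically, here is a small sketch (the helper names are my own; only the standard library is used):

```python
# Minimal check of phi_5(4) and psi_5(4).
from itertools import combinations, permutations
from math import comb

V = range(5)
undirected_edges = list(combinations(V, 2))   # the 10 possible undirected edges
directed_edges = list(permutations(V, 2))     # the 20 possible directed edges (no loops)

phi_5_4 = sum(1 for _ in combinations(undirected_edges, 4))   # choose any 4 edges
psi_5_4 = sum(1 for _ in combinations(directed_edges, 4))

print(phi_5_4, comb(10, 4))   # 210 210
print(psi_5_4, comb(20, 4))   # 4845 4845
# The counting series above are, of course, just (1 + t)^10 and (1 + t)^20.
```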

Finally, the OGF for increasing words on an alphabet {a,b,c,d,e} with a<b<c<d<e is

(1+at+a^{2}t^{2}+\ldots)(1+bt+b^{2}t^{2}+\ldots)(1+ct+c^{2}t^{2}+\ldots)\times (1+dt+d^{2}t^{2}+\ldots)(1+et+e^{2}t^{2}+\ldots)

The corresponding OE is (1+t+t^{2}+t^{3}+\ldots)^{5} which is nothing but (1-t)^{-5} (this explains the following problem: Verify that the number of increasing words of length 10 out of the alphabet \{a,b,c,d,e \} with a<b<c<d<e is the coefficient of t^{10} in (1-t)^{-5} ).
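
Again, a tiny sketch (my own code) confirms the claim, since an increasing word over an ordered alphabet is the same thing as a multiset of letters:

```python
# Minimal check: increasing words of length 10 over {a, b, c, d, e}.
from itertools import combinations_with_replacement
from math import comb

words = list(combinations_with_replacement("abcde", 10))   # sorted tuples = increasing words
print(len(words))      # 1001
# Coefficient of t^10 in (1 - t)^{-5} is C(10 + 5 - 1, 5 - 1) = C(14, 4)
print(comb(14, 4))     # 1001
```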

We will continue this detailed discussion/exploration in the next article.

Until then aufwiedersehen,
Nalin Pithwa

Prof. Tim Gowers on recognising countable sets

https://gowers.wordpress.com/2008/07/30/recognising-countable-sets/

Thanks, Dr. Gowers. These are invaluable insights into the basics. Thanks for giving so much of your time.

Prof. Tim Gowers on functions, domains, etc.

https://gowers.wordpress.com/2011/10/13/domains-codomains-ranges-images-preimages-inverse-images/

Thanks a lot Prof. Gowers! Math should be sans ambiguities as far as possible…!

I hope my students and readers can appreciate the details in this blog article of Prof. Gowers.

Regards,
Nalin Pithwa

A Primer: Generating Functions: Part I : RMO/INMO 2019

GENERATING FUNCTIONS and RECURRENCE RELATIONS:

The concept of a generating function is one of the most useful and basic concepts in the theory of combinatorial enumeration. If we want to count a collection of objects that depend in some way on n objects, and the desired value is, say, \phi (n), then a series in powers of t such as \sum \phi (n) t^{n} is called a generating function for \phi (n). Generating functions arise in two different ways. One is from the investigation of recurrence relations; the other is more direct: generating functions arise as counting devices, different terms being specifically included to account for specific situations which we wish to count or ignore. This is a very fundamental, though difficult, technique in combinatorics. It requires considerable ingenuity for its success. We will have a look at the bare basics of such stuff.

We start here with the common knowledge:

(1+\alpha_{1}t)(1+\alpha_{2}t)\ldots (1+\alpha_{n}t)=1+a_{1}t+a_{2}t^{2}+ \ldots + a_{n}t^{n}….(2i) where a_{r}=sum of the products of the \alpha‘s taken r at a time. …(2ii)

Incidentally, the a‘s thus defined in (2ii) are called the elementary symmetric functions associated with the \alpha‘s. We will revisit these functions later.

Let us consider the algebraic identity (2i) from a combinatorial viewpoint. The explicit expansion in powers of t of the RHS of (2i) is symbolically a listing of the various combinations of the \alpha‘s in the following sense:

a_{1}=\sum \alpha_{1} represents all the 1-combinations of the \alpha‘s
a_{2}=\sum \alpha_{1}\alpha_{2} represents all the 2-combinations of the \alpha‘s
and so on.

In other words, if we want the r-combinations of the \alpha‘s, we have to look only at the coefficient of t^{r}. Since the LHS of (2i) is an expression which is easily constructed, and its expansion generates the combinations in the said manner, we say that the LHS of (2i) is a Generating Function (GF) for the combinations of the \alpha‘s. It may happen that we are interested only in the number of combinations and not in a listing or inventory of them. Then, we need only the number of terms in each coefficient above, and this number is easily obtained if we set each \alpha equal to 1. Thus, the GF for the number of combinations is (1+t)(1+t)(1+t)\ldots (1+t), n times;

and this is nothing but (1+t)^{n}. We already know that the expansion of this gives n \choose r as the coefficient of t^{r} and this tallies with the fact that the number of r-combinations of the \alpha‘s is n \choose r. Abstracting these ideas, we make the following definition:

Definition I:
The Ordinary Generating Function (OGF) for a sequence of symbolic expressions \phi(n) is the series

f(t)=\sum_{n}\phi (n)t^{n} …(2iii)

If \phi (n) is a number which counts a certain type of combinations or permutations, the series f(t) is called the Ordinary Enumeration (OE) or counting series for \phi (n) for n=1,2,\ldots

Example 2:
The OGF for the combinations of five symbols a, b, c, d, e is (1+at)(1+bt)(1+ct)(1+dt)(1+et)

The OE for the same is (1+t)^{5}. The coefficient of t^{4} in the first expression is

(*) abcd+abce+ abde+acde+bcde.

The coefficient of t^{4} in the second expression is 5 \choose 4, that is, 5 and this is the number of terms in (*).
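
A quick computer-algebra check of this example (assuming sympy is installed; the symbol names are mine):

```python
# Minimal check of Example 2; assumes sympy is installed.
from sympy import symbols, expand, binomial

a, b, c, d, e, t = symbols('a b c d e t')

ogf = expand((1 + a*t) * (1 + b*t) * (1 + c*t) * (1 + d*t) * (1 + e*t))
print(ogf.coeff(t, 4))                  # a*b*c*d + a*b*c*e + a*b*d*e + a*c*d*e + b*c*d*e

oe = expand((1 + t)**5)
print(oe.coeff(t, 4), binomial(5, 4))   # 5 5
```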

Example 3:

The OGF for the elementary symmetric functions a_{1}, a_{2}, \ldots in the symbols \alpha_{1},\alpha_{2}, \alpha_{3}, \ldots is (1+\alpha_{1}t)(1+\alpha_{2}t)(1+\alpha_{3}t)\ldots ….(2iv)

This is exactly the algebraic result with which we started this section.

Remark:

The fact that the series on the RHS of (2iii) is an infinite series should not bother us with questions of convergence and the like. For, throughout combinatorics, we shall be working only in the framework of “formal power series”, which we now elaborate.

*THE ALGEBRA OF FORMAL POWER SERIES*

The vector space of infinite sequences of real numbers is well-known. If (\alpha_{k}) and (\beta_{k}) are two sequences, their sum is the sequence (\alpha_{k}+\beta_{k}), and a scalar multiple of the sequence (\alpha_{k}) is (c\alpha_{k}). We now identify the sequence (\alpha_{k}), k=0,1,2, \ldots, with the “formal” series

f = \sum_{k=0}^{\infty}\alpha_{k}t^{k}….(2v)

where t^{k} only means the following:

t^{0}=1, t^{k}t^{l}=t^{k+l}.

In the same way, (\beta_{k}), where k=0,1,2,\ldots corresponds to the formal series:

g=\sum_{k=0}^{\infty}\beta_{k}t^{k} and

we define: f+g = \sum (\alpha_{k}+\beta_{k})t^{k}, and cf= \sum (c\alpha_{k})t^{k}.

The set of all power series f now becomes a vector space isomorphic to the space of infinite sequences of real numbers. The zero element of this space is the series with every coefficient zero.

Now, let us define a product of two formal power series. Given f and g as above, we write fg=\sum_{k=0}^{\infty}\gamma_{k} t^{k} where

\gamma_{k}=\alpha_{0}\beta_{k}+\alpha_{1}\beta_{k-1}+\ldots + \alpha_{k}\beta_{0} = \sum (\alpha_{i}\beta_{j}), where i+j=k.

The multiplication is associative, commutative, and also distributive with respect to addition (the students/readers can take this up as an appetizer exercise!). In fact, the set of all formal power series becomes an algebra. It is called the algebra of formal power series over the reals. It is denoted by \bf\Re[t], where \bf\Re denotes the algebra of reals. We further postulate that f=g in \bf\Re[t] iff \alpha_{k}=\beta_{k} for all k=0,1,2,\ldots. As we do in polynomials, we shall agree that the terms not present indicate that the coefficients are understood to be zero. The elements of \bf\Re may be considered as elements of \bf\Re[t]. In particular, the unity 1 of \bf\Re is also the unity of \bf\Re[t]. Also, the element t^{n} with n>0 belongs to \bf\Re[t], it being the formal power series \sum \alpha_{k}t^{k} with \alpha_{n}=1 and all other \alpha‘s zero. We now have the following important proposition, which is the only tool necessary for working with formal power series as far as combinatorics is concerned:
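
Because everything above is defined coefficient-wise, it is easy to model \bf\Re[t], truncated at some fixed order, on a computer. Here is a minimal sketch (my own naming, and only a truncation, not the full algebra) of the sum, scalar multiple, and Cauchy product just defined:

```python
# Minimal model of formal power series truncated at order N:
# a series is a list of coefficients, index = exponent. Names are mine.
N = 8

def ps_add(f, g):
    return [a + b for a, b in zip(f, g)]

def ps_scale(c, f):
    return [c * a for a in f]

def ps_mul(f, g):
    """Cauchy product: gamma_k = sum of alpha_i * beta_j over i + j = k."""
    return [sum(f[i] * g[k - i] for i in range(k + 1)) for k in range(N)]

one = [1] + [0] * (N - 1)        # the unity
tser = [0, 1] + [0] * (N - 2)    # the element t

print(ps_mul(tser, tser))                # t * t = t^2 -> [0, 0, 1, 0, 0, 0, 0, 0]
print(ps_add(one, ps_scale(3, tser)))    # 1 + 3t     -> [1, 3, 0, 0, 0, 0, 0, 0]
```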

Proposition 2.4:
The element f of \bf\Re[t] given by (2v) has an inverse in \bf\Re[t] iff \alpha_{0} has an inverse in \bf\Re.

Proof:
If g=\sum \beta_{k}t^{k} is such that fg=1, the multiplication rule in \bf\Re[t] tells us that \alpha_{0}\beta_{0}=1 so that \beta_{0} is the inverse of \alpha_{0}. Hence, the “only if” part is proved.

To prove the “if” part, let \alpha_{0} have an inverse \alpha_{0}^{-1} in \bf\Re. We will show that it is possible to find g=\sum \beta_{k}t^{k} in \bf\Re[t] such that fg=1. If such a g were to exist, then the following equations should hold in order that fg=1, that is,

\alpha_{0}\beta_{0}=1
\alpha_{0}\beta_{1}+\alpha_{1}\beta_{0}=0
\alpha_{0}\beta_{2}+\alpha_{1}\beta_{1}+\alpha_{2}\beta_{0}=0
\vdots

So we have \beta_{0}=\alpha_{0}^{-1} from the first equation. Substituting this value of \beta_{0} in the second equation, we get \beta_{1} in terms of the \alpha‘s. And so on: by the principle of mathematical induction, all the \beta‘s are uniquely determined. Thus, f is invertible in \bf\Re[t]. QED.

Note that it is the above proposition which justifies, in \bf\Re[t], equalities such as

\frac{1}{1-t}=1+t+t^{2}+t^{3}+\ldots

The above is true because the constant term of 1-t is invertible and (1-t)(1+t+t^{2}+t^{3}+\ldots)=1.

So, the unique inverse of 1+t+t^{2}+t^{3}+\ldots is (1-t) and vice versa. Hence, the expansion of \frac{1}{1-t} as above. Similarly, we have

\frac{1}{1+t}=1-t+t^{2}-\ldots
\frac{1}{1-t^{2}}=1+t^{2}+t^{4}+\ldots and many other such familiar expansions.
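
Notice that the “if” part of Proposition 2.4 is really an algorithm: the displayed equations determine \beta_{0}, \beta_{1}, \beta_{2}, \ldots one after another. A short sketch (my own code, truncated at a fixed order) recovers exactly the familiar expansions just quoted:

```python
# Minimal sketch of the inversion in Proposition 2.4, truncated at order N. Names are mine.
from fractions import Fraction

N = 8

def ps_inverse(alpha):
    """Given alpha_0, alpha_1, ... with alpha_0 invertible, solve
    alpha_0*beta_0 = 1 and alpha_0*beta_k + ... + alpha_k*beta_0 = 0 for k >= 1."""
    alpha = [Fraction(a) for a in alpha] + [Fraction(0)] * N   # pad with zeros
    beta = [Fraction(0)] * N
    beta[0] = 1 / alpha[0]
    for k in range(1, N):
        beta[k] = -sum(alpha[i] * beta[k - i] for i in range(1, k + 1)) / alpha[0]
    return [int(b) if b.denominator == 1 else b for b in beta]

print(ps_inverse([1, -1]))      # 1/(1 - t)   -> [1, 1, 1, 1, 1, 1, 1, 1]
print(ps_inverse([1, 1]))       # 1/(1 + t)   -> [1, -1, 1, -1, 1, -1, 1, -1]
print(ps_inverse([1, 0, -1]))   # 1/(1 - t^2) -> [1, 0, 1, 0, 1, 0, 1, 0]
```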

There is a differential operator D in \bf\Re[t], which behaves exactly like the differential operator of calculus.

Define: (Df)(t)=\sum_{k=0}^{\infty}(k+1)\alpha_{k+1}t^{k}

Then, one can easily prove that D: f \rightarrow Df is linear on \bf\Re[t], and further
(D^{r}f)(t)=\sum_{k=0}^{\infty}(k+r)(k+r-1)\ldots(k+1)\alpha_{k+r}t^{k}, from which we get the “Taylor-Maclaurin” expansion

f(t)=f(0)+(Df)(0)t+\frac{(D^{2}f)(0)}{2!}t^{2}+ \ldots…(2vi)

In the same manner, one can obtain, from f(t)=\frac{1}{1-\alpha t}, which in turn is equal to
1+ \alpha t + \alpha^{2} t^{2}+ \alpha^{3} t^{3} + \ldots

the result which mimics the logarithmic differentiation of calculus, viz.,

\frac{(Df)(t)}{f(t)} = \alpha + \alpha^{2} t+ \alpha^{3}t^{2}+ \alpha^{4}t^{3}+\ldots…(2vii)

The truth of this in \bf\Re[t] is seen by multiplying the series on the RHS of (2vii) by the series for f(t), and thus obtaining the series for (Df)(t).
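
This multiplication check can itself be mechanized. Here is a minimal sketch (my own helper names, with \alpha taken to be 2 as a sample value) comparing (Df)(t) with the product of the series on the RHS of (2vii) and f(t):

```python
# Minimal check of (2vii) by the multiplication argument above, truncated at order N.
# Helper names are mine; alpha = 2 is just a sample value.
N = 8
alpha = 2

def ps_mul(f, g):
    """Cauchy product, truncated at order N."""
    return [sum(f[i] * g[k - i] for i in range(k + 1)) for k in range(N)]

def D(f):
    """Formal derivative: the coefficient of t^k in Df is (k + 1) * f_{k+1}."""
    return [(k + 1) * f[k + 1] for k in range(N - 1)]

f = [alpha**k for k in range(N)]            # f(t) = 1/(1 - alpha*t) = sum of alpha^k t^k
rhs = [alpha**(k + 1) for k in range(N)]    # alpha + alpha^2 t + alpha^3 t^2 + ...

print(D(f))                    # coefficients of (Df)(t):     [2, 8, 24, 64, 160, 384, 896]
print(ps_mul(rhs, f)[:N - 1])  # coefficients of rhs(t)*f(t): [2, 8, 24, 64, 160, 384, 896]
```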

Let us re-consider generating functions now. We saw that the GF for combinations of \alpha_{1}, \alpha_{2}, \ldots, \alpha_{n} is (1+\alpha_{1}t)(1+\alpha_{2}t)\ldots(1+\alpha_{n}t).

Let us analyze this and find out why it works. After all, what is a combination of the symbols \alpha_{1}, \alpha_{2}, \ldots, \alpha_{n}? It is the result of a decision process involving a sequence of independent decisions as we move down the list of the \alpha‘s. The decisions are to be made on the following questions: Do we choose \alpha_{1} or not? Do we choose \alpha_{2} or not? \ldots Do we choose \alpha_{n} or not? And, if it is an r-combination that we want, we say “yes” to r of the questions above and say “no” to the remaining. The factor (1+\alpha_{1}t) in the expression (2iv) is an algebraic indication of the combinatorial fact that there are only two mutually exclusive alternatives available for us as far as the symbol \alpha_{1} is concerned: either we choose \alpha_{1} or not. Choosing “\alpha_{1}” corresponds to picking the term \alpha_{1}t, and choosing “not-\alpha_{1}” corresponds to picking the term 1. This correspondence is justified by the fact that, in the formation of products in the expansion of (2iv), each term in the expansion has only one contribution from 1+\alpha_{1}t, and that is either 1 or \alpha_{1}t.

The product (1+\alpha_{1}t)(1+\alpha_{2}t) gives us terms corresponding to all possible choices of combinations of the symbols \alpha_{1} and \alpha_{2} — these are:

1.1 standing for the choice “not-\alpha_{1}” and “not-\alpha_{2}”.

\alpha_{1}t . 1 standing for the choice of \alpha_{1} and “not-\alpha_{2}”.

1.\alpha_{2}t standing for the choice of “not-\alpha_{1}” and \alpha_{2}.

\alpha_{1}t . \alpha_{2}t standing for the choice of \alpha_{1} and \alpha_{2}.

This is, in some sense, the rationale for (2iv) being the OGF for the several r-combinations of \alpha_{1}, \alpha_{2}, \ldots, \alpha_{n}.

We shall now complicate the situation a little bit. Let us ask for the combinations of the symbols \alpha_{1}, \alpha_{2}, \ldots, \alpha_{n} with repetitions of each symbol allowed once more in the combinations.

To be discussed in the following article,

Regards,
Nalin Pithwa.

Reference:
Combinatorics, Theory and Applications, V. Krishnamurthy, East-West Press.
Amazon India Link:
https://www.amazon.in/Combinatorics-Theory-Applications-Krishnamurthy-V/dp/8185336024/ref=sr_1_5?keywords=V+Krishnamurthy&qid=1553718848&s=books&sr=1-5

Tutorial problems for RMO 2019 : combinatorics continued

1) In how many ways can 5 men and 5 women be seated in a round table if no two women may be seated side by side?

2) Six generals propose locking a safe containing top-secret documents with a number of different locks. Each general will be given keys to certain of these locks. How many locks are required, and how many keys must each general have, so that, unless at least four generals are present, the safe cannot be opened?

3) How many integers between 1000 and 9999 inclusive have distinct digits? Of these, how many are even numbers? How many consist entirely of odd digits?

4) In how many ways can 9 distinct objects be placed in 5 distinct boxes in such a way that 3 of these boxes would be occupied and 2 would be empty?

5) In how many permutations of the word AUROBIND do the vowels appear in the alphabetical order?

6) There is an unlimited supply of weights of integral numbers of grams. Using n or fewer weights, find the number of ways in which a weight of m grams can be obtained. Prove that there is a bijection of the set of all such ways on the set of increasing words of length (n-1) out of (m+1) ordered letters.

7) How many distinct solutions are there of x+y+z+w=10 (a) in positive integers and (b) in non-negative integers?

8) A train with n passengers aboard makes m stops. In how many ways can the passengers distribute themselves among these m stops as alighting passengers? If we are concerned only with the number of alighting passengers at each stop, how would the answer be modified?

9) There are 16 books on a bookshelf. In how many ways can 6 of these books be selected if a selection must not include two neighbouring books?

10) Show that there are {{n+5} \choose 5} distinct results of a throw with n non-distinct dice.

11) Given n indistinguishable objects and n additional distinct objects (also distinct from the earlier n objects), in how many ways can we choose n out of the 2n objects?

12) Establish the following relations:
12a) B_{n+1}=\sum_{k=0}^{n}(B_{k}){n \choose k}
12b) \sum_{k}{p \choose k}{q \choose {n-k}}={{p+q} \choose n}
12c) S_{n+1}^{m} = \sum_{k=0}^{n}{n \choose k}S_{k}^{m-1}
12d) n^{p}=\sum_{k=0}^{n}{n \choose k}k! (S_{p}^{k})

13) Prove the following identity for all real numbers x:
x^{n}= \sum_{k=1}^{n}S_{n}^{k}[x]_{k}

14) Express x^{4} in terms of {x \choose 4}, {x \choose 3}, …by using the S_{n}^{k}‘s. Express {x \choose 4} in terms of x^{4}, x^{3}, …by using the s_{n}^{k}‘s.

15) A circular loop is divided into p parts, p prime. In how many ways can we paint the loop with n colours if we do not distinguish between patterns which differ only by a rotation of the loop? Deduce Fermat’s Little theorem: n^{p}-n is divisible by p if p is prime.

16) In problem 15, prove that n^{p}-n is also divisible by 2p if p \neq 2. Where is the hypothesis that p is prime used in Problem 15 or in this problem?

17) How many equivalence relations are possible on an n-set?

18) The complete homogeneous symmetric function of n variables \alpha_{1}, \alpha_{2}, \ldots, \alpha_{n} of degree r is defined as h_{r}(\alpha_{1},\alpha_{2}, \alpha_{3}, \ldots, \alpha_{n})=\sum \alpha_{1}^{i_{1}}\alpha_{2}^{i_{2}}\ldots \alpha_{n}^{i_{n}}, the summation being taken over all ordered partitions i_{1}+i_{2}+\ldots+i_{n}=r of r, where the parts are also allowed to be zero. How many terms are there in h_{r}?

Test yourself ! Improve your mettle in math !
Regards,
Nalin Pithwa.