# A Primer: Generating Functions: Part II: for RMO/INMO 2019

We shall now complicate the situation a little bit. Let us ask for the combinations of the symbols $\alpha_{1}, \alpha_{2}, \ldots, \alpha_{n}$ with repetitions of each symbol allowed once more in the combinations. For example, let there be only two symbols $\alpha_{1}, \alpha_{2}$. Let us look for combinations of the form:

$\alpha_{1}$, $\alpha_{2}$, $\alpha_{1}\alpha_{2}$, $\alpha_{1}\alpha_{1}$, $\alpha_{2}\alpha_{2}$, $\alpha_{1}\alpha_{1}\alpha_{2}$, $\alpha_{1}\alpha_{2}\alpha_{2}$, $\alpha_{1}\alpha_{1}\alpha_{2}\alpha_{2}$

where, in each combination, each symbol may occur once, twice, or not at all. The OGF for this can be constructed by reasoning as follows: the choices for $\alpha_{1}$ are not-$\alpha_{1}$, $\alpha_{1}$ once, or $\alpha_{1}$ twice. This is represented by the factor $(1+\alpha_{1}t+\alpha_{1}^{2}t^{2})$. Similarly, the possible choices for $\alpha_{2}$ correspond to the factor $(1+\alpha_{2}t+\alpha_{2}^{2}t^{2})$. So, the required OGF is $(1+\alpha_{1}t+\alpha_{1}^{2}t^{2})(1+\alpha_{2}t+\alpha_{2}^{2}t^{2})$

On expansion, this gives : $1+(\alpha_{1}+\alpha_{2})t+(\alpha_{1}\alpha_{2}+\alpha_{1}^{2}+\alpha_{2}^{2})t^{2}+(\alpha_{1}^{2}\alpha_{2}+\alpha_{1}\alpha_{2}^{2})t^{3}+(\alpha_{1}^{2}\alpha_{2}^{2})t^{4}$

Note that if we omit the term 1 (which corresponds to not choosing any $\alpha$), the other 8 terms correspond to the 8 different combinations listed above. Also, observe that the exponent r of $t^{r}$ tells us that the coefficient of $t^{r}$ carries the list, or inventory, of the r-combinations (under the required specification — in this case, with the restriction on repetitions of symbols):

$\bf{Illustration}$

In the light of the foregoing discussion, let us now take up the following question again: in how many ways can a total of 16 be obtained by rolling 4 dice once? The contribution of each die to the total is a “1”, a “2”, a “3”, a “4”, a “5”, or a “6”. The contributions from each of the 4 dice have to be added to get the total — in this case, 16. So, we write: $t^{1}+t^{2}+t^{3}+t^{4}+t^{5}+t^{6}$

as the factor corresponding to the first die, the factors corresponding to the other three dice are exactly the same. The product of these factors would be:

(*) $(t+t^{2}+t^{3}+t^{4}+t^{5}+t^{6})^{4}$

Each term in the expansion of this would be a power of t, and the exponent k of such a term $t^{k}$ is nothing but the total of the four contributions which went into it. The number of times a term $t^{k}$ can be obtained is exactly the number of times k can be obtained as a total on a throw of the four dice. So, if $\alpha_{k}$ is the coefficient of $t^{k}$ in the expansion, $\alpha_{16}$ is the answer to the above question. Further, since (*) simplifies to $(\frac{t(1-t^{6})}{1-t})^{4}$, the answer tallies with the coefficient specified in the following question: calculate the coefficient of $t^{12}$ in $(\frac{1-t^{6}}{1-t})^{4}$.
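The claim can be checked mechanically. Below is a short Python sketch (my addition, not part of the original text) that compares a brute-force count of the rolls against the coefficient extracted from the generating function:

```python
from itertools import product

# Brute force: count the rolls of 4 dice that total 16.
brute = sum(1 for roll in product(range(1, 7), repeat=4) if sum(roll) == 16)

# Generating-function count: coefficient of t^16 in (t + t^2 + ... + t^6)^4.
def poly_mul(p, q):
    """Multiply two polynomials given as coefficient lists."""
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

die = [0] + [1] * 6          # coefficients of t + t^2 + ... + t^6
gf = [1]
for _ in range(4):
    gf = poly_mul(gf, die)

coeff = gf[16]
print(brute, coeff)          # both equal 125
```

The agreement of the two counts is exactly the statement that the exponent of t keeps track of the running total.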

Now, consider the following problem: express the number $N(n,p)$ of ways of obtaining a total of n by rolling p dice as a certain coefficient in a suitable product of binomial expansions in powers of t. [This, in turn, is related to the observation that the number of ways a total of 16 can be obtained by rolling 4 dice once is the same as the coefficient of $t^{12}$ in $(\frac{1-t^{6}}{1-t})^{4}$.]

So, we get that $N(n,p)=$ coefficient of $t^{n-p}$ in $(\frac{1-t^{6}}{1-t})^{p}$

Let us take an example from a graphical enumeration:

A $\it {graph}$ $G=G(V,E)$ is a set V of vertices a, b, c, …, together with a set $E \subseteq V \times V$ of $\it {edges}$ $(a,b), (a,a), (b,a), (c,b), \ldots$ If $(x,y)$ is considered the same as $(y,x)$, we say the graph is $\it{undirected}$. Otherwise, the graph is said to be $\it{directed}$, and we say ‘$(a,b)$ has a direction from a to b’. The edge $(x,x)$ is called a loop. The graph is said to be of order $|V|$.

If the edge-set E is allowed to be a multiset, that is, if an edge $(a,b)$ is allowed to occur more than once, (and, this may be called a ‘multiple edge’), we refer to the graph as a general graph.

If $\phi_{5}(n)$ and $\psi_{5}(n)$ denote the numbers of undirected (respectively, directed) loopless graphs of order 5, with n edges, none of them a multiple edge, find the series $\sum \phi_{5}(n)t^{n}$ and $\sum \psi_{5}(n)t^{n}$.

Applying our recently developed techniques to the above question: a graph on 5 specified vertices is uniquely determined once you specify which pairs of vertices are ‘joined’. Suppose we are required to consider only graphs with 4 edges. This would need four pairs of vertices to be selected out of the total of ${5 \choose 2} = 10$ pairs that are available. So the selection of pairs of vertices can be made in ${10 \choose 4}$ ways. Each such selection corresponds to one unique graph, with the selected pairs being considered as edges. More informally, having selected certain pairs of vertices, imagine that the vertices are represented by dots in a diagram and join the vertices of each selected pair by a running line. Then, the “graph” becomes a “visible” object. Note that the number of graphs is just the number of selections of pairs of vertices. Hence, $\phi_{5}(4)={10 \choose 4}$.

Or, one could approach this problem in a different way. Imagine that you have a complete graph on 5 vertices — the “completeness” here means that every possible pair of vertices has been joined by an edge. From the complete graph which has 10 edges, one has to choose 4 edges — any four, for that matter — in order to get a graph as required by the problem.

On the same lines, for a directed graph one has a universe of $10 \times 2 = 20$ edges to choose from, for each pair x, y gives rise to two possible edges $(x,y)$ and $(y,x)$. Hence,

$\psi_{5}(4)={20 \choose 4}$.

Thus, the counting series for labelled graphs on 5 vertices is $1 + \sum_{p=1}^{10} {10 \choose p}t^{p}$
and the counting series for directed labelled graphs on 5 vertices is
$1+ \sum_{p=1}^{20}{20 \choose p}t^{p}$.
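These counts are small enough to verify by direct enumeration. A Python sketch (my addition, not part of the original article):

```python
from itertools import combinations
from math import comb

V = range(5)
undirected_pairs = list(combinations(V, 2))                 # the 10 possible edges
directed_pairs = [(x, y) for x in V for y in V if x != y]   # the 20 possible arcs

# phi_5(4): choose 4 of the 10 undirected edges;
# psi_5(4): choose 4 of the 20 directed edges.
phi_5_4 = sum(1 for _ in combinations(undirected_pairs, 4))
psi_5_4 = sum(1 for _ in combinations(directed_pairs, 4))

print(phi_5_4, comb(10, 4))   # 210 210
print(psi_5_4, comb(20, 4))   # 4845 4845
```

Each 4-subset of the available edge set is one admissible graph, which is the whole argument in miniature.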

Finally, the OGF for increasing words on an alphabet $\{a,b,c,d,e\}$ with $a<b<c<d<e$ is

$(1+at+a^{2}t^{2}+\ldots)(1+bt+b^{2}t^{2}+\ldots)(1+ct+c^{2}t^{2}+\ldots)\times (1+dt+d^{2}t^{2}+\ldots)(1+et+e^{2}t^{2}+\ldots)$

The corresponding OE is $(1+t+t^{2}+t^{3}+\ldots)^{5}$, which is nothing but $(1-t)^{-5}$ (this explains the following problem: verify that the number of increasing words of length 10 out of the alphabet $\{a,b,c,d,e \}$ with $a<b<c<d<e$ is the coefficient of $t^{10}$ in $(1-t)^{-5}$).
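The verification can be done by machine as well; a short Python sketch (my addition), using the fact that a weakly increasing word is the same thing as a multiset of letters:

```python
from itertools import combinations_with_replacement
from math import comb

# Weakly increasing words of length 10 over {a,b,c,d,e} with a < b < c < d < e
# correspond exactly to multisets of size 10 from 5 letters.
words = list(combinations_with_replacement("abcde", 10))

# Coefficient of t^10 in (1-t)^{-5} is C(10 + 5 - 1, 10) = C(14, 10).
print(len(words), comb(14, 10))   # 1001 1001
```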

We will continue this detailed discussion/exploration in the next article.

Until then, auf Wiedersehen,
Nalin Pithwa

# A Primer: Generating Functions: Part I : RMO/INMO 2019

GENERATING FUNCTIONS and RECURRENCE RELATIONS:

The concept of a generating function is one of the most useful and basic concepts in the theory of combinatorial enumeration. If we want to count a collection of objects that depends in some way on n, and if the desired value is, say, $\phi (n)$, then a series in powers of t such as $\sum \phi (n) t^{n}$ is called a generating function for $\phi (n)$. Generating functions arise in two different ways. One is from the investigation of recurrence relations; the other is more direct: generating functions arise as counting devices, different terms being specifically included to account for specific situations which we wish to count or ignore. This is a very fundamental, though difficult, technique in combinatorics. It requires considerable ingenuity for its success. We will have a look at the bare basics of such stuff.

We start here with the common knowledge:

$(1+\alpha_{1}t)(1+\alpha_{2}t)\ldots (1+\alpha_{n}t)=1+a_{1}t+a_{2}t^{2}+ \ldots + a_{n}t^{n}$ ….(2i), where $a_{r}=$ the sum of the products of the $\alpha$‘s taken r at a time. …(2ii)

Incidentally, the $a$‘s thus defined in (2ii) are called the elementary symmetric functions associated with the $\alpha$‘s. We will re-visit these functions later.

Let us consider the algebraic identity (2i) from a combinatorial viewpoint. The explicit expansion in powers of t of the RHS of (2i) is symbolically a listing of the various combinations of the $\alpha$‘s in the following sense:

$a_{1}=\sum \alpha_{1}$ represents all the 1-combinations of the $\alpha$‘s
$a_{2}=\sum \alpha_{1}\alpha_{2}$ represents all the 2-combinations of the $\alpha$‘s
and so on.

In other words, if we want the r-combinations of the $\alpha$‘s, we have to look only at the coefficient of $t^{r}$. Since the LHS of (2i) is an expression which is easily constructed, and its expansion generates the combinations in the said manner, we say that the LHS of (2i) is a Generating Function (GF) for the combinations of the $\alpha$‘s. It may happen that we are interested only in the number of combinations and not in a listing or inventory of them. Then, we need only the number of terms in each coefficient above, and this number is easily obtained by setting each $\alpha$ equal to 1. Thus, the GF for the number of combinations is $(1+t)(1+t)(1+t)\ldots (1+t)$, n times;

and this is nothing but $(1+t)^{n}$. We already know that the expansion of this gives $n \choose r$ as the coefficient of $t^{r}$ and this tallies with the fact that the number of r-combinations of the $\alpha$‘s is $n \choose r$. Abstracting these ideas, we make the following definition:

Definition I:
The Ordinary Generating Function (OGF) for a sequence of symbolic expressions $\phi(n)$ is the series

$f(t)=\sum_{n}\phi (n)t^{n}$ …(2iii)

If $\phi (n)$ is a number which counts a certain type of combinations or permutations, the series $f(t)$ is called the Ordinary Enumeration (OE) or counting series for $\phi (n)$ for $n=1,2,\ldots$

Example 2:
The OGF for the combinations of five symbols a, b, c, d, e is $(1+at)(1+bt)(1+ct)(1+dt)(1+et)$

The OE for the same is $(1+t)^{5}$. The coefficient of $t^{4}$ in the first expression is

(*) abcd+abce+ abde+acde+bcde.

The coefficient of $t^{4}$ in the second expression is $5 \choose 4$, that is, 5 and this is the number of terms in (*).

Example 3:

The OGF for the elementary symmetric functions $a_{1}, a_{2}, \ldots$ in the symbols $\alpha_{1},\alpha_{2}, \alpha_{3}, \ldots$ is $(1+\alpha_{1}t)(1+\alpha_{2}t)(1+\alpha_{3}t)\ldots$ ….(2iv)

This is exactly the algebraic result with which we started this section.

Remark:

The fact that the series on the RHS of (2iii) is an infinite series should not bother us with questions of convergence and the like. For, throughout (combinatorics) we shall be working only in the framework of “formal power series”, which we now elaborate.

*THE ALGEBRA OF FORMAL POWER SERIES*

The vector space of infinite sequences of real numbers is well-known. If $(\alpha_{k})$ and $(\beta_{k})$ are two sequences, their sum is the sequence $(\alpha_{k}+\beta_{k})$, and a scalar multiple of the sequence $(\alpha_{k})$ is $(c\alpha_{k})$. We now identify the sequence $(\alpha_{k})$, $k=0,1,2, \ldots$, with the “formal” series

$f = \sum_{k=0}^{\infty}\alpha_{k}t^{k}$….(2v)

where $t^{k}$ only means the following:

$t^{0}=1$, $t^{k}t^{l}=t^{k+l}$.

In the same way, $(\beta_{k})$, where $k=0,1,2,\ldots$ corresponds to the formal series:

$g=\sum_{k=0}^{\infty}\beta_{k}t^{k}$ and

we define: $f+g = \sum (\alpha_{k}+\beta_{k})t^{k}$, and $cf= \sum (c\alpha_{k})t^{k}$.

The set of all power series f now becomes a vector space isomorphic to the space of infinite sequences of real numbers. The zero element of this space is the series with every coefficient zero.

Now, let us define a product of two formal power series. Given f and g as above, we write $fg=\sum_{k=0}^{\infty}\gamma_{k} t^{k}$ where

$\gamma_{k}=\alpha_{0}\beta_{k}+\alpha_{1}\beta_{k-1}+\ldots + \alpha_{k}\beta_{0} = \sum (\alpha_{i}\beta_{j})$, where $i+j=k$.
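The multiplication rule above is simply the Cauchy product of coefficient sequences, and it is easy to implement for truncated series. A minimal Python sketch (my illustration, representing a series by its list of leading coefficients):

```python
# Cauchy product of truncated formal power series, stored as coefficient lists:
# gamma_k = sum over i + j = k of alpha_i * beta_j.
def ps_mul(f, g, n):
    """First n coefficients of the product of two formal power series."""
    return [sum(f[i] * g[k - i]
                for i in range(k + 1)
                if i < len(f) and k - i < len(g))
            for k in range(n)]

# Example: (1 + t + t^2 + ...) * (1 - t) = 1 in the algebra of formal power series.
ones = [1] * 10          # 1 + t + t^2 + ...
one_minus_t = [1, -1]    # 1 - t
print(ps_mul(ones, one_minus_t, 8))   # [1, 0, 0, 0, 0, 0, 0, 0]
```

No question of convergence arises: each coefficient of the product is a finite sum, exactly as in the definition.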

The multiplication is associative, commutative, and also distributive with respect to addition. (The students/readers can take this up as an appetizer exercise!!) In fact, the set of all formal power series becomes an algebra. It is called the algebra of formal power series over the reals. It is denoted by $\bf\Re[t]$, where $\bf\Re$ means the algebra of reals. We further postulate that $f=g$ in $\bf\Re[t]$ iff $\alpha_{k}=\beta_{k}$ for all $k=0,1,2,\ldots$. As we do in polynomials, we shall agree that the terms not present indicate that the coefficients are understood to be zero. The elements of $\bf\Re$ may be considered as elements of $\bf\Re[t]$. In particular, the unity 1 of $\bf\Re$ is also the unity of $\bf\Re[t]$. Also, the element $t^{n}$ with $n>0$ belongs to $\bf\Re[t]$, it being the formal power series $\sum \alpha_{k}t^{k}$ with $\alpha_{n}=1$ and all other $\alpha$‘s zero. We now have the following important proposition, which is the only tool necessary for working with formal power series as far as combinatorics is concerned:

Proposition 2.4:
The element f of $\bf\Re[t]$ given by (2v) has an inverse in $\bf\Re[t]$ iff $\alpha_{0}$ has an inverse in $\bf\Re$.

Proof:
If $g=\sum \beta_{k}t^{k}$ is such that $fg=1$, the multiplication rule in $\bf\Re[t]$ tells us that $\alpha_{0}\beta_{0}=1$ so that $\beta_{0}$ is the inverse of $\alpha_{0}$. Hence, the “only if” part is proved.

To prove the “if” part, let $\alpha_{0}$ have an inverse $\alpha_{0}^{-1}$ in $\bf\Re$. We will show that it is possible to find $g=\sum \beta_{k}t^{k}$ in $\bf\Re[t]$ such that $fg=1$. If such a g were to exist, then the following equations should hold in order that $fg=1$, that is,

$\alpha_{0}\beta_{0}=1$
$\alpha_{0}\beta_{1}+\alpha_{1}\beta_{0}=0$
$\alpha_{0}\beta_{2}+\alpha_{1}\beta_{1}+\alpha_{2}\beta_{0}=0$
$\vdots$

So we have $\beta_{0}=\alpha_{0}^{-1}$ from the first equation. Substituting this value of $\beta_{0}$ in the second equation, we get $\beta_{1}$ in terms of the $\alpha$‘s. And so on: by the principle of mathematical induction, all the $\beta$‘s are uniquely determined. Thus, f is invertible in $\bf\Re[t]$. QED.
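The proof is constructive: the triangular system determines the $\beta$‘s one at a time. A short Python sketch of that recursion (my illustration, using exact rational arithmetic):

```python
from fractions import Fraction

# Solve the triangular system from the proof: beta_0 = alpha_0^{-1}, and then
# each beta_k is forced by alpha_0*beta_k + alpha_1*beta_{k-1} + ... + alpha_k*beta_0 = 0.
def ps_inverse(alpha, n):
    """First n coefficients of the inverse of a formal power series with alpha[0] != 0."""
    beta = [Fraction(1) / alpha[0]]
    for k in range(1, n):
        s = sum(alpha[i] * beta[k - i]
                for i in range(1, min(k, len(alpha) - 1) + 1))
        beta.append(-beta[0] * s)
    return beta

# The inverse of 1 - t is 1 + t + t^2 + ..., as claimed in the text.
print(ps_inverse([1, -1], 6) == [1, 1, 1, 1, 1, 1])   # True
```

The same routine reproduces the other familiar expansions, e.g. the inverse of $1+t$ begins $1, -1, 1, -1, \ldots$.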

Note that it is the above proposition which justifies, in $\bf\Re[t]$, equalities such as

$\frac{1}{1-t}=1+t+t^{2}+t^{3}+\ldots$

The above is true because $1-t$ has an invertible constant term and $(1-t)(1+t+t^{2}+t^{3}+\ldots)=1$.

So, the unique inverse of $1+t+t^{2}+t^{3}+\ldots$ is $(1-t)$ and vice versa. Hence, the expansion of $\frac{1}{1-t}$ as above. Similarly, we have

$\frac{1}{1+t}=1-t+t^{2}-\ldots$
$\frac{1}{1-t^{2}}=1+t^{2}+t^{4}+\ldots$ and many other such familiar expansions.

There is a differential operator $D$ on $\bf\Re[t]$, which behaves exactly like the differential operator of calculus.

Define: $(Df)(t)=\sum_{k=0}^{\infty}(k+1)\alpha_{k+1}t^{k}$

Then, one can easily prove that $D: f \rightarrow Df$ is linear on $\bf\Re[t]$, and further
$D^{r}f(t)=\sum_{k=0}^{\infty}(k+r)(k+r-1)\ldots(k+1)\alpha_{k+r}t^{k}$, from which we get the “Taylor-MacLaurin” expansion

$f(t)=f(0)+(Df)(0)t+\frac{(D^{2}f)(0)}{2!}t^{2}+ \ldots$…(2vi)

In the same manner, one can obtain from $f(t)=\frac{1}{1-\alpha t} = 1+ \alpha t + \alpha^{2} t^{2}+ \alpha^{3} t^{3} + \ldots$

the result which mimics the logarithmic differentiation of calculus, viz.,

$\frac{(Df)(t)}{f(t)} = \alpha + \alpha^{2} t+ \alpha^{3}t^{2}+ \alpha^{4}t^{3}+\ldots$…(2vii)

The truth of this in $\bf\Re[t]$ is seen by multiplying the series on the RHS of (2vii) by the series for $f(t)$, and thus obtaining the series for $(Df)(t)$.

Let us re-consider generating functions now. We saw that the GF for combinations of $\alpha_{1}, \alpha_{2}, \ldots, \alpha_{n}$ is $(1+\alpha_{1}t)(1+\alpha_{2}t)\ldots(1+\alpha_{n}t)$.

Let us analyze this and find out why it works. After all, what is a combination of the symbols $\alpha_{1}, \alpha_{2}, \ldots, \alpha_{n}$? It is the result of a decision process involving a sequence of independent decisions as we move down the list of the $\alpha$‘s. The decisions are to be made on the following questions: Do we choose $\alpha_{1}$ or not? Do we choose $\alpha_{2}$ or not? $\ldots$ Do we choose $\alpha_{n}$ or not? And, if it is an r-combination that we want, we say “yes” to r of the questions above and “no” to the remaining. The factor $(1+\alpha_{1}t)$ in the product (2i) is an algebraic indication of the combinatorial fact that there are only two mutually exclusive alternatives available for us as far as the symbol $\alpha_{1}$ is concerned: either we choose $\alpha_{1}$ or not. Choosing “$\alpha_{1}$” corresponds to picking the term $\alpha_{1}t$, and choosing “not-$\alpha_{1}$” corresponds to picking the term 1. This correspondence is justified by the fact that, in the formation of products in the expansion of (2iv), each term has only one contribution from $1+\alpha_{1}t$, and that is either $1$ or $\alpha_{1}t$.

The product $(1+\alpha_{1}t)(1+\alpha_{2}t)$ gives us terms corresponding to all possible choices of combinations of the symbols $\alpha_{1}$ and $\alpha_{2}$ — these are:

$1 \cdot 1$ standing for the choice “not-$\alpha_{1}$” and “not-$\alpha_{2}$”;

$\alpha_{1}t \cdot 1$ standing for the choice of $\alpha_{1}$ and “not-$\alpha_{2}$”;

$1 \cdot \alpha_{2}t$ standing for the choice of “not-$\alpha_{1}$” and $\alpha_{2}$;

$\alpha_{1}t \cdot \alpha_{2}t$ standing for the choice of $\alpha_{1}$ and $\alpha_{2}$.

This is, in some sense, the rationale for (2iv) being the OGF for the several r-combinations of $\alpha_{1}, \alpha_{2}, \ldots, \alpha_{n}$.

We shall now complicate the situation a little bit. Let us ask for the combinations of the symbols $\alpha_{1}, \alpha_{2}, \ldots, \alpha_{n}$ with repetitions of each symbol allowed once more in the combinations.

To be discussed in the following article,

Regards,
Nalin Pithwa.

Reference:
Combinatorics, Theory and Applications, V. Krishnamurthy, East-West Press.
https://www.amazon.in/Combinatorics-Theory-Applications-Krishnamurthy-V/dp/8185336024/ref=sr_1_5?keywords=V+Krishnamurthy&qid=1553718848&s=books&sr=1-5

# How to find the number of proper divisors of an integer and other cute related questions

Question 1:

Find the number of proper divisors of 441000. (A proper divisor of a positive integer n is any divisor other than 1 and n):

Solution 1:

Any positive integer can be uniquely expressed as the product of powers of prime numbers (the Fundamental Theorem of Arithmetic); thus, $441000 = (2^{3})(3^{2})(5^{3})(7^{2})$. Any divisor, proper or improper, of the given number must be of the form $(2^{a})(3^{b})(5^{c})(7^{d})$, where $0 \leq a \leq 3$, $0 \leq b \leq 2$, $0 \leq c \leq 3$, and $0 \leq d \leq 2$. Thus, the exponent a can be chosen in 4 ways, b in 3 ways, c in 4 ways, and d in 3 ways. So, by the product rule, the total number of proper divisors is $(4)(3)(4)(3)-2=142$.
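For a number of this size, the count can also be confirmed by brute force; a quick Python check (my addition, not part of the original solution):

```python
# Brute-force check of the divisor count of 441000 = 2^3 * 3^2 * 5^3 * 7^2.
n = 441000
divisors = [d for d in range(1, n + 1) if n % d == 0]
proper = [d for d in divisors if d not in (1, n)]

print(len(divisors))   # 144, i.e. (4)(3)(4)(3)
print(len(proper))     # 142, after removing 1 and n
```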

Question 2:

Count the proper divisors of an integer N whose prime factorization is: $N=p_{1}^{\alpha_{1}} p_{2}^{\alpha_{2}} p_{3}^{\alpha_{3}}\ldots p_{k}^{\alpha_{k}}$

Solution 2:

By using the same reasoning as in previous question, the number of proper divisors of N is $(\alpha_{1}+1)(\alpha_{2}+1)(\alpha_{3}+1)\ldots (\alpha_{k}+1)-2$, where we deduct 2 because choosing all the factors means selecting the given number itself, and choosing none of the factors means selecting the trivial divisor 1.

Question 3:

Find the number of ways of factoring 441000 into 2 factors, m and n, such that $m>1, n>1$, and the GCD of m and n is 1.

Solution 3:

Consider the set $A = \{2^{3}, 3^{2}, 5^{3}, 7^{2}\}$ associated with the prime factorization of 441000. It is clear that each element of A must appear in the prime factorization of m or in the prime factorization of n, but not in both. Moreover, the two prime factorizations must be composed exclusively of elements of A. It follows that the number of relatively prime pairs m, n is equal to the number of ways of partitioning A into two unordered, nonempty subsets (unordered, as mn and nm mean the same factorization; recall the fundamental theorem of arithmetic).

The possible unordered partitions are the following:

$A = \{ 2^{3}\} + \{ 3^{2}, 5^{3}, 7^{2}\} = \{3^{2}\}+\{ 2^{3}, 5^{3}, 7^{2}\} = \{ 5^{3}\} + \{ 2^{3}, 3^{2}, 7^{2}\} = \{ 7^{2}\}+\{ 2^{3}, 3^{2}, 5^{3}\}$,

and $A = \{ 2^{3}, 3^{2}\} + \{ 5^{3}, 7^{2}\}=\{ 2^{3}, 5^{3}\} + \{3^{2}, 7^{2} \} = \{ 2^{3}, 7^{2}\} + \{ 3^{2}, 5^{3}\}$

Hence, the required answer is $4+3=2^{4-1}-1=7$.
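Again, this is small enough to verify exhaustively; a Python sketch (my addition):

```python
from math import gcd

# Unordered factorizations 441000 = m * n with m, n > 1 and gcd(m, n) = 1.
N = 441000
pairs = {tuple(sorted((m, N // m)))
         for m in range(2, N)
         if N % m == 0 and N // m > 1 and gcd(m, N // m) == 1}

print(len(pairs))   # 7, matching 2^(4-1) - 1
```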

Question 4:

Generalize the above problem by showing that any integer with k distinct prime factors has $2^{k-1}-1$ factorizations into relatively prime pairs m, n ($m>1, n>1$).

Solution 4:

Proof by mathematical induction on k:

For $k=1$, the result holds trivially.

For the induction step, assume the result for sets of $k-1$ elements; we must prove that a set of k distinct elements, $Z = \{ a_{1}, a_{2}, a_{3}, \ldots, a_{k-1}, a_{k}\}$, has $2^{k-1}-1$ partitions into two nonempty subsets. Now, one partition of Z is

$Z = \{ a_{k}\} \bigcup \{ a_{1}, a_{2}, a_{3}, \ldots, a_{k-1}\} \equiv \{ a_{k}\} \bigcup W$

All the remaining partitions may be obtained by first partitioning W into two parts — which, by the induction hypothesis, can be done in $2^{k-2}-1$ ways — and then including $a_{k}$ in one part or the other — which can be done in 2 ways. By the product rule, the number of partitions of Z is therefore

$1 + (2^{k-2}-1)(2)=2^{k-1}-1$. QED.

Remarks: Question 1 can be done by simply enumerating or breaking it into cases. But, the last generalized problem is a bit difficult without the refined concepts of set theory, as illustrated; and of course, the judicious use of mathematical induction is required in the generalized case.

Cheers,

Nalin Pithwa.

# Some basics of Number Theory for RMO: part III: Fermat’s Little Theorem

Fermat’s Little Theorem:

The fact that there are only a finite number of essentially different numbers in arithmetic to a modulus m means that there are algebraic relations which are satisfied by every number in that arithmetic. There is nothing analogous to these relations in ordinary arithmetic.

Suppose we take any number x and consider its powers $x, x^{2}, x^{3}, \ldots$. Since there are only a finite number of possibilities for these to the modulus m, we must eventually come to one which we have met before, say $x^{h} \equiv x^{k} \pmod m$, where $k < h$. If x is relatively prime to m, the factor $x^{k}$ can be cancelled, and it follows that $x^{l} \equiv 1 \pmod m$, where $l = h-k$. Hence, every number x which is relatively prime to m satisfies some congruence of this form. The least exponent l for which $x^{l} \equiv 1 \pmod m$ will be called the order of x to the modulus m. If x is 1, its order is obviously 1. To illustrate the definition, let us calculate the orders of a few numbers to the modulus 11. The powers of 2, taken to the modulus 11, are

2, 4, 8, 5, 10, 9, 7, 3, 6, 1, 2, 4, $\ldots$

Each one is twice the preceding one, with 11 or a multiple of 11 subtracted where necessary to make the result less than 11. The first power of 2 which is $\equiv 1$ is $2^{10}$, and so the order of $2 \pmod {11}$ is 10. As another example, take the powers of 3:

3, 9, 5, 4, 1, 3, 9, $\ldots$

The first power of 3 which is equivalent to 1 is $3^{5}$, so the order of $3 \pmod {11}$ is 5. It will be found that the order of 4 is again 5, and so also is that of 5.
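These orders are easy to compute directly from the definition; a small Python sketch (my illustration):

```python
# Order of x modulo m: the least l >= 1 with x^l = 1 (mod m), for gcd(x, m) = 1.
def order(x, m):
    l, y = 1, x % m
    while y != 1:
        y = (y * x) % m
        l += 1
    return l

print([order(x, 11) for x in (2, 3, 4, 5)])   # [10, 5, 5, 5]
```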

It will be seen that the successive powers of x are periodic; when we have reached the first number l for which $x^{l} \equiv 1$, then $x^{l+1} \equiv x$ and the previous cycle is repeated. It is plain that $x^{n} \equiv 1 \pmod m$ if and only if n is a multiple of the order of x. In the last example, $3^{n} \equiv 1 \pmod {11}$ if and only if n is a multiple of 5. This remains valid if n is 0 (since $3^{0} = 1$), and it remains valid also for negative exponents, provided $3^{-n}$ is interpreted as a fraction (mod 11) in the way explained earlier (in an earlier blog article).

In fact, the negative powers of 3 (mod 11) are obtained by prolonging the series backwards, and the table of powers of 3 to the modulus 11 is:

$\begin{array}{cccccccccccccc} n & = & \ldots & -3 & -2 & -1 & 0 & 1 &2 & 3 & 4 & 5 & 6 & \ldots \\ 3^{n} & \equiv & \ldots & 9 & 5 & 4 & 1 & 3 & 9 & 5 & 4 & 1 & 3 & \ldots \end{array}$

Fermat discovered that if the modulus is a prime, say p, then every integer x not congruent to 0 satisfies

$x^{p-1} \equiv 1 \pmod p$….call this equation A.

In view of what we have seen above, this is equivalent to saying that the order of any number is a divisor of $p-1$. The result A was mentioned by Fermat in a letter to Frenicle de Bessy of 18 October 1640, in which he also stated that he had a proof. But, as with most of Fermat’s discoveries, the proof was not published or preserved. The first known proof seems to have been given by Leibniz (1646-1716). He proved that $x^{p} \equiv x {\pmod p}$, which is equivalent to A, by writing x as a sum $1+ 1 + 1 + \ldots + 1$ of x units (assuming x positive), and then expanding $(1+1+ \ldots + 1)^{p}$ by the multinomial theorem. The terms $1^{p} + 1^{p} + \ldots + 1^{p}$ give x, and the coefficients of all the other terms are easily proved to be divisible by p.

Quite a different proof was given by Ivory in 1806. If $x \not\equiv 0 {\pmod p}$, the integers

$x, 2x, 3x, \ldots, (p-1)x$

are congruent in some order to the numbers

$1, 2, 3, \ldots, p-1$.

In fact, each of these sets constitutes a complete set of residues except that 0 (zero) has been omitted from each. Since the two sets are congruent, their products are congruent, and so

$(x)(2x)(3x) \ldots ((p-1)x) \equiv (1)(2)(3)\ldots (p-1) \pmod p$

Cancelling the factors 2, 3, …, (p-1), as is permissible, we obtain the above relation A.

One merit of this proof is that it can be extended so as to apply to the more general case when the modulus is no longer a prime.

The generalization of the result A to any modulus was first given by Euler in 1760. To formulate it, we must begin by considering how many numbers in the set 0, 1, 2, …, (m-1) are relatively prime to m. Denote this number by $\phi(m)$. When m is a prime, all the numbers in the set except 0 (zero) are relatively prime to m, so that $\phi(p) = p-1$ for any prime p. Euler’s generalization of Fermat’s theorem is that for any modulus m,

$x^{\phi(m)} \equiv 1 \pmod m$…relation B

provided only that x is relatively prime to m.

To prove this, it is only necessary to modify Ivory’s method by omitting from the numbers $0, 1, 2, \ldots, (m-1)$ not only the number 0, but all numbers which are not relatively prime to m. There remain $\phi(m)$ numbers, say

$a_{1}, a_{2}, \ldots, a_{\mu}$, where $\mu = \phi(m)$.

Then, the numbers

$a_{1}x, a_{2}x, \ldots, a_{\mu}x$

are congruent, in some order, to the previous numbers, and on multiplying and cancelling $a_{1}, a_{2}, \ldots, a_{\mu}$ (as is permissible) we obtain $x^{\mu} \equiv 1 \pmod m$, which is relation B.

To illustrate this proof, take $m=20$. The numbers less than 20 and relatively prime to 20 are :

1, 3, 7, 9, 11, 13, 17, 19.

So that $\phi(20) = 8$. If we multiply these by any number x which is relatively prime to 20, the new numbers are congruent to the original numbers in some other order. For example, if x is 3, the new numbers are congruent respectively to

$3, 9, 1, 7, 13, 19, 11, 17 {\pmod 20}$;

and the argument proves that $3^{8} = 6561 \equiv 1 \pmod {20}$.
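The whole example can be replayed in a few lines of Python (my addition, not part of the text):

```python
from math import gcd

m = 20
units = [a for a in range(m) if gcd(a, m) == 1]
phi = len(units)                      # phi(20) = 8

# Multiplying the units by x = 3 permutes them modulo 20 ...
x = 3
assert sorted((a * x) % m for a in units) == units

# ... and therefore x^phi(m) = 1 (mod m), which is Euler's theorem for this case.
print(phi, pow(x, phi, m))            # 8 1
```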

Reference:

1. The Higher Arithmetic, H. Davenport, Eighth Edition.
2. Elementary Number Theory, Burton, Sixth Edition.
3. A Friendly Introduction to Number Theory, J. Silverman

Shared for those readers who enjoy expository articles.

Nalin Pithwa.

# Combinatorics for RMO : some basics and examples: homogeneous products of r dimensions

Question:

Find the number of homogeneous products of r dimensions that can be formed out of the n letters a, b, c ….and their powers.

Solution:

By division, or by the binomial theorem, we have:

$\frac{1}{1-ax} = 1 + ax + a^{2}x^{2} + a^{3}x^{3} + \ldots$

$\frac{1}{1-bx} = 1+ bx + b^{2}x^{2} + b^{3}x^{3} + \ldots$

$\frac{1}{1-cx} = 1 + cx + c^{2}x^{2} + c^{3}x^{3} + \ldots$

Hence, by multiplication,

$\frac{1}{1-ax} \times \frac{1}{1-bx} \times \frac{1}{1-cx} \times \ldots$

$= (1+ax + a^{2}x^{2}+a^{3}x^{3}+ \ldots)(1+bx + b^{2}x^{2} + b^{3}x^{3}+ \ldots)(1+cx + c^{2}x^{2} + c^{3}x^{3}+ \ldots)\ldots$

$= 1 + x(a + b + c + \ldots) +x^{2}(a^{2}+ab+ac+b^{2}+bc + c^{2} + \ldots) + \ldots$

$= 1 + S_{1}x + S_{2}x^{2} + S_{3}x^{3} + \ldots$ suppose;

where $S_{1}$, $S_{2}$, $S_{3}$, $\ldots$ are the sums of the homogeneous products of one, two, three, … dimensions that can be formed of a, b, c, …and their powers.

To obtain the number of these products, put a, b, c, …each equal to 1; each term in $S_{1}$, $S_{2}$, $S_{3}$, …now becomes 1, and the values of $S_{1}$, $S_{2}$, $S_{3}$, …so obtained give the number of the homogeneous products of one, two, three, ….dimensions.

Also,

$\frac{1}{1-ax} \times \frac{1}{1-bx} \times \frac{1}{1-cx} \ldots$

becomes $\frac{1}{(1-x)^{n}}$, or $(1-x)^{-n}$

Hence, $S_{r} =$ the coefficient of $x^{r}$ in the expansion of $(1-x)^{-n}$

$= \frac{n(n+1)(n+2)(n+3)\ldots (n+r-1)}{r!}= \frac{(n+r-1)!}{r!(n-1)!}$
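This formula is easy to check against direct enumeration for small cases; a Python sketch (my addition), using the fact that a homogeneous product of r dimensions in n letters is a multiset of size r:

```python
from itertools import combinations_with_replacement
from math import comb

# Number of homogeneous products of r dimensions in n letters, by enumeration:
# each product is a multiset of size r chosen from the n letters.
def count_products(n, r):
    return sum(1 for _ in combinations_with_replacement(range(n), r))

# Compare with the closed form (n+r-1)! / (r! (n-1)!) = C(n+r-1, r).
for n, r in [(3, 2), (4, 3), (5, 10)]:
    assert count_products(n, r) == comb(n + r - 1, r)
print("formula verified")
```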

Question:

Find the number of terms in the expansion of any multinomial when the index is a positive integer.

In the expansion of $(a_{1}+ a_{2} + a_{3} + \ldots + a_{r})^{n}$

every term is of n dimensions; therefore, the number of terms is the same as the number of homogeneous products of n dimensions that can be formed out of the r quantities $a_{1}$, $a_{2}$, $a_{3}$, …$a_{r}$, and their powers; and therefore by the preceding question and solution, this is equal to

$\frac{(r+n-1)!}{n! (r-1)!}$

A theorem in combinatorics:

From the previous discussion in this blog article, we can deduce a theorem relating to the number of combinations of n things.

Consider n letters a, b, c, d, ….; then, if we were to write down all the homogeneous products of r dimensions, which can be formed of these letters and their powers, every such product would represent one of the combinations, r at a time, of the n letters, when any one of the letters might occur once, twice, thrice, …up to r times.

Therefore, the number of combinations of n things r at a time when repetitions are allowed is equal to the number of homogeneous products of r dimensions which can be formed out of n letters, and therefore equal to $\frac{(n+r-1)!}{r!(n-1)!}$, or ${{n+r-1} \choose r}$.

That is, the number of combinations of n things r at a time when repetitions are allowed is equal to the number of combinations of $n+r-1$ things r at a time when repetitions are NOT allowed.

Example 1:

Find the coefficient of $x^{r}$ in the expansion of $\frac{(1-2x)^{2}}{(1+x)^{3}}$

Solution 1:

The expression $= (1-4x+4x^{2})(1+p_{1}x+p_{2}x^{2}+ \ldots + p_{r}x^{r}+ \ldots)$, suppose.

The coefficient of $x^{r}$ will be obtained by multiplying $p_{r}$, $p_{r-1}$, $p_{r-2}$ by 1, $-4$, and 4 respectively, and adding the results; hence,

the required coefficient is $p_{r} - 4p_{r-1}+4p_{r-2}$

But $p_{r}$ is the coefficient of $x^{r}$ in $(1+x)^{-3}$, so $p_{r} = (-1)^{r}{{r+2} \choose 2} = (-1)^{r}\frac{(r+1)(r+2)}{2}$.

Hence, the required coefficient is

$= (-1)^{r}\frac{(r+1)(r+2)}{2} - 4(-1)^{r-1}\frac{r(r+1)}{2} + 4 (-1)^{r-2}\frac{r(r-1)}{2}$

$= \frac{(-1)^{r}}{2}\times ((r+1)(r+2) + 4r(r+1) + 4r(r-1))$

$= \frac{(-1)^{r}}{2}(9r^{2}+3r+2)$
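The closed form can be verified by multiplying out the series numerically; a Python sketch (my check, not in the original):

```python
# Series coefficients of (1-2x)^2 / (1+x)^3 up to degree N, using the expansion
# (1+x)^{-3} = sum over r of (-1)^r (r+1)(r+2)/2 x^r.
N = 12
p = [(-1) ** r * (r + 1) * (r + 2) // 2 for r in range(N + 1)]

# Multiply by (1 - 2x)^2 = 1 - 4x + 4x^2:
coeff = [p[r]
         - 4 * (p[r - 1] if r >= 1 else 0)
         + 4 * (p[r - 2] if r >= 2 else 0)
         for r in range(N + 1)]

# Claimed closed form: (-1)^r (9r^2 + 3r + 2) / 2.
closed = [(-1) ** r * (9 * r * r + 3 * r + 2) // 2 for r in range(N + 1)]
print(coeff == closed)   # True
```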

Example 2:

Find the value of the series

$2 + \frac{5}{(2!).3} + \frac{5.7}{3^{2}.(3!)} + \frac{5.7.9}{3^{3}.(4!)} + \ldots$

Solution 2:

The expression is equal to

$2 + \frac{3.5}{2!}\times \frac{1}{3^{2}} + \frac{3.5.7}{3!}\times \frac{1}{3^{3}} + \frac{3.5.7.9}{4!}\times \frac{1}{3^{4}} + \ldots$

$= 2 + \frac{\frac{3}{2}.\frac{5}{2}}{2!} \times \frac{2^{2}}{3^{2}} + \frac{\frac{3}{2}.\frac{5}{2}.\frac{7}{2}}{3!} \times \frac{2^{3}}{3^{3}} + \frac{\frac{3}{2}.\frac{5}{2}.\frac{7}{2}.\frac{9}{2}}{4!} \times \frac{2^{4}}{3^{4}} + \ldots$

$= 1 + \frac{\frac{3}{2}}{1} \times \frac{2}{3} + \frac{\frac{3}{2}.\frac{5}{2}}{2!} \times (\frac{2}{3})^{2} + \frac{\frac{3}{2}.\frac{5}{2}.\frac{7}{2}}{3!} \times (\frac{2}{3})^{3} + \frac{\frac{3}{2}.\frac{5}{2}.\frac{7}{2}.\frac{9}{2}}{4!} \times (\frac{2}{3})^{4} + \ldots$

$= (1-\frac{2}{3})^{\frac{-3}{2}} = (\frac{1}{3})^{-\frac{3}{2}} = 3^{\frac{3}{2}} = 3 \sqrt{3}$.
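A numerical check of the sum (my addition): summing enough terms of the original series should approach $3\sqrt{3} \approx 5.196$.

```python
import math

# Partial sums of 2 + 5/(2! * 3) + 5*7/(3^2 * 3!) + 5*7*9/(3^3 * 4!) + ...
total = 2.0
num = 1.0                               # running product 5 * 7 * 9 * ...
for k in range(2, 60):
    num *= (2 * k + 1)                  # 5, then 5*7, then 5*7*9, ...
    total += num / (3 ** (k - 1) * math.factorial(k))

print(abs(total - 3 * math.sqrt(3)) < 1e-9)   # True
```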

Example 3:

If n is any positive integer, show that the integral part of $(3+\sqrt{7})^{n}$ is an odd number.

Solution 3:

Suppose I to denote the integral and f the fractional part of $(3+\sqrt{7})^{n}$.

Then, $I + f = 3^{n} + {n \choose 1}3^{n-1}\sqrt{7} + {n \choose 2}3^{n-2}.7 + {n \choose 3}3^{n-3}.(\sqrt{7})^{3}+ \ldots$…call this relation 1.

Now, $3 - \sqrt{7}$ is positive and less than 1, therefore $(3-\sqrt{7})^{n}$ is a proper fraction; denote it by $f^{'}$;

Hence, $f^{'} = 3^{n} - {n \choose 1}.3^{n-1}.\sqrt{7} + {n \choose 2}.3^{n-2}.7 - {n \choose 3}.3^{n-3}.(\sqrt{7})^{3}+ \ldots$…call this as relation 2.

Add together relations 1 and 2; the irrational terms disappear, and we have

$I + f + f^{'} = 2(3^{n} + {n \choose 2}.3^{n-2}.7+ \ldots )$, which is an even integer.

But f and $f^{'}$ are proper fractions, and their sum $f+f^{'} = (I+f+f^{'}) - I$ is an integer strictly between 0 and 2; hence $f + f^{'} = 1$.

Hence, I is an odd integer.
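The result can be tested exactly with integer arithmetic: $s_{n} = (3+\sqrt{7})^{n} + (3-\sqrt{7})^{n}$ satisfies $s_{n} = 6s_{n-1} - 2s_{n-2}$ (since $3 \pm \sqrt{7}$ are the roots of $x^{2}-6x+2=0$), so each $s_{n}$ is an even integer, and the integral part I equals $s_{n}-1$. A Python sketch (my illustration):

```python
# s_n = (3+sqrt(7))^n + (3-sqrt(7))^n obeys s_n = 6*s_{n-1} - 2*s_{n-2},
# with s_0 = 2 and s_1 = 6, so every s_n is an even integer. Since
# 0 < (3-sqrt(7))^n < 1, the integer part of (3+sqrt(7))^n is s_n - 1: odd.
def integer_part_is_odd(n):
    s_prev, s = 2, 6                    # s_0, s_1
    for _ in range(n - 1):
        s_prev, s = s, 6 * s - 2 * s_prev
    return (s - 1) % 2 == 1

print(all(integer_part_is_odd(n) for n in range(1, 30)))   # True
```

For instance, $s_{2} = 32$, and indeed the integer part of $(3+\sqrt{7})^{2} \approx 31.92$ is 31, an odd number.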