GENERATING FUNCTIONS and RECURRENCE RELATIONS:
The concept of a generating function is one of the most useful and basic concepts in the theory of combinatorial enumeration. If we want to count a collection of objects that depends in some way on $n$, and if the desired value is, say, $a_n$, then a series in powers of $t$ such as $\sum_n a_n t^n$ is called a generating function for $a_n$. Generating functions arise in two different ways. One is from the investigation of recurrence relations; the other is more straightforward: generating functions arise as counting devices, different terms being specifically included to account for specific situations which we wish to count or ignore. This is a very fundamental, though difficult, technique in combinatorics, and it requires considerable ingenuity for its success. We will have a look at the bare basics of such methods.
We start here with the common knowledge:
$(1+a_1t)(1+a_2t)\cdots(1+a_nt) = 1 + p_1t + p_2t^2 + \cdots + p_nt^n$ ….(2i)
where $p_r$ = sum of the products of the $a_i$'s taken $r$ at a time. …(2ii)
Incidentally, the $p_r$'s thus defined in (2ii) are called the elementary symmetric functions associated with the $a_i$'s. We will revisit these functions later.
Let us consider the algebraic identity (2i) from a combinatorial viewpoint. The explicit expansion in powers of $t$ of the RHS of (2i) is symbolically a listing of the various combinations of the $a_i$'s in the following sense:
$p_1t$ represents all the 1-combinations of the $a_i$'s,
$p_2t^2$ represents all the 2-combinations of the $a_i$'s,
and so on.
In other words, if we want the $r$-combinations of the $a_i$'s, we have to look only at the coefficient of $t^r$. Since the LHS of (2i) is an expression which is easily constructed, and its expansion generates the combinations in the said manner, we say that the LHS of (2i) is a Generating Function (GF) for the combinations of the $a_i$'s. It may happen that we are interested only in the number of combinations and not in a listing or inventory of them. Then we need to look only at the number of terms in each coefficient above, and this number is easily obtained if we set each $a_i$ equal to 1. Thus, the GF for the number of combinations is
$(1+t)(1+t)\cdots(1+t)$, $n$ times,
and this is nothing but $(1+t)^n$. We already know that the expansion of this gives $\binom{n}{r}$ as the coefficient of $t^r$, and this tallies with the fact that the number of $r$-combinations of the $a_i$'s is $\binom{n}{r}$. Abstracting these ideas, we make the following definition:
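The expansion of $(1+t)^n$ can be carried out mechanically by repeated polynomial multiplication. Here is a minimal Python sketch (the helper names `poly_mul` and `number_of_combinations_gf` are mine, not from the text) that recovers the binomial coefficients as the coefficients of $t^r$:

```python
from math import comb

def poly_mul(p, q):
    """Multiply two polynomials given as coefficient lists (index = power of t)."""
    r = [0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            r[i + j] += pi * qj
    return r

def number_of_combinations_gf(n):
    """Expand (1+t)^n; the coefficient of t^r counts the r-combinations of n symbols."""
    poly = [1]
    for _ in range(n):
        poly = poly_mul(poly, [1, 1])  # multiply by the factor (1 + t)
    return poly

coeffs = number_of_combinations_gf(5)
print(coeffs)                              # [1, 5, 10, 10, 5, 1]
assert coeffs == [comb(5, r) for r in range(6)]
```

Each pass through the loop adjoins one more "choose it or not" factor, exactly as in the combinatorial reading of (2i).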
The Ordinary Generating Function (OGF) for a sequence $(a_0, a_1, a_2, \ldots)$ of symbolic expressions is the series
$f(t) = a_0 + a_1t + a_2t^2 + \cdots = \sum_{r \ge 0} a_r t^r$ ….(2iii)
If $a_r$ is a number which counts a certain type of combinations or permutations, the series is called the Ordinary Enumeration (OE), or counting series, for $a_r$.
The OGF for the combinations of the five symbols $a, b, c, d, e$ is
$(1+at)(1+bt)(1+ct)(1+dt)(1+et)$.
The OE for the same is $(1+t)^5$. The coefficient of $t^4$ in the first expression is
(*) $abcd + abce + abde + acde + bcde$.
The coefficient of $t^4$ in the second expression is $\binom{5}{4}$, that is, 5, and this is the number of terms in (*).
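Both the listing and the count in this example are easy to verify by brute force with the standard library (the variable names below are mine):

```python
from itertools import combinations
from math import comb

symbols = ['a', 'b', 'c', 'd', 'e']

# The coefficient of t^4 in (1+at)(1+bt)(1+ct)(1+dt)(1+et) lists the 4-combinations:
four_combos = [''.join(c) for c in combinations(symbols, 4)]
print(four_combos)            # ['abcd', 'abce', 'abde', 'acde', 'bcde']

# The coefficient of t^4 in (1+t)^5 merely counts them:
assert len(four_combos) == comb(5, 4) == 5
```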
The OGF for the elementary symmetric functions $p_r$ in the symbols $a_1, a_2, \ldots, a_n$ is
$(1+a_1t)(1+a_2t)\cdots(1+a_nt)$ ….(2iv)
This is exactly the algebraic result with which we started this section.
The fact that the series on the RHS of (2iii) is an infinite series should not bother us with questions of convergence and the like. For, throughout combinatorics, we shall be working only in the framework of "formal power series", which we now elaborate.
*THE ALGEBRA OF FORMAL POWER SERIES*
The vector space of infinite sequences of real numbers is well known. If $a = (a_0, a_1, a_2, \ldots)$ and $b = (b_0, b_1, b_2, \ldots)$ are two sequences, their sum is the sequence $a + b = (a_0+b_0, a_1+b_1, a_2+b_2, \ldots)$, and a scalar multiple of the sequence $a$ is $\alpha a = (\alpha a_0, \alpha a_1, \alpha a_2, \ldots)$. We now identify the sequence $(a_0, a_1, a_2, \ldots)$ with the "formal" series
$f = a_0 + a_1t + a_2t^2 + \cdots = \sum_{r \ge 0} a_r t^r$ ….(2v)
where $t^r$ only means the following: it is a place-marker holding the coefficient $a_r$ in the $r$-th position of the sequence. In the same way, the sequence $(b_0, b_1, b_2, \ldots)$ corresponds to the formal series $g = \sum_{r \ge 0} b_r t^r$, and we define:
$f + g = \sum_{r \ge 0} (a_r + b_r)t^r$, and $\alpha f = \sum_{r \ge 0} (\alpha a_r)t^r$.
The set of all power series f now becomes a vector space isomorphic to the space of infinite sequences of real numbers. The zero element of this space is the series with every coefficient zero.
Now, let us define a product of two formal power series. Given $f$ and $g$ as above, we write $fg = \sum_{r \ge 0} c_r t^r$, where
$c_r = a_0b_r + a_1b_{r-1} + \cdots + a_rb_0 = \sum_{i=0}^{r} a_i b_{r-i}$.
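This multiplication rule is straightforward to implement on truncated coefficient lists. A minimal Python sketch (the name `cauchy_product` is mine):

```python
def cauchy_product(a, b, n_terms):
    """Coefficients c_r = sum_{i=0}^{r} a_i * b_{r-i} of the product fg,
    truncated to the first n_terms coefficients; absent coefficients are 0."""
    get = lambda s, k: s[k] if k < len(s) else 0
    return [sum(get(a, i) * get(b, r - i) for i in range(r + 1))
            for r in range(n_terms)]

# (1 + t + t^2 + ...) * (1 + t + t^2 + ...) = 1 + 2t + 3t^2 + ...
ones = [1] * 6
print(cauchy_product(ones, ones, 6))   # [1, 2, 3, 4, 5, 6]
```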
The multiplication is associative, commutative, and also distributive with respect to addition (the students/readers can take this up as an appetizer exercise!). In fact, the set of all formal power series becomes an algebra. It is called the algebra of formal power series over the reals and is denoted by $\mathbb{R}[[t]]$, where $\mathbb{R}$ denotes the algebra of reals. We further postulate that $f = g$ in $\mathbb{R}[[t]]$ iff $a_r = b_r$ for all $r$. As we do with polynomials, we shall agree that terms not present indicate that the corresponding coefficients are understood to be zero. The elements of $\mathbb{R}$ may be considered as elements of $\mathbb{R}[[t]]$. In particular, the unity 1 of $\mathbb{R}$ is also the unity of $\mathbb{R}[[t]]$. Also, the element $t$ belongs to $\mathbb{R}[[t]]$, it being the formal power series with $a_1 = 1$ and all other $a_r$'s zero. We now have the following important proposition, which is the only tool necessary for working with formal power series as far as combinatorics is concerned:
Proposition 2.4:
The element $f$ of $\mathbb{R}[[t]]$ given by (2v) has an inverse in $\mathbb{R}[[t]]$ iff $a_0$ has an inverse in $\mathbb{R}$.
If $g = \sum_{r \ge 0} b_r t^r$ is such that $fg = 1$, the multiplication rule in $\mathbb{R}[[t]]$ tells us that $a_0b_0 = 1$, so that $b_0$ is the inverse of $a_0$. Hence, the "only if" part is proved.
To prove the "if" part, let $a_0$ have an inverse $a_0^{-1}$ in $\mathbb{R}$. We will show that it is possible to find $g = \sum_{r \ge 0} b_r t^r$ in $\mathbb{R}[[t]]$ such that $fg = 1$. If such a $g$ were to exist, then the following equations should hold in order that $fg = 1$, that is,
$a_0b_0 = 1$
$a_0b_1 + a_1b_0 = 0$
$a_0b_2 + a_1b_1 + a_2b_0 = 0$
$\cdots$
So we have $b_0 = a_0^{-1}$ from the first equation. Substituting this value of $b_0$ in the second equation, we get $b_1 = -a_0^{-1}a_1a_0^{-1}$ in terms of the $a_r$'s. And so on; by the principle of mathematical induction, all the $b_r$'s are uniquely determined. Thus, $f$ is invertible in $\mathbb{R}[[t]]$. QED.
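The triangular system in the proof is effectively an algorithm: each $b_r$ is computed from the previously found $b_0, \ldots, b_{r-1}$. A minimal Python sketch of that recursion (the name `series_inverse` is mine):

```python
def series_inverse(a, n_terms):
    """Solve f*g = 1 coefficient by coefficient:
    b_0 = 1/a_0, and for r >= 1, b_r = -(1/a_0) * sum_{i=1}^{r} a_i * b_{r-i}."""
    if a[0] == 0:
        raise ValueError("a_0 must be invertible")
    get = lambda s, k: s[k] if k < len(s) else 0
    b = [1 / a[0]]
    for r in range(1, n_terms):
        b.append(-sum(get(a, i) * b[r - i] for i in range(1, r + 1)) / a[0])
    return b

# f = 1 - t  has inverse  1 + t + t^2 + ...
print(series_inverse([1, -1], 5))   # [1.0, 1.0, 1.0, 1.0, 1.0]
```

Note that over the reals $a_0$ is invertible exactly when $a_0 \neq 0$, which is the condition checked at the top.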
Note that it is the above proposition which justifies, in $\mathbb{R}[[t]]$, equalities such as
$\dfrac{1}{1-t} = 1 + t + t^2 + t^3 + \cdots$ ….(2vi)
The above is true because the RHS has an inverse, and
$(1-t)(1 + t + t^2 + t^3 + \cdots) = 1.$
So, the unique inverse of $1 + t + t^2 + t^3 + \cdots$ is $1-t$, and vice versa. Hence the expansion of $\dfrac{1}{1-t}$ as above. Similarly, we have
$\dfrac{1}{1+t} = 1 - t + t^2 - t^3 + \cdots$, $\dfrac{1}{(1-t)^2} = 1 + 2t + 3t^2 + 4t^3 + \cdots$
and many other such familiar expansions.
There is a differential operator $D$ in $\mathbb{R}[[t]]$ which behaves exactly like the differential operator of calculus. For $f = \sum_{r \ge 0} a_r t^r$, define
$Df = \sum_{r \ge 1} r\,a_r t^{r-1}.$
Then one can easily prove that $D$ is linear on $\mathbb{R}[[t]]$, and further that
$D(fg) = f\,Dg + g\,Df,$
from which we get the "Taylor-MacLaurin" expansion
$f = \sum_{r \ge 0} \dfrac{(D^r f)(0)}{r!}\,t^r.$
In the same manner, one can obtain, from $f = \dfrac{1}{1-t} = 1 + t + t^2 + \cdots$, the derivative $Df = 1 + 2t + 3t^2 + \cdots$, which in turn is equal to $\dfrac{1}{(1-t)^2}$,
the result which mimics the logarithmic differentiation of calculus, viz.,
$\dfrac{Df}{f} = \dfrac{1}{1-t}$ ….(2vii)
The truth of this in $\mathbb{R}[[t]]$ is seen by multiplying the series on the RHS of (2vii) by the series for $f$, and thus obtaining the series for $Df$.
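That verification, multiplying the series for $\frac{1}{1-t}$ by the series for $f$ and recovering $Df$, can be checked directly on truncated coefficient lists. A small Python sketch (function names are mine):

```python
def formal_derivative(a):
    """Df for f = sum a_r t^r: the coefficients of sum_{r>=1} r * a_r * t^{r-1}."""
    return [r * a[r] for r in range(1, len(a))]

def cauchy_product(a, b, n_terms):
    """First n_terms coefficients of the product fg (formal Cauchy product)."""
    get = lambda s, k: s[k] if k < len(s) else 0
    return [sum(get(a, i) * get(b, r - i) for i in range(r + 1))
            for r in range(n_terms)]

# f = 1/(1-t) = 1 + t + t^2 + ...
f = [1] * 7
Df = formal_derivative(f)              # [1, 2, 3, 4, 5, 6] = series for 1/(1-t)^2
# Multiplying the series for 1/(1-t) (the RHS of (2vii)) by the series for f gives Df:
assert cauchy_product(f, f, 6) == Df
print(Df)
```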
Let us reconsider generating functions now. We saw that the GF for combinations of $a_1, a_2, \ldots, a_n$ is $(1+a_1t)(1+a_2t)\cdots(1+a_nt)$.
Let us analyze this and find out why it works. After all, what is a combination of the symbols $a_1, a_2, \ldots, a_n$? It is the result of a decision process involving a sequence of independent decisions as we move down the list of the $a_i$'s. The decisions are to be made on the following questions: Do we choose $a_1$ or not? Do we choose $a_2$ or not? … Do we choose $a_n$ or not? And if it is an $r$-combination that we want, we say "yes" to $r$ of the questions above and "no" to the remaining. The factor $(1 + a_it)$ in the expression (2iv) is an algebraic indication of the combinatorial fact that there are only two mutually exclusive alternatives available for us as far as the symbol $a_i$ is concerned: either we choose $a_i$ or not. Choosing "$a_i$" corresponds to picking the term $a_it$, and choosing "not $a_i$" corresponds to picking the term 1. This correspondence is justified by the fact that, in the formation of products in the expansion of (2iv), each term has only one contribution from the factor $(1 + a_it)$, and that is either $a_it$ or 1.
The product $(1+a_1t)(1+a_2t)$ gives us terms corresponding to all possible choices of combinations of the symbols $a_1$ and $a_2$ — these are:
$1$, standing for the choice "not-$a_1$" and "not-$a_2$",
$a_1t$, standing for the choice of $a_1$ and "not-$a_2$",
$a_2t$, standing for the choice of "not-$a_1$" and $a_2$,
$a_1a_2t^2$, standing for the choice of $a_1$ and $a_2$.
This is, in some sense, the rationale for (2iv) being the OGF for the several $r$-combinations of $a_1, a_2, \ldots, a_n$.
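The decision-process reading can be enumerated directly: each factor contributes one binary choice, and the product of choices runs over all combinations. A Python sketch (variable names are mine) for the two-symbol case above:

```python
from itertools import product

# Each factor (1 + a_i t) offers two alternatives: pick 1 ("not a_i") or pick a_i*t.
# Taking all combinations of these choices enumerates every combination exactly once.
symbols = ['a1', 'a2']
terms = []
for choices in product([0, 1], repeat=len(symbols)):   # 0 = "not chosen", 1 = "chosen"
    picked = [s for s, c in zip(symbols, choices) if c]
    terms.append((len(picked), picked))   # (power of t, monomial)

print(terms)
# [(0, []), (1, ['a2']), (1, ['a1']), (2, ['a1', 'a2'])]
assert len(terms) == 4   # the four terms 1, a1*t, a2*t, a1*a2*t^2
```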
We shall now complicate the situation a little bit. Let us ask for the combinations of the symbols $a_1, a_2, \ldots, a_n$, with repetitions of each symbol allowed once more in the combinations.
To be discussed in the following article.
Combinatorics, Theory and Applications, V. Krishnamurthy, East-West Press.