SEOUL NATIONAL UNIVERSITY

DEPARTMENT OF MATHEMATICAL SCIENCES

Computational Number Theory

TERM PROJECT

Exponentiation in Z_n by the repeated-squaring algorithm

Abdurashid Mamadolimov

Seoul 2007

Contents

1.  Definitions and basic properties

2.  Computing exponentiation in Z_n

3.  Other variants of the repeated-squaring algorithm

4.  Computing products of powers

1.  Definitions and basic properties

Consider the integers Z := {…, -2, -1, 0, 1, 2, …}. For a positive integer n and for a, b ∈ Z, we say that a is congruent to b modulo n if n | (a - b), and we write a ≡ b (mod n). If n ∤ (a - b), then we write a ≢ b (mod n). The relation a ≡ b (mod n) is called a congruence relation, or simply, a congruence. The number n appearing in such a congruence is called the modulus of the congruence.

A simple observation is that a ≡ b (mod n) if and only if there exists an integer c such that a = b + cn.

Theorem 1. Let n be a positive integer. For every integer a, there exists a unique integer b such that a ≡ b (mod n) and 0 ≤ b < n, namely, b := a mod n, where a mod n := a - ⌊a/n⌋·n.
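For example, 25 mod 7 = 4, since 25 = 4 + 3·7, while (-25) mod 7 = 3, since -25 = 3 + (-4)·7.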

For any fixed positive integer n, the binary relation “≡ (mod n)” is an equivalence relation on the set Z:

Theorem 2. Let n be a positive integer. For all a, b, c ∈ Z, we have:

(i) a ≡ a (mod n);

(ii) a ≡ b (mod n) implies b ≡ a (mod n);

(iii) a ≡ b (mod n) and b ≡ c (mod n) implies a ≡ c (mod n).

As such, the binary relation “≡ (mod n)” partitions the set Z into equivalence classes. We denote the equivalence class containing the integer a by [a]. These equivalence classes are called residue classes modulo n.

It is easy to see from the definitions that [a] = { a + nz : z ∈ Z }.

Note that a given residue class modulo n has many different “names”; for example, the residue class [a] is the same as the residue class [a + n]. For any integer a in a residue class, we call a a representative of that class.

Theorem 3. For a positive integer n, there are precisely n distinct residue classes modulo n, namely, [a] for a = 0, …, n - 1.

Fix a positive integer n. Let us define Z_n as the set of residue classes modulo n. We can “equip” Z_n with binary operations defining addition and multiplication in a natural way as follows: for a, b ∈ Z, we define

[a] + [b] := [a + b],

and we define

[a] · [b] := [a · b].

The set Z_n is a commutative ring with unity under the rules of multiplication and addition defined above.
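As an illustration of these rules, here is a minimal Python sketch of arithmetic in Z_n, representing each residue class by the canonical representative from Theorem 1 (the class name Zn and its methods are our own illustrative choices, not part of the text):

class Zn:
    """A residue class modulo n, stored by its canonical representative."""

    def __init__(self, a, n):
        self.n = n
        self.a = a % n                       # representative in {0, 1, ..., n-1}

    def __add__(self, other):
        return Zn(self.a + other.a, self.n)  # [a] + [b] := [a + b]

    def __mul__(self, other):
        return Zn(self.a * other.a, self.n)  # [a] * [b] := [a * b]

    def __repr__(self):
        return f"[{self.a}] (mod {self.n})"

# For instance, Zn(7, 5) * Zn(4, 5) prints as [3] (mod 5), since 28 mod 5 = 3.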

For α ∈ Z_n and a positive integer k, the expression α^k denotes the product α · α ⋯ α, where there are k terms in the product. One may extend this definition to k = 0, defining α^0 to be the multiplicative identity [1].

For an integer a, we define len(a) to be the number of bits in the binary representation of |a|; more precisely, len(a) := ⌊log_2 |a|⌋ + 1 for a ≠ 0, and len(0) := 1.
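For illustration, this quantity can be computed in Python with the built-in bit_length method (the helper name length and the convention len(0) = 1 follow the definition above):

def length(a):
    # number of bits in the binary representation of |a|; len(0) is taken to be 1
    return max(1, abs(a).bit_length())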

The following theorem is about computing with large integers:

Theorem 4. Let a and b be arbitrary integers.

(i)  We can compute a ± b in time O(len(a) + len(b));

(ii)  We can compute a · b in time O(len(a) · len(b));

(iii)  If b ≠ 0, we can compute the quotient q := ⌊a/b⌋ and the remainder r := a mod b in time O(len(b) · len(q)).

2.  Computing exponentiation in Z_n

Let n > 1. For α ∈ Z_n, there exists a unique integer a such that α = [a] and 0 ≤ a < n; we call this integer a the canonical representative of α, and denote it by rep(α). For computational purposes, we represent elements of Z_n by their canonical representatives.

Given α ∈ Z_n and a non-negative integer e, we wish to compute α^e. Perhaps the most obvious way to do this is to iteratively multiply by α a total of e times, starting from [1]. This yields the following algorithm:

Algorithm 1.
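A minimal Python sketch of this naive iteration, assuming elements of Z_n are given by their canonical representatives (the function name exp_naive is illustrative):

def exp_naive(alpha, e, n):
    """Compute alpha^e in Z_n by multiplying by alpha a total of e times."""
    beta = 1                       # canonical representative of [1]
    for _ in range(e):
        beta = (beta * alpha) % n  # one multiplication in Z_n per iteration
    return beta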

Running time of Algorithm 1. Multiplication in Z_n can be performed in time O(len(n)^2): given α, β ∈ Z_n, we compute rep(α·β) as rep(α)·rep(β) mod n, using one integer multiplication and one division with remainder. So the running time of Algorithm 1 is O(e · len(n)^2), which is exponential in the bit length of e.

A much faster algorithm is the repeated-squaring algorithm. This method works as follows. Let e = (b_{l-1} ⋯ b_0)_2 be the binary expansion of e (where b_0 is the low-order bit). For i = 0, …, l, define e_i := ⌊e/2^i⌋; the binary expansion of e_i is (b_{l-1} ⋯ b_i)_2. Also define β_i := α^{e_i} for i = 0, …, l, so β_l = [1] and β_0 = α^e. Then we have e_i = 2·e_{i+1} + b_i and β_i = (β_{i+1})^2 · α^{b_i} for i = 0, …, l - 1. The repeated application of these formulas is equivalent to decomposing the exponent (by a base conversion to binary) into a sequence of squares and products: for example, x^13 = ((x^2 · x)^2)^2 · x, so the algorithm needs only 5 multiplications instead of 12 = 13 - 1.

Some more examples:

because x^10 = ((x^2)^2 · x)^2, the algorithm needs 4 multiplications instead of 9;

because x^100 = (((((x^2 · x)^2)^2)^2 · x)^2)^2, the algorithm needs 8 multiplications instead of 99;

because x^1000 = ((((((((x^2 · x)^2 · x)^2 · x)^2 · x)^2)^2 · x)^2)^2)^2, the algorithm needs 14 multiplications instead of 999;

because 1000000 = (11110100001001000000)_2, a 20-bit expansion with seven one-bits, the algorithm needs 25 multiplications (19 squarings and 6 additional multiplications) instead of 999999.

This idea yields the following algorithm:

Algorithm 2.
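A possible Python rendering of this left-to-right repeated-squaring procedure, following the description above (the function name and variable names are illustrative):

def exp_left_to_right(alpha, e, n):
    """Compute alpha^e in Z_n, scanning the bits of e from high order to low order."""
    beta = 1
    for i in range(e.bit_length() - 1, -1, -1):
        beta = (beta * beta) % n       # beta_i <- (beta_{i+1})^2 ...
        if (e >> i) & 1:
            beta = (beta * alpha) % n  # ... times alpha^{b_i} when b_i = 1
    return beta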

It is clear that when this algorithm terminates, we have β = α^e. The algorithm uses l squarings in Z_n, and at most l additional multiplications in Z_n. So the running time of Algorithm 2 is O(len(e) · len(n)^2).

3.  Other variants of the repeated-squaring algorithm

The repeated-squaring algorithm we have presented here processes the bits of the exponent from left to right (i.e., from high order to low order). The following algorithm for exponentiation in Z_n has similar complexity and processes the bits of the exponent from right to left.

Algorithm 3.
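A Python sketch of such a right-to-left variant: it maintains the successive squares alpha^(2^i) and multiplies in those selected by the one-bits of e (names are illustrative):

def exp_right_to_left(alpha, e, n):
    """Compute alpha^e in Z_n, scanning the bits of e from low order to high order."""
    beta, gamma = 1, alpha % n
    while e > 0:
        if e & 1:
            beta = (beta * gamma) % n  # multiply in alpha^(2^i) when bit i is 1
        gamma = (gamma * gamma) % n    # gamma becomes alpha^(2^(i+1))
        e >>= 1
    return beta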

The next goal of this section is to develop a “2^t-ary” variant of the repeated-squaring algorithm, in which the exponent is effectively treated as a number in base 2^t, rather than in base 2.

Let e = (b_{l-1} ⋯ b_0)_2 and e = (B_{k-1} ⋯ B_0)_{2^t} be the base-2 and base-2^t expansions of e, respectively. It is clear that k = ⌈l/t⌉ and that each base-2^t digit is a block of t consecutive bits: B_i = (b_{ti+t-1} ⋯ b_{ti})_2, taking b_j := 0 for j ≥ l.

For i = 0, …, k, define E_i := ⌊e/2^{ti}⌋; the base-2^t expansion of E_i is (B_{k-1} ⋯ B_i)_{2^t}. Also define β_i := α^{E_i} for i = 0, …, k, so β_k = [1] and β_0 = α^e. Then we have E_i = 2^t · E_{i+1} + B_i and β_i = (β_{i+1})^{2^t} · α^{B_i} for i = 0, …, k - 1.

So, we have α^e = (⋯((α^{B_{k-1}})^{2^t} · α^{B_{k-2}})^{2^t} ⋯)^{2^t} · α^{B_0}.

The following algorithm begins by building a table of the powers [1], α, α^2, …, α^{2^t - 1}, and after that it processes the bits of e from left to right in blocks of length t (i.e., as base-2^t digits).

Algorithm 4.
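A Python sketch of such a 2^t-ary procedure under the conventions above; the table holds the powers alpha^0, …, alpha^(2^t - 1), and the parameter t and function name are illustrative choices:

def exp_2t_ary(alpha, e, n, t=4):
    """Compute alpha^e in Z_n, treating e as a number in base 2^t."""
    base = 1 << t
    # build the table of powers alpha^0, alpha^1, ..., alpha^(2^t - 1)
    table = [1] * base
    for j in range(1, base):
        table[j] = (table[j - 1] * alpha) % n
    # base-2^t digits of e, high-order digit first
    digits = []
    while e > 0:
        digits.append(e & (base - 1))
        e >>= t
    digits.reverse()
    beta = 1
    for d in digits:
        for _ in range(t):
            beta = (beta * beta) % n      # t squarings per digit
        if d:
            beta = (beta * table[d]) % n  # one table multiplication per nonzero digit
    return beta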

This algorithm computes α^e using 2^t - 2 multiplications in Z_n to build the table (steps 2–3), at most ⌈l/t⌉ additional multiplications in Z_n in the main loop (steps 5–8), and roughly l squarings in Z_n.

If we choose the parameter t to be about len(l) - 2·len(len(l)), then we can bound the number of additional multiplications in Z_n by

2^t + ⌈l/t⌉ = O(l / len(l)).

Thus, the cost of the exponentiation is essentially the cost of l squarings in Z_n. Algorithm 4 takes O(len(e) · len(n)^2) time.

The following algorithm improves on Algorithm 4, so that it uses only l + t squarings in Z_n and an additional 2^{t-1} + ⌈l/t⌉ multiplications in Z_n. We build a table that contains only the odd powers of α among α, α^2, …, α^{2^t - 1}.

Algorithm 5.
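A Python sketch along these lines: it stores only the odd powers and writes each nonzero base-2^t digit as d = u·2^h with u odd, squaring t - h times before and h times after the table multiplication (an illustrative implementation, not necessarily identical to Algorithm 5 step for step):

def exp_odd_table(alpha, e, n, t=4):
    """Compute alpha^e in Z_n with a table of the odd powers alpha, alpha^3, ..., alpha^(2^t - 1)."""
    alpha %= n
    alpha_sq = (alpha * alpha) % n
    table = {1: alpha}
    for u in range(3, 1 << t, 2):
        table[u] = (table[u - 2] * alpha_sq) % n
    # base-2^t digits of e, high-order digit first
    digits = []
    while e > 0:
        digits.append(e & ((1 << t) - 1))
        e >>= t
    digits.reverse()
    beta = 1
    for d in digits:
        h = 0
        while d and d % 2 == 0:           # write d = u * 2^h with u odd (or d = 0)
            d //= 2
            h += 1
        for _ in range(t - h):
            beta = (beta * beta) % n
        if d:
            beta = (beta * table[d]) % n  # multiply by the odd power alpha^u
        for _ in range(h):
            beta = (beta * beta) % n
    return beta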

4.  Computing products of powers

Suppose we are given α_1, …, α_k ∈ Z_n along with non-negative integers e_1, …, e_k, where len(e_i) ≤ l for i = 1, …, k. We show how to compute α_1^{e_1} ⋯ α_k^{e_k} using l + O(1) squarings in Z_n and an additional O(2^k + l) multiplications in Z_n. Our algorithm works in two phases: in the first phase, the algorithm uses just the values α_1, …, α_k to build a table of all possible products of subsets of {α_1, …, α_k}; in the second phase, the algorithm computes α_1^{e_1} ⋯ α_k^{e_k}, using the exponents e_1, …, e_k and the table computed in the first phase.

We can assume that len(e_i) = l for i = 1, …, k (if len(e_i) < l, then we can pad e_i with leading zero bits). So we have the k × l bit matrix

B = (b_{i,j}), where

e_i = (b_{i,l-1} ⋯ b_{i,0})_2, i = 1, …, k.

Algorithm 6.
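A Python sketch of this two-phase computation; the subset table is indexed by bit masks over {1, …, k}, and the function name multi_exp is an illustrative choice:

def multi_exp(alphas, exps, n):
    """Compute alphas[0]^exps[0] * ... * alphas[k-1]^exps[k-1] in Z_n."""
    k = len(alphas)
    # Phase 1: table[s] = product of alphas[j] over the subset encoded by the bit mask s
    table = [1] * (1 << k)
    for s in range(1, 1 << k):
        low = s & -s                          # lowest set bit of s
        table[s] = (table[s ^ low] * alphas[low.bit_length() - 1]) % n
    # Phase 2: scan the exponent bits from high order to low order
    l = max((e.bit_length() for e in exps), default=0)
    beta = 1
    for i in range(l - 1, -1, -1):
        beta = (beta * beta) % n              # one squaring per bit position
        s = 0
        for j, e in enumerate(exps):
            if (e >> i) & 1:
                s |= 1 << j                   # column of exponent bits at position i
        if s:
            beta = (beta * table[s]) % n      # one table multiplication per bit position
    return beta

# For illustrative values: multi_exp([2, 3, 5], [13, 10, 7], 101) == (2**13 * 3**10 * 5**7) % 101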

Example. Compute α_1^{e_1} · α_2^{e_2} ⋯ α_k^{e_k} for given α_1, …, α_k ∈ Z_n and exponents e_1, …, e_k.

The matrix B = (b_{i,j}) is formed from the bits of the exponents as above; at each bit position the algorithm performs one squaring and then one multiplication by the table entry indexed by the corresponding column of B.

