# Calculating discrete logarithms modulo a prime, using Shanks’ baby-step/giant-step algorithm

Suppose you’re working in a prime field GF(p); you have a generator g; a desired non-zero value x; and you want to find the power y that gives you g^y = x mod p.

We can write this as y = log_g x mod p – this is called the “discrete logarithm” problem.

As a toy example, suppose you’re working in GF(17) and you have generator 3 and desired value 12. You want to find y = log_3 12 mod 17. Spoiler alert: y = 13 because 3^13 = 12 mod 17.

The most straightforward solution is to start with g^0 = 1 and repeatedly multiply by g until you get the desired x. Then y is just the number of times you had to multiply.

Using our toy example:

• 3^0 = 1 mod 17
• 3^1 = 1 * 3 = 3 mod 17
• 3^2 = 3 * 3 = 9 mod 17
• 3^3 = 3 * 9 = 10 mod 17
• 3^4 = 3 * 10 = 13 mod 17
• …
• 3^12 = 3 * 7 = 4 mod 17
• 3^13 = 3 * 4 = 12 mod 17

So y = 13 is the desired logarithm.
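A minimal sketch of this trial-multiplication approach, in Python rather than the PowerShell used later, purely for brevity (the function name is my own):

```python
def brute_force_dlog(g, x, p):
    """Find y with g^y = x mod p by repeated multiplication. O(p) time, O(1) space."""
    value = 1
    for y in range(p):
        if value == x:
            return y
        value = value * g % p
    return None  # x is not a power of g

# Toy example from the text: log_3 12 mod 17
print(brute_force_dlog(3, 12, 17))  # prints 13
```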

This takes O(p) time and constant space. If you like, you can use this approach to generate a lookup table, which takes O(p) space but allows you to perform future logarithms in constant time. Start with an empty array and hop around filling it in as you go until you get this:

| x = 3^y mod 17 | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| y | N/A | 0 | 14 | 1 | 12 | 5 | 15 | 11 | 10 | 2 | 3 | 7 | 13 | 4 | 9 | 6 | 8 |
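A sketch of how such a table could be generated in Python, using a dictionary rather than an array so the missing x = 0 entry simply has no key (names are my own):

```python
def build_log_table(g, p):
    """Build the full discrete-log table for generator g mod p.
    O(p) time and space; afterwards each lookup is O(1)."""
    table = {}
    value = 1
    for y in range(p - 1):  # the powers of g repeat with period p - 1
        table[value] = y
        value = value * g % p
    return table

logs = build_log_table(3, 17)
print(logs[12])  # prints 13
```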

If p is large (for example 2^255 − 19) then this approach fails miserably because there just isn’t enough time in the universe (and if you want a lookup table, there isn’t enough space in the universe either).

Mathematician Daniel Shanks (whom we met last time in Calculating square roots modulo a prime, using the Tonelli-Shanks algorithm) found a faster algorithm which runs in O(√p) time at the cost of also requiring O(√p) space; it is called the baby-step giant-step algorithm.

The idea is to find the smallest integer whose square is at least p – call this s. (For p = 17, s = 5.)

Then we can write every number from 0 to p – 1 as a two-digit number in base s:

| 0 = (0, 0) | 5 = (1, 0) | 10 = (2, 0) | 15 = (3, 0) |
|---|---|---|---|
| 1 = (0, 1) | 6 = (1, 1) | 11 = (2, 1) | 16 = (3, 1) |
| 2 = (0, 2) | 7 = (1, 2) | 12 = (2, 2) | – |
| 3 = (0, 3) | 8 = (1, 3) | 13 = (2, 3) | – |
| 4 = (0, 4) | 9 = (1, 4) | 14 = (2, 4) | – |

In general, we write (i, j) = i·s + j where 0 ≤ i, j < s.

The “baby step” part is to precompute the logarithms for all (0, j) and store them in a lookup table. This takes O(√p) time and O(√p) space:

• 3^(0, 0) = 1 mod 17
• 3^(0, 1) = 3 mod 17
• 3^(0, 2) = 9 mod 17
• 3^(0, 3) = 10 mod 17
• 3^(0, 4) = 13 mod 17

(We need the lookup table to be able to answer the questions “do we know the logarithm of x” and “if so, what is the logarithm of x” in constant time.)
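The baby-step table can be sketched as a Python dictionary, which gives exactly those constant-time answers (function name is my own):

```python
import math

def baby_steps(g, p):
    """Precompute the 'baby step' table mapping g^j mod p -> j for 0 <= j < s,
    where s is the smallest integer whose square is at least p."""
    s = math.isqrt(p - 1) + 1  # ceil(sqrt(p)) for p > 1
    table = {}
    value = 1
    for j in range(s):
        table[value] = j
        value = value * g % p
    return s, table

s, table = baby_steps(3, 17)
print(s)      # prints 5
print(table)  # prints {1: 0, 3: 1, 9: 2, 10: 3, 13: 4}
```

Membership (`x in table`) and lookup (`table[x]`) are both O(1) on average, which is what the parenthetical above asks for.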

If the lookup table contains our desired x, then return the corresponding y = (0, j) and we’re done.

Alas, for our toy example the lookup table does not contain 12: there is no j with 3^(0, j) = 12 mod 17. So we keep going.

Take a “giant step”. Instead of targeting x, target g^(−s) x. If we find such a y′ = (0, j) in our lookup table, then return y′ + s = (1, j). This works because g^y′ = g^(−s) x mod p, which means x = g^(y′ + s) mod p, which means log_g x = y′ + s mod p, which means y = y′ + s mod p.

FIRST GIANT STEP: 3^(−5) = 7 mod 17, so our new target is 12 * 7 = 16 mod 17. Alas, the lookup table does not contain this either. So we keep going.

Repeat the “giant step” process until the current target appears in the lookup table. Return y′ + i·s = (i, j) = i·s + j mod p, where i is the number of giant steps you ultimately had to take. This works for the same reason taking the first giant step worked.

SECOND GIANT STEP: new target is 16 * 7 = 10 mod 17. The lookup table tells us 3^(0, 3) = 10 mod 17. Yay! But we had to take two giant steps to reach this, which means instead of (0, 3) being the answer, it’s actually (2, 3). So the final answer is 2 * 5 + 3 = 13 mod 17, which checks out.
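Putting the baby steps and giant steps together, here is a minimal Python sketch of the whole algorithm (function name and structure are my own; `pow(g, -s, p)` computes the modular inverse and needs Python 3.8+):

```python
import math

def bsgs(g, x, p):
    """Baby-step/giant-step: find y with g^y = x mod p, in O(sqrt(p)) time and space."""
    s = math.isqrt(p - 1) + 1  # smallest s with s*s >= p
    # Baby steps: table of g^j mod p -> j for 0 <= j < s.
    table = {}
    value = 1
    for j in range(s):
        table.setdefault(value, j)
        value = value * g % p
    # Giant steps: repeatedly multiply the target by g^(-s) mod p.
    factor = pow(g, -s, p)  # modular inverse of g^s
    target = x % p
    for i in range(s):
        if target in table:
            return i * s + table[target]
        target = target * factor % p
    return None  # x is not a power of g

print(bsgs(3, 12, 17))  # prints 13: two giant steps, then (2, 3) = 2*5 + 3
```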

Of course, if p is large enough, even this faster algorithm is too slow.

Here’s a sample run of an implementation in PowerShell:

```
> .\baby-step-giant-step.ps1 -p 17 -g 3 -x 12
Find y so that 3^y = 12 mod 17
log_3 12 mod 17 = 13
3^13 mod 17 = 12
```