Computing Integer Roots

Fred Akalin

1. The algorithm

Today I’m going to talk about the generalization of the integer square root algorithm to higher roots: that is, given \(n\) and \(p\), computing \(\iroot(n, p) = \lfloor \sqrt[p]{n} \rfloor\), the greatest integer whose \(p\)th power is less than or equal to \(n\). The generalized algorithm is straightforward, and it’s easy to generalize the proof of correctness, but the run-time bound is a bit trickier, since it depends on \(p\).

First, the algorithm, which we’ll call \(\NewtonRoot\):
  1. If \(n = 0\), return \(0\).
  2. If \(p \ge \Bits(n)\), return \(1\).
  3. Otherwise, set \(i\) to \(0\) and set \(x_0\) to \(2^{\lceil \Bits(n) / p\rceil}\).
  4. Repeat:
    1. Set \(x_{i+1}\) to \(\lfloor ((p - 1) x_i + \lfloor n/x_i^{p-1} \rfloor) / p \rfloor\).
    2. If \(x_{i+1} \ge x_i\), return \(x_i\). Otherwise, increment \(i\).
and its implementation in JavaScript:[1]
// iroot returns the greatest number x such that x^p <= n. The type of
// n must behave like BigInteger (e.g.,
// https://github.com/akalin/jsbn ), n must be non-negative, and
// p must be a positive integer.
//
// Example (open up the JS console on this page and type):
//
//   iroot(new BigInteger("64"), 3).toString()
function iroot(n, p) {
  var s = n.signum();
  if (s < 0) {
    throw new Error('negative radicand');
  }
  if (p <= 0) {
    throw new Error('non-positive degree');
  }
  if (p !== (p|0)) {
    throw new Error('non-integral degree');
  }

  if (s === 0) {
    return n;
  }

  var b = n.bitLength();
  if (p >= b) {
    return n.constructor.ONE;
  }

  // x = 2^ceil(Bits(n)/p)
  var x = n.constructor.ONE.shiftLeft(Math.ceil(b/p));
  var pMinusOne = new n.constructor((p - 1).toString());
  var pBig = new n.constructor(p.toString());
  while (true) {
    // y = floor(((p-1)x + floor(n/x^(p-1)))/p)
    var y = pMinusOne.multiply(x).add(n.divide(x.pow(p - 1))).divide(pBig);
    if (y.compareTo(x) >= 0) {
      return x;
    }
    x = y;
  }
}
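For readers without the BigInteger dependency handy, here is a minimal sketch of the same algorithm using native BigInt (an illustrative translation, not the implementation the footnote links to; irootBig is a name introduced just for this sketch):
// irootBig is an illustrative native-BigInt version of iroot above;
// it assumes a JavaScript engine with BigInt support, and n and p
// are BigInts with n >= 0 and p >= 1.
function irootBig(n, p) {
  if (n < 0n) {
    throw new Error('negative radicand');
  }
  if (p <= 0n) {
    throw new Error('non-positive degree');
  }
  if (n === 0n) {
    return 0n;
  }
  var b = BigInt(n.toString(2).length); // Bits(n)
  if (p >= b) {
    return 1n;
  }
  // x = 2^ceil(Bits(n)/p); BigInt division truncates, so add p - 1
  // to round up.
  var x = 1n << ((b + p - 1n) / p);
  while (true) {
    // y = floor(((p-1)x + floor(n/x^(p-1)))/p)
    var y = ((p - 1n) * x + n / x ** (p - 1n)) / p;
    if (y >= x) {
      return x;
    }
    x = y;
  }
}
For example, irootBig(972n, 3n) evaluates to 9n, since \(9^3 = 729 \le 972 \lt 1000 = 10^3\).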

This algorithm turns out to require \(Θ(p) + O(\lg \lg n)\) loop iterations, with the run-time of a single loop iteration depending on the cost of the underlying arithmetic operations.

2. Correctness

Again we look at the iteration rule: \[ x_{i+1} = \left\lfloor \frac{(p - 1) x_i + \left\lfloor \frac{n}{x_i^{p-1}} \right\rfloor}{p} \right\rfloor \] Letting \(f(x)\) be the right-hand side, we can again use basic properties of the floor function to remove the inner floor: \[ f(x) = \left\lfloor \frac{1}{p} ((p-1) x + n/x^{p-1}) \right\rfloor \] Letting \(g(x)\) be its real-valued equivalent: \[ g(x) = \frac{1}{p} ((p-1) x + n/x^{p-1}) \] we can, again using basic properties of the floor function, show that \(f(x) \le g(x)\), and for any integer \(m\), \(m \le f(x)\) if and only if \(m \le g(x)\).
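(The specific property being used is the standard identity that, for an integer \(a\), a positive integer \(c\), and a real number \(b\), \[ \left\lfloor \frac{a + \lfloor b \rfloor}{c} \right\rfloor = \left\lfloor \frac{a + b}{c} \right\rfloor\text{,} \] applied with \(a = (p-1) x\), \(b = n/x^{p-1}\), and \(c = p\).)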

Finally, let’s give a name to our desired output: let \(s = \iroot(n, p) = \lfloor \sqrt[p]{n} \rfloor\).[2]

Unsurprisingly, \(f(x)\) never underestimates:
(Lemma 1.) For \(x \gt 0\), \(f(x) \ge s\).

Proof. By the basic properties of \(f(x)\) and \(g(x)\) above, it suffices to show that \(g(x) \ge s\). \(g'(x) = (1 - 1/p) (1 - n/x^p)\) and \(g''(x) = (p - 1) (n/x^{p+1})\). Therefore, \(g(x)\) is concave-up for \(x \gt 0\); in particular, its single positive extremum at \(x = \sqrt[p]{n}\) is a minimum. But \(g(\sqrt[p]{n}) = \sqrt[p]{n} \ge s\). ∎
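(To spell out the calculus: writing \(g(x) = ((p-1) x + n x^{1-p})/p\) and differentiating term by term gives \[ g'(x) = \frac{(p-1) - (p-1) n x^{-p}}{p} = \left(1 - \frac{1}{p}\right)\left(1 - \frac{n}{x^p}\right)\text{,} \] and differentiating again gives \(g''(x) = (p - 1) n/x^{p+1}\), which is positive for \(x \gt 0\).)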

Also, our initial guess is always an overestimate:
(Lemma 2.) \(x_0 \gt s\).

Proof. \(\Bits(n) = \lfloor \lg n \rfloor + 1 \gt \lg n\). Therefore, \[ \begin{aligned} x_0 &= 2^{\lceil \Bits(n) / p \rceil} \\ &\ge 2^{\Bits(n) / p} \\ &\gt 2^{\lg n / p} \\ &= \sqrt[p]{n} \\ &\ge s\text{.} \; \blacksquare \end{aligned} \]

Therefore, we again have the invariant that \(x_i \ge s\), which lets us prove partial correctness:
(Theorem 1.) If \(\NewtonRoot\) terminates, it returns the value \(s\).

Proof. Assume it terminates. If it terminates in step \(1\) or \(2\), then we are done. Otherwise, it can only terminate in step \(4.2\) where it returns \(x_i\) such that \(x_{i+1} = f(x_i) \ge x_i\). This implies \(g(x_i) = ((p-1)x_i + n/x_i^{p-1}) / p \ge x_i\). Multiplying through by \(p\) and subtracting \((p-1) x_i\) yields \(n/x_i^{p-1} \ge x_i\), i.e., \(n \ge x_i^p\), and combining with our invariant we get \(\sqrt[p]{n} \ge x_i \ge s\). But \(s + 1 \gt \sqrt[p]{n}\), so that forces \(x_i\) to be \(s\), and thus \(\NewtonRoot\) returns \(s\) if it terminates. ∎

Total correctness is also easy:
(Theorem 2.) \(\NewtonRoot\) terminates.

Proof. Assume it doesn’t terminate. Then we have a strictly decreasing infinite sequence of integers \(\{ x_0, x_1, \dotsc \}\). But this sequence is bounded below by \(s\), so it cannot decrease indefinitely. This is a contradiction, so \(\NewtonRoot\) must terminate. ∎

Note that, as with \(\mathrm{N{\small EWTON}\text{-}I{\small SQRT}}\), the check in step \(4.2\) cannot be weakened to \(x_{i+1} = x_i\), as doing so would cause the algorithm to oscillate forever on some inputs. In fact, as \(p\) grows, so does the number of values of \(n\) that exhibit this behavior, and so does the number of possible oscillation patterns. For example, \(n = 972\) with \(p = 3\) yields the sequence \(\{ 16, 11, 10, 9, 10, 9, \dotsc \}\), and \(n = 80\) with \(p = 4\) yields the sequence \(\{ 4, 3, 2, 4, 3, 2, \dotsc \}\).
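You can watch these oscillations directly by iterating the rule without any termination check. Here’s a quick sketch in the same native-BigInt style as irootBig above (newtonIterates is, again, a name introduced just for this example):
// newtonIterates returns the first count iterates x_0, x_1, ... for
// n and p, with no termination check.
function newtonIterates(n, p, count) {
  var b = BigInt(n.toString(2).length);
  var x = 1n << ((b + p - 1n) / p); // x_0 = 2^ceil(Bits(n)/p)
  var xs = [];
  for (var i = 0; i < count; i++) {
    xs.push(x);
    x = ((p - 1n) * x + n / x ** (p - 1n)) / p;
  }
  return xs;
}
Then newtonIterates(972n, 3n, 6) evaluates to [16n, 11n, 10n, 9n, 10n, 9n], and newtonIterates(80n, 4n, 6) evaluates to [4n, 3n, 2n, 4n, 3n, 2n].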

3. Run-time

We will show that \(\NewtonRoot\) takes \(Θ(p) + O(\lg \lg n)\) loop iterations. Then we will analyze a single loop iteration and the arithmetic operations used to get a total run-time bound.

Analogously to the square root case, define \(\Err(x) = x^p/n - 1\) and let \(ϵ_i = \Err(x_i)\). First, let’s prove a lower bound for the \(ϵ_i\), which translates directly from the square root case:
(Lemma 3.) \(x_i \ge s + 1\) if and only if \(ϵ_i \ge 1/n\).

Proof. \(n \lt (s + 1)^p\), so \(n + 1 \le (s + 1)^p\), and therefore \(\Err(s + 1) = (s + 1)^p/n - 1 \ge 1/n\). Since \(\Err(x)\) is increasing, \(x_i \ge s + 1\) implies \(ϵ_i \ge \Err(s + 1) \ge 1/n\). Conversely, if \(ϵ_i \ge 1/n\), then \(x_i^p \ge n + 1 \gt s^p\), which forces \(x_i \ge s + 1\). ∎

Now for the next few lemmas we need to do some algebra and calculus. Inverting \(\Err(x)\), we get that \(x_i = \sqrt[p]{(ϵ_i + 1) \cdot n}\). Expressing \(g(x_i)\) in terms of \(ϵ_i\) and \(q = 1 - 1/p\) we get \[ g(x_i) = \sqrt[p]{n} \left( \frac{ϵ_i q + 1}{(ϵ_i + 1)^q} \right) \] and \[ \Err(g(x_i)) = \frac{(q ϵ_i + 1)^p}{(ϵ_i + 1)^{p-1}} - 1\text{.} \] Let \[ f(ϵ) = \frac{(q ϵ + 1)^p}{(ϵ + 1)^{p-1}} - 1\text{.} \] Then computing derivatives, \[ \begin{aligned} f'(ϵ) &= q ϵ \frac{(q ϵ + 1)^{p-1}}{(ϵ + 1)^p}\text{,} \\ f''(ϵ) &= q \frac{(q ϵ + 1)^{p-2}}{(ϵ + 1)^{p + 1}}\text{, and} \\ f'''(ϵ) &= -q (2 + q (2 + 3 ϵ)) \frac{(q ϵ + 1)^{p-3}}{(ϵ + 1)^{p + 2}}\text{.} \end{aligned} \] Note that \(f(0) = f'(0) = 0\), and \(f''(0) = q\). Also, for \(ϵ > 0\), \(f'(ϵ) \gt 0\), \(f''(ϵ) \gt 0\), and \(f'''(ϵ) < 0\).
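As a sanity check on the algebra, specializing to \(p = 2\) (so \(q = 1/2\)) gives \[ f(ϵ) = \frac{(ϵ/2 + 1)^2}{ϵ + 1} - 1 = \frac{ϵ^2}{4(ϵ + 1)}\text{,} \] recovering the error recurrence for Newton’s method applied to square roots.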

Now we’re ready to show that the \(ϵ_i\) shrink quadratically:
(Lemma 4.) \(f(ϵ) \lt (ϵ/\sqrt{2})^2\) for \(ϵ \gt 0\).

Proof. Taylor-expand \(f(ϵ)\) around \(0\) with the Lagrange remainder form to get \[ f(ϵ) = f(0) + f'(0) ϵ + \frac{f''(0)}{2} ϵ^2 + \frac{f'''(\xi)}{6} ϵ^3 \] for some \(\xi\) such that \(0 \lt \xi \lt ϵ\). Plugging in values, we see that \(f(ϵ) = \frac{1}{2} q ϵ^2 + \frac{1}{6} f'''(\xi) ϵ^3\) with the last term being negative, so \(f(ϵ) \lt \frac{1}{2} q ϵ^2 \lt \frac{1}{2} ϵ^2\). ∎

But this is only a useful upper bound when \(ϵ_i \le 1\). In the square root case this was okay, since \(ϵ_1 \le 1\), but that is not true for larger values of \(p\). In fact, in general, the \(ϵ_i\) start off shrinking linearly:
(Lemma 5.) For \(ϵ \gt 1\), \(f(ϵ) \gt ϵ/8\).

Proof. Since \(f(0) = f'(0) = 0\), and \(f''(ϵ) \gt 0\) for \(ϵ \ge 0\), \(f'(ϵ)\) and \(f(ϵ)\) are increasing, and thus \(f(1) \gt 0\) and \(f(ϵ)\) is a concave-up curve.

Then \((0, 0)\) and \((1, f(1))\) are two points on a concave-up curve, and thus geometrically the line \(y = f(1) ϵ\) must lie below \(y = f(ϵ)\) for \(ϵ \gt 1\), and thus \(f(ϵ) \gt f(1) ϵ\) for \(ϵ \gt 1\). Algebraically, this also follows from the definition of (strict) convexity (with \(x_1 = 0\), \(x_2 = ϵ\), and \(t = 1 - 1/ϵ\)).

But \(f(1) = (2 - 1/p)^p/2^{p-1} - 1 = 2 \left(1 - \frac{1}{2p}\right)^p - 1\), which is always increasing as a function of \(p\), as you can see by calculating its derivative. Therefore, its minimum is at \(p = 2\), which is \(1/8\), and so \(f(ϵ) \gt f(1) ϵ \ge ϵ/8\). ∎

Finally, let’s bound our initial values:
(Lemma 6.) \(x_0 \le 2s\) and \(ϵ_0 \le 2^p - 1\).

Proof. This is a straightforward generalization of the equivalent lemma from the square root case. Let’s start with \(x_0\): \[ \begin{aligned} x_0 &= 2^{\lceil \Bits(n) / p \rceil} \\ &= 2^{\lfloor (\lfloor \lg n \rfloor + 1 + p - 1)/p \rfloor} \\ &= 2^{\lfloor \lg n / p \rfloor + 1} \\ &= 2 \cdot 2^{\lfloor \lg n / p \rfloor}\text{.} \end{aligned} \] Then \(x_0/2 = 2^{\lfloor \lg n / p \rfloor} \le 2^{\lg n / p} = \sqrt[p]{n}\). Since \(x_0/2\) is an integer, \(x_0/2 \le \sqrt[p]{n}\) if and only if \(x_0/2 \le \lfloor \sqrt[p]{n} \rfloor = s\). Therefore, \(x_0 \le 2s\).

As for \(ϵ_0\): \[ \begin{aligned} ϵ_0 &= \Err(x_0) \\ &\le \Err(2s) \\ &= (2s)^p/n - 1 \\ &= 2^p s^p/n - 1\text{.} \end{aligned} \] Since \(s^p \le n\), \(2^p s^p/n \le 2^p\) and thus \(ϵ_0 \le 2^p - 1\). ∎
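Both bounds are achieved: for example (a quick check, with \(k \ge 1\)), when \(n = 2^{kp}\), \[ x_0 = 2^{\lceil (kp + 1)/p \rceil} = 2^{k+1} = 2s \quad \text{and} \quad ϵ_0 = \frac{2^{(k+1)p}}{2^{kp}} - 1 = 2^p - 1\text{.} \]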

Now we’re ready to show our main result, which involves calculating how long the \(ϵ_i\) shrink linearly:
(Theorem 3.) \(\NewtonRoot\) performs \(Θ(p) + O(\lg \lg n)\) loop iterations.

Proof. Assume that \(ϵ_i \gt 1\) for \(i \le j\), \(ϵ_{j+1} \le 1\), and \(j+k\) is the number of loop iterations performed when running the algorithm for \(n\) and \(p\) (i.e., \(x_{j+k} \ge x_{j+k-1}\)). Using Lemma 5, \[ \left( \frac{1}{8} \right)^{j+1} ϵ_0 \lt ϵ_{j+1} \le 1\text{,} \] which implies \[ j \gt \frac{\lg ϵ_0}{3} - 1\text{.} \]

For an upper bound on \(j\), note that since \(f(ϵ)\) is concave-up with \(f(0) = 0\), \(f(ϵ)/ϵ\) is non-decreasing; since \(f(ϵ) = (q ϵ + 1)^p/(ϵ + 1)^{p-1} - 1\) grows like \(q^p ϵ\), that ratio tends to \(q^p = (1 - 1/p)^p \lt 1/2\). Therefore \(ϵ_{i+1} = \Err(x_{i+1}) \le \Err(g(x_i)) = f(ϵ_i) \lt ϵ_i/2\), so \[ \left( \frac{1}{2} \right)^j ϵ_0 \gt ϵ_j \gt 1\text{,} \] which implies \[ j \lt \lg ϵ_0\text{.} \] Therefore, \(j = Θ(\lg ϵ_0)\); by Lemma 6 this is \(O(p)\), and it is \(Θ(p)\) for worst-case inputs like \(n = 2^{kp}\), for which \(ϵ_0 = 2^p - 1\) as computed above.

Now assume \(k \ge 5\). Then \(x_i \ge s + 1\) for \(i \lt j + k - 1\), since the \(x_i\) are strictly decreasing integers until the final iteration and \(x_{j+k-1} \ge s\). Since \(ϵ_{j+1} \le 1\) by assumption, \(ϵ_{j+3} \le 1/2\) and \(ϵ_i \le (ϵ_{j+3})^{2^{i-j-3}}\) for \(j + 3 \le i \lt j + k - 1\) by Lemma 4, so \(ϵ_{j+k-2} \le 2^{-2^{k-5}}\). But \(1/n \le ϵ_{j+k-2}\) by Lemma 3, so \(1/n \le 2^{-2^{k-5}}\). Taking logs twice to bring down the \(k\) yields \(k - 5 \le \lg \lg n\). Then \(k \le \lg \lg n + 5\), and thus \(k = O(\lg \lg n)\).

Therefore, the total number of loop iterations is \(Θ(p) + O(\lg \lg n)\). ∎

Note that \(p \le \lg n\), so we can just say that \(\NewtonRoot\) performs \(O(\lg n)\) loop iterations. But that obscures rather than simplifies. Note that the proof above is very similar to the proof of the worse run-time bound of \(\mathrm{N{\small EWTON}\text{-}I{\small SQRT}'}\), where the initial guess varies. In this case, the error in our initial guess is magnified, since we raise it to the \((p-1)\)th power, and so that manifests as the \(Θ(p)\) term.
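To see the \(Θ(p)\) term empirically, one can count loop iterations directly. Here’s a sketch in the same native-BigInt style as before (irootIterations is a name introduced just for this example):
// irootIterations returns the number of loop iterations the
// algorithm performs for n and p (0 if the loop is never entered).
function irootIterations(n, p) {
  var b = BigInt(n.toString(2).length);
  if (n === 0n || p >= b) {
    return 0;
  }
  var x = 1n << ((b + p - 1n) / p);
  var count = 0;
  while (true) {
    var y = ((p - 1n) * x + n / x ** (p - 1n)) / p;
    count++;
    if (y >= x) {
      return count;
    }
    x = y;
  }
}
Tabulating irootIterations(2n ** (k * p), p) for a large fixed BigInt k and growing \(p\) shows the count growing with \(p\), as the analysis predicts.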

Furthermore, unlike the square root case, the number of arithmetic operations in a loop iteration isn’t constant. In particular, the sub-step that computes \(x_i^{p-1}\) takes a number of arithmetic operations that depends on \(p\). Using repeated squaring, this computation takes \(Θ(\lg p)\) squarings and \(O(\lg p)\) multiplications.
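Concretely, the repeated-squaring sub-step might look like this (a generic sketch in the same style as the earlier BigInt examples; powBySquaring is a name introduced just for this example):
// powBySquaring computes x^e (for e >= 0) with Θ(lg e) squarings
// and at most as many extra multiplications.
function powBySquaring(x, e) {
  var result = 1n;
  var base = x;
  while (e > 0n) {
    if (e & 1n) {
      result *= base; // this bit of e is set, so multiply its power in
    }
    e >>= 1n;
    if (e > 0n) {
      base *= base; // square to get the power for the next bit
    }
  }
  return result;
}
For example, powBySquaring(3n, 5n) evaluates to 243n.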

If the cost of an arithmetic operation is constant, e.g., we’re working with fixed-size integers, then the run-time bound is the iteration bound above multiplied by \(Θ(\lg p)\).

Otherwise, if the cost of an arithmetic operation depends on the length of its arguments, then we only have to multiply by a constant factor to get the run-time bound in terms of arithmetic operations. If the cost of multiplying two numbers \(\le x\) is \(M(x) = O(\lg^k x)\), then the cost of computing \(x^p\) is \(O((p \lg x)^k)\). But \(x\) is \(Θ(n^{1/p})\), so the cost of computing \(x^p\) is \(O(\lg^k n)\), which is on the order of the cost of multiplying two numbers \(\le n\). Furthermore, note that we divide the result into \(n\), so we can stop computing \(x_i^{p-1}\) as soon as an intermediate result exceeds \(n\). So in that case, we can treat a loop iteration as if it performed a constant number of arithmetic operations on numbers of order \(n\), and so, like in the square root case, we pick up a factor of \(D(n)\), where \(D(n)\) is the run-time of dividing \(n\) by some number \(\le n\).
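Grafting the early-exit idea onto the repeated-squaring sketch above might look like the following, where powWithCap is a hypothetical helper that returns null as soon as the power is guaranteed to exceed the cap (so the caller can treat \(\lfloor n/x_i^{p-1} \rfloor\) as \(0\)):
// powWithCap computes x^e (for x >= 1, e >= 0), but returns null as
// soon as the result is guaranteed to exceed cap.
function powWithCap(x, e, cap) {
  var result = 1n;
  var base = x;
  while (true) {
    if (e & 1n) {
      result *= base;
    }
    if (result > cap) {
      return null; // x^e > cap
    }
    e >>= 1n;
    if (e === 0n) {
      return result;
    }
    if (base > cap) {
      return null; // a set bit of e remains, so x^e > cap
    }
    base *= base;
  }
}
For example, powWithCap(10n, 5n, 1000n) evaluates to null, while powWithCap(10n, 2n, 1000n) evaluates to 100n.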



Footnotes

[1] Go and JS implementations are available on my GitHub.

[2] Here, and in most of the article, we’ll implicitly assume that \(n \gt 0\) and \(p \gt 1\).