


Why Some Algorithms Divide Problems Equally

Nitin Verma

April 18, 2021

There are many algorithms that divide any problem of size n into two subproblems of equal size. Merge-Sort and Binary-Search are two examples. Some algorithms like Quick-Sort may not be able to control the two subproblems’ sizes, but they work more efficiently when the subproblems are of equal size. Whenever we need to divide a problem into two subproblems, why is it often good to make the two subproblems’ sizes equal?

Suppose there is a problem for which we know how to divide any instance of the problem into two subproblems and combine their results to solve the original instance. Also, we are able to associate a size with each problem instance such that the sizes of the two subproblems (which need not be equal) add up to the original problem’s size. We can recursively apply this dividing procedure until the problem instances are of size at most N0. We refer to such instances as “Base Instances” and solve them by some other method. In this article, we will refer to such a Divide-and-Conquer algorithm as a “D2C” (as it divides into 2) algorithm.
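The D2C scheme described above can be sketched in Python as follows. The names split, combine, solve_base and the threshold N0 are hypothetical placeholders for the problem-specific pieces; Merge-Sort on a list is used here as one concrete instantiation.

```python
N0 = 1  # base-instance size threshold (an assumed choice)

def d2c(instance):
    """Generic D2C recursion: divide until size <= N0, then combine."""
    if len(instance) <= N0:
        return solve_base(instance)        # solve a base instance directly
    left, right = split(instance)          # sizes add up to the original size
    return combine(d2c(left), d2c(right))  # combine the two subresults

# Concrete instantiation: Merge-Sort on a list.
def split(xs):
    mid = len(xs) // 2
    return xs[:mid], xs[mid:]

def solve_base(xs):
    return xs  # a list of size <= 1 is already sorted

def combine(a, b):
    # Merge two sorted lists into one sorted list.
    out = []
    i = j = 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            out.append(a[i]); i += 1
        else:
            out.append(b[j]); j += 1
    return out + a[i:] + b[j:]
```

Note that split here happens to produce equal halves, but the recursion itself only requires that the two parts’ sizes sum to the original size.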

For an initial problem instance of size n, there must be at least n∕N0 such base-instances, since each base-instance has size at most N0 and their sizes add up to n. If every such base-instance requires an amount of work (number of operations) of at least W0, then the total work required by all of them is at least nW0∕N0. Additionally, some work is also required to create subproblems and combine their results, but we do not account for that here. So, we can conclude that the work required to solve a problem instance of size n, by any such D2C algorithm, is at least nW0∕N0, which is linear in n.
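The counting argument above can be checked numerically. The helper below is illustrative (not from the article): it counts the base instances produced by a recursion that splits into two halves, and verifies the lower bound of n∕N0 base instances.

```python
def count_base(n, N0):
    """Number of base instances produced when a size-n instance is
    recursively halved until pieces have size at most N0."""
    if n <= N0:
        return 1
    half = n // 2
    return count_base(half, N0) + count_base(n - half, N0)

# Each base instance has size at most N0 and the sizes sum to n,
# so there must be at least n/N0 base instances.
for n in (10, 100, 1000):
    assert count_base(n, N0=4) >= n / 4
```

The same bound holds for unequal splits, because it relies only on the subproblem sizes adding up to n, not on how the split is balanced.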
