CMUQ 15-121 Algorithmic Complexity



1. Introduction

When we discuss efficiency, we are trying to find a way to compare different algorithms and determine which one performs better. Sometimes, when you want to compare two algorithms, the best thing to do is implement both of them, run them on lots of data, and then time them to see which one is faster. Other times, we want to analyze their complexity by hand and compare them mathematically. In this set of notes we’ll discuss how to analyze the growth rate of an algorithm as its input size changes.

A good way to be introduced to algorithmic complexity is simply by example. There are a lot of different aspects of complexity that we could analyze, but for now we’ll count algorithmic steps. Loosely speaking, this means how many steps an algorithm takes to finish with respect to the size of its input.

For the examples below, let’s consider the following MyList class that creates a dynamic array:

import java.util.Arrays;

public class MyList {
    String[] arr;
    int numItems;

    public MyList() {
        numItems = 0;
        arr = new String[100];
    }

    // Return the number of items in the array.
    public int getSize() {
        return numItems;
    }

    // Add an item to the end of the array. If the array is full, double its size.
    public void add(String s) {
        if (numItems < arr.length) {
            arr[numItems++] = s;
        } else {
            String[] newArr = new String[arr.length * 2];
            for (int i = 0; i < arr.length; i++) {
                newArr[i] = arr[i];
            }
            arr = newArr;
            arr[numItems++] = s;
        }
    }

    // Remove an item from the array, and shift items to remove the empty space.
    public void remove(int idx) {
        if (idx < numItems) {
            // Remove the item
            arr[idx] = null;
            // Slide all the others back to remove the hole.
            for (int i = idx + 1; i < numItems; i++) {
                arr[i - 1] = arr[i];
            }
            numItems--;
        }
    }

    // Return the number of times item occurs in the array
    private int countItem(String item) {
        int count = 0;

        for (int i = 0; i < numItems; i++) {
            if (arr[i].equals(item)) {
                count++;
            }
        }

        return count;
    }

    // Return which item occurs most frequently in the array.
    public String findMostFrequent() {
        String mostFreq = "";
        int howMany = 0;

        for (int i = 0; i < numItems; i++) {
            int count = countItem(arr[i]);
            if (count > howMany) {
                howMany = count;
                mostFreq = arr[i];
            }
        }

        return mostFreq;
    }

    public String toString() {
        return numItems + ": " + Arrays.toString(arr);
    }

}
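
Before analyzing the individual methods, here is a minimal usage sketch (the demo class and the sample strings are our own, not part of the original notes) showing how MyList is used:

public class MyListDemo {
    public static void main(String[] args) {
        MyList list = new MyList();

        // Add a few items.
        list.add("apple");
        list.add("banana");
        list.add("apple");

        System.out.println(list.getSize());          // 3
        System.out.println(list.findMostFrequent()); // apple

        // Remove the item at index 1 ("banana").
        list.remove(1);
        System.out.println(list.getSize());          // 2
    }
}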

2. \(O(1)\)

When an algorithm is \(O(1)\), it is constant time, meaning that no matter how large the input is, the algorithm takes the same number of steps to finish.

Consider the getSize() method of MyList:

// Return the number of items in the array.
public int getSize() {
    return numItems;
}

Regardless of how many items are in the list, this method always takes the same number of steps. (In this case, 1.) It doesn’t matter whether there are 100 items in the list or 1,000,000; it is still only one step. Because the number of steps is constant no matter how big the list is, we say this is \(O(1)\).

3. \(O(n)\)

When an algorithm is \(O(n)\), it is linear time, meaning that the number of steps is linearly related to the size of the input. So, if you double the size of the input, you also need to double the number of steps.

Consider the remove(int idx) method of MyList:

// Remove an item from the array, and shift items to remove the empty space.
public void remove(int idx) {
    if (idx < numItems) {
        // Remove the item
        arr[idx] = null;
        // Slide all the others back to remove the hole.
        for (int i = idx + 1; i < numItems; i++) {
            arr[i - 1] = arr[i];
        }
        numItems--;
    }
}

In this case, when we remove an item we then need to shift all of the items after it in the array. If the array were twice as large, we would need to shift twice as many items. This is a linear relationship between the array size and the number of algorithmic steps, so we say this algorithm is \(O(n)\).
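
To see the linear relationship directly, here is a small standalone sketch (our own helper, not part of MyList) that counts how many copies the shifting loop performs for a given index and list size; doubling the list size roughly doubles the count:

public class RemoveCostDemo {
    // Count the copies performed by the shifting loop in remove(idx)
    // for a list that currently holds numItems elements.
    static int shiftCopies(int idx, int numItems) {
        int copies = 0;
        for (int i = idx + 1; i < numItems; i++) {
            copies++; // one copy per element after idx
        }
        return copies;
    }

    public static void main(String[] args) {
        System.out.println(shiftCopies(0, 100)); // 99
        System.out.println(shiftCopies(0, 200)); // 199: double the size, roughly double the steps
    }
}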

Let’s look at a slightly more complicated case: add(String s) of MyList:

// Add an item to the end of the array. If the array is full, double its size.
public void add(String s) {
    if (numItems < arr.length) {
        arr[numItems++] = s;
    } else {
        String[] newArr = new String[arr.length * 2];
        for (int i = 0; i < arr.length; i++) {
            newArr[i] = arr[i];
        }
        arr = newArr;
        arr[numItems++] = s;
    }
}

The reason this is more complicated is that there are actually two cases:

  1. The array isn’t full, so the item can be added in one step. (This would be \(O(1)\)).
  2. The array is full, so we need to create a new one and copy the old into it. (This would be \(O(n)\)).

In this case, we take the worse of the two complexities and say that the entire algorithm is \(O(n)\).
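
A small driver (our own, assuming the initial capacity of 100 set in the constructor) triggers both cases: the first 100 adds each take a single step, while the 101st add must first copy all 100 existing items into a larger array:

public class AddCasesDemo {
    public static void main(String[] args) {
        MyList list = new MyList();

        // Case 1: the array is not full, so each of these adds is a single assignment.
        for (int i = 0; i < 100; i++) {
            list.add("item" + i);
        }

        // Case 2: the array is now full, so this add first copies all 100 existing
        // items into a new array of length 200, and then appends the new item.
        list.add("one more");
    }
}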

4. \(O(n^2)\)

When an algorithm is \(O(n^2)\), it is quadratic, meaning that the number of steps grows quadratically with respect to the input size. So, if you double the size of the input, the number of steps goes up by a factor of four.

Consider the findMostFrequent() method of MyList (as well as its helper method, countItem(String item)):

// Return the number of times item occurs in the array
private int countItem(String item) {
    int count = 0;

    for (int i = 0; i < numItems; i++) {
        if (arr[i].equals(item)) {
            count++;
        }
    }

    return count;
}

// Return which item occurs most frequently in the array.
public String findMostFrequent() {
    String mostFreq = "";
    int howMany = 0;

    for (int i = 0; i < numItems; i++) {
        int count = countItem(arr[i]);
        if (count > howMany) {
            howMany = count;
            mostFreq = arr[i];
        }
    }

    return mostFreq;
}

Inside findMostFrequent we have a for-loop that goes through all the items in the array, which might lead us to think the method is linear. However, on each iteration of that loop, countItem is called once, and countItem also has a for-loop that goes through the entire array. In effect, findMostFrequent has nested for-loops: for every item in the array (outer loop), we call a method that goes through every item in the array (inner loop), for roughly \(n \times n\) comparisons. That makes the complexity \(O(n^2)\).
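
One way to see this is to imagine inlining countItem into findMostFrequent. The sketch below (a hypothetical method that could be added to MyList, not part of the original class) shows what that would look like; each of the nested loops runs numItems times:

// findMostFrequent with countItem inlined: two nested loops over the array.
public String findMostFrequentInlined() {
    String mostFreq = "";
    int howMany = 0;

    for (int i = 0; i < numItems; i++) {        // outer loop: runs n times
        int count = 0;
        for (int j = 0; j < numItems; j++) {    // inner loop: runs n times for each i
            if (arr[j].equals(arr[i])) {
                count++;
            }
        }
        if (count > howMany) {
            howMany = count;
            mostFreq = arr[i];
        }
    }

    return mostFreq;
}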

5. Other Complexities

We’ve only covered three of the most common complexities. There are lots of others, some of which we will discuss more as the course goes on:

  1. Logarithmic \(O(\log n)\) (see the binary search sketch after this list)
  2. Square-root \(O(\sqrt{n})\)
  3. Linearithmic, loglinear, or quasilinear \(O(n \log n)\)
  4. Exponential \(O(k^n)\)
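
As a preview of the first of these, logarithmic algorithms typically cut the remaining work in half at every step. A classic example is binary search over a sorted array; the standalone sketch below (our own example, not part of MyList) halves the search range on each iteration, so it takes about \(\log_2 n\) steps:

public class BinarySearchDemo {
    // Return the index of target in the sorted array, or -1 if it is absent.
    // Each iteration halves the remaining range, so the loop runs O(log n) times.
    static int binarySearch(int[] sorted, int target) {
        int lo = 0;
        int hi = sorted.length - 1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2;
            if (sorted[mid] == target) {
                return mid;
            } else if (sorted[mid] < target) {
                lo = mid + 1;
            } else {
                hi = mid - 1;
            }
        }
        return -1;
    }

    public static void main(String[] args) {
        int[] data = {2, 3, 5, 7, 11, 13, 17};
        System.out.println(binarySearch(data, 11)); // 4
        System.out.println(binarySearch(data, 4));  // -1
    }
}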

6. Importance

Why is complexity important? Because, in general, the lower the complexity of your algorithm, the faster it will run as its input grows. In data structures this matters because if you understand the complexity of the various operations of a data structure, you can decide which data structure is best for your given use case.