Complexity — Introduction
This is Part 3 of the series “How To Explain Complexity Theory To Your Buddy”. You can check all my posts here:
What Is a Complexity Class?
In the world of computer science, many algorithms share similar time and/or space characteristics, which allows us to predict how efficient an algorithm will be in terms of time/space if
- the input is a “big” number
- the input is an array consisting of N (× M (× P …)) items
- the argument is a dense graph G = (V, E) etc.
This is why there are so many fast Sudoku solvers, fast-calculating chess bots, etc.
How Are Classes Defined?
In practice, the number of iterations of an algorithm may not always grow exponentially fast despite high complexity. In theory, however, even a function that grows very slowly but exponentially is still exponential, and a polynomial algorithm eventually always outperforms an exponential one. Worst-case analysis describes how an algorithm behaves under the worst conditions, e.g. extremely large inputs; that is why it is so important to have the most efficient underlying programs when building much larger applications on top of them.
Calculate the Complexity on Your Own
In this section, we will analyse different Python programs:
Elementary Operations:
Unless it is stated that a statement has some underlying complexity, we always define the complexity of a single statement as O(1), i.e. constant.
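For example, each of the following statements counts as a single elementary operation (the concrete statements are illustrative):

```python
x = 5          # assignment: O(1)
y = x * 2 + 1  # arithmetic expression: O(1)
z = y > 10     # comparison: O(1)
```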
Sequential Statements:
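A sketch of three sequential blocks (the concrete statements are assumed; only the nesting depth matters for the analysis):

```python
def sequential(n):
    count = 0
    for i in range(n):            # first block: O(n)
        count += 1
    for i in range(n):            # second block: O(n^2)
        for j in range(n):
            count += 1
    for i in range(n):            # third block: O(n^3)
        for j in range(n):
            for k in range(n):
                count += 1
    return count
```

For n = 10 this executes 10 + 100 + 1000 = 1110 elementary additions, and the cubic block dominates.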
has the complexity max{O(n), O(n²), O(n³)} = O(n³)
If-Statement:
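A sketch of an if-statement whose branches have different complexities (the branch conditions and bodies are assumptions made for illustration):

```python
def branch(n):
    count = 0
    if n % 2 == 0:                # branch 1: O(n)
        for i in range(n):
            count += 1
    elif n % 3 == 0:              # branch 2: O(n^2)
        for i in range(n):
            for j in range(n):
                count += 1
    else:                         # branch 3: O(n^3)
        for i in range(n):
            for j in range(n):
                for k in range(n):
                    count += 1
    return count
```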
The same rule applies here: the overall complexity is max{O(n), O(n²), O(n³)} = O(n³), but for individual inputs such as n = 500, the branch actually taken may cost less.
Loops:
For simplicity, every loop is written as a while loop.
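A counter that increases by 1 each iteration (a minimal sketch; variable names are assumed):

```python
def count_up(n):
    i = 0
    steps = 0
    while i < n:
        i += 1        # linear progress: +1 per iteration
        steps += 1
    return steps      # exactly n iterations
```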
has a complexity of O(n) because the counter progresses linearly.
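A counter that doubles each iteration (a minimal sketch; variable names are assumed):

```python
def count_doubling(n):
    i = 1
    steps = 0
    while i < n:
        i *= 2        # doubling: about log2(n) iterations
        steps += 1
    return steps
```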
has a complexity of O(log(n)) because the counter grows exponentially until it reaches n.
While-Loop:
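A sketch of a loop whose termination depends on an arbitrary predicate f (the name f is taken from the text; its definition is assumed):

```python
def search_until(f):
    n = 0
    while not f(n):   # may never terminate if f(n) is never True
        n += 1
    return n
```

If f is eventually True for some n, the loop terminates; for other predicates it runs forever, so no complexity bound can be given in general.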
has an unknown complexity because it might not terminate at all if f(n) never returns a positive answer. In terms of computability, such a problem is recognizable (semi-decidable).
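In contrast, a while loop that scans an array terminates after at most len(arr) iterations, e.g. a linear search (a sketch; names are illustrative):

```python
def linear_search(arr, target):
    i = 0
    while i < len(arr):       # at most len(arr) iterations: O(n)
        if arr[i] == target:
            return i          # found: return the index
        i += 1
    return -1                 # not found
```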
has a complexity of O(n) because, in the worst case, it iterates once over the whole array.
Recursion: Complexity of recursion needs to be analysed very carefully. The type of recursion can be different, e.g.
- Linked List Recursion (Linear)
- Tree Recursion (Logarithmic or sometimes linear)
- Graph Recursion Algorithms like Depth First Search, Breadth First Search
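Two sketches of the first two kinds: a linear recursion that walks a list element by element (O(n) calls), and a halving recursion in the style of binary search (O(log n) calls); names are illustrative:

```python
def list_sum(items, i=0):
    # linear recursion: one call per element -> O(n) call depth
    if i == len(items):
        return 0
    return items[i] + list_sum(items, i + 1)

def binary_search(arr, target, lo=0, hi=None):
    # halving recursion: the search range shrinks by half
    # each call -> O(log n) call depth on a sorted array
    if hi is None:
        hi = len(arr)
    if lo >= hi:
        return -1
    mid = (lo + hi) // 2
    if arr[mid] == target:
        return mid
    if arr[mid] < target:
        return binary_search(arr, target, mid + 1, hi)
    return binary_search(arr, target, lo, mid)
```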
In general, this is a far more complex topic than anything covered here, so if you want to understand the complexity of recursive algorithms on a deeper level, you can learn more about the Master Theorem.
Definition of “in X”, X-hard, X-complete
How Reductions Help Us Draw Conclusions
We assume that X is a complexity class.
Let’s say max is reduced to a sorting algorithm in the code below:
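A sketch of such a reduction: max sorts the array and returns the last element, so its running time is dominated by the sorting step (the original snippet is assumed; here a naive O(n²) selection sort stands in for it):

```python
def naive_sort(arr):
    # selection sort: O(n^2)
    arr = list(arr)
    for i in range(len(arr)):
        m = min(range(i, len(arr)), key=arr.__getitem__)
        arr[i], arr[m] = arr[m], arr[i]
    return arr

def maximum(arr):
    # reduction: max is solved by sorting, then taking the last element
    return naive_sort(arr)[-1]
```

Swapping naive_sort for a Divide-and-Conquer sort such as merge sort immediately lowers the bound on maximum as well.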
Let’s say that our naive sorting algorithm had the upper bound O(n²), and therefore max did too. If we replace it with a more efficient algorithm using the Divide-and-Conquer technique operating in O(n log(n)), then max is in O(n log(n)) too.
Given that SAT (the Boolean Satisfiability Problem) is NP-complete, assume that SAT is reduced to SORT, where SORT is the language of all sorted arrays. Sorting can be solved in polynomial time, so if we could implement SAT using SORT, SAT would be in the polynomial class P too. Due to the NP-completeness of SAT, all problems in NP would then be in P, and therefore P = NP, still one of the most central unanswered questions of computer science, would be a valid statement. We will learn more about complexity classes and P = NP in my next posts.