An algorithm has \(O(\sqrt{n})\) time complexity. If an input of size 100 takes 10 seconds to solve, how long will an input of size 400 take to solve?
Since the running time grows proportionally to \(\sqrt{n}\), and \(\sqrt{400}=\sqrt{4\times 100}=2\sqrt{100}\), the program will take twice as long, or 20 seconds.
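A quick sanity check of the arithmetic (the helper name `scaled_time` is just for illustration, not part of the exercise):

```python
import math

def scaled_time(old_time, old_n, new_n):
    """Scale a measured running time assuming cost proportional to sqrt(n)."""
    return old_time * math.sqrt(new_n / old_n)

print(scaled_time(10, 100, 400))  # sqrt(400/100) = 2, so 20.0 seconds
```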
We have two algorithms for solving a problem. Algorithm A’s running time is \(O(n)\) or linear, and algorithm B’s running time is \(O(n^2)\) or quadratic. Does this mean that it would always be more efficient (faster) to use algorithm A?
No (we should always be suspicious of absolute statements). Recall that Big-O notation describes the asymptotic worst-case running time of an algorithm as the input grows very large. Thus while we would expect A to be faster than B for large inputs, there could be particular (typically small) inputs where that is not the case, or there could be large constant factors not accounted for in the Big-O description.
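A sketch of how constant factors can flip the comparison for small inputs; the step counts \(100n\) and \(n^2\) below are invented for illustration:

```python
def steps_a(n):
    # A linear algorithm, but with a large hidden constant factor.
    return 100 * n

def steps_b(n):
    # A quadratic algorithm with a small constant factor.
    return n * n

# For small inputs the quadratic algorithm does less work...
print(steps_a(10), steps_b(10))      # 1000 vs 100: B is faster
# ...but for large inputs the linear algorithm wins, as Big-O predicts.
print(steps_a(1000), steps_b(1000))  # 100000 vs 1000000: A is faster
```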
What are the Big-O time and space complexities of the following Python code?
def mystery(a_list):
    if a_list == []:
        return 0
    else:
        return a_list[0] + mystery(a_list[1:])
This code has \(O(n)\) time and space complexity because it performs \(n\) addition operations (and \(n\) recursive calls to mystery), one for every element in the input list. (Strictly speaking, each slice a_list[1:] copies the remainder of the list in CPython, so the slicing alone costs \(O(n^2)\) time; the \(O(n)\) answer counts only the additions and recursive calls.)
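A quick check of the function, plus an iterative version that avoids both the slice copies and the recursion; `mystery_iterative` is an assumed alternative for comparison, not part of the original exercise:

```python
def mystery(a_list):
    # Recursive sum: n calls deep, slicing a copy of the tail each time.
    if a_list == []:
        return 0
    else:
        return a_list[0] + mystery(a_list[1:])

def mystery_iterative(a_list):
    # Same result in O(n) time and O(1) extra space: no slicing, no recursion.
    total = 0
    for x in a_list:
        total += x
    return total

print(mystery([1, 2, 3, 4]))            # 10
print(mystery_iterative([1, 2, 3, 4]))  # 10
```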