Lecture 03 - 10 September 2013

Big news: assignments are due on Friday at 2pm, rather than at the start of
class Thursday. The first in-class exam is September 17.

Where we left off...

Euclid's Rule: For integers x >= y > 0, gcd(x,y) = gcd(x mod y, y)

    function Euclid(a,b)   // where a >= b
        if b == 0 then return a
        return Euclid(b, a mod b)
    end

Finite-precision complexity in terms of the value of b: Euclid(a,b) = O(log b)

Infinite-precision complexity in terms of N = log(a) + log(b)
Why don't we just say N = log(b)? Because we also have to take the length of a
into account when computing a mod b.

Great! Now, suppose we are implementing some application where we need
infinite-precision numbers. Let N represent the LENGTH of the input numbers in
binary notation, N = log(a) + log(b).

Euclid iterates at most 2*N times - actually a bit less, since N includes the
length of both arguments, but this is tight enough for a big-O bound.

There are two operations in the body of Euclid: a comparison to 0, and a mod.
To compare a number to zero you have to look at each bit, so that is N steps.
What about computing mod? Let's start with some simpler operations.

How long does binary addition of two N-bit numbers take?

      0 0 1
    + 1 0 1
    -------
      1 1 0

Answer: O(N)

What about multiplying two numbers? Binary multiplication is just like decimal,
except you only have to multiply by 1 and 0!

        0 0 1
      x 1 0 1
      -------
        0 0 1
      0 0 0
    0 0 1
    ---------
    0 0 1 0 1

Answer: O(N^2)

Here is an iterative algorithm for multiplication:

    function multiply(a,b)
        prod = 0
        while b > 0:
            if b & 1 then prod = prod + a
            b = b >> 1
            a = a << 1
        return prod
    end

Here is a recursive algorithm:

    function multiply(a,b)
        if b == 0 then return 0
        c = multiply(a, b>>1)      // b>>1 == floor(b/2)
        if b & 1 then              // b is odd
            return a + (c<<1)      // c<<1 == c*2
        else
            return c<<1
    end

Integer division and mod can also be done in O(N^2). There is an algorithm in
the textbook, and one of the homework problems is analyzing it.

****************************************************************************

Your intuition might tell you that n^2 is the best one can do for
multiplication. Surprisingly, this turns out not to be the case! The reason
depends on a little trick with algebra.

Suppose x and y are each n bits long. We can split each in half:

    x = [ x_L ][ x_R ] = 2^(n/2) x_L + x_R
    y = [ y_L ][ y_R ] = 2^(n/2) y_L + y_R

    xy = (2^(n/2) x_L + x_R) (2^(n/2) y_L + y_R)
       = 2^n x_L y_L + 2^(n/2) (x_L y_R + x_R y_L) + x_R y_R

Let's consider using this expression recursively to evaluate xy. The
multiplications by powers of 2 can be done with shifts, so they are O(n). The
additions are also O(n). The other 4 products are computed recursively, where
the length of each number is n/2.

Let T(n) be the running time of this algorithm.

    T(n) = O(n) + 4T(n/2)

Claim: T(n) = Theta(n^2)    We'll see why later...

The trick (Gauss's method):

    x_L y_R + x_R y_L = (x_L + x_R)(y_L + y_R) - x_L y_L - x_R y_R

How does this help? Instead of computing a total of 4 products, you only have
to compute 3 products! What are they?

So the improved algorithm only has 3 recursive calls, not 4. So:

    T(n) = O(n) + 3T(n/2)

Claim: T(n) = O(n^log_2(3)) = O(n^1.59)

How do we get these solutions to running times expressed by recursive
equations? Well, we could solve the recursive equation each time. Formally,
this would be done by expanding the recursive part into a mathematical series,
and then solving the series for its closed form. (This is the approach I was
taught!) However, it is a lot of work!
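To make the three-multiplication idea concrete, here is a minimal Python sketch
of the recursive scheme (this is my own illustration, not code from the
lecture; the name karatsuba -- the trick is the basis of Karatsuba's algorithm
-- and the single-bit base case are my choices, and it leans on Python's
built-in big integers rather than explicit bit arrays):

    def karatsuba(x, y):
        # Multiply nonnegative integers x and y using Gauss's trick:
        # three recursive products on half-length pieces instead of four.
        if x < 2 or y < 2:                 # one-bit (or zero) operand: trivial
            return x * y
        half = max(x.bit_length(), y.bit_length()) // 2
        x_hi, x_lo = x >> half, x & ((1 << half) - 1)   # x = 2^half * x_hi + x_lo
        y_hi, y_lo = y >> half, y & ((1 << half) - 1)   # y = 2^half * y_hi + y_lo
        p1 = karatsuba(x_hi, y_hi)                      # x_L y_L
        p2 = karatsuba(x_lo, y_lo)                      # x_R y_R
        p3 = karatsuba(x_hi + x_lo, y_hi + y_lo)        # (x_L + x_R)(y_L + y_R)
        # the middle term x_L y_R + x_R y_L is p3 - p1 - p2: no 4th product needed
        return (p1 << (2 * half)) + ((p3 - p1 - p2) << half) + p2

    # sanity check
    assert karatsuba(1234567, 7654321) == 1234567 * 7654321

A production version would switch to ordinary multiplication once the operands
fit in a machine word, but that cutoff doesn't change the recurrence
T(n) = 3T(n/2) + O(n).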
In most cases we can simply apply a MASTER THEOREM that summarizes the
solutions. Before writing down the MASTER THEOREM, let's write down some
recurrence relations and solutions we already know.

We just said that:

    T(n) = 4T(n/2) + O(n)  --->  T = O(n^2)
    T(n) = 3T(n/2) + O(n)  --->  T = O(n^log_2(3))

Consider binary search:

    T(n) = T(n/2) + O(1)   --->  T = O(log n)

How about merge sort?

    T(n) = 2T(n/2) + O(n)  --->  T = O(n log n)

Here is a general recurrence for breaking a problem down into "a" subproblems
of size n/b and then combining the answers using O(n^d) work:

    T(n) = a T(ceiling(n/b)) + O(n^d)    for a > 0, b > 1, d >= 0

    (i)   if d > log_b(a)  --->  T(n) = O(n^d)
    (ii)  if d = log_b(a)  --->  T(n) = O(n^d log n)
    (iii) if d < log_b(a)  --->  T(n) = O(n^log_b(a))

Let's try using this theorem (the sketch at the end of these notes fills in
the answers):

    Gauss's method for multiplication  T(n) = 3T(n/2) + O(n)   a = ?  b = ?  d = ?  case ?
    binary search                      T(n) = T(n/2) + O(1)    a = ?  b = ?  d = ?  case ?
    merge sort                         T(n) = 2T(n/2) + O(n)   a = ?  b = ?  d = ?  case ?

How do we prove the MASTER THEOREM? I won't try to go through all the details
here on the whiteboard, but here is the intuition:
- The size of subproblems decreases by a factor of b at each recursion.
- So the height of the recursion tree is log_b(n).
- The branching factor is a.
- So the k-th level of the tree has a^k subproblems, each of size n/b^k.
- The total work at level k is a^k * O((n/b^k)^d) --- why?
- For the work in the entire tree, we take the sum over all the levels:

    Sum_{k=0}^{log_b(n)} a^k * O((n/b^k)^d)

This is a geometric series. Depending on the relation of d to log_b(a), it is
either a decreasing, constant, or increasing series. Solving each case gives
the three cases of the theorem.

Will the MASTER THEOREM solve all of our recursive algorithms? Not if the
algorithm does not break the problem down into fractional pieces -- if it is
not "divide and conquer". For example, consider a linear search through an
array of size n:

    function find(A, x, i)   // return true if x is in A at position i or later
        if i > length(A) then return false
        if A[i] == x then return true
        return find(A, x, i+1)
    end

    T(n) = O(1) + T(n-1)

It is plain that T(n) = O(n), but we can't apply the MASTER THEOREM.

    T(0) = c'
    T(n) = c + T(n-1)
         = c + c + .... + c + c'
           \______________/
               n times
         = n*c + c'
         = O(n)

The MASTER THEOREM also can't be used when a, b, or d is not a constant, e.g.:

    T(n) = n T(n/2) + O(n)    -- a is not a constant!

However, there are more general versions of the MASTER THEOREM that handle
more cases.
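To see how the case analysis gets applied, here is a small Python sketch (my
own illustration, not from the lecture) that classifies a recurrence
T(n) = a T(n/b) + O(n^d) by comparing d to log_b(a), and runs it on the three
examples above. The floating-point comparison is crude, but it is fine for
constants this small.

    import math

    def master(a, b, d):
        # Classify T(n) = a*T(n/b) + O(n^d) by comparing d to log_b(a).
        crit = math.log(a, b)                    # the critical exponent log_b(a)
        if math.isclose(d, crit):
            return f"O(n^{d} log n)"             # case (ii): every level does equal work
        elif d > crit:
            return f"O(n^{d})"                   # case (i): the top level dominates
        else:
            return f"O(n^{crit:.3f})"            # case (iii): the leaves dominate

    print(master(3, 2, 1))   # Gauss's multiplication: a=3, b=2, d=1 -> case (iii), O(n^1.585)
    print(master(1, 2, 0))   # binary search:          a=1, b=2, d=0 -> case (ii),  O(n^0 log n) = O(log n)
    print(master(2, 2, 1))   # merge sort:             a=2, b=2, d=1 -> case (ii),  O(n log n)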