Lecture 02 - 5 September 2013

There will be an assignment posted on the course's web site later today:
    http://www.cs.rochester.edu/u/www/courses/282/fall2013/
(If you lose this URL, you can get to it through Blackboard.)
The assignment is due one week from today.

You should read Chapter 0 carefully and sections 1.1 and 1.2.3 in Chapter 1,
and get all the problems on the assignment that are based on that material
done over the weekend.  This is mostly all review of what you learned about
complexity in 172, plus a little refresher in doing the kinds of proofs you
had back in MATH 150, so it should be pretty fast going.

You should then go on and start reading Chapter 2 on Divide & Conquer
algorithms, and get to work on the Chapter 2 problems on the assignment.
However, if you do find yourself having trouble, don't worry -- instead get
help!  All three TAs as well as I have office hours - the hours will be
posted on the web site by Friday afternoon - plus we'll start having
workshops next week.  So that is 6 scheduled opportunities for help outside
of class every week!

****************************************************************************

Big O notation

Upper bound on running time:
    f(n) = O(g(n)) iff there exists a constant c>0 such that
        f(n) <= c * g(n)

Alternative definition: adds "and there exists n' such that for all n>=n'"

Claim: the two definitions are equivalent.

Def 1 => Def 2: trivial.  Why?

Def 2 => Def 1: given c, n' such that for all n>=n': f(n) <= c * g(n),
    show that there exists c' such that for all n>0: f(n) <= c' * g(n)
    In general, would c' be bigger than c?  How much bigger?
    Let a = maximum value of f(n) for 0 < n < n', and let b = minimum value
    of g(n) for 0 < n < n' (assuming g(n) > 0).  Then c' = max(c, a/b) works.

Lower bounds
    f(n) = Omega(g(n))
    exists c, n' > 0 such that for n>=n'
        f(n) >= c * g(n)

Tight bounds
    f(n) = Theta(g(n))
    exists c1, c2, n' > 0 such that for n>=n'
        c1 * g(n) <= f(n) <= c2 * g(n)

Claim: f(n) = Theta(g(n)) iff f(n) = O(g(n)) and f(n) = Omega(g(n))

World's easiest proof?  Let c1 = c from Omega and c2 = c from O !

****************************************************************************

Simplification rules

1. Multiplicative constants can be omitted
       10 n^2 --> n^2
2. n^a dominates n^b if a>b
       n^3 + n^2 --> n^3
3. any exponential dominates any polynomial
       1.01^n + n^100 --> 1.01^n
4. any polynomial dominates any logarithm
       n^2 + n log n --> n^2

Questions:

f(n) = O(f(n)) ?            TRUE
n = O(n^2) ?                TRUE
n^2 = Omega(n) ?            TRUE
3^n = O(2^n) ?              FALSE
    Why?  Because 3^n grows faster than c * 2^n for any c
2^n = O(2^(n/2)) ?          FALSE
    Why?  Because 2^n grows faster than c * 2^(n/2) for any c
    Exactly how much faster?  Note that 2^n = (2^(n/2))^2,
    so it grows quadratically faster!
log_2(n) = O(log_10(n)) ?   TRUE
    Why?  Because log_2(n) = log_2(10) * log_10(n),
    just a multiplicative constant

****************************************************************************

Running time calculations for a program with nested iterations:
insertion sort

n is the length of the input array

insertion-sort(A[n])        // assume first element of array is index 1
    for j = 2 to n:
        key = A[j]
        // insert A[j] into sorted sequence A[1..j-1]
        i = j-1
        while i>0 and A[i] > key:
            A[i+1] = A[i]
            i = i-1
        A[i+1] = key
end

Inner loop: runs at most j times, requires c1 work each time
Outer loop: runs n times (really n-1 times, but no difference)
    each run requires c2 constant work, plus the inner loop

Sum_j=1..n (c2 + c1*j) --> Sum_j=1..n (j)

What is this called?  An arithmetic series.
    = n(n+1)/2 --> (n^2+n)/2 --> n^2

T(n) = O(n^2)
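Here is a runnable version of the pseudocode above -- a minimal sketch in
Python (Python lists are 0-indexed, so the loop bounds shift by one relative
to the 1-indexed pseudocode; the name insertion_sort is just for this sketch):

    def insertion_sort(A):
        """Sort list A in place; O(n^2) comparisons in the worst case."""
        for j in range(1, len(A)):              # pseudocode's j = 2..n
            key = A[j]
            i = j - 1
            # shift elements of the sorted prefix A[0..j-1] that exceed key
            while i >= 0 and A[i] > key:
                A[i + 1] = A[i]
                i = i - 1
            A[i + 1] = key
        return A

    print(insertion_sort([5, 2, 4, 6, 1, 3]))   # [1, 2, 3, 4, 5, 6]

The inner while loop is what contributes the c1*j term in the sum above.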
****************************************************************************

In our analysis of Fib, n was the *value* of the input, but for insertion
sort, n was the *length* of the input.

In both cases, we assumed that basic numeric operations + - * took only
constant time.  When is this assumption realistic?

    When the algorithm is intended to be implemented using fixed precision
    numbers, e.g. 32 bit integers.

This constant time assumption breaks down for algorithms that are intended
to work with arbitrarily large (infinite precision) numbers.  E.g.:
cryptographic algorithms commonly work with numbers that are hundreds or
thousands of digits long!

What kind of data structure could be used to actually implement infinite
precision integers?
    variable length arrays
    linked lists

Trick question: What is the relationship between the value A of an infinite
precision number and its length N?

    It depends on how numbers are represented!
    - If input is in decimal, N = log_10(A)
    - If input is in binary,  N = log_2(A)
    - If input is in unary,   N = A

If we assume that *some* non-unary base notation is used, does it matter for
the purposes of asymptotic analysis exactly what the base is?

    No -- because the length is just changed by a multiplicative constant.

****************************************************************************

Let's consider a mathematical function that has many applications, including
cryptography: computing the greatest common divisor of two numbers.  We will
start with an analysis that assumes fixed precision numbers.

Def: gcd(a,b) = ?
    largest integer c such that c evenly divides both a and b.

How do we express this in terms of the mod operator?
    a mod c = 0 and b mod c = 0

WLOG (without loss of generality) let's assume a >= b.

function brute_force_gcd(a,b)
    c = 1
    for i = 1 to b
        if a mod i == 0 and b mod i == 0 then c = i
    return c
end

What is the running time in terms of the value of b?

    brute_force_gcd(a,b) = Theta( b )       linear in b

****************************************************************************

Linear is actually not so good in this case.  We can do much better using
Euclid's Theorem:

    For integers x >= y > 0, gcd(x,y) = gcd(x mod y, y)

PROOF:

Suppose z divides both x and y.  Then for some constants c1, c2
    x = c1*z
    y = c2*z
So x - y = c1*z - c2*z = (c1-c2)*z
Therefore z divides x-y.

In particular, let z be the gcd(x,y).  It is also a divisor of (x-y), so
gcd(x-y,y) must be at least as large.  So,
    gcd(x,y) <= gcd(x-y,y)

Now, suppose z divides both x-y and y.  Then for some constants c1, c2
    x-y = c1*z
    y = c2*z
So x = (x-y) + y = c1*z + c2*z = (c1+c2)*z
Thus, z also divides x, so
    gcd(x-y,y) <= gcd(x,y)

Putting these two conclusions together gives
    gcd(x,y) = gcd(x-y,y)

Now, just keep subtracting y from x until you can't anymore; what remains is
x mod y, so
    gcd(x,y) = gcd(x mod y, y)
QED

We can turn this theorem directly into an efficient algorithm for gcd:

function Euclid(a,b)        // where a >= b
    if b == 0 then return a
    return Euclid(b, a mod b)
end

Why is this algorithm correct?
    - base case is true by definition
    - recursive case is true by Euclid's theorem
    - algorithm must terminate because argument b always decreases in size

How many times will the algorithm recurse?

Lemma: If a >= b, then a mod b < a/2
    (If b <= a/2, then a mod b < b <= a/2; if b > a/2, then a mod b = a - b < a/2.)

Note that the arguments a and b "swap" positions in each recursive call, and
the second is always divided at least in half.  So, in two iterations, both
a and b are cut in half.

How many times can you cut an integer b in half (rounding down) before you
reach 0?
    log_2 b

So the maximum number of iterations is at most 2 * log_2(b).
A constant amount of time is done on each iteration, so
    Euclid(a,b) = O(log b)
Much better than Theta(b)
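Here is a minimal Python sketch of both versions, mirroring the pseudocode
above (the extra calls counter in euclid and the test values 1071 and 462
are just for illustration; the counter lets you check the 2 * log_2(b) bound
on the number of recursive calls):

    def brute_force_gcd(a, b):
        """Try every candidate divisor from 1 to b: Theta(b) mod operations."""
        c = 1
        for i in range(1, b + 1):
            if a % i == 0 and b % i == 0:
                c = i
        return c

    def euclid(a, b, calls=1):
        """Euclid's algorithm, assuming a >= b.  Returns (gcd, number of calls)."""
        if b == 0:
            return a, calls
        return euclid(b, a % b, calls + 1)

    g, calls = euclid(1071, 462)
    print(g, calls)                          # 21 4
    assert g == brute_force_gcd(1071, 462)   # the two versions agree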
****************************************************************************

Great!  Now, suppose we are implementing some application where we need
infinite precision numbers.

Let N represent the LENGTH of the input numbers in binary notation,
    N = log(a) + log(b)

Euclid iterates at most 2*N times
    - actually a bit less, since N includes the length of both arguments,
      but this is tight enough for a big-O bound

There are two operations in the body of Euclid: a compare to 0, and a mod.

To compare a number to zero you have to look at each bit, so that is N steps.

What about computing mod?  Let's start with some simpler operations.

How long does binary addition of two N bit numbers take?

      0 1 1
    + 0 1 0
    -------
      1 0 1

    answer: O(N)

What about multiplying two numbers?

        0 1 1
      x 0 1 0
      -------
        0 0 0
      0 1 1
    0 0 0
    ---------
    0 0 1 1 0

    answer: O(N^2)

Integer division can also be done in O(N^2).
(There is an algorithm in the textbook.)

One way to compute (a mod b) is
    a - (a / b)*b
where / rounds down.  What is the big O complexity?  O(N^2)

So, we have O(N) recursions, each takes O(N^2) time, so the overall
complexity is
    Euclid = O(N^3)
where N = length of input = log(a) + log(b)

****************************************************************************

That's it for today.  The assignment will be posted on the course's web site
later today and is due one week from today.

In Chapter 2, we're going to concentrate on efficient infinite precision
multiplication, the master theorem, matrix multiplication, and finding
medians.

Quick question: what is the complexity of MergeSort?
    O(n log n)
Right, you've all seen this!  We'll skip MergeSort, and save the long
section 2.6, on the Fast Fourier Transform, until the end of the semester.
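****************************************************************************

Going back to the bit-complexity discussion above: here is a minimal Python
sketch of the grade-school binary algorithms (it assumes numbers are stored
as lists of bits, least-significant bit first; the names add_bits and
multiply_bits are made up for this sketch).  It shows where the O(N) and
O(N^2) bounds come from.

    def add_bits(x, y):
        """Grade-school binary addition.  x and y are lists of bits,
        least-significant bit first.  Touches each bit position once: O(N)."""
        result, carry = [], 0
        for i in range(max(len(x), len(y))):
            xi = x[i] if i < len(x) else 0
            yi = y[i] if i < len(y) else 0
            s = xi + yi + carry
            result.append(s % 2)
            carry = s // 2
        if carry:
            result.append(carry)
        return result

    def multiply_bits(x, y):
        """Grade-school binary multiplication: one shifted O(N) addition per
        bit of y, so O(N^2) overall."""
        result = [0]
        for i, bit in enumerate(y):
            if bit:
                result = add_bits(result, [0] * i + x)   # x shifted left by i
        return result

    # The examples from the lecture, written least-significant bit first:
    print(add_bits([1, 1], [0, 1]))       # [1, 0, 1]  i.e. 011 + 010 = 101
    print(multiply_bits([1, 1], [0, 1]))  # [0, 1, 1]  i.e. 011 x 010 = 110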