Lecture 04 - 12 September 2013

Big news: No final exam. Instead, there are 5 in-class exams and 5
assignments, each worth 10% of the final grade. The first in-class exam is
this coming Tuesday, September 17.

*********************************************************

Today we will be talking about what are called "selection" algorithms.
These are used to "select" special elements from an unsorted array of
elements. For example, there are applications where you want to select

  - the median element
        what is the definition of median?
  - the largest or smallest element
  - the element that marks the first or third quartile
        what does "quartile" mean in this context?
  - the second largest element

and so on.

Selection is easy if the array is sorted.

    Sorting takes how long?  O(n log n)

Sometimes sorting is overkill. We will develop selection algorithms that
take less than O(n log n) time. The particular kind of algorithms we will
discuss, however, do not have better *worst case* complexity. Instead, we
will argue that they are fast on "average". So, we are introducing a new
kind of complexity analysis today: average case complexity.

We start by reviewing some elementary facts about probability. Don't be
scared - this isn't hard-core statistics, just the concepts you met in
high school.

A random variable is simply a variable that can take on different values
by chance. For example, I might use X to represent the value that appears
when I roll a die.

The domain, or the outcomes, of a random variable are the possible values
it could take on.

    What is the domain of X?  {1, 2, 3, 4, 5, 6}

A probability distribution assigns each value in the domain of a random
variable a number in the range [0, 1]. We write, for example,

    P(X=1) = 1/6
    P(X=2) = 1/6
    ...
    P(X=6) = 1/6

if the die is fair.

    The probabilities of all of the outcomes must sum to what?  One.

If we have several random variables, then the probability distribution
assigns a probability to all combinations of outcomes. Suppose we have two
rolls of a die - or one roll of two dice - and call these random variables
X and Y. If the dice are fair, we have

    P(X=1, Y=1) = ?  1/36
    P(X=1, Y=2) = 1/36

and so on.

We just need one more concept before we get back to algorithms. That is
the idea of an "expected value". The expected value of a variable is its
"average" value: that is, the sum of each outcome times the probability of
the outcome. For a fair die, that is

    E(X) = (1)(1/6) + (2)(1/6) + (3)(1/6) + (4)(1/6) + (5)(1/6) + (6)(1/6)
         = ?  21/6 == 3.5

Suppose we had a crooked die that comes up 1 half the time, and each other
value 1/10 of the time:

    E(Y) = (1)(1/2) + (2)(1/10) + (3)(1/10) + (4)(1/10) + (5)(1/10) + (6)(1/10)
         = ?  2.5

Now suppose I ask you to calculate the expected value of the sum shown
when I roll the fair die X and the crooked die Y:

    E(X + Y) = ?

I could do this by computing the probability of all combinations of
outcomes for X and Y:

    E(X + Y) = (1+1)P(X=1,Y=1) + (1+2)P(X=1,Y=2) + ... + (6+6)P(X=6,Y=6)

However, an easier way is to take advantage of the following fact:

    the expectation of a sum == the sum of the expectations

So

    E(X + Y) = E(X) + E(Y) = ?  3.5 + 2.5 == 6

So, how will we use the concept of expectation in algorithm analysis? We
will try to find the expected value of the running time of the algorithm.
So, instead of asking

    T = O(f(n))

we will ask

    E[T] = O(f(n))

    The domain of T is what?  The number of steps executed -- that is,
    the non-negative integers.

The probability distribution over T is determined by the length of the
input n and any other assumptions we make about the possible inputs.
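As a quick sanity check of the expectation arithmetic above, here is a
small Python sketch that computes E(X), E(Y), and E(X + Y) directly from
the two distributions (the variable names are just for illustration):

    # Fair die: each face has probability 1/6.
    fair = {face: 1/6 for face in range(1, 7)}

    # Crooked die: comes up 1 half the time, every other face 1/10 of the time.
    crooked = {1: 1/2, 2: 1/10, 3: 1/10, 4: 1/10, 5: 1/10, 6: 1/10}

    def expectation(dist):
        # E = sum over outcomes of (outcome * probability of that outcome)
        return sum(value * prob for value, prob in dist.items())

    E_X = expectation(fair)      # 3.5
    E_Y = expectation(crooked)   # 2.5

    # Brute force: E(X + Y) over all 36 combinations of outcomes...
    E_sum = sum((x + y) * fair[x] * crooked[y] for x in fair for y in crooked)

    # ...which matches linearity of expectation: E(X + Y) = E(X) + E(Y).
    print(E_X, E_Y, E_sum, E_X + E_Y)   # 3.5 2.5 6.0 6.0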
We will generally assume that all inputs of length n are equally likely.

    Aside: in the real world, some kinds of inputs may be more likely than
    others, and to be most useful we'd want our analysis to reflect that
    fact. However, that kind of complication goes beyond what we will do
    in this class.

Let's start by computing the expected running time of a simple algorithm
that doesn't read any input. Here it is:

    Flip a coin until it comes up heads.

Everyone: look in your pocket for a coin. If you have one, flip it
repeatedly until you get heads. If you have an extra coin, lend it to
someone who doesn't have one. Ready, go!

Now, let's collect the answers. Sum them up, and divide by the number of
answers. What did we get? We just estimated the expected running time of
our algorithm by simulation!

Now, let's compute it exactly.

    E(T) = (1)P(H) + (2)P(TH) + (3)P(TTH) + (4)P(TTTH) + ...
         = (1)(1/2) + (2)(1/2)^2 + (3)(1/2)^3 + (4)(1/2)^4 + ...
         = Sum_{i=1 to inf} i (1/2)^i

This is a power series... so a little Googling tells us

         = (1/2) / (1 - 1/2)^2
         = (1/2) / (1/4)
         = 2

Phew. Suppose we don't have Google handy? There is a shortcut we can use
to get the answer in this case. Let's write the algorithm as follows,
where Flip() is a random function that returns H or T.

    function flip_until_heads()
        if Flip() = H then
            return
        else
            flip_until_heads()
    end

With probability 1/2 the first flip is heads and we stop after 1 flip;
with probability 1/2 it is tails, we have used 1 flip, and we are right
back where we started. So we can write a recurrence formula for the
expected running time:

    E(T) = (1)(1/2) + (1 + E(T))(1/2)

We can solve this directly for E(T)!

    E(T) = 1/2 + 1/2 + (1/2)E(T)
    (1/2)E(T) = 1
    E(T) = 2

Now, let's go on to our "selection" algorithms.

    Selection:
        input:  a list of numbers S; an integer k
        output: the k-th smallest element of S

    To find the median, what should k be?
    Yes, length(S)/2, rounded up.

Here's the basic idea: We pick a random element v from the list. We then
break the list into three lists: values less than v, values equal to v,
and values greater than v.

    S: 9 2 8 3 5 3 1

Suppose we pick the value 3. What do we put in each list?

    S_L: 2 1
    S_E: 3 3
    S_R: 9 8 5

To find the median, k = 4 (we round 7/2 up). We see that S_L has 2
elements and S_E has 2 elements. So, the 4th smallest is equal to v, and
we are done.

However, suppose we pick the value 5. Then

    S_L: 2 3 3 1
    S_E: 5
    S_R: 9 8

Now, S_L has 4 elements, so the 4th smallest must be in it. So, we recurse
on just S_L. Here's the code:

    function selection(S, k)
        v = S[random() % length(S)]   // pick a random element
        create empty arrays S_L, S_E, S_R
        for i = 0 to length(S)-1:
            if S[i] < v  then insert(S_L, S[i])
            if S[i] == v then insert(S_E, S[i])
            if S[i] > v  then insert(S_R, S[i])
        end
        if k <= length(S_L) then
            return selection(S_L, k)
        elsif k > length(S_L) + length(S_E) then
            return selection(S_R, k - length(S_L) - length(S_E))
        else
            return v                  // must be in S_E
    end

What is the running time of Selection? If we are lucky and
|S_L| == |S_R| == (1/2)|S|, then

    T(n) = T(n/2) + O(n)

    why O(n)?  the splitting work

What does the master theorem say about this?

    a = 1
    b = 2
    d = 1
    log_b(a) = log_2(1) = 0
    d > log_b(a), so T = O(n^d) = O(n)

However, the worst case is bad -- you happen to pick the biggest or
smallest element:

    T(n) = T(n-1) + O(n)
    T = O(n^2)

How often are we lucky? It turns out, very often! Let's say v is "good" if
it falls between the 25th and 75th percentiles of the elements in S. When
this happens, S_L and S_R are each at most 3/4 as long as S.

    How often does this happen?  Half the time, because 1/2 of the
    elements are in this range by definition.

What is the expected number of random splits you need to make until you
split on a lucky value?
This is the coin-flipping problem! The answer is 2!

So what is E[time to reduce the array size to 3n/4]? It is 2 splits, each
taking O(n), so O(n).

Ok! Now we are going to try to calculate the expected time of Selection!

    time taken on an array of size n
        <= (time taken on an array of size 3n/4)
           + (time to reduce the array size to 3n/4)

    E[T(n)] <= E[T(3n/4) + (time to reduce the array size to 3n/4)]
    E[T(n)] <= E[T(3n/4) + O(n)]
    E[T(n)] <= E[T(3n/4)] + O(n)

See what I just did? I replaced an expectation of a sum with a sum of
expectations.

How do we solve this? It looks like a recurrence relation, but it contains
expectations. Well, E[T(n)] is a function of n. Let's define

    T'(n) = E[T(n)]

Now we can write

    T'(n) <= T'(3n/4) + O(n)

Hey, now we can apply the MASTER THEOREM!

    a = 1
    b = 4/3
    d = 1
    log_b(a) = log_{4/3}(1) = 0
    d > log_b(a)  (1 > 0), so T'(n) = O(n^d) = O(n)

That is, E[T(n)] = O(n).
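To tie it all together, here is a Python sketch of the selection
pseudocode above (a direct translation, not the only way to write it). By
the analysis we just did, its expected running time is O(n):

    import random

    def selection(S, k):
        # Return the k-th smallest element of S (k is 1-based),
        # following the pseudocode from earlier in the lecture.
        v = random.choice(S)              # pick a random pivot element
        S_L = [x for x in S if x < v]     # values less than v
        S_E = [x for x in S if x == v]    # values equal to v
        S_R = [x for x in S if x > v]     # values greater than v

        if k <= len(S_L):
            return selection(S_L, k)
        elif k > len(S_L) + len(S_E):
            return selection(S_R, k - len(S_L) - len(S_E))
        else:
            return v                      # the k-th smallest is in S_E

    # Example from the lecture: the median (k = 4) of a 7-element list.
    S = [9, 2, 8, 3, 5, 3, 1]
    print(selection(S, 4))   # prints 3, the 4th smallest element

Notice that each call recurses into only one of the two side lists, which
is why the expected time works out to O(n) rather than the O(n log n) we
would pay for sorting.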