Notes for CSC 162, 16 Feb. 2010 ff

Chapter 4

Second project due Tues. Feb. 22, noon.
Third project now available; due Tues. Mar. 2, noon.

========================================
Big O

The big idea: identify the value(s) that characterize the "difficulty"
of the input -- the things on which the running time (or space
consumption) depends -- then figure out what function of those values
the running time (or space consumption) is proportional to.

So, for instance, linear search is O(n), where n is the length of the
list.  The running time of factorial(n) is proportional to the value
of its (single) input.

A function T(n) -- e.g., the running time of a program -- is O(f(n))
if there exist constants c0 and n0 such that T(n) <= c0 f(n) whenever
n >= n0.  That is, T(n) is bounded by a constant times f(n) once n is
large enough.

Some running times depend on two values at once.  Verifying that a
four-coloring is correct requires time O(max(n, m)) for n tiles and m
borders (assuming we allow disjoint -- multi-piece -- maps).  Note
that O(max(n, m)) is the same as O(n+m).  It's usually written the
second way.

Let's go back and consider the running time of various operations on
abstractions built from arrays and linked lists
(SL-un = unsorted singly-linked list; SL-ord = sorted singly-linked
list; DL = doubly-linked list):

    SET             array       SL-un       SL-ord
    add             O(n)*       O(1)        O(n)
    length          O(1)        O(n)+       O(n)+
    search          O(n)        O(n)        O(n)
    remove          O(n)        O(n)        O(n)    (includes search)
    isEmpty         O(1)        O(1)        O(1)
    union           O(n**2)     O(n**2)     O(n)    (also intersection, diff)

    * O(1) amortized if we double the size on overflow
    + or O(1) if we maintain a separate length count

    BAG             array       SL-un       DL
    add             O(n)*       O(1)        O(1)
    length          O(1)        O(n)+       O(n)+
    remove          O(n)        O(n)        O(n)
    search          O(n)        O(n)        O(n)
    isEmpty         O(1)        O(1)        O(1)
    merge           O(n)        O(n)%       O(n)^

    * O(1) amortized if we double the size on overflow
    + or O(1) if we maintain a separate length count
    ^ O(1) if we merge them "destructively"
    % O(1) if we keep a separate tail pointer AND merge them "destructively"

    EQUIV CLASS (assuming every object has a reference to the class
    to which it belongs)
                    array       SL-un       DL
    add             O(n)*       O(1)        O(1)
    remove          O(n)        O(n)        O(1)
    move            O(n)        O(n)        O(1)
    length          O(1)        O(n)+       O(n)+
    search          O(1)        O(1)        O(1)
    isEmpty         O(1)        O(1)        O(1)
    merge           O(n)        O(n)%       O(n)^

    * O(1) amortized if we double the size on overflow
    + or O(1) if we maintain a separate length count
    ^ O(1) if we merge them "destructively"
    % O(1) if we keep a separate tail pointer AND merge them "destructively"

----------------------------------------
It turns out there are some other data structures worth using for
sets -- particularly when we want to do a lot of searches.

Characteristic array
    one bit for every element of the "universe" set
    works only for countable universe sets -- e.g. ints, but not
        reals or strings
    O(1) add, remove, and search
    O(m) isEmpty, length, where m is the size of the universe
        (or O(1) if you maintain a separate count)
    O(m) union, intersection, difference
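To make the characteristic-array idea concrete, here is a minimal
sketch (not from the lecture; the class name CharArraySet and its
method names are just for illustration), assuming the universe is the
integers 0 .. m-1:

def _illustration_only():
    pass

class CharArraySet:
    def __init__(self, m):
        self.bits = [False] * m     # one "bit" per universe element

    def add(self, x):               # O(1)
        self.bits[x] = True

    def remove(self, x):            # O(1)
        self.bits[x] = False

    def search(self, x):            # O(1)
        return self.bits[x]

    def length(self):               # O(m): scan the whole universe
        return sum(1 for b in self.bits if b)

    def isEmpty(self):              # O(m), or O(1) with a separate count
        return not any(self.bits)

    def union(self, other):         # O(m): walk both bit vectors
        result = CharArraySet(len(self.bits))
        for i in range(len(self.bits)):
            result.bits[i] = self.bits[i] or other.bits[i]
        return result

For example, s = CharArraySet(100); s.add(17); then s.search(17) is
True and s.search(18) is False, each in constant time.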
Sorted array
    O(n) insert, remove
    O(1) isEmpty, length
    O(log n) search
    O(n) union, intersection, difference

Hash table
    works for almost arbitrary types
    O(1) insert, remove, search
    O(n) isEmpty, length (or O(1) if you maintain a separate count)
    O(m) union, intersection, difference with "open chaining" and a
        common hash function, where m is the size of the table
    O(m*k) union, intersection, difference with "closed chaining" or
        with different hash functions or table sizes, where m and k
        are the sizes of the tables

Skip list or balanced search tree
    O(1) isEmpty
    O(log n) insert, remove, search
    O(n) length (size -- or O(1) if you maintain a separate count)
    O(n log n) union, intersection, difference

Note that any of our set implementations can be modified easily to
create a dictionary -- a mapping.

----------------------------------------
Binary Search

O(log n).  Phone book example.

Nice illustration of the notion of INVARIANTS:

def binarySearch(alist, item):
    first = 0
    last = len(alist)-1
    while first <= last:
        # INVARIANT: first <= last, and the desired element, if
        # present, is in the range alist[first] .. alist[last] inclusive
        midpoint = (first + last)//2
        if alist[midpoint] == item:
            return True
        if item < alist[midpoint]:
            last = midpoint-1
        else:
            first = midpoint+1
    return False

----------------------------------------
Hashing

O(1) with a decent hash function and load factor.  Generalization of
characteristic arrays.

Hash function needs to be
    good at "randomizing" keys
    easy to calculate

Collisions
    External chaining (see the sketch below)
    Internal chaining (open addressing)
        linear probing
            leads to clustering :-(
        quadratic probing
        probing by multiples of a second hash value -- double hashing
            (zero is a special case to avoid)
        make the hash table size a prime
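To make external chaining concrete, here is a minimal sketch of a
hash-based set (again not from the lecture; HashSet and its method
names are just for illustration).  It relies on Python's built-in
hash() and resolves collisions with a short list per bucket:

class HashSet:
    def __init__(self, size=101):       # a prime table size
        self.buckets = [[] for _ in range(size)]

    def _bucket(self, key):
        # index into the table by hashing the key
        return self.buckets[hash(key) % len(self.buckets)]

    def insert(self, key):              # expected O(1)
        b = self._bucket(key)
        if key not in b:
            b.append(key)

    def remove(self, key):              # expected O(1)
        b = self._bucket(key)
        if key in b:
            b.remove(key)

    def search(self, key):              # expected O(1)
        return key in self._bucket(key)

The O(1) costs assume the load factor stays small; a real
implementation would grow the table (to another prime size) and
rehash when it gets too full.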
========================================
Sorting

bubble sort
    is terrible; never use it

selection sort
    << code in selection_sort.py >>

insertion sort
    << code in selection_sort.py >>

shell sort
    O(n**2) with power-of-two increments
    O(n sqrt(n)) with better choice of increments
    not widely used

merge sort
    O(n log n)
    easy to understand
    widely used for external (too big to fit in memory) sorts
    requires scratch array
    << code in merge_sort.py >>

quicksort
    in-place
    really good constants; tends to be the best (comparison-based)
        sort in practice
    O(n log n) expected; O(n**2) worst case
    in practice, want to stop recursion when sublists get really
        small and use insertion sort instead
    also want to sample several elements (e.g., first, last, and
        middle) to choose the pivot, rather than always using the first
    << code in quicksort.py; a sketch also appears at the end of
        these notes >>

heap sort
    O(n log n) time
    in-place
    like shell sort, a little hard to understand
    worse constants than quicksort
    << code in heap_sort.py >>

radix sort
    O(n) time for more-or-less random numbers of bounded size
    space management an issue
    << code in radix_sort.py >>

----------------------------------------
Proof of O(n log n) lower bound for comparison-based sorting:

(Very) informally: there are n! possible orderings (permutations) of
the input, and each comparison can, at best, cut the set of remaining
possibilities in half, so we need at least log2(n!) comparisons.
Since n! >= (n/2)**(n/2), log2(n!) >= (n/2) log2(n/2), which is
proportional to n log n.

----------------------------------------
Sorting in Python

L.sort() works in-place.  Returns None, to remind you that it's
in-place.

It's STABLE, meaning that elements of equal value stay in their
original relative order.  This is important in some applications, and
is a requirement for the sub-phases of radix sort.

The method takes three optional arguments:

    cmp is a comparison function: cmp(e1, e2) is < 0 if e1 < e2, 0 if
        e1 == e2, and > 0 if e1 > e2.  The default value is None,
        meaning use the built-in comparison operators.

    key is a routine that extracts a key from an element, where the
        key supports the built-in comparison operators.

    reverse is a Boolean; if True it means sort from largest to
        smallest.

It's generally faster to use key or to specify reverse than to use a
special cmp routine: cmp has to be called about n log n times, while
key is called only once per element (n times), and reverse is handled
trivially in-line.  (A short usage example appears at the end of
these notes.)

----------------------------------------
Summary

selection sort
    Theta(n**2) (best and worst case)
    very low constant
    good for small lists

insertion sort
    O(n**2), but linear time if already sorted
    low constant
    does more copying/moving/swapping than selection sort does
    good for small lists

merge sort
    Theta(n log n)
    requires extra space (not in-place)
    great for external (too big to fit in memory) sorts

quicksort
    O(n log n) expected case; O(n**2) worst case
    low constant
    in-place
    generally the best all-around internal sort

heap sort
    O(n log n) worst case
    in-place
    not so great constant

radix sort
    not based on pairwise comparisons; requires keys equivalent to
        bounded-length integers or strings
    Theta(n)
    complex space management
    ideal for special cases
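Since quicksort.py isn't reproduced in these notes, here is a minimal
sketch of basic in-place quicksort (an illustration, not the course's
version).  For simplicity it always pivots on the last element; as
noted above, production code would sample several elements to choose
the pivot and would switch to insertion sort on small sublists:

def quicksort(a, lo=0, hi=None):
    # Sort a[lo..hi] in place; expected O(n log n), worst case O(n**2).
    if hi is None:
        hi = len(a) - 1
    if lo >= hi:
        return
    pivot = a[hi]
    i = lo - 1                      # boundary of the "< pivot" region
    for j in range(lo, hi):
        if a[j] < pivot:
            i += 1
            a[i], a[j] = a[j], a[i]
    a[i+1], a[hi] = a[hi], a[i+1]   # put the pivot in its final place
    quicksort(a, lo, i)             # elements less than the pivot
    quicksort(a, i+2, hi)           # elements greater than or equal

Partitioning is O(n) per level of recursion; with reasonably balanced
pivots there are about log n levels, giving the expected O(n log n).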
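And a small example of the key and reverse arguments to L.sort() (the
data here is made up):

# Sort (name, score) pairs by score, largest first, with no cmp routine.
students = [("ann", 91), ("bob", 78), ("cal", 91), ("dee", 85)]
students.sort(key=lambda s: s[1], reverse=True)
# students is now [('ann', 91), ('cal', 91), ('dee', 85), ('bob', 78)]
# Because the sort is stable, ann stays ahead of cal: elements with
# equal keys keep their original relative order.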